Breaking Up Google is Not Enough
Hi friends –
Last month, a federal judge told us what we already knew: Google is a monopolist in the Web search market with a huge market share—95% on mobile and 89% overall.
Being a monopolist is not necessarily against the law, but the judge ruled that Google had illegally maintained its monopoly in part by paying billions of dollars to Apple to make Google the default search engine on iPhones.
Next month, the judge will begin deliberations on the harder question: what to do to restore competition to a market so completely ruled by one player.
The judge seems likely to ban the payments Google has made to ensure that its search engine is the default on Apple phones and in Web browsers such as Firefox. And the Department of Justice is reportedly considering advocating for a breakup of Google that would prevent the company from installing Google search as the default on the Chrome Web browser and Android phones.
But neither of these measures is enough, I argue in my latest piece for New York Times Opinion (gift link). We also need to break up Google’s monopoly over search query data.
What’s the Problem?
The problem with the leading remedies being discussed – ending pay-for-placement deals and breaking up Google – is that they address only the distribution of rival search engines. They do nothing to foster the building of rival search engines.
Building a search engine is expensive and difficult. It requires not just scraping the entire Web regularly, but also knowing how to organize the results so that readers find what they are looking for. Google has a huge advantage in both realms, but particularly the latter.
Google receives nineteen times more queries than all of its competitors put together. Knowing what users are searching for helps Google provide better results. “No current rival or nascent competitor can hope to compete against Google in the wider marketplace without access to meaningful scale,” the judge wrote.
In the face of Google’s scale, only the behemoth Microsoft has been able to mount a credible competitor in Bing. (Rivals Yahoo and DuckDuckGo use Bing’s search results.) And even so, Bing has struggled because of its lack of access to user data. The amount of data that Google collects about user queries in 13 months would take Bing 17 years to collect, the judge wrote.
What Can Be Done?
To foster competition, the judge needs to address the hurdles that rivals face in building competing search engines.
Google knows that competitors cannot match its quality. One of the more shocking revelations in the trial was that Google had conducted a test in 2020 to degrade the quality of search results. It found a negligible impact on revenues.
“The fact that Google makes product changes without concern that its users might go elsewhere is something only a firm with monopoly power could do,” Judge Amit Mehta wrote.
To get better results, we need to foster innovation in search. And one way to do that is to give competitors access to Google’s search query data. It’s not as preposterous as it sounds. Google already has an API that gives developers access to search data – but the rules for using it are too onerous at the moment (for instance, it can’t be used on mobile).
In fact, the European Union just started a program like this called the Google European Search Dataset Licensing Program that allows European developers to access Google data to build online search products.
That would allow competitors to offer search engines that compete on different metrics – perhaps one focused on privacy, and another on shopping or news.
What Would That Look Like?
In recent weeks, I have been trying out Kagi, a $10-per-month search engine that has no ads and provides a vision of what competition could enable. I can customize results to include or exclude types of content such as listicles and Wikipedia. I can sort results by how fast the website loads.
Kagi is expensive and I’m not sure I’m going to keep using it. But it is a reminder of how completely our online world has been shaped by Google.
The reason that so many websites are so spammy – littered with videos, photos, and ads and divided into listicle-style sections – is that they are optimized to rank highly in Google search results rather than for readers.
In a study published earlier this year, titled “Is Google Getting Worse?” researchers found that it was hard to measure the quality of search results because “the line between benign content and spam in the form of content and link farms becomes increasingly blurry.”
But if there were a dozen search engines out there optimizing for all sorts of different features – or if, like Kagi, I could turn the dials and choose my own optimization features — perhaps we could have a web that is more optimized for readers and less optimized for spam.
The truth is that being a monopoly doesn’t only confer power; it also makes you a huge target. Google has been the prime target of spammers and search engine optimizers for decades now, and it seems to be losing the war against them. To build a better ecosystem, we need to decentralize and cultivate some hybrid vigor.
After all, wasn’t decentralization the whole point of the Internet?
As always, thanks for reading.
Best,
Julia
P.S. I forgot to send out a newsletter about my last New York Times column – about how the public has less information than ever about the political discourse on social media platforms, because the companies have shut down access to transparency tools and right-wing groups have intimidated and harassed internet researchers.