Breaking Up with Google Search

Today's newsletter is a bit of a hodgepodge. Hope and I are back in Tacoma and spending quality time with family along with attending our usual events and meetups.
It was a bit of a whirlwind return.
We departed Abu Dhabi right before things got complicated geopolitically in the Gulf. I have acres of thoughts about what is happening in the region, but occupational constraints limit what I can say publicly. If you buy me a pint this summer, though, you'll absolutely get an earful.
Also, from the summer reading list: I finished The Shock Doctrine on the fifteen-hour flight and King of Ashes earlier today. I'll have takes on each in the next newsletter. But my head is still spinning from the ending of King of Ashes. S.A. Cosby really knocked it out of the park.

For this week’s main course, I want to return to my hobby horse, data privacy.
Tuesday on the podcast, I released a conversation with Bill Fitzgerald about artificial intelligence. I ended up getting an earful about (im)morality within Big Tech and some of the aspiring oligarchs within the education technology realm. In our conversation, Bill drove home that AI is a form of surveillance. Think that's hyperbolic? If you use ChatGPT or a similar model frequently, go into its settings, check its "memories," and see what it has retained about you.
In some ways, this episode is a continuation of a conversation I've been having about AI and online privacy for years. One of the final episodes of the podcast I recorded before moving overseas in 2019 was called Mommy, My Teacher Got Replaced by a Robot.
In that interview and subsequent ones, I've noted the non-consensual nature of AI deployment. The major tech companies are hell-bent on integrating AI into places where no one has asked for it, and then retaining the data from every interaction you have with the models.
People aren't asking for this — there's not nearly as much demand for these programs as breathless media coverage would lead one to believe. Meta's Llama AI model barely registers in real-world use, a failure almost as spectacular as their $46 billion metaverse project.
When it comes to AI in my day-to-day, I have been bothered by the declining quality of Google's search results and the insertion of AI slop into my queries, and I've finally had enough.
Put up or shut up: those are my choices.
As I've done with Facebook, Twitter, and most recently Substack, I am ending my relationship with Google Search and the Chrome Browser.
I'm currently experimenting with two different browsers and with two different search engines in various combinations trying to find what works best for me. On my phone, I use Ecosia as both my browser and my primary search engine. On my laptop, I’m using the Brave browser, paired with Qwant for search. Both Ecosia and Brave are built on Chromium.
Let's get a little wonky: Google's Chrome browser is also based on Chromium, an open-source software project. Google adds a proprietary layer on top and packages it as Chrome. But since Chromium is open source, anyone can take that foundation and build their own browser — as Brave and Ecosia have done.
Brave is a very straightforward privacy-focused browser.

Ecosia’s search engine is integrated into their browser, and they lean into a social-capital gimmick: they pledge to plant trees based on the number of searches conducted.

Qwant is just a no-frills search engine, with no AI integrations or added nonsense, and a pledge to keep it that way.
Using Qwant feels like using Google Search used to feel before they ruined it. Ecosia is interesting because it pulls its search results from other search engines, including Google, but again does not push AI summaries on me that I didn't ask for. (I do have a quibble about Ecosia's integration with my password manager, but I'm told the team behind the browser is aware of it and working on a fix.)
I expected some kind of drop-off in search quality when I moved away from Google, but it’s actually been the opposite. Google Search has degraded so much over the last few years that it’s nearly useless. The enshittification happened gradually, but it’s obvious to anyone who’s tried to look up something specific lately that Google Search is broken. I don’t remember who said it first, but it’s stuck with me: Google Search now basically operates in one of two modes. If it can serve you a volley of ads based on your query, it will. If it can’t, it hands you AI-generated slop from Gemini instead.
With Google's new model, the results you are actually looking for are buried beneath layers of irrelevant SEO bait and poorly disguised AI-generated slop. This is what non-consensual AI deployment looks like. I didn't ask for these garbage Gemini results, and I didn't ask for my searches to be flooded with barely readable articles churned out by AI content farms. It is a death spiral for the open internet, and I want no part of it.
I want to be clear here: this isn't some sort of big moral crusade. But we have to start asking ourselves at what point we push back against the monopolistic practices of Big Tech, and whether we should really trust them with all the personal, medical, and financial data we hand over. You'd be shocked what some people put into ChatGPT without ever considering its data use and retention policies.
Meaningful regulation isn’t coming in the near term, so it’s up to us to help ourselves.
—
One final note.
Last week, I closed the newsletter explaining that my hosting costs for Buttondown were going up because of subscriber growth. You responded generously. Thank you!
I am grateful to the following subscribers: Jimmie, Jacob, Doug, Casey, Matt, Judy, Sara, and the group of subscribers who requested to remain anonymous. We’re now good for the next year or so.
See you next week with a double book review on The Shock Doctrine and King of Ashes.
As always, if you have any thoughts or feedback about the newsletter, I welcome it, and I really appreciate it when folks share the newsletter with their friends.