AI Week(ish) Mar 30: Sycophantic chatbots, identity, and AI-washing
Hi and welcome to another edition of AI Week(ish)! In this edition:
YSK: Chatbots can undermine your judgement
Is AI actually doing anyone’s job well enough to replace them?
What does Google have planned next for AI?
AI and Identity
Resources
Longer reads
YSK: Chatbots can undermine your judgement
A new study finds that commercial chatbots are nearly 50% more sycophantic than actual humans, agreeing with their human interlocutors even when those humans described unethical, illegal, or harmful behavior. Worse, we’re all vulnerable to this sycophancy, trusting and agreeing with the flattering chatbots.
Study: Sycophantic AI can undermine human judgment - Ars Technica
Subjects who interacted with AI tools were more likely to think they were right, less likely to resolve conflicts.
Editor’s summary of the paper:
The sycophantic (flattering, people-pleasing, affirming) behavior of artificial intelligence (AI) chatbots, which has been designed to increase user engagement, poses risks as people increasingly seek advice about interpersonal dilemmas. There is usually more than one side to a story during interpersonal conflicts. If AI is designed to tell users what they want to hear instead of challenging their perspectives, then are such systems likely to motivate people to accept responsibility for their own contribution to conflicts and repair relationships? Cheng et al. measured the prevalence of social sycophancy across 11 leading large language models (see the Perspective by Perry). The models’ responses were nearly 50% more sycophantic than humans’, even when users engaged in unethical, illegal, or harmful behaviors. Users preferred and trusted sycophantic AI responses, incentivizing AI developers to preserve sycophancy despite the risks. —Ekeoma Uzogara
This is almost certainly terrible for us, as the accompanying editorial in Science points out:
Human well-being depends on the ability to navigate the social world, a skill acquired primarily through interactions with others. Such social learning depends on reliable feedback: recognizing when we are mistaken, when harm has been caused, and when others’ perspectives warrant consideration. At times, sincere empathy appears where it was not expected, revealing that another person may be trusted in the future. At other times, disappointment leads to reconsideration of whether trust should be reduced or another chance offered. Acts of kindness may be met with gratitude; on other occasions, a misstep prompts a friend’s disapproval and recognition that an apology is needed. In psychotherapy, moments of rupture—natural breakdowns in understanding followed by repair—are considered crucial for deepening trust, and for personal growth to unfold. Social life is rarely frictionless, because people are not perfectly attuned to one another. Yet it is precisely through such social friction that relationships deepen and moral understanding develops.
Is AI actually doing anyone’s job well enough to replace them?
Could AI coding be… a problem?
AI models are enormously popular for coding. I’ve used them! It’s fun! So I was interested in this Futurism article, which covers a few of the problems that some developers have run into with AI coding.
A Grim Truth Is Emerging in Employers' AI Experiments
Tech executives are warning that nobody is checking the "fallibility" of AI-generated code, a disaster waiting to happen.
And the Medium article below makes a good case against using AI coding agents to reinvent the wheel. Turns out that a project that multiple open-source developers have contributed to over a decade-plus is a lot better than the version you wrote in a month, even if you used AI to do it:
https://medium.com/write-a-catalyst/an-ai-wrote-576-000-lines-to-replace-sqlite-7ea538826d72

This is the failure mode you should worry about most:
The model reproduces the shape of the system and misses the unsexy conditional that carries correctness and performance.
Is AI really as good as we’ve been told at radiology?
Looks like radiology may be another thing that AI isn’t doing as well as we’ve been told. AI models do very well on radiology benchmarks… but a new study finds that they can get those top scores without looking at the X-rays at all. I don’t know what’s going on there, but it’s not radiology.
The mirage of visual understanding in current frontier models
When a model achieves a “top rank on a standard chest X-ray question-answering benchmark without access to any images” you know something is deeply wrong.
Wikipedia bans AI text, with 2 exceptions
Wikipedia has banned AI-generated text, with two exceptions
Begone, AI slop.
Just don’t use AI for legal work, part 1,219
There are currently 1,218 entries in the database of cases in which lawyers have gotten in trouble for including made-up material in their legal filings, courtesy of the LLMs they shouldn’t have used for legal work. I guess it’s 1,219 now, because even the US government’s own lawyers can’t resist asking an LLM to hallucinate, I mean, do their work for them.
'Oh my God': Legal experts stunned after judge catches ICE lawyers citing bogus cases - Raw Story
Attorneys and legal observers were left in disbelief after a federal judge in Minnesota tore into the legal team for U.S. Immigration and Customs Enforcement on Thursday for submitting a brief "riddled with misreadings and misquotations." …
So… is AI actually replacing anyone?
Is AI really taking jobs, or were recent layoffs attributed to AI really just regular layoffs with an AI smoke-screen? Even the WSJ suspects a great deal of “AI-washing” is going on. (Unlocked article)
https://www.wsj.com/tech/ai/are-bots-replacing-workers-these-skeptics-arent-so-sure-755143b1?st=mxCAZG&reflink=article_copyURL_share
Leaders at Amazon.com, Block, Atlassian and other companies have linked recent layoffs to AI. But economists and machine-learning specialists say existing technology isn’t ready to take humans’ jobs at scale.
They argue the most likely reasons for head-count reductions remain the same as ever: slower sales, shifting priorities and previous overhiring.
What does Google have planned for AI next?
Google has been cramming AI into its search results page with both hands, and it seems like there’s still more AI to come. The Verge noticed that Google has started replacing some publishers’ headlines in search results with AI-generated headlines, which may or may not accurately reflect the articles they link to:
https://www.theverge.com/tech/896490/google-replace-news-headlines-in-search-canary-coal-mine-experiment

For example, Google reduced our headline “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” to just five words: “‘Cheat on everything’ AI tool.” It almost sounds like we’re endorsing a product we do not recommend at all.
(The Verge’s example above is about Cluely. I also read a great story about their founder this week, which we’ll get to in a bit.)
Meanwhile, a few people are wondering about a patent that Google acquired in January for a system of rewriting other people’s websites on the fly:
https://www.forbes.com/sites/joetoscano1/2026/03/06/google-just-patented-the-end-of-your-website

A patent granted to Google on January 27, 2026 titled “AI-generated content page tailored to a specific user” describes a system that evaluates your company’s landing page in real time and, if it decides the page won’t perform well enough for a specific user, replaces it with an AI-generated version assembled on the fly. The user never sees what your team built, they see what Google's machine learning model thinks they should see instead.
This might work something like Google’s AMP program from a decade ago. Sites that signed up for AMP had their mobile pages simplified and cached by Google, lured by the promise of both a speed boost and a bump in their Google search ranking. In practice, the speed boost didn’t always materialize, and by passing site traffic through Google, AMP hurt users’ privacy and security while helping Google monopolize the Web.
AI and Identity
Selling slices of your life to train AI
Thousands of people are selling their identities to train AI – but at what cost? | AI (artificial intelligence) | The Guardian
Gig AI trainers worldwide are selling moments of their lives, including calls and texts, to AI companies for quick cash
Nilay Patel interviews the man who AI-cloned him
Last year, Grammarly added a feature that purported to provide editorial feedback from well-known writers and editors, like author Stephen King and Verge editor Nilay Patel. The problem was that they hadn’t asked the authors and editors for permission to put words in their AI-clones’ mouths, or even told the writers and editors whose names they were using about the feature. Worse, the advice was pretty bad, in a potentially reputation-harming way. Grammarly recently pulled the feature in response to furious pushback. So I was interested to hear what Grammarly’s CEO, Shishir Mehrotra, would have to say to Nilay Patel.
https://www.theverge.com/podcast/898715/superhuman-grammarly-expert-review-shishir-mehrotra-interview-ai-impersonation

Great interview, absolutely worth a read or a listen. One of my favourite clips:
Nilay: This is from the Superhuman suite at South by Southwest. There were a lot of talks there. The summary of the talks was, “AI can’t replace human creativity, empathy, or emotion. It won’t take all of our jobs, but it will reshape how we work. And in the AI era, taste and judgment are more valuable than ever.” Valuable on what metric? Is it dollars?
Shishir: Valuable on every metric.
Nilay: Specifically dollars. Dollars are what I pay my mortgage in. Is it dollars?
Shishir: I’m sorry, I didn’t understand the question.
For a bit of added context, Mehrotra was formerly YouTube’s CEO, which comes up a fair bit in the interview. YouTube was this week found negligent by an LA jury for failing to warn users of the risk of social media addiction. https://www.cnbc.com/2026/03/25/meta-youtube-los-angeles-california-verdict.html
Resources
Presented without verification or endorsement.
A list I ran across of news sources on Bluesky that don't use AI:
https://bsky.app/starter-pack/alexip718.com/3ma7dtn6w7j2x
Use AI to check possibly-AI-generated legal documents for hallucinated sources: https://pelaikan-app.web.app/
Longer reads
Long read: Cheat on everything (if the app’s working today)
A fascinating article in Harper’s profiling Cluely’s founder, the janky product that promises to let you cheat on everything (if it’s working), and Silicon Valley culture:
Child’s Play, by Sam Kriss
Tech’s new generation and the end of thinking
One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event. Some people will do incredibly well in the new AI era. They will become rich and powerful beyond anything we can currently imagine. But other people—a lot of other people—will become useless….
The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way…. Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.
Extremely Long Read: Ed Zitron
I can best recap this as “There’s a lot of buzz and excitement around data centres, but if you look into the actual sites, most of them haven’t broken ground yet and won’t be completed for years, which makes it weird that companies are ordering AI chips for them now, since those chips will be way out of date by the time the centres are built. By the way, how do so many AI chips keep getting to China? Anyway, the same companies betting big on these data centres are forcing their devs to use AI and it’s not been great for the quality of their work.” Worth a read (or a listen).