The Tyranny of the “For You” Feed
Hi friends –
Once upon a time, in the early days of social media, we were mostly in control of what we saw on our news feeds. We curated our feeds by choosing whom to “follow” or “friend.” Sure, the algorithm sorted the posts from our chosen connections, but we controlled the underlying data.
Then TikTok burst onto the scene with its addictive “for you” feed – a feed curated entirely by TikTok. You could still choose to “follow” TikTokers, but following them did not ensure that their videos would show up in your feed. Instead, feeds were dominated by TikTok’s chosen viral videos.
Soon, most social media platforms adopted “for you” style feeds that pushed posts they wanted you to discover. This gave the platforms far more control over what users were seeing – and gave users far less.
In a recent interview with The Verge, Meta CEO Mark Zuckerberg said a feed is now “a discovery engine of content” rather than a way to facilitate social connection. “The definition of feeds and social interaction has changed very fundamentally in the last 10 years,” he said.
In my latest piece for New York Times Opinion, I argue that social platforms should give us back control over what we see in our feeds – and that they themselves will benefit from doing so.
What’s the Problem?
The rise of the algorithmic, engagement-driven feed has swamped social media with an avalanche of clickbait, ragebait, fake feel-good videos, and all sorts of other distracting, disempowering, and dissatisfying content.
A portion of this dreck is outright dangerous: consider the “blackout challenge” videos on TikTok that showed people strangling themselves until they passed out. More than 15 children died trying the challenge, according to an analysis by Bloomberg Businessweek.
Until recently, tech companies largely avoided liability for their algorithmic amplification choices because of Section 230, a controversial 1996 law.
What is Being Done?
Courts are slowly starting to hold tech giants responsible for some of their algorithmic manipulations in extreme cases like the blackout challenge.
In August, the U.S. Court of Appeals for the Third Circuit ruled that TikTok could not use Section 230 to shield itself against a case brought by the family of a 10-year-old girl who died after attempting the blackout challenge.
Placing the Blackout Challenge on the girl’s feed “was TikTok’s own ‘expressive activity,’ and thus its first-party speech,” Judge Patty Shwartz wrote. The judge rejected the company’s defense that the challenge was created by a third party and thus protected by Section 230. TikTok has petitioned the Third Circuit to rehear the case before a broader panel of judges.
Last month, the D.C. Superior Court ruled that Meta could not use Section 230 as a shield against a lawsuit by the D.C. Attorney General alleging that the company’s “personalized algorithms” were designed to be addictive and harmful, using features such as infinite scroll and frequent alerts.
Additional cases are pending across the globe alleging tech company culpability for everything from distribution of non-consensual nude images to hate speech and scams.
This is a significant, and possibly transformative, development that could eventually force the social-media giants to be answerable for the societal consequences of their algorithmic manipulations.
What Do We Want This to Look Like?
It is important to be cautious, however, whenever we are talking about government regulation of speech. As a journalist, I am deeply committed to freedom of expression, and I worry about government censorship.
As I have argued in the past, I think a good way to think about this conundrum is to draw a line between speech and conduct. Tech companies should not be liable for users’ defamatory speech – but they should be liable for their own conduct when it violates civil rights, product safety, antiterrorism, and other important laws.
In this case, however, I think there is an even easier fix: tech companies can escape the courts’ growing impatience with immunity by giving us control of our feeds. If we are in charge of our feeds, it will be much easier for tech companies to argue in court that the problems of social media are what we are doing to ourselves – not what they are doing to us.
As I wrote in the New York Times, “It could be a win-win: We get to decide what we see, and they get to limit their liability.”
As always, thanks for reading.
Best,
Julia
P.S. I am taking control of my feed by moving most of my activity to Bluesky – a social network that lets me curate my feed, subscribe to custom feeds, and adjust my own content moderation settings. Come join me there!
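For the technically inclined: those custom feeds are ordinary services you can query yourself. Below is a minimal sketch of fetching one in Python, assuming the community-maintained atproto SDK; the handle, app password, and feed URI are placeholders, not real values.

```python
# Minimal sketch: reading a custom Bluesky feed with the community
# atproto SDK (pip install atproto). The login credentials and feed
# URI below are placeholders for illustration only.
from atproto import Client

client = Client()
client.login("you.bsky.social", "your-app-password")  # placeholder credentials

# A custom feed is identified by an AT URI that points at a feed generator.
FEED_URI = "at://did:plc:example/app.bsky.feed.generator/my-custom-feed"

# Ask the server for the latest posts selected by this feed generator.
response = client.app.bsky.feed.get_feed({"feed": FEED_URI, "limit": 10})
for item in response.feed:
    print(item.post.record.text)
```

The point of the exercise: the feed algorithm lives at an address you choose, not one the platform chooses for you.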