... so I put some tech governance controversy on the platform you were using for your tech governance newsletter.
That's right: Substack, which hosted this newsletter for exactly one issue, is drawing some criticism for its decision to offer large cash advances to, among others, some transphobic and alt-right writers. At the heart of the controversy lies a question: is Substack a platform which hosts general content or a publisher which curates and incentivizes certain types of content?
This question comes up again and again in debates about tech governance, in part due to the influential Section 230. Section 230 marks out the legal liability of services for their content, while Substack's critics are largely arguing about ethics. But law is just the codification of ethics, so in some ways these are just two fronts of the same war: the war over whose voices are heard, and who is held accountable when those voices cause harm.
There's more about Section 230 in the Reading section below. In the meantime, I've moved this newsletter to Buttondown. Now, on with the show.
📰 Richard Stallman returns to the Board of the Free Software Foundation. In 2019, Richard Stallman resigned from the board of the Free Software Foundation, a non-profit he founded several decades ago. The immediate cause was some disturbing comments Stallman made in an email thread about Jeffrey Epstein, but this was part of a larger pattern of behavior. Last week at its annual conference, LibrePlanet, the FSF revealed that Stallman had been invited back onto the Board.
Since then, the entire management staff of the FSF has resigned, as has one board member who voted against reinstating Stallman. An open letter demanding the resignation of the entire FSF board amassed over 2500 signatures, including from leaders in the Free Software/Open Source movement and from prominent organizations like Mozilla and Tor. It's unclear whether this will be enough to sway the FSF board, however. At this point, it seems more likely that the community will fork, with the FSF going one way and the rest of us another.
I've been part of the Free Software movement since 2012, though it's a label I'm less eager to claim recently. It seems like some parts of the community value freedom and freedom alone, whereas I balance freedom with values like accountability, diversity, and sustainability. I don't see this as sacrificing or compromising freedom, but strengthening it. Accountability protects freedom against abuse. Diversity protects freedom against ignorance and oversight. Sustainability provides freedom with sustenance and support. Regardless of what the FSF decides to do, that is the vision of freedom that I'm working towards.
📰 Joe Biden has promised to be "the most pro-union president ever". To the extent that he has wavered on that promise so far, it has been at the intersection of labor and technology. In early January, it was rumored he might appoint Renata Hesse as Assistant Attorney General of the DoJ's antitrust division, despite Hesse's history defending Google from antitrust allegations and advising Amazon in recent acquisitions. (If you're wondering what antitrust has to do with labor, take a moment to learn about monopsony - concentration of buyer power, which includes employers as buyers of labor.)
Last month Biden quietly appointed Seth Harris, the inspiration behind California's Prop 22. Lyft, Uber, DoorDash and other gig economy companies poured millions of dollars into a successful campaign to convince California voters that gig economy workers don't need any of the protections or benefits that other workers do. It's appalling to see Harris hired into a powerful position even as companies respond to Prop 22 by firing masses of workers.
I wrote in last month's newsletter how technology seems to cast some sort of spell on people's judgment - we assume something must be innovative, progressive, or beneficial, just because it's supported by a tech giant or even tangentially related to tech. The truth is that the tech industry can be just as stifling, regressive or harmful as any other industry - and when it is, we shouldn't reward its leaders by making them White House advisors.
📰 In better news out of California, state Senator Connie Leyva has introduced the Silenced No More Act. As Bruce Hahne writes in the Alphabet Workers newsletter, "California law already prevents the use of NDAs to prevent workers from speaking out about sexual harassment and sexual assault. SB331 would expand these protections to cover any act of 'workplace harassment or discrimination.'"
Frankly, I'd prefer to abolish NDAs altogether. To the extent that they're meant to protect proprietary information, this is already accomplished by trade secrets law. In practice, they're used as a way to coerce workers and to control public opinion. Over a third of the workforce is bound by an NDA and in one survey, 15% of tech workers reported feeling silenced by the NDAs they'd signed. I know many colleagues who feel they can't speak out about tech policy issues for fear of violating their NDAs. The Silenced No More Act wouldn't help with most of their situations, but it's at least a step in the right direction.
On September 11th, 1973, a US-backed military coup overthrew the democratic government of Chile, murdered President Salvador Allende, and began a brutal dictatorship that would last for decades. So much was destroyed in that coup that it is inevitable we'll forget some of it. Eden Medina's Cybernetic Revolutionaries is an attempt to rescue part of that history: the story of Project Cybersyn.
When Allende came to power three years before the coup, he began an effort to nationalize foreign-owned and privately-owned industries. Managing these industries was an enormous challenge, one that Cybersyn sought to make easier through more effective sharing of information. In some ways the project was an answer to the socialist calculation debate, which asks: if you remove market competition from an economy, how do you decide which goods and services are most valuable and therefore ought to be produced?
The people behind Cybersyn, including Allende, Chilean public servant Fernando Flores, and British advisor Stafford Beer, viewed the economy as a cybernetic system - that is, as a system driven by feedback mechanisms, constantly adapting in response to new information. They wanted to improve communication, and provide real-time analysis of the data being communicated, so that decision-makers could better control what was being produced.
The question of control is a crucial one: for Cybersyn, for Allende's Chile, and for us today. The democratic socialist vision seeks to empower people to collectively control their economic and civic destinies; Cybersyn tried, mostly unsuccessfully, to distribute some of that decision-making power to rank and file workers.
Cybersyn worked like this: members of the team visited the individual factories that formed the heart of a given industry. They installed telex machines so that those factories could communicate information to a central command center. They also worked with factory managers and with the government-appointed overseer to determine what data should be communicated, which they called indicators. Those indicators were used to create economic models which would analyze the data coming in and provide guidance on whether and how to intervene. That intervention might be made by Allende or a government official, or it might be made by workers at the factory itself.
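The loop Medina describes - indicators flowing from factories to a center, statistical filtering of the incoming series, and alerts prompting human intervention - can be sketched in miniature. This is purely my own toy illustration of smoothing-plus-thresholds, not Beer's actual design: his statistical filters were far more sophisticated, and the parameters and data below are invented.

```python
# Toy sketch of an indicator-and-alert feedback loop (my illustration,
# not Cybersyn's real filtering): smooth each factory's indicator series,
# then flag readings that deviate sharply from the smoothed trend so a
# decision-maker - minister or worker - can intervene.

def smooth(values, alpha=0.3):
    """Exponentially weighted moving average of a series of readings."""
    avg = values[0]
    smoothed = [avg]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
        smoothed.append(avg)
    return smoothed

def alerts(values, alpha=0.3, threshold=0.25):
    """Return indices of readings deviating from the smoothed trend by
    more than `threshold` (as a fraction of the trend). Both parameters
    are invented for illustration."""
    trend = smooth(values, alpha)
    return [i for i, (v, t) in enumerate(zip(values, trend))
            if t and abs(v - t) / t > threshold]

# Daily output of one hypothetical factory, with a sudden drop on day 5.
readings = [100, 102, 98, 101, 99, 60, 97, 100]
print(alerts(readings))  # → [5]
```

The interesting design question - the one Cybersyn largely failed to answer - is who receives the alert and who is empowered to act on it.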
Medina notes, again and again, the tension between Cybersyn's political goals and how it was implemented in practice. The project was meant to empower rank and file workers, but they were largely excluded from the process outlined above. There was little effort to provide the training and education needed to help workers participate in the process. Elites remained in control.
As Allende's presidency continued, conservative forces inside and outside of the country sought to derail his government's efforts. Even if Cybersyn had worked perfectly, Medina argues, it was no match for the United States, which was waging economic war: slashing foreign aid to a fraction of what it had been, lowering Chile's credit rating to deter private investment, and pressuring private companies not to do business with Chile. The deteriorating economy led in October of 1972 to a strike by business owners and professional guilds; right-wing squads attacked businesses that refused to strike and destroyed consumer goods to exacerbate shortages.
The October Strike was Cybersyn's biggest triumph. Members of the left mobilized, using factory trucks to replace private trucks, and banding together to share supplies. The telex network that had been installed in factories across the country proved invaluable for quickly sharing information. The partial success of Cybersyn increased its profile within the Allende government, but it was still largely unknown to the wider population within Chile and to most people abroad.
That changed in early 1973. The team planned to announce the project through a speech by British advisor Stafford Beer, but a UK newspaper scooped them. Early press described the project as Orwellian; the first article on the subject said Cybersyn was "assembled in some secrecy so as to avoid opposition charges of 'Big Brother' tactics." Ironically, George Orwell was himself a democratic socialist and may well have supported Cybersyn. Or he might not have - much of the harshest criticism came from the left.
Medina recounts how one of those leftist critics "felt that the society would have taken a different stance on Cybersyn had they known about the military coup that was to come and the oppressive Pinochet dictatorship that would follow, '[t]hen it would have been clear we were on the same side and we wouldn't have dreamed of doing it'".
Even with the benefit of hindsight, it is hard to blame those who criticized Cybersyn for failing to embody its ostensible socialist values. Medina documents how idealistic visions of worker empowerment were constantly pushed aside for practical reasons. As Cybersyn gained more attention, the tensions only grew more severe. Many of the project's leaders tried to "de-politicize" the project so as not to alienate those they hoped would adopt and support it.
Would Cybersyn have changed paths and addressed these shortcomings, if given more time? There's no way to know. In September of 1973 the coup occurred, and most of the people who worked on Cybersyn fled or were imprisoned. The project's history was slowly forgotten, and even the field of cybernetics has been marginalized, with most people recognizing only its etymological footprints in terms like "cyberspace" and "cybersecurity".
I cannot help but wonder what the world would have been like if techno-socialism had been allowed to flourish in the 1970s. The internet of today, dominated by private corporate interests, feels like the antithesis of what the Cybersyn team was trying clumsily to build. For all its flaws, the cybernetics movement has always been clear that every technological system is also a social system. The ordering of Google search results, content moderation on Facebook, copyright management systems on YouTube - these are sociotechnical systems with reverberating social effects. As we scramble to grapple with those effects, we must recognize how late our efforts come, how foolish we were not to do this work from the beginning.
Some people have been shouting this from the rooftops for years. Many have no affiliation with cybernetics - their critique comes from experiencing the effects of these broken systems. Unfortunately, those most hurt by dominant systems tend to be those with the least power, making it unlikely their critique will be heard by those who can act on it.
The question, as always, is one of control. Project Cybersyn claimed to be empowering Chilean workers, but the project's implementation didn't give them any control, and any feedback they might have had about the process was discarded. As long as Big Tech is free to discard the people's feedback, any criticism we level against them is without teeth. There must be some sort of ceding of control. These systems which have come to dominate our lives must be brought under democratic oversight.
Or perhaps they ought to be rebuilt entirely. Perhaps we ought to be as ambitious as the Cybersyn team was, and re-imagine how technology can fit into our society. But if we do, we should be mindful - the conservative forces in the US and Chile who found brutal dictatorship preferable to democratic socialism have not gone away. Let us work to make sure history doesn't repeat itself.
📚 Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification by Joy Buolamwini and Timnit Gebru, and Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products by Inioluwa Raji and Joy Buolamwini
In Gender Shades, authors Buolamwini and Gebru perform the first intersectional phenotypical analysis of face-based gender classification. Previous work has looked at gender discrimination or racial discrimination, but their analysis looked at error rates in classification of lighter male, lighter female, darker male, and darker female faces. Their analysis of three commercial classifiers (IBM, Microsoft, and Face++) found that all three performed better on male than female faces, better on lighter than darker faces, and that all three performed worst on darker female faces.
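The core move in the paper - reporting accuracy per intersectional subgroup rather than one overall number - can be sketched in a few lines. This is my own toy illustration, not the authors' code, and the record format and sample data are invented.

```python
# Minimal sketch of intersectional evaluation (my illustration, not the
# Gender Shades code): compute classifier accuracy separately for each
# (skin type, gender) subgroup instead of one aggregate accuracy.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (skin, gender, predicted, actual) tuples.
    Returns a dict mapping each (skin, gender) subgroup to its accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for skin, gender, pred, actual in records:
        total[(skin, gender)] += 1
        correct[(skin, gender)] += int(pred == actual)
    return {group: correct[group] / total[group] for group in total}

# Tiny made-up sample where the classifier errs only on a darker female face.
sample = [
    ("lighter", "male",   "male",   "male"),
    ("lighter", "female", "female", "female"),
    ("darker",  "male",   "male",   "male"),
    ("darker",  "female", "male",   "female"),
    ("darker",  "female", "female", "female"),
]
print(subgroup_accuracy(sample))
# ("darker", "female") comes out at 0.5; every other subgroup is 1.0
```

An aggregate accuracy over this sample would be 80%, hiding exactly the disparity the authors set out to measure.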
As part of this work, they developed the Pilot Parliaments Benchmark (PPB). Unlike other datasets that are collected via face detection algorithms, which can perpetuate biases found in the initial detection algorithms, and that rely on demographic labels, which can fail to capture phenotypic diversity, the PPB was created by sampling pictures of parliamentarians from three Northern European and three African countries and manually labeling their skin type. The PPB is also "user-representative": the number of pictures of each phenotype is roughly equal, rather than matched to the underlying population distribution, which allows each phenotype to be evaluated with similar statistical confidence.
Buolamwini and co-author Raji followed up this work in Actionable Auditing. This paper demonstrates the power of targeted audits, but also describes the disincentives currently facing researchers who perform them, who might run afoul of the Computer Fraud and Abuse Act, professional ethical codes, and might face hostile corporate responses. They describe how they followed a process modeled on Coordinated Vulnerability Disclosure, which is used to report security vulnerabilities to corporations.
In Actionable Auditing, the authors compare the performance of their three target companies initially vs roughly a year later. While all three retained the same overall pattern, performing better on lighter and male faces and worst on darker female faces, all three improved most on darker, female, and darker female faces during that time period, narrowing the performance gap. The authors also compare target performance to the performance of non-target companies Kairos and Amazon. The non-target companies performed significantly worse than target companies, but it's difficult to attribute that to not being audited, as we lack data to show whether or not they improved after publication of the Gender Shades paper.
In both papers, the authors criticize the lack of transparency and accountability of the companies they're auditing, even as they attempt to work around the limits of profit-driven PR and "black box" software. Perhaps frustration with a lack of access helped inspire author Gebru to join Google after these studies were published, but her firing late last year demonstrates that we cannot simply wait for these companies to make voluntary changes. It is up to us collectively to act on the work of Gebru and her co-authors, and hold companies like Google, and the companies analyzed in these two papers, accountable.
By the mid-1990s, judicial decisions had created a perverse incentive for internet services: in Stratton Oakmont v Prodigy Services, Prodigy was held liable for defamatory content because they enforced community guidelines rather than posting everything users submitted. In 1996, Section 230 of the Communications Decency Act was written to reverse this incentive: it gave internet service providers broad immunity for content posted by third parties, and explicitly stated that a good faith effort to edit or moderate content did not revoke that immunity. There were three exceptions to this immunity: federal criminal law, intellectual property law, and the Electronic Communications Privacy Act.
Kosseff reviews Section 230 case law, noting that while the courts consistently granted broad immunity under 230 in the years after the act was passed, by the mid-aughts that immunity began to erode. In 2008 - one of the earliest cases that denied Section 230 immunity - the Ninth Circuit ruled that Roommates.com was not immunized from housing discrimination claims, because their questionnaire "developed" third party content. By 2010 roughly a third of cases were surviving Section 230 defenses. In 2017, Kosseff did his own analysis of 27 cases over a one-year period in 2015/2016, and found that half declined to provide immunity.
While Kosseff's review is thorough and engaging, I found his conclusions quite questionable. He argues that the erosion of Section 230 immunity should be rolled back and the broad interpretation re-adopted, despite the "tragedy and inequity" that it can cause. Those are Kosseff's own words; he also includes a quote from an opinion by the US District Court for the District of Columbia: "[i]t would seem only fair to hold AOL to the liability standards applied to a publisher or, at least, like a book store owner or library, to the liability standards applied to a distributor. [...] Congress has made a different policy choice." In another case, a judge writes, "the law requires that we, like the court below, deny relief to plaintiffs whose circumstances evoke outrage."
What justification does Kosseff give for tragedy and outrage? Innovation! He writes: "From Facebook to Yelp to Snapchat, platforms that rely on user generated content have been among the greatest Internet success stories. Section 230 has allowed that innovation to flourish and thrive."
Certainly we can't go back to the perverse incentives that existed pre-Section 230, but I disagree with Kosseff that we should go back to the early days of broad immunity. The exception for copyrightable material written into Section 230 shows that it is possible to affirmatively require content moderation at scale. Why is it that we carve out exceptions for bootlegging Marvel movies but not for sex trafficking victims or people taken in by credit card scams? It seems to me that we need a more nuanced approach to immunity, one that takes into account the resources available to the provider. A small startup shouldn't be required to moderate content at the scale we expect of companies like Google or Facebook.
Regardless of Kosseff's conclusion, this review was an excellent introduction to Section 230 case law, and I'm eager to learn more about the topic.
That's all for now. See you next month!