Hello, and welcome to the very first edition of this newsletter!
My goal is to explore the intersection of technology and governance. This includes questions about how we govern technology, and about how technology impacts how we govern. How does the product design of modern software impact community building? Should governments regulate Big Tech? How does the sociotechnical infrastructure of the internet allow or constrain privacy, equality or freedom? What's at the cutting edge of governance tech?
I also will, with some regularity, geek out about a paper or topic or news item that's "just governance" or "just technology". My newsletter, my rules. ;)
I'm still figuring out the structure and content of the newsletter, so please bear with me as these change from month to month. If there's something you especially like, or something you wish I'd include, please let me know!
📰 Google fires ethical AI expert Margaret Mitchell. This comes two months after Google fired Timnit Gebru for criticizing their treatment of marginalized voices within the company. Gebru and Mitchell co-led Google's Ethical AI team, and many of us have long wondered whether internal researchers would really be given the freedom they need to provide substantive critique of their employers. I guess we've got our answer.
📰 Facebook had a brief standoff with Australia over payments to news publishers. I can't speak to the specifics of Australia's law, the News Media Bargaining Code (the product of a five-year effort), but I generally support attempts to win back some of the money that Facebook and Google have appropriated from the struggling news industry. There were some positive signs during the five-day ban, including the sudden popularity of the ABC News app, which became the #2 app in the Australian app store. This thread by Cory Doctorow provides an interesting perspective.
📰 The Democratic Governor of Nevada, Steve Sisolak, announced in his State of the State address that tech companies would be invited to form their own local governments in Nevada. Sisolak has released no details yet, so this may just be a publicity stunt, but the labeling of these proposed districts as "Innovation Zones" is one more sign that this is part of the ongoing fetishization of technology, which I talk more about in this month's Deep Dive. Regardless, I have no desire to go back to Company Towns.
I recently finished reading and summarizing the Federal Trade Commission's suit against Facebook, which was filed in December 2020. You can find my in-depth summary here. It's a surprisingly gripping read, full of secret emails and the occasional redaction to add mystery.
This is just one of several lawsuits against major tech companies. The Attorneys General of 46 states, plus DC and Guam, filed a joint antitrust suit against Facebook on the same day that the FTC did. And of course there are other government agencies, in the US and outside of it, who have been filing suits, passing laws, and more. (For a discussion more focused on Google, see competition policy expert Charlotte Slaiman's recent appearance on The Weeds; I'm still looking for good summaries of what's happening outside the US.)
After years of ignored calls for regulation, it's gratifying to see the US government finally taking action. Yet I can't help but wonder: why now? Why have politicians been so hesitant to intervene?
To pluck one argument out of the ether: for years, I have heard people say that government is too "slow" and "bureaucratic" to regulate tech. Tech is way more efficient than government, the argument goes. If we let the government regulate, we lose those efficiencies - and all their attendant innovations. This argument is wrong on several levels.
First, it assumes that efficiency is always a good thing, but this is pretty obviously untrue. In the mid-twentieth century, IBM was very efficient at tracking people, and this efficiency was used to carry out the Holocaust. (You might think it is a cheap example to bring up genocide, but Facebook's technology has been used in a genocide too.) Governments are sometimes inefficient for good reason: because their decisions could cause immense harm and they want to consider the consequences. Decisions that can harm people should be subject to the admittedly slow process of democratic deliberation and oversight, not the glib irresponsibility of "move fast and break things".
Second, it assumes that governments are less efficient than private technology companies. This is undoubtedly true in some contexts, and just as clearly false in others. For example, despite the efforts of Louis DeJoy to hobble the US Postal Service (efforts plausibly motivated by his investments in USPS competitors, reportedly worth as much as $75 million), it remains the best option for delivering mail to many areas, with Amazon and other companies using it to deliver their packages.
Governments are often more innovative than private companies as well. The US Government created the internet, and it funded the key innovations behind many of the companies it now seeks to regulate, specifically Google and Apple. For more on this subject, see Mariana Mazzucato's The Entrepreneurial State.
If our government has proven it has the good judgment to invest in promising new technologies and the practical ability to implement efficient processes, surely we must at least entertain the idea that it can regulate effectively?
Third, the argument assumes that regulation is a barrier to efficiency and innovation, rather than a bulwark. But the effect of regulation depends entirely on the content of regulation and the context in which it's applied. The whole point of antitrust (at least as I understand it) is to recognize that behemoth, industry-dominating companies can cause inefficiencies and other harms if allowed to strangle competition. I couldn't find a comprehensive list of tech industry congressional testimony, but a number of smaller tech companies have testified to the harms caused by big tech. (See, for example, the testimony of David Heinemeier Hansson, CTO of Basecamp, last summer.)
It's hard to argue a counterfactual, but knowing the number of startups that have been "bought and killed" by big tech - and the even larger and more diverse set of organizations that couldn't find investors because they didn't have the exit strategy "get acquired" - it's easy to believe that the tech industry would be more innovative and more efficient if we'd been regulating all along.
I believe that US government agencies and politicians have failed to regulate before now because they fell for these illusory arguments about the desirability and causes of efficiency and innovation. Only when the harms of tech companies became so manifest that they could not be ignored did the fig leaves fall. And I worry that even if this group of tech giants is successfully regulated and the harms they're currently causing are stopped, we'll face the same problem again in fifteen or twenty years if we keep believing the myth that only big tech can build good tech.
I read a lot about tech and governance and how they overlap. Here are the highlights from this month, plus a few older favorites:
📚 Race After Technology by Ruha Benjamin
Despite what the title suggests, Benjamin's book isn't about how race influences technology, but rather about how race is a technology. She writes,
Human toolmaking is not limited to the stone instruments of our early ancestors or to the sleek gadgets produced by the modern tech industry. Human cultures also create symbolic devices that structure society. Race, to be sure, is one of our most powerful tools - developed over hundreds of years, varying across time and place, codified in law and refined through custom, and, tragically, still considered by many people to reflect immutable differences between groups. (p. 36)
Technology is created to solve problems, Benjamin argues. Race is a tool "designed to stratify and sanctify social injustice as part of the architecture of everyday life [...] a tool to denigrate, endanger, and exploit non-White people" (p. 17).
With this framing, Benjamin's deep and far-ranging survey of technologies with disparate racial impacts takes on a new urgency. We can see these case studies not as simple examples of bias, but as instances of White people knowingly or unknowingly perpetuating injustice, continuing to hone the tool of race.
Benjamin's survey is too extensive to summarize here, and you're better off reading the whole book anyway. But one example that is especially relevant in the middle of a global respiratory pandemic is the case study of the spirometer. Drawing on Lundy Braun's Breathing Race into the Machine, Benjamin recounts how the spirometer, which assesses lung capacity, was created with a built-in "Black button". Black people were believed to have inherently inferior lung capacity, and the Black button adjusted readings to account for that, with the consequence that Black patients had to show worse lung impairment than White patients to receive the same treatment. Although the Black button is now gone, many spirometers, now digitized, make automatic racial adjustments that physicians are likely not even aware of, according to an article in The Lancet. This is just one of the many technologies causing the disproportionate number of hospitalizations and deaths of Black people during the Covid-19 pandemic, and we have a moral obligation to dismantle them.
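If the mechanics of a "race correction" seem abstract, here's a minimal sketch of how one works in a digitized device. The structure mirrors the practice described above, but the reference equation, its coefficients, and the 0.85 correction factor are illustrative stand-ins, not values from any real spirometer:

```python
def predicted_fev1(height_cm: float, age: int, race_correction: float = 1.0) -> float:
    """Predicted 'normal' FEV1 (liters) from a toy linear reference equation.

    The coefficients here are hypothetical; real devices use published
    reference equations, some of which still embed race-specific factors.
    """
    baseline = 0.043 * height_cm - 0.029 * age - 2.49  # toy coefficients
    return baseline * race_correction


# A clinician enters demographics; the device silently applies a lower
# "normal" for Black patients (historically a 10-15% reduction).
white_norm = predicted_fev1(175, 40)
black_norm = predicted_fev1(175, 40, race_correction=0.85)

# The same measured lung function is scored differently:
measured = 3.5
print(round(measured / white_norm, 2))  # percent-of-predicted, White reference
print(round(measured / black_norm, 2))  # higher score: looks "less impaired"
```

Because the correction lowers the denominator, an identical measurement yields a higher percent-of-predicted score for a Black patient, so their lung function must be objectively worse before crossing the same diagnostic threshold.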
📚 The Code of Capital by Katharina Pistor (link)
First, to prevent confusion: this book is about legal code (and legal-adjacent code), not computer code. Pistor argues that people have always tried to use the law to privatize gains and socialize risk. Long before corporations, wealthy people in England used entails and trusts to protect their wealth from creditors. These systems began as private law which nevertheless relied on the implicit threat of state enforcement. Corporations are part of this same tradition.
Modern financial and corporate law depends on "conflict of law" rules, under which countries by and large defer to the contracting parties' choice of which jurisdiction's laws apply. As a consequence, most matters are subject to the laws of London or New York, even if none of the parties are in or from those jurisdictions. Modern companies also commonly use "ISDA master agreements", which allow investors to jump the line of creditors in a bankruptcy, a perversion of the idea that investors must take risks in order to reap rewards. As Pistor writes: "it is the shadow of coercive law enforcement that makes the commitments they craft credible and scalable. And yet, many lawyers will go to great lengths to avoid giving a court an opportunity to render a negative ruling on the legal coding they have employed for the benefit of hundreds, if not thousands, of clients [...] They depend on the authority of state law, but they avoid the courts, the law's traditional guardians, for fear they might interfere with their coding work."
Pistor recommends six major changes: ending conflict-of-law rules and requiring capital to follow the laws of the place where it resides; banning arbitration for investor-state and company-consumer disputes; removing all legal privileges for capital; allowing people to claim damages against capital ex post; resurrecting the principle that purely speculative contracts cannot be enforced in court; and making law school less expensive, so that lawyers can better serve the public.
📚 Hacking Code/Space: Confounding the Code of Global Capitalism by Matthew Zook & Mark Graham (link)
This paper actually is about software code, or rather about the amorphous combination of physical, legal, and digital codes that make up what the authors call Code/Space. It's a deep dive into Frequent Flyer miles hackers, a community that the authors themselves are a part of.
They write that global capitalism relies on information technology to manage interactions across code/space, but that individuals who interact with those systems can maintain agency by hacking them. These hacks are not equally available to all and, while they have unpredictable results, tend to benefit hackers. (This paper is not naively pro-hacker.) One of the most interesting recurring themes is how careful the hacking community is not to over-exploit loopholes, since doing so can provoke the more powerful entities running these systems to close them.
📚 Social Media and the Activist Toolkit by William Youmans & Jillian York (link)
Social media was credited with helping spark the "Arab Spring", but it's crucial to recognize that social media can serve activists and authoritarian regimes alike. This paper looks at four case studies of social media usage during the Arab Awakening and shows how the design and governance of social media platforms harmed or stymied activists. These case studies include activists removed from Facebook for violating the "real name policy", even though de-anonymizing could have gotten them jailed or killed, and videos of state repression removed from YouTube due to graphic content, possibly dampening protests. In the absence of government regulation, social media companies are left to balance the needs of activists against their own desires for profit, and the activists rarely come out ahead.
📚 Hidden Levers of Internet Control: An Infrastructure-Based Theory of Internet Governance by Laura DeNardis (link)
This paper discusses how the internet's infrastructure embeds political values in the design of its protocols, the registrar and domain name system, and in its backbone. Infrastructure sometimes becomes a proxy for content battles; for example, DeNardis talks about how the US Government can seize domain names and redirect them when websites provide illegal content, including copyrighted material. This is possible at least in part because of the semi-centralized nature of DNS registration. The increasingly centralized nature of infrastructure also makes it easier for governments to throw "kill switches", taking down networks in certain areas or at certain times, as Iran did during the 2009 presidential elections and as BART did in San Francisco in 2011, shutting down cell service during protests over the police killing of Charles Blair Hill.
The paper attempts to cover a lot, so everything gets a fairly surface treatment. If you're interested in a deeper dive, try DeNardis' dry but readable Protocol Politics, on the global, decades-long battle over IPv6.
A few quick things that I couldn't fit anywhere else:
📎 Back in June 2019, Lorelei Kelly at the Beeck Center for Social Impact and Innovation released a report, Modernizing Congress: Bringing Democracy into the 21st Century, and it remains required reading for anyone seeking to understand the limitations Congress is operating under. Two stats in the report particularly blew my mind. First: in 1789, the average congressperson represented 29,000 people. Now, the average member represents 755,000 people - roughly 26 times as many. Second: House members are allowed only 18 staff, and that includes constituent services at home in their districts. That's just an absurd amount of work to expect 18 people to do! The report dives into the consequences of these expectations, although it doesn't go too deep into the causes, perhaps out of a desire to remain non-partisan. (Spoiler alert: Newt Gingrich's Contract With America has played a huge role in this kind of understaffing.) Have a depressing quote as a takeaway:
“I look forward to the day that Congress has a staff as knowledgeable and experienced as the well-heeled lobbyists who are in their offices every day.”
— Meredith McGehee, executive director of Issue One, former Hill staffer
📎 I'm trying to understand more about the financial system, and I've taken to reading Matt Levine's Money Stuff. In a recent newsletter, he writes about the legal philosophy that "everything is securities fraud". Levine does a better job of explaining than I will, so click through if you're curious, but tl;dr: "securities fraud" means, more or less, defrauding shareholders - people who hold stock, which is a kind of security. When a corporation does something unethical or illegal that causes its stock to go down, it can be argued that this is a kind of fraud, since shareholders bought the stock on the premise that the corporation would hold to its stated principles.
Levine doesn't like the "everything is securities fraud" approach and I don't like it either, but I think people will keep pursuing it as long as there are no better avenues for holding corporations liable for the extremely damaging things they do. It reminds me a bit of cancel culture, actually. A lot of "cancel culture", which I'm defining here as an attempt to punish people through mass criticism on social media, arises in situations where there is no better way to hold the person accountable. Corporations make decisions that severely harm or even kill people with some regularity, and there's painfully little legal recourse when they do. People are going to yell on Twitter, and they're going to allege securities fraud. If you don't like it, build more accountable systems.
📎 I recently joined Ampled, which is a Patreon-like platform for musicians, but cooperatively owned by artists and workers. It's a really cool organization with very good documentation regarding their governance processes; I highly recommend checking it out and/or becoming a member, especially if you're a musician or a big fan of independent music.
That's all for this month! Feedback, questions, submissions, etc are all very welcome. See you next month.