Follow-Up on Environmental Concerns About AI Use
I wanted to offer some clarifications and extensions of points I made last week, since I did not mean at all to absolve the AI industry of environmental harm.
My focus in discussing AI and the environment was entirely on the stakes of classroom use. I do not think guilt or shame over use is particularly productive for educators, for a few primary reasons. I'm drawing the basic data from Andy Masley's Substack, while adding some of my own arguments about its significance for how we think about ethics in teaching.
The AI build-out is happening, and marginal withdrawals from consumer or educational use will not have any effect on the scale of economic investments in data centers (which are indeed astronomical).
Even if they did, not using a chatbot does not amount to withdrawing from AI or data center use. You would have to withdraw from the web in a more capacious way than is possible for teachers in modern teaching institutions, and certainly impossible for our students.
The actual energy and water consumption of chatbot text prompting is minuscule on three comparative scales: a) your daily water use and carbon footprint, b) the share of data center (and specifically AI) computational draw allocated to text output versus other business functions, and c) overall global carbon emissions and water use.
Consumer choice environmentalism is basically ineffective. There are a few genuinely high-impact choices: stop flying, install solar panels, drive an electric car or no car at all, go vegetarian or vegan. Andy Masley offers a chart in the above-linked article comparing personal choices. There are real things you can do to reduce your impact! This isn't it! But even then, we are not dealing with the massive systemic transformations that are the only way to limit warming. And the risk of any personal consumer choice activism is that it becomes an in-group shibboleth for an elite. Most of these options are inaccessible to many (if not most) people for economic reasons; Teslas are very clearly status symbols, for instance. Imagine trying to build a shame culture around using the fastest-growing application of all time when abstaining does not even meaningfully reduce your personal carbon footprint! This is not how we build a sufficiently large coalition to fight climate change; it's how we marginalize ourselves into smaller and smaller in-groups that alienate everyone else.
However, I do not agree with all of Masley's conclusions, nor do I think he has completely closed the book on the AI-and-environment question. If his numbers are right, we should pay attention to them and internalize them, but questions and concerns remain. I want to raise two below.
1: This critique of Karen Hao's Empire of AI (a book I still need to read) and the exchange with her in the comments may explain why I think we aren't quite finished with the discussion. As far as I can tell from my own (non-expert) look into the studies in question, Masley is correct about the overall water use numbers. Data centers are just not "consuming" a lot of water compared to other industrial uses (or fucking golf courses), and a lot of the breathless coverage has relied on conflating a bunch of things and getting the math wrong. In her comment, Karen Hao essentially concedes that the math came from her sources, and those sources seem to have gotten it wrong. But she points to a large philosophical difference here, which Masley denies… and well, there is one as far as I can see, and it matters, because Masley misses some important things due to his own liberal capitalist environmentalist blind spots.
Karen Hao's larger point is that AI is an imperial project of extraction. Masley's perspective is not AI boosterism, but it is narrowly focused on the pragmatic environmental activist question of "is this a good use of environmentalist energy to be so hostile to AI?" At a certain focal length, I think Masley is correct, at least right now. And he's doubly correct that of all things, potable water use is not the problem here. The New Yorker dealt with this recently, quoting an alfalfa farmer who notes that his farm uses more water than the new data centers being built nearby. The problem with Masley's account is not the numbers but the value proposition he offers for data centers to local communities.
He proposes we weigh water use against tax revenue. All industry needs water, and localities need industries for jobs and tax revenue. Data centers don't offer many jobs once they are built, but they do offer a great exchange rate of water for taxes. The assumption that this benefits local communities is a Global North assumption (one that may not even hold for many communities in the Global North): that tax revenues get funneled back into local infrastructure, services, and social safety nets. It's honestly a little striking that he doesn't seem to know this is so often not what happens in Global South countries subject to forms of neo-imperialism. The examples in their exchange are Uruguay and Chile. In Uruguay, the national debt is nearly 70% of GDP. I am not knowledgeable about Uruguay specifically, so the details may differ, but I can break down what this debt level typically means for Global South nations, and why it matters in a way the higher U.S. debt-to-GDP ratio of roughly 120% does not. If you know the World Bank/IMF and SAP story, you know what I'm about to articulate. For those who don't, here's a rough overview.
The U.S. does not have to service its debt the way most other countries do. We have a sovereign currency. Global economic growth is constantly funneled back to the U.S. because the dollar is the global exchange currency and the U.S. can (and does) print new dollars. This is not to downplay that printing money too fast causes inflation, which is a real problem in the U.S.; nevertheless, the U.S. has a unique capacity to bail itself out that Uruguay does not. Individual states and municipalities can and do have debt crises, but not the feds.
Uruguay's debt is not in Uruguayan pesos but in U.S. dollars, and it is almost certainly held by Global North banks in the U.S., London, and Frankfurt. As a condition of continued development loans, Global South countries with high debt levels have been enrolled by the IMF in Structural Adjustment Programs (SAPs): essentially, control of a country's finances by external (non-democratic) parties that force austerity and underdevelopment in the name of servicing debt. Austerity is always self-defeating (just ask Greece); rather than reining in spending, you need to spend money to develop the infrastructure that could generate the economic growth needed to get out of debt. One reason the U.S. sustains a high debt-to-GDP ratio is that it can bet on growth investments and on that 7-10% annualized return on the S&P; even if we don't have the money this year, we will have it next year. SAPs impose austerity that limits the developmental growth of Global South nations, the very growth that could actually get an economy out of debt. As a result, Global South countries often live under permanent neo-imperial financial management regimes.
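The growth-versus-austerity logic above can be made concrete with the standard debt-dynamics rule of thumb: the debt-to-GDP ratio shrinks when growth outpaces the interest rate and balloons when austerity suppresses growth below it. Here is a toy sketch; all the specific rates are illustrative assumptions on my part, not figures from Masley or Hao.

```python
# Toy debt-dynamics sketch (all rates are invented for illustration).
# Ignoring new deficits, debt-to-GDP evolves roughly as:
#   b_next = b * (1 + r) / (1 + g)
# where r = interest rate on the debt and g = nominal GDP growth.
def debt_to_gdp_path(b0, r, g, years):
    """Return the debt-to-GDP ratio after `years` of compounding."""
    b = b0
    for _ in range(years):
        b = b * (1 + r) / (1 + g)
    return b

# The U.S. bet: growth (g) outpaces interest (r), so even a 120% ratio drifts down.
growth_case = debt_to_gdp_path(1.20, r=0.03, g=0.05, years=10)

# The SAP trap: austerity holds growth below the interest rate, so a 70% ratio climbs.
austerity_case = debt_to_gdp_path(0.70, r=0.06, g=0.01, years=10)

print(growth_case, austerity_case)
```

The point of the sketch is structural, not predictive: under these assumed rates, the higher starting ratio falls over a decade while the lower one rises, which is why "70% of GDP" can be a far heavier burden for Uruguay than "120%" is for the U.S.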
Tax dollars in these countries are extracted to service debt and to pay off a local comprador elite that keeps the country beholden to global finance. Individual cases are always full of complexities and exceptions. But we should never simply assume that tax dollars flow back into local services and economies rather than upward to the national elite and the global financial capitals. If Masley wants to argue that data center construction brings local benefits in Uruguay and Chile, he needs to do more than gesture at tax dollars. Maybe that works for Maricopa or Loudoun County in the U.S. (though probably not in Memphis), but it does not work in South America.
So, while it is important to get the numbers right, and Masley is correct to examine that, there is also a philosophical qualitative question: is this extractive? Local activists certainly seem to think so!
Whether or not it is a lot of water, it is still some water (a local resource) being used by a global capitalist firm without a clear benefit to the locals, who also need water. I hope Karen Hao continues this conversation and addresses this difference, because I think it's a genuine blind spot in Masley's argument. I would also like to dream that we can have disagreements like this without getting personal and self-righteous, because we leftists are going to need to work with people coming at this from Masley's perspective. It's worth seeing the conversation through, because once we start seeing the same things, we may find more agreement than disagreement. Unfortunately, I do think Masley is getting a little aggressive toward Hao in his latest reply. People online are sore winners!
The larger point left unaddressed, then, is the moral hazard always implicit in capitalist growth in an unequal world. Harms and risks are localized in communities that lack equal economic or political power, and as a result the system is less attentive to those harms and risks (or, to call it what it often is, slow violence). In The New Yorker article, you see the worst outcome of a kind of cheap anti-AI activism that worsens rather than alleviates these problems. Rich communities protest data centers in their backyards (even though they can probably absorb the increased energy costs), so the centers get built elsewhere, next to people who have less visibility into and control over land use. Environmental injustice is compounded, particularly because capitalists are less likely to follow the same safety standards there. I would guess this is part of what happened in Memphis.
2: Growing moral hazard also points toward real problems we may face with AI and the environment when it comes to scaling (which I alluded to in my last newsletter). Once again, The New Yorker article gives some details: green and nuclear energy may not be enough to keep up with the exponential demand for data center computing power. Roughly, we can project one future with a moderate AI growth curve, where we can build green infrastructure alongside the growth, and another with a much steeper curve, where the only way to meet energy demand is new fossil fuel plants. We are currently on the latter trajectory, and we need to be on the former. Data centers may not currently be significant CO₂ emitters, but if growth continues at this rate, they will keep driving demand for new power sources.
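The two-futures projection above is really a race between compounding demand and roughly linear green buildout. A minimal sketch, with every number invented purely for illustration (none come from The New Yorker or Masley), shows why the steepness of the curve, not the starting size, is what decides whether a fossil fuel gap opens:

```python
# Illustrative-only sketch: exponential data center demand vs. a fixed
# annual pace of green capacity additions. All quantities are in
# arbitrary units and all rates are assumptions, not real-world data.
def years_until_gap(demand0, growth_rate, green0, green_added_per_year, horizon=20):
    """Return the first year demand exceeds green capacity, or None within the horizon."""
    demand, green = demand0, green0
    for year in range(1, horizon + 1):
        demand *= 1 + growth_rate          # demand compounds each year
        green += green_added_per_year      # buildout proceeds roughly linearly
        if demand > green:
            return year                    # the gap must be filled by fossil fuels
    return None

# Moderate growth curve: buildout keeps pace for the whole horizon.
moderate = years_until_gap(demand0=10, growth_rate=0.10, green0=12, green_added_per_year=3)

# Steep growth curve: the same buildout pace is outrun within a few years.
steep = years_until_gap(demand0=10, growth_rate=0.40, green0=12, green_added_per_year=3)

print(moderate, steep)
```

Under these assumed numbers, the moderate curve never opens a gap inside the horizon while the steep curve does almost immediately, which is the structural worry: a gap, once opened, gets filled by whatever plants can be built fastest.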
I think Andy Masley might counter that this just isn't a significant emissions sector to begin with, and that its scaling is unlikely ever to reach emissions comparable to, say, steel or transportation and shipping. But the problem is that rapid growth is locking in fossil fuel expansion (and profits) rather than allowing for a slower green energy transition. Whatever its percentage of overall emissions, AI data center buildout is demanding energy at a rate that forces more energy-sector growth into fossil fuels at exactly the moment we need to be slowing that down for green adaptation. Maybe, lacking the math and data chops, I'm missing the trees for the forest, but my concern about the direction we are heading remains even after reading many of Masley's pieces.
The inflection points here are local permitting, national emissions regulation, tech regulation, sustainable economy adaptations, and all the other targets of ongoing climate activism across industries and governments. Fossil fuels, not energy per se, are the problem. I would never come out and defend big tech; what I'm doing is trying to direct anger and protest where they are most productive and help people understand the details as best I can. How do we force tech and economic growth inside the boundaries of future environmental habitability? The answer is still limiting and banning fossil fuels while increasing energy efficiency and expanding green energy production as fast as we can. In the meantime, there is value in generating friction on overall economic growth so green energy can catch up, but I don't think refusing to use chatbots will achieve that.
Now that I've moved from a completely AI-hostile position (which, let's be honest, was severe cope) to an AI-skeptical but curious one, I'm still trying to get my head around the tech itself, where it's going, and what it means for the personally important question of whether we teachers keep our jobs. I liked this other New Yorker article as an introduction and overview. I'll probably dig into this topic soon as I catch up on some of the major books.