
L30 Transmissions

March 10, 2026

Preparing for Takeoff

To summarize my last transmission:
computing power will rule the 21st century: better get hold of some if you want a say in how it all plays out. That's not to say that it ought to. That power tends to accrue to those least worthy of it is a recurring tragedy of our condition, but taking the desire for change and transcendence seriously means that we have to act from where we actually are. Personally, I've let the despair that arises from the dissonance between how things are and how they should be wash over me, refused to be consumed by it, left hope behind, and moved on to the difficult work of shaping the noise into harmony.

To that end, in this transmission I'll try to lay a foundation for understanding possible economic futures and how the goals I outlined previously relate to each one. I hope it can serve as a starting point for thinking and planning, at least until the next inevitable paradigm shift makes it obsolete.

Our near-term economic future can be roughly divided into three plausible scenarios, one of which stands out as the most interesting:

Depression

Overspeculation in AI, combined with other factors, leads to another financial crisis. Here, it doesn't seem like there's much the average person can do to prepare beyond ensuring that they are not overinvested in US stocks, the dollar, or AI technology itself. At least computing power is likely to be cheap for a while if you find yourself with enough steady income to pay for it when the music stops. The research and engineering that could help us break our dependence on cloud capital would slow, but cloud capital's grip would also relax somewhat. It wouldn't be a bad outcome if it didn't cause so much pain, but given our society's tendency toward privatized gains and socialized losses, it's nothing to aim for.

Stagnation

Energy availability continues to be the bottleneck for AI infrastructure. A single AI query (the industry calls this inference) can use roughly ten times more electricity than a traditional web search by common estimates, and we may soon reach the physical limits of what algorithmic optimization, energy infrastructure, and the environment can support. As an engineer by trade, I find this the least anxiety-inducing scenario, since it means we can't simply point AI at every task that could possibly be automated and let it rip. Instead of rewarding those with the most money to throw at a problem, engineering elegance and efficiency would remain critical. Overall, this is a favorable outcome for digital sovereignty, although not a sustainable one, given the environmental impact. Without improvements to energy infrastructure, even a much more modest rate of energy consumption threatens to make the planet inhospitable for those of us who haven't yet finished construction on our New Zealand bunkers. Cloud capital could be weakened fairly easily by accelerating alternative solutions with AI, while the underlying economic dynamics remain mostly unchanged. Current tech giants may falter, but in this case as well, it will come at a cost to the majority of people.
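For a rough sense of scale, here's a back-of-envelope comparison in Python. The per-query figures are commonly cited estimates, not measurements, and the daily query volume is a hypothetical number I chose for illustration:

```python
# Back-of-envelope energy comparison. Per-query figures are rough,
# commonly cited estimates; treat the output as orders of magnitude only.

SEARCH_WH = 0.3        # est. energy per traditional web search (Wh)
LLM_QUERY_WH = 3.0     # est. energy per LLM query (Wh)
QUERIES_PER_DAY = 1e9  # hypothetical daily query volume

for name, wh in [("web search", SEARCH_WH), ("LLM query", LLM_QUERY_WH)]:
    mwh_per_day = wh * QUERIES_PER_DAY / 1e6  # Wh -> MWh
    print(f"{name:10s}: {mwh_per_day:6.0f} MWh/day")
```

At a billion queries per day, the same traffic jumps from roughly 300 MWh to 3,000 MWh per day when routed through an LLM, which is why every point of algorithmic efficiency matters in this scenario.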

Takeoff

The scenario which is somehow simultaneously the most obvious and the most surprising is that both energy production and algorithmic efficiency continue to improve at around the current rate. In the past three years, improvements in machine-learning algorithms have driven a roughly 1,000x cost reduction for AI inference; meanwhile, hundreds of billions of dollars are flowing into energy infrastructure. If this plays out as intended, the cost of inference will continue to drop by orders of magnitude, and any task that can be automated will be. Such a change would radically transform the economy, faster than any transformation we've seen before in history. Since this scenario requires only that things continue to progress as they are now, and since it directly threatens a massive number of jobs, including my own, it seems urgent to explore it hypothetically in order to arrive at a rough understanding of how to navigate it and what may lie on the other side. Because it would be fundamentally different from anything humanity has been through before, there is a greater risk of being caught off guard by it.

Sources

Two weeks ago, researchers Catalini, Hui, and Wu published a paper titled Some Simple Economics of AGI that formally models an AI takeoff scenario in terms of the dynamics that unfold when the cost to automate tasks falls while the cost to verify the output of automation remains bottlenecked by human biology. As always, it's worth noting the authors' incentives to assume that technological progress will continue unconstrained, but if that assumption proves true, their work is invaluable in preparing us for the coming changes. To avoid drifting away in a euphoric haze of techno-optimism, I will try to ground their analysis in the work of Yanis Varoufakis, an economist whom I inadvertently ripped off in my last post with the relationship (credit to @PoeticSuicide for pointing that out):

money <-> power <-> compute

Varoufakis would likely argue that the solutions the authors present individualize a collective problem and fail to account for the fact that capitalism has already morphed into technofeudalism, in which rents in the form of subscription fees, time, and attention already accrue to the owners of cloud capital. AGI may only make the divide between serf and technofeudal lord clearer and more difficult to overcome.

The Economics of an AGI Transition

Dropping the formalisms, the paper describes our scenario roughly as follows:

  1. Assuming the cost to automate tasks decreases while the cost to verify output stays flat, an increasingly large gap will open between what AI can do and what humans can reliably check (see the sketch after this list). From my own experience, I can already produce software much faster than I can verify that it does exactly what I intended. This change happened in such a dramatically short amount of time that many have yet to update their thinking to match the new reality. At a larger scale, this dynamic could lead to what the authors call a "Hollow Economy," where vast amounts of resources are spent doing work that provides no value to human beings and ultimately spirals out of our control. To Nick Land this would appear somehow preordained and also sound like a lot of fun, but most of us would rather do our best to avoid it.
  2. The cost of work where the output is measurable will be reduced to the cost of compute. For a given domain, if the validation of a product can be fully automated, then full automation of the production process will soon follow, and there will no longer be any human in the loop. As a result, the distinction between skilled and unskilled labor will no longer be as important as the distinction between measurable and unmeasurable tasks. The incentive to make outcomes measurable or even to fake their measurability will increase as companies move beyond just trimming the fat from an organization and into automating entire roles.
  3. As their taste, experience, and judgement become codified as training data, experts make themselves obsolete. As the friction that produced their expertise vanishes, there will no longer be a pathway for future generations to develop the understanding needed to oversee the AI systems that do the work. The authors suggest solving this by simulating that friction with AI training programs, arguing that this could also reduce the time to mastery for some fundamental skills, leaving more time for learning to oversee AI. It's doubtful that this kind of simulation would produce the same level of mastery that arises from real practice, but it may be sufficient to keep the machine humming along.
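To make the first dynamic concrete, here's a toy simulation. This is my own sketch, not the authors' formal model, and all of the numbers are invented for illustration:

```python
# Toy model of the verification gap: the cost to produce output halves
# each year, while the human cost to verify a unit of output stays flat.
# All constants are invented for illustration.

INITIAL_AUTOMATION_COST = 100.0  # cost to produce one unit, year 0
VERIFICATION_COST = 10.0         # human cost to check one unit, constant
BUDGET = 1_000.0                 # resources spent per year on each activity

for year in range(6):
    automation_cost = INITIAL_AUTOMATION_COST / 2**year  # halves annually
    produced = BUDGET / automation_cost        # units produced
    verifiable = BUDGET / VERIFICATION_COST    # units humans can check
    unverified = max(0.0, produced - verifiable)
    print(f"year {year}: produced {produced:6.1f}, "
          f"verifiable {verifiable:5.1f}, unverified {unverified:6.1f}")
```

By year 4 the model produces more than humans can check, and from there the unverified surplus compounds: a cartoon version of the Hollow Economy, where output grows without any corresponding growth in human oversight.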

In this model, that poetry, philosophy, or art history degree begins to look lucrative in contrast to engineering, where desired outcomes are known, measurable, and increasingly cheap to automate. Trades remain valuable until advances in robotics catch up to the level of automation we already see in software. More fundamentally, in any domain where results are difficult or impossible to measure, high-quality, human-verified data against which AI outputs can be checked becomes one of our most valuable resources. Interestingly, this is a resource produced by every one of us. Varoufakis suggests that the previous decades of mass data collection resemble an enclosure of the epistemic commons. In the same way that a feudal lord would seize land from people and then charge them rent to continue living on it, the leading AI labs have carried out a mass theft of human knowledge and experience by ingesting the entire internet at a time when its data was mostly generated by humans. This dataset represents a ground truth about humanity that will be impossible to replicate. Yes, it was full of noise and errors, but the noise and the errors were human. In a world where most data becomes AI-generated, that too is valuable signal.

This explains why Sam Altman is investing in a company that scans people's irises in order to prove that the data they generate comes from a human being. Verification-grade ground truth, the judgement of a trusted human expert that some output is correct, appropriate, and aligned with intent, becomes extremely valuable, especially in a world where the path for humans to become experts is eroding.

In response, we must learn to stop giving away our data so easily and for such a low price, particularly where it concerns our judgement and taste. The current generation of platforms preys on our lack of awareness of this data's value. We can't rely on AI labs to pay the debt they owe society; we have to begin guarding and collectively utilizing this resource if we hope to have any leverage in the future.

The most obvious consequence of an increasing gap between what can be autonomously executed and what can be reliably verified is an increase in the value of data we know to be human-generated and of real, hard-won human expertise in unmeasurable domains. But it also follows that the ability to direct intent will become even more critical than it is today. We can already see this in Silicon Valley's sudden obsession with the word "agency," and in companies beginning to reject applicants for being "mimetic" rather than "agentic." The argument is that while AI will be able to complete most measurable tasks, it will never be able to determine what is ultimately important to humanity. Those able to articulate desires that lie dormant in others and follow them through to their conclusion will continue to have a role in production, but only as part of production pipelines that rest on scarce, verification-grade ground truth. If all it takes to fulfill a desire is a single request to a frontier model, then it's effectively worthless; but if realizing the desire requires input across multiple domains of unmeasurable expertise arising from human experience, then it will continue to have value.

A logical initial response might be to collect more data on yourself, but to keep it for your own private use or for the potential later use of your community. It's striking how the same data collection that feels violating when done by those alien to us can become empowering when we do it ourselves. Keeping a journal is a form of collecting data on ourselves that's rightly encouraged, yet we generally wouldn't feel comfortable letting tech companies own that data. I'd suggest applying the same principle to our biometric and location data, our voices, our taste, images of ourselves, and our artistic creations: all things we currently tend to give to Big Tech for free. The only data you should put online is the data you want to become part of a training set for future LLMs. Otherwise, save it for yourself. As with keeping a journal, the value of your personal data increases when collection is sustained over a long period of time. A day's worth of biometric data doesn't tell you much, but a month's worth could contain important signals about your health.
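As a starting point, even a plain local database under your own control beats handing the stream to a platform. Here's a minimal sketch in Python, where the filename and schema are mine and purely illustrative; you'd want encryption at rest before logging anything sensitive:

```python
# Minimal local data log: appends timestamped entries to a SQLite file
# on your own disk. Filename and schema are illustrative placeholders;
# store the file on an encrypted volume before logging anything sensitive.
import sqlite3
import time

DB_PATH = "personal_log.db"  # stays local, never synced to a cloud service

def log_entry(kind: str, value: str) -> None:
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS entries (ts REAL, kind TEXT, value TEXT)"
        )
        conn.execute(
            "INSERT INTO entries VALUES (?, ?, ?)",
            (time.time(), kind, value),
        )

log_entry("biometric", "resting heart rate: 52 bpm")
log_entry("journal", "slept well, long walk in the evening")
```

Sustained over months, a log like this becomes exactly the kind of longitudinal, known-human dataset that the rest of this transmission argues is worth guarding.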

I'll begin acting on this insight myself. First, I will begin using a more private channel to distribute my future writing. I'll continue using this one only when I want my words to become training data. I'll continue working on developing technology that allows me to more effectively collect, secure, and use my personal data for my own ends. I'll share that tech with you once I believe it's sufficiently safe and usable. I hope that eventually it could help all of us to better navigate this brave new world.
