Daily Log Digest – Week 5, 2026
2026-01-27
Why software work estimations are hard
How I estimate work as a staff software engineer #software #estimates
Just putting it here so that the next time somebody comes along wondering about this, I can point them here.
I’m also going to concede that sometimes you can accurately estimate software work, when that work is very well-understood and very small in scope. For instance, if I know it takes half an hour to deploy a service, and I’m being asked to update the text in a link, I can accurately estimate the work at something like 45 minutes: five minutes to push the change up, ten minutes to wait for CI, thirty minutes to deploy.
For most of us, the majority of software work is not like this. We work on poorly-understood systems and cannot predict exactly what must be done in advance. Most programming in large systems is research: identifying prior art, mapping out enough of the system to understand the effects of changes, and so on. Even for fairly small changes, we simply do not know what’s involved in making the change until we go and look.
The pro-estimation dogma says that these questions ought to be answered during the planning process, so that each individual piece of work being discussed is scoped small enough to be accurately estimated. I’m not impressed by this answer. It seems to me to be a throwback to the bad old days of software architecture, where one architect would map everything out in advance, so that individual programmers simply had to mechanically follow instructions. Nobody does that now, because it doesn’t work: programmers must be empowered to make architectural decisions, because they’re the ones who are actually in contact with the code. Even if it did work, that would simply shift the impossible-to-estimate part of the process backwards, into the planning meeting (where of course you can’t write or run code, which makes it near-impossible to accurately answer the kind of questions involved).
In short: software engineering projects are not dominated by the known work, but by the unknown work, which always takes 90% of the time. However, only the known work can be accurately estimated. It’s therefore impossible to accurately estimate software projects in advance.
Intelligence and Wisdom
Why Intelligence Is a Terrible Proxy for Wisdom
Simply put: smart people, by virtue of being very fucking smart, are better at constructing post-hoc rationalizations for beliefs they hold for emotional or social reasons. Everyone does this to some extent. We form impressions and then search for evidence to support them. But intelligent people search more effectively. They find better evidence, or at least better-sounding evidence. They anticipate counterarguments and preemptively defuse them. They build fortresses of logic around conclusions they reached for entirely non-logical reasons, and those fortresses can become so elaborate and well-defended that the person living inside them never realizes they’re trapped.
Philip Tetlock’s research on expert political judgment found that the experts with the most impressive credentials and the strongest reputations for insight performed barely better than chance at predicting geopolitical events, and sometimes performed worse than simple algorithms. The experts who performed best tended to be what Tetlock called “foxes” rather than “hedgehogs,” borrowing from Archilochus’s ancient distinction. Hedgehogs know one big thing and apply it everywhere, while foxes know many small things and adapt flexibly. The hedgehogs were frequently the most intelligent and articulate members of the sample. They also consistently overestimated their own accuracy and failed to update their beliefs when predictions went wrong.
Intelligence, it seems, can produce a particularly fraught form of intellectual pride. You’ve been right so many times before, in so many situations, in ways that others couldn’t match.
Wisdom is knowing what you don’t know.
Wisdom is what tells you to ignore the memecoin / prediction-market bet, even though you could construct an excellent narrative explaining why this time will be different. Wisdom is what tells you that your political opponents might have a point, even though you could demolish their arguments in debate. Wisdom is what tells you not to install Clawdbot on your personal device and give it access to your banking details, even though you could become the next Tony Stark.
Intelligence can be measured on tests.
Wisdom is a good deal harder to quantify.
The Age of Pump and Dump Software
The Age of Pump and Dump Software | by Tautvilas Mečinskas | Jan, 2026 | Medium
The usual suspects are covered here: Cursor's browser, Yegge's Gas Town project and ClawdBot (now Moldbot).
I haven’t used any of them, and I won’t bother. Pump and dump does seem like an apt description. There is a larger story here about how little thought we give to maintaining so much of the software we build over the long run, and this gets turbocharged in the age of agent-assisted coding. I don’t view the projects above as long-running projects that will still be usable several years into the future.
However, there is an alternate narrative in which these projects are just playgrounds for experimenting with fresh new ideas, and it’s all just getting started.
2026-01-28
The Computational Case for Hypocrisy
The Computational Case for Hypocrisy - by Aditya Kulkarni #evo-psych #evolution #psychology
Training massive AI models like Gemini or ChatGPT is an exercise in brute force. It costs hundreds of millions of dollars and requires server farms the size of industrial parks. The result of this process is a “Base Model”—a frozen, complex network of mathematical weights that “knows” how to predict the next word in a sentence.
Humans have an equivalent “Base Model,” too.
It resides in evolutionarily older decision systems that operate largely outside conscious processes. Just like an LLM, this biological base model was pre-trained on a massive dataset: millions of years of evolutionary trial and error. Its weights are heavily optimized for a specific set of survival outputs: Consume high calories. Pursue mating opportunities. Dominate rivals.
As AI researchers have discovered, it is almost impossible to subtract from a neural network. If you take a fully trained neural network and try to force it to “unlearn” a core concept—or aggressively “retrain” it on new, contradictory data—you trigger a phenomenon known as Catastrophic Forgetting. Because knowledge in a neural network is distributed across billions of connections, you cannot simply isolate and delete a specific bad behavior without unraveling the rest of the system. If you force the model to unlearn “aggression,” you might accidentally degrade its ability to navigate terrain or recognize faces.
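To make the phenomenon concrete, here’s a toy sketch of my own (not from the article): fit a small network on one task, then aggressively retrain it on a contradictory one, and watch its skill on the first task collapse. The tasks and architecture are arbitrary illustrations.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny regression net stands in for the "fully trained network".
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.linspace(-3, 3, 200).unsqueeze(1)
task_a = torch.sin(x)    # original "core concept"
task_b = -torch.sin(x)   # new, contradictory data

# Train on task A until it's learned well.
for _ in range(2000):
    opt.zero_grad()
    loss_fn(net(x), task_a).backward()
    opt.step()
print("Task A loss after learning A:", loss_fn(net(x), task_a).item())

# Aggressively retrain on the contradictory task B.
for _ in range(2000):
    opt.zero_grad()
    loss_fn(net(x), task_b).backward()
    opt.step()

# Task A is now catastrophically forgotten: its loss shoots back up,
# because the same distributed weights were overwritten to serve B.
print("Task A loss after retraining on B:", loss_fn(net(x), task_a).item())
```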
When AI researchers want to “fine-tune” an AI model to learn a new, specific behavior, they often use a technique called Low-Rank Adaptation (LoRA).
Instead of melting down an AI model’s neural weights and recasting them, researchers have discovered that simply attaching a small, thin layer of new parameters on top of the model allows you to change its behavior. It is a lightweight mask that sits over the heavy, deep machinery. This “Adapter Layer” intercepts the output of the frozen model and steers it in a new direction.
To change the behavior, you don’t touch the foundation. You build an addition.
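For the curious, here is roughly what that “addition” looks like in code: a minimal LoRA-style adapter sketch, assuming a PyTorch setup. The class name and hyperparameters are my own illustration, not the article’s or the reference implementation’s.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a small trainable low-rank adapter."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the "foundation" is never touched
        # Low-rank factors: rank*(d_in + d_out) params instead of d_in*d_out.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen output plus a small trainable correction that steers it.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: swap a pretrained layer for its adapted version.
layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))  # only A and B receive gradients
```

Note that B starts at zero, so the adapter is initially a no-op: the frozen model’s behavior is fully preserved until training nudges the adapter away from identity.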
Evolution likely arrived at the same architecture…
Instead, evolution built a “LoRA Adapter”—the neocortical Press Secretary. This adapter doesn’t stop the impulse from firing; it layers a transparency over it. It translates the raw signal—“I want to eat this cake”—into the socially acceptable output: “I am carbo-loading for a run.”
The Press Secretary evolved because “re-training” the amygdala is practically impossible. It would be like reshooting an entire movie just to translate the dialogue into French. You don’t fly the actors back to the set; you just add dubbed audio. Hypocrisy is the adapter layer that allows a Paleolithic brain to operate in our modern civilization.
we've created a society where artists can't make any money
we've created a society where artists can't make any money #writing
I began to realize that many of the essays I read—in prestigious and well-known magazines—were edited and written and fact-checked by people barely able to make a living from their work. Many magazines were labors of love; others were underwritten by a generous donor or a government grant. (The London Review of Books, I learned, operates at a loss: £27 million since the magazine was founded. It’s thanks to a former editor’s family trust that they’re able to continue publishing.)
The writer W. David Marx’s latest book, Blank Space: A Cultural History of the Twenty-First Century (November 2025), argues that this narrative of decline is true—that art and culture are less innovative than before. I wanted to review Blank Space because Marx’s first two books (Ametora and Status and Culture) were exceptionally good…and because I wanted to understand if I agreed with him. Were things really getting worse? And did the question of money—how little of it there seemed to be, how precarious cultural labor was—have something to do with it?
You can read my review essay below, or on Asterisk Magazine’s elegantly designed website.
2026-01-30
Grindcore
Grindcore is the new hustle culture #technology #work #culture
In Silicon Valley, long hours have fused with a monastic male wellness aesthetic
But the “grindcore” lifestyle has taken on fresh intensity against the backdrop of a frantic San Francisco AI arms race, and growing anxiety among AI labs that a rival — or worse, China — might be the first to achieve AI supremacy.
It is not just tech bosses pushing the trend. Founders and engineers are jumping at the chance to broadcast how hard they are toiling. In September, dozens took to social media to announce their participation in what was dubbed the “great lock-in” of 2025 — in other words, spending the final three months of the year rejecting work-life balance to produce their most valuable labour yet.
Intriguingly for a world known for its badly dressed nerds, this narrative has been fused with a monastic male wellness aesthetic.
Instead of downtime enjoying the Californian sun and surf, grindcore adherents should fill the remainder of their day with workouts, Paleo diets and Chinese peptides. Many are embracing “manosphere” culture propagated by Maga-adjacent influencers that preaches antifeminist ideals and physiognomy.
“The current vibe is no drinking, no drugs, 9-9-6 [working from 9am to 9pm, six days a week], lift heavy, run far, marry early, track sleep, eat steak and eggs,” Daksh Gupta, the 23-year-old co-founder of an AI start-up, told the San Francisco Standard recently.