Daily Log Digest – Week 35, 2025
2025-08-30
Anki Mastery Course
Anki Mastery Course | The AnKing
Someday I will master the intricacies of Anki
Learn everything you need to know about using Anki in a comprehensive series of lessons and video tutorials designed by the original AnKing team. Read about why we made this course here!
The Way to Coffee
The Way to Coffee is a passion project that has been running for 10+ years to promote dedicated specialty coffee shops, roasters, importers and farmers worldwide. It’s a resource platform for thousands of coffee and travel enthusiasts and closely monitors recent developments in the coffee industry to provide up-to-date, relevant content. On The Way to Coffee you find city guides featuring the best specialty coffee shops worldwide, brew guides and helpful tips to take your coffee business to the next level.
2025-09-04
You're Not Interviewing for the Job. You're Auditioning for the Job Title
I once read that "a complex system usually reflects an absence of good design." It's brilliant. True. And if you're prepping for a system design interview, forget it immediately.
In real-world engineering, simplicity is king. In interviews, complexity is currency.
Job interviews aren't assessments. They're auditions for a job title: The Architect Who Solves Hard Problems™.
You're not being evaluated on whether you can build the described system efficiently. You're being evaluated on whether you can perform the role of someone who could theoretically build Google.
I'm not advocating dishonesty, I'm acknowledging reality. Interviews are a ritual, and rituals have rules. Here's how to navigate them:
Separate Performance from Practice: Playing the interview game doesn't make you a hypocrite. It makes you pragmatic about a broken system. You can excel at interview theater while still being a principled engineer once you're hired.
Learn the Sacred Texts: Study distributed systems patterns even if you'll never use them. Memorize the CAP theorem even if it's mostly irrelevant to your daily work. Practice drawing architecture diagrams that look impressive on whiteboards. Think of it as learning a foreign language you'll only speak during interviews.
Embrace the Tropes: Always start discussions with "At scale, we'd need to consider..." Mention monitoring and observability early and often, even for simple systems. Add redundancy everywhere, even for non-critical components. Use the magic words that signal competence in interview-land.
Then Drop the Act: Once hired, advocate ruthlessly for simplicity. Be the voice of reason who asks "Do we actually need this complexity?" Use your hard-earned credibility to push back against unnecessary over-engineering. This is where the real engineering work begins.
The “Selvedge” of Knitwear
What is Loopwheeled Cotton? All About Loopwheel Goods #selvedge #cotton
The AI Jobs Crisis - Translator Edition
AI Killed My Job: Translators - by Brian Merchant #ai #jobs #automation
To wit: After I put out the call for AI Killed My Job stories, I heard from a lot of translators, interpreters, and video game localizers (essentially translators for in-game text, design and dialogue). Of all the groups I heard from, translators had some of the most harrowing, and saddest, stories to share. Their accounts were quite different from those described by tech workers, who were more likely to lament managements’ overuse of AI, a surfeit of dubious code in digital infrastructure, hasty layoffs, or the prospect of early retirement.
In an interesting—and rather telling—wrinkle to the AI boom story, many translators noted that generative AI didn’t usher in any revolutionary improvement to already-existing technologies that have been used to automate translation for years. Long before AI became the toast of Silicon Valley, corporate clients had been pushing lower-paying machine translation post-editing (MTPE) jobs, or editing the output of AI translation systems, though many translators refused to take them. Others said Google Translate had long been able to do essentially what ChatGPT does now.
Yet many describe a dramatic disruption in wages and working conditions over the last two years, coinciding with the rise of OpenAI. Though my sample size is small, these stories fit my thesis that the real AI jobs crisis is that the drumbeat, marketing, and pop culture of "powerful AI” encourages and permits management to replace or degrade jobs they might not otherwise have. More important than the technological change, perhaps, is the change in a social permission structure.
Not one but two accounts detail how many translators dismissed ChatGPT at first, because they’d heard companies tout many automation technologies over the years, all with limited impact—only to see the floor drop out now. And it’s not that ChatGPT is light years better than previous systems (lots of post-AI translation editing is still required); it’s just that businesses have been hearing months of hype and pontification about the arrival of AGI and mass automation, which has created the cover necessary to justify slashing rates and accepting “good enough” automation output for video games and media products. Everyone else is doing it, after all.
2025-09-05
LLMs are slot machines
Pluralistic: LLMs are slot-machines (16 Aug 2025) – Pluralistic: Daily links from Cory Doctorow #llms #ai #coding
Glyph proposes that many LLM-assisted programmers who speak highly of the reliability and value of AI tools are falling prey to two cognitive biases:
The "availability heuristic" (striking things are easier to remember, which is why we remember the very rare instances of kids being kidnapped and killed, but rarely think about the relatively common phenomenon of kids dying in boring car-crashes); and
The "salience heuristic" (big things are easier to remember, which is why we double-check that the oven is turned off and the smoke alarms are working after our neighbor's house burns down).
In the case of LLM coding assistants, this manifests as an unconscious overestimation of how often the LLM saves you time. That's because a coding program that produces a bug that you have to "futz with" for a while before it starts working is normal, and thus unmemorable, while a coding tool that turns a plain-language prompt into a working computer program is amazing, so it stands out in your memory.
But that's not the only way in which an LLM coding assistant is like a slot machine. Reg Braithwaite proposed that AI companies' business model is also like a casino's, because they charge every time you re-prompt the AI. He writes:
When you are paying by the "pull of the handle," the vendor's incentive is not to solve your problem with a single pull, but to give the appearance of progress towards solving your problem.
But there's an important difference between an intern and an LLM. For a senior coder, helping an intern is an investment in nurturing a new generation of talented colleagues. For a reverse-centaur, refining an LLM is either an investment in fixing bugs in a product designed to put you on the breadline (if you believe AI companies' claims that their products will continue to improve until they don't need close supervision), or it's a wasted investment in a "dense intern" who is incapable of improving.
AI Psychosis
Found in this article by Ted Gioia: Our Shared Reality Will Self-Destruct in the Next 12 Months
In this new degraded world, we will see these six behavior patterns from everybody, even (or especially) those who under other circumstances would be well integrated into their communities:
Skepticism: If events can’t be validated, I can’t give credence to anything.
Aloofness: If everything gets called into question, I have no basis for shared communal actions.
Silence: If discussion no longer resolves anything, I have no purpose in speaking.
Indifference: As I lose connection with people and events, I lose interest in them.
Distrust: In a world without shared reality, no expert or institution can earn my total trust.
Hostility: As these traditional connections break down, it doesn’t take much to set off conflicts and violence.
Lithuania and The Digital Euro
In Lithuania, the Digital Euro Is No Longer Theory — It’s Infrastructure
According to data from the Bank of Lithuania, the country is almost entirely dependent on international card schemes for everyday payments. Lithuania, like 13 other eurozone countries, currently has no domestic card system — a dependency that European officials describe as a strategic vulnerability.
In 2022, the EU paid an estimated €1 billion in card fees to U.S. providers. Lithuania alone handles nearly all of its digital transactions through Visa, Mastercard, Apple Pay, and Google Pay.
“That kind of reliance on external infrastructure isn’t sustainable,” Lasmanis says. “Especially when you consider that geopolitics now includes cables, chips, and payments.”
The goal is not to eliminate cash, but to create a parallel, digital means of payment: one that is free to use, widely accepted across the eurozone, and capable of functioning even during internet outages or political instability.
Key features include:
Offline payments, even without mobile or data signal
Free basic services for individuals, including transfers and point-of-sale payments
No commercial data harvesting, with strong privacy guarantees
Programmable capabilities for governments (like automatic tax refunds or disaster relief)
The plan, currently in the preparatory phase, is to roll out basic infrastructure by 2027, with a full rollout by 2030. The ECB emphasizes that it has no commercial interest in transaction data and that privacy will be “as close to cash as possible,” particularly for offline transactions.
“It’s designed to be neutral and foundational,” says Christine Lagarde, ECB President, in a recent speech. “A public option for digital money.”
The evils of social media
what the evils of TV reveal about the evils of social media
The important thing to remember about “engagement” is that it started out as a metric for “attention” but has since become the target. Now people make ragebait and clickbait just to generate engagement to go viral, giving us all content we would rather not see. This is an inherent problem of social media that didn’t exist on TV, and affects our cultural conversations at large.
I feel like echo chambers have been talked about ad nauseam, but the fragmentation of content consumption is probably an equal threat to our collective well-being. People have come to expect tailor-made videos on their “For You Pages,” creating a more individualistic culture where everybody wants to be the “main character,” and reducing our sense of community with one another.
Today, social media makes those decisions for you; the implication is that the algorithm already knows what you want to see. Even when it seems like you do have choices, like with long-form content on YouTube and Netflix, the choices that are presented are algorithmically predetermined. The simple difference of not being able to choose what channels you’re watching means you’re playing a less active role in shaping your own identity.
The instant access and connectivity completely changes our interaction with the medium, engendering a greater sense of immediacy and further blurring the line between media and reality.
…
The phone is the culmination of the other dangers—it’s simultaneously designed to be engaging, and personalized, and remove agency. It makes sense that social media as a whole mirrors these attributes.
Online Disinhibition Effect
Insulation Makes Artists and Assholes - by Josh Zlatkus
Taken together, these protections make digital life a textbook case of evolutionary mismatch. Humans evolved in small groups where every word and action carried physical, emotional, social, and reputational consequences. When those consequences are diluted, distorted, delayed, or erased online, people unsurprisingly act like jerks. Psychologists even have a name for it: the Online Disinhibition Effect.
Well, the distortion—or downright absence—of social feedback from online environments creates something of an incubator for behavior that would not be viable in the face-to-face settings humans lived in until very recently. This is why I was not surprised that Musk might have become his Twitter personality: having successfully product-tested a new personality online, he felt comfortable bringing it into the real world. Twitter gave him the chance to try out a side of himself that may not have gotten off the ground otherwise. The same has been true for millions of people on thousands of platforms worldwide.
Viewing human behavior through the lens of self-interest can feel bleak, especially if you were raised to believe people are naturally selfless. But to me, it isn’t depressing. It’s clarifying. It gives society a clear goal: create conditions that channel selfishness into cooperation. In other words, get the incentives right.
Evolution has already solved much of this puzzle. Emotions like anger, empathy, shame, and gratitude both advance the selfish gene and hold groups together. They represent a blueprint for “selfish cooperation.” As we continue to build new environments, we should be careful about tampering with these ancient levers. Predictable results follow when people can act anonymously, with no reputation at stake, or when they exist disembodied, with no risk of a punch in the face.
By scrambling the old checks and balances on behavior, the Internet helped engender a version of Elon Musk—and countless others—that would never have existed otherwise.
Is ADHD Real
Went back and read this article again: Is ADHD Real? - by Josh Zlatkus - Living Fossils
Mostly because this article showed up on HN: Notes on Managing ADHD | Hacker News
After having read so many Living Fossils articles, this article and the comments on HN seemed so ham-fisted. Instead of going deeper into the subject matter, everyone is just posting "hacks".
Here was one sensible comment, though:
Further, of course ADHD has a biological cause - human beings are biological beings, so every human behavior has a biological cause when you come down to it. But the implication that prescription drugs are designed based on a deep and verified understanding of the mechanisms of ADHD is completely false - ADHD drug prescription, like all behavior-altering drug prescription, is based on just "bucket chemistry", maybe-educated guesswork. Which isn't to imply drugs don't work for some people. But I think it's important to be clear that the various drugs aren't ADHD cures in the way that antibiotics are cures for infection. But again, I support the right of people who want ADHD drugs to have them. But I think drug use shouldn't be automatic.
Below are the quotes from the Living Fossils article that I found useful, especially the evolutionary approach to overcoming ADHD-like symptoms.
To me, the evidence is clear and the logic straightforward. ADHD isn’t a “disorder” of the person as much as it is of the modern world and its expectations. People with ADHD are probably part of a normal spectrum, living in an abnormal and unfortunate (for them) world. We could even say that the modern world preys on the distractible. The easier it is to grab a piece of someone’s “mindshare,” the better for those who can monetize it.
Finally, remember that the reason a spectrum of distractibility evolved is that in some situations it will be good, and in others bad. High distractibility or impulsivity isn’t bad in general, just in specific circumstances. The way the environment has changed since hunter-gatherers roamed the earth has been in the direction of rewarding those who have lower distractibility and less impulsivity. But each of us has the power to shape our environment to some extent. For example, travel, socializing in big groups, and certain kinds of jobs might all benefit from higher distractibility and more impulsivity. These traits will obviously interact with other dimensions of personality, e.g. introversion/extroversion, but by themselves will thrive in some situations as they detract in others.
There is no doubt, though, that people vary in the dimensions that ADHD tries to measure. So what’s a person to do who is (relatively) highly distractible, inattentive, hyperactive, and/or impulsive?
The evolutionary approach is typically much more straightforward, practical, and realistic than alternatives. Instead of assuming that there is something wrong with the person, it locates the problem between the person and their environment. And it locates the solution there, too. Here are a few solutions that seem like easy pickings to me:
Develop a healthy lifestyle:
Exercise (especially in natural settings)
Sleep more or less, depending
Watch diet (sugar and caffeine in particular)
Meditate (one of the easier ways to reacquaint yourself with slow thinking)
Reduce or eliminate routine distractions:
no phones in schools; fewer notifications on phone; keep phone silent and hidden if you can; delete time-consuming apps
close out of email, or pause it, for meaningful chunks of time at work
make anything analog that you can (print out recipes, read physical books)
Create focus:
lean into structure and routine (read every day during the same block of time, make the same lunch throughout the week)
prioritize long-form activities (walk with a friend, clean entire apartment)
Lower expectations:
be OK with doing less