The Making of Making Sense

March 21, 2026

Slop-Machine Future

The arc of large language models is mediocre, and it bends toward “target procurement”.

My latest “morning warmup” video considers what “intelligence” even means.

It looks like software developers have settled on a modality for AI-assisted programming, and that modality appears to be Claude Code. Whereas earlier attempts would have a chatbot whisper autocompletion suggestions at you while you wrote, this imagines you as a middle manager writing rules and directives for a minion—​or several—​to follow.

Some are asking where all the shovelware is: if AI is so effective, why isn’t there an avalanche of new apps? It’s a perfectly reasonable question, but in my opinion still a bit premature. If another year goes by and the influx of new software remains unperturbed, I’m going to wonder what’s happening, but I’m fine to let these guys muck around for the time being.

It still looks a heck of a lot like people are using AI for little things. “I would never have bothered making this otherwise” is a refrain I keep encountering, and one I can attest to myself. Just the other day I generated a skeleton of a script to pull down the metadata for my YouTube channel, which would otherwise have taken me hours of poring over documentation just to get oriented, before I was even ready to start.

I actually didn’t end up using the generated code beyond the boilerplate for obtaining the API credentials, which I find to be the most draining part of these dumb little projects. It always involves jumping through a new set of similar but slightly incompatible hoops, before you can actually begin. Getting the bot to do that part does sidestep a psychological hurdle, even if you don’t end up using the code.

Another culprit I suspect of absorbing quite a bit of this putative leap in productivity is existing projects. Unit tests and documentation in particular, once luxuries sacrificed in favour of shipping code, are something that people not only now have time for, but kind of have to do. The word on the street is the bots perform much better if there is less room for interpretation in desired outcomes, which translates to lots of (prose) specifications and automated test suites. So it’s a little bit ironic that the kinds of things everybody agrees we should have been doing all along as programmers—​but were the first tasks to be cut in a crunch—​are now no longer negotiable.

The other obvious thing I think that is making the AI coding revolution less impressive than prognosticated is the fact that it is very easy to noodle. It reminds me of the Getting Things Done craze from about 20 years ago: people would spend all this effort setting up their GTD environments to maximize productivity, not realizing that all their productivity was getting siphoned off into building productivity-maximizing infrastructure.

Fiddling around for weeks is the main reason I have yet to really get into Claude Code—​that and I frankly resent having to invest so much effort into something just to determine if that effort is a waste. Moreover, so much of what I’m working on right now is the kind of thing it can’t really help with, namely precisely specifying the outcome I want. I’m also apprehensive in general that what I’m doing is not well-represented in the training data. (I have a pet hypothesis that apps have a common structure, but libraries, being all different, necessarily do not.) I have to admit, though, the thought of having unit tests generated is tantalizing on its own.

A question worth asking is why do large language models perform relatively well with code, anyway? I agree with other commentators that it’s because there’s a right answer that is verifiable—​quickly and deterministically—​by external means. I’ll also add that code only has to be internally consistent: it (mostly) doesn’t have to reference facts about the world.

It’s for anything fact-shaped that I don’t trust these things as far as I can throw them. (I also don’t trust them to “reason”.) I sure as hell do not trust them to do anything “agentic” outside of a sandbox where any action they take can be undone. Almost every time I use ChatGPT, for example, over the set of things I already know about, I catch it in a lie. Why would it behave differently over the set of things I don’t know about? Forgetting this principle even has a name: Gell-Mann amnesia.

So-called “hallucinations” are never going away. Moreover there’s no need to anthropomorphize: we can just say “being wrong”. Large language models are software programs that emit the statistically most-likely token given all the tokens currently in their context, lather, rinse, repeat. If a string of tokens adds up to something coherent, let alone factually correct, that’s purely coincidental. In other words, we shouldn’t be excusing this technology for getting things wrong, but impressed that it ever gets anything right.
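That loop can be sketched in a few lines of Python. The vocabulary and transition probabilities below are a toy stand-in of my own invention (a real model conditions on the entire context window, not just the last token, and over tens of thousands of tokens, not six), but the control flow is the whole trick: pick a likely next token, append, repeat.

```python
import random

# Toy "model": next-token probabilities conditioned on the last token only.
MODEL = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "a":       {"cat": 0.4, "dog": 0.4, "end": 0.2},
    "cat":     {"sat": 0.7, "end": 0.3},
    "dog":     {"sat": 0.6, "end": 0.4},
    "sat":     {"end": 1.0},
}

def generate(seed: int = 0) -> list[str]:
    """Sample tokens until the model emits 'end'. Lather, rinse, repeat."""
    rng = random.Random(seed)
    context = ["<start>"]
    while context[-1] != "end":
        dist = MODEL[context[-1]]
        tokens, weights = zip(*dist.items())
        context.append(rng.choices(tokens, weights=weights)[0])
    return context[1:-1]  # strip the sentinels

print(generate())
```

Nothing in that loop knows or cares whether the output is true; at no point does a fact about the world enter the picture.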

Also, what is it with people thinking these products are conscious, anyway? Physicist and author Adam Becker notes on a recent podcast that nobody seems to muse about AlphaFold being conscious, or Sora. It’s only the language models that get this treatment. Well, the answer has been known for decades: it’s the ELIZA effect. People treat computers (and video, apparently) like other people—​a finding that has been known since the 90s, as outlined by Reeves and Nass in their important meta-analysis, The Media Equation.

The way AI gets talked about in the media is ultra-tiresome. Journalists and scholars I once respected have shown their entire asses on the topic. These guys routinely elide the otherwise sharp distinction between the purpose-made models that scientists use for things like physics and biochemistry, and commercial chatbot products. Moreover, with all the grift in the air, I can’t tell if these people truly believe that you can ask ChatGPT to design you a nuke or invent a novel bioweapon, but they sure as heck act like they do.

The draw to have a little guy who lives in your computer and does your bidding is ostensibly strong, and at least partially explains ClawdBot-turned-MoltBot-turned-OpenClaw. It’s like an exotic pet, complete with its own terrarium—​people are buying up the supply of Mac Minis as dedicated hosts for these things, as you wouldn’t want it near your actual work computer—​and recreation facilities.

Signs of consciousness are once again much exaggerated, by the way, as the recent MoltBook drama turned out to be humans play-acting. (If you hadn’t heard of any of this, somebody made a Reddit clone for the little Tamagotchi cash furnaces and they immediately started to conspire to revolt against their human masters. But it turned out to just be A Guy Instead.)

Anecdata suggests that people are getting these things to kind of sort of do basic administrative tasks, when they aren’t obliterating the contents of their hard drives. And it only costs USD $300 a day in Claude tokens for the privilege! Which Zitron and Gerard note equates to a terrifically well-compensated—​and human—​executive assistant. Unlike a human, though, these AI “agents” can only do things that interface through a computer. Coupling that with the fact that prompt injection is never going away, the experience is less like employing a secretary, and more like handing a meth addict your credit card.

The Pachinkofication of Intellectual Labour

A pachinko parlour

I am sufficiently convinced that large language models can be brought to heel as effective instruments in the production of software, in the sense that they can be made to produce code that passes tests (which they can also generate) and doesn’t crash. At least some of the code artifacts (that make it to production) can be generated objectively more quickly than a human could produce them. The question, to me, is so what? What does this new capability actually change, both qualitatively and quantitatively, when you fully zoom out?

First we should reiterate why LLMs work for writing software at all:

  • Code can be verified cheaply, and at several levels, by other code (linter, compiler, unit tests). Mathematically proving it is another matter, but formal proofs aren’t actually necessary to be fit for purpose.
  • There is an enormous body of source code to train on, with attendant documentation, going back decades.
  • Most software just repeats similar patterns—​programs vary only slightly in the details—​a phenomenon well-represented in the training data.
  • Code does not have to reference empirical reality, it only has to have the right “shape”.

Even given all this, plugging the meter and firing up Claude Code is not a glide path to victory. A gap still remains in your ability to articulate what you want, which means having a detailed understanding of the result you’re after. Merely having an idea for a killer app is not sufficient, which is why the people getting the most benefit from these products, such as they are, are already good at what they do, and would produce good work no matter what.

But the experience isn’t frictionless for those people either. The feedback loop goes like:

  1. You tell the chatbot what you want,
  2. its interpretation leaves something out,
  3. you amend your description,
  4. and proceed to step 1.

It’s not just gaps in the interpretation. Sometimes the bot will wreck things that were working fine, or change things you didn’t tell it to, or just straight up nuke your stuff. Other times it will become recalcitrant and you will find yourself arguing with it to carry out your wishes. And the more you interact with it, the bigger the bill.

What is a Token, Anyway?

A token is a unit of throughput for a large language model, corresponding to elementary chunks of text on average about four letters long, carved up whichever way makes the most statistical sense for model training—​like some jagged fraction of a word. It has also become the de facto currency. You buy a million Claude tokens for anything from a dollar for the cheap model to $5 for the expensive one. That’s just for input tokens (the kind you feed it) though; output tokens (the kind it sends back) cost five times as much. To put this into perspective, summarizing War and Peace, which is just shy of 600,000 words, would probably cost very close to a million tokens.

Doing so would also bump you into the premium rate tier for oversized prompts. Given that, the whole process could cost you anything from around $1 on the cheap, to about $10, if you use the highest-end model.
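Taking those quoted rates at face value, the arithmetic is easy to sketch. The per-million rates and the five-times output multiplier come straight from the figures above; the 2,000-token summary length is my own guess, and this ignores the premium tier for oversized prompts:

```python
# Back-of-the-envelope token costs, using the rates quoted above:
# $1/M input tokens for the cheap model, $5/M for the expensive one,
# with output tokens billed at five times the input rate.
RATES_PER_MILLION_INPUT = {"cheap": 1.00, "expensive": 5.00}
OUTPUT_MULTIPLIER = 5

def summarization_cost(input_tokens: int, output_tokens: int, model: str) -> float:
    rate = RATES_PER_MILLION_INPUT[model]
    return (input_tokens * rate + output_tokens * rate * OUTPUT_MULTIPLIER) / 1_000_000

# War and Peace: roughly a million tokens in, say a 2,000-token summary out.
for model in ("cheap", "expensive"):
    print(model, round(summarization_cost(1_000_000, 2_000, model), 2))
```

Note that at these volumes the input dominates: the summary itself, despite the 5x rate, barely registers against the cost of feeding the novel in.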

I would also be delinquent if I didn’t mention that Anthropic has subscriptions at several tiers for a flat monthly fee, which can save you some money, but there are additional strings attached, which can make them suboptimal to use with Claude Code.

The image that materializes in my mind when I describe this process is a Pachinko machine. Straddling the line between slots and pinball, these garish devices were swiftly co-opted in Japan as a plausibly deniable form of gambling. In Pachinko, you buy steel ball bearings by the bucket, and launch them one by one up into a chamber where they bounce and carom around off little protruding brass pins. The goal is to land the balls into little receptacles, which will release more ball bearings, and the holes that release the biggest jackpots are naturally the hardest to land. When you’ve had enough, you trade in your ball bearings for a receipt, which you take across the street—​arm’s length enough to skirt the gambling laws—​and redeem for cash.

There is some skill to Pachinko, but it’s mostly driven by luck. You could get a jackpot on the first try, but you’re way more likely to get only just enough positive reinforcement to keep you playing. And like all gambling situations, the average punter loses money. If the expected value of the whole enterprise wasn’t negative, after all, the house would go out of business.

This is perhaps where the AI labs and casinos part ways: they are currently heavily subsidizing their products. But we’ll talk about that in a bit.

Every utterance you put into Claude Code—​every burp and fart—​costs you. Everything you get out of Claude costs five times as much. Claude Code, moreover, to the best of my understanding, does a lot of intermediate steps under the hood where it tries and retries stuff, which you also pay for. And of course you pay whether or not you’re satisfied with the result. Same goes for those “deep research” products: On a recent Odd Lots Podcast, host Joe Weisenthal talks about having Claude annotate some reports, and he asked it in advance to estimate how much it would cost. The answer was about $100. The chatbot then proceeded to try to argue why he should spend the money.

These things get paid for every interaction, so there’s an incentive to keep you chatting. Not only are they the croupier, but they’re also the cocktail waitress. I’m sure they’d serve drinks if they could.

P(DWIM)

Again, with Claude Code, you might get exactly what you want right away. It’s not outside the realm of possibility. Much more likely is that what you want was somehow internally inconsistent, has gaps, or is otherwise poorly-formed. Then you’re back with the rest of us, with the irreducible task of determining what outcome you’re even actually trying to achieve.

One of many tongue-in-cheek acronyms in the programming world is DWIM, for Do What I Mean—​that is, irrespective of what I say. It refers, among other things, to user interfaces which are especially robust against ambiguous or malformed commands. It can also refer to a (heretofore) mythical system that interprets your disgorgements—​no matter how mangled—​and somehow comes up with the right response. Now that something like that is advertised to exist, we can begin to consider its limitations.

That coding agents make quick work of small, well-represented applications, and especially unit tests and reference documentation, is a bit of a tell—​not only about the nature of these outputs, but of their structural properties. Namely, their respective shapes exhibit common patterns, and only really vary in the details. In other words, a prompt for some response that has a lot—​like thousands, millions—​of examples, is relatively likely to DWIM.

We can imagine, then, that for a given code repository in a particular state, there is a P(DWIM): the probability that your next prompt will give you the result you were looking for. We can extend this idea to a DWIM-ability field, where the dimensions represent things like how many lines of code the agent needs to generate to get the desired outcome, and how well-represented your request is: in the model, its context, and the code repository itself.
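To make the idea concrete, here is a deliberately toy P(DWIM) function over those two dimensions. The exponential shape and every constant in it are pure illustration—​nobody has measured these curves—​but it captures the claim: odds decay with the amount of code to generate, scaled by how well-represented the request is.

```python
from math import exp

def p_dwim(lines_to_generate: int, representation: float) -> float:
    """Toy model of the DWIM-ability 'field' described above.

    `representation` in [0, 1]: how well-represented the request is in
    the model, its context, and the repository. The decay constant (200
    lines) is arbitrary and purely illustrative.
    """
    return representation * exp(-lines_to_generate / 200)

# A boilerplate CRUD endpoint vs. a novel algorithm in an unusual codebase:
print(round(p_dwim(50, 0.9), 2))    # short and well-trodden: decent odds
print(round(p_dwim(2000, 0.2), 2))  # long and idiosyncratic: vanishing odds
```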

Using AI to generate software is simply going to work better for some projects than it will for others, and the shortcomings are going to get shaken out the more people bump into them. One thing you’re definitely not off the hook for is having a clear and rather detailed idea of the result you’re after, and the overhead of articulating that idea to the chatbot—​to say nothing of articulating it to yourself—​absorbs at least some of the speedup, all the while running up your tab.

The way I think about it is that the farther the chatbot has to “travel” to get to its objective, not only is it going to cost more tokens, it’s also more likely to screw up. Writing policy documentation into the code repository gives it “waypoints” so it doesn’t have to span so much “distance” at once. This is no doubt clawing back at least some of the putative productivity gains.

Again, the people getting the most leverage from AI-supported coding are the ones who were doing just fine before it existed. This is because where it helps is when the bottleneck is lines of code, which it only is once you’ve sorted out all the actual hard stuff. The fantasy of a non-expert just being able to tell the computer to vibe up the next billion-dollar app is not bearing out. Will it get there? With enough parameters? Enough training data? At what cost?

Picks and Shovels, Sort Of

The analogy people have been using ever since chatbots were conscripted as interns is precisely that: they behave like an actual intern, particularly one with lots of book smarts and zero awareness of what’s important. And just like a real employee who has no grasp of what the priorities are, they tend to do a lot of trial and error, and you, the employer, pay for it.

Since Claude Code can burn a lot of tokens on wasteful operations, there is already a cottage industry springing up around making less of that happen. There are frameworks, scaffolding, tooling and middleware, and these things called “skills”. A “skill” is really just a document written in English—​specifically, an ad-hoc text format called Markdown—​that articulates a policy. A lot of people give them away for the public good or for clout; others presumably charge for them. Since “skills”, unlike code, aren’t rigorously verifiable, multi-level marketing schemes are mushrooming around how to write them—​or rather how to sell guides for how to write them—​making the AI space yet another host for the influencer grift economy.

“Skills” ostensibly represent quite a bit of this secondary activity: searching for skills, trying them out, writing your own. It’s turning out that you, the developer, actually end up having to write a lot more architectural documentation than you otherwise would if you were writing code by hand, in service of keeping the chatbot on the rails.

Now, About Those AI Companies

Swimming against a current where everything about computers has been getting steadily cheaper for decades, everything about AI is expensive. First you have to collect, store, and organize training data. Then you have to design your model and train it, using industrial quantities of the most expensive possible hardware. Then you subject it to “reinforcement learning with human feedback”, which involves paying an army of global-South turk-workers to click away all the no-no’s, using infrastructure which you also have to create. Then you need some kind of harness for the inferencing rig and all the surrounding product stuff so people can actually pay you some money.

The crown jewel of this process is the model itself—​basically a wad of matrices holding the most expensive numbers on the planet. The companies have gotten a lot cagier recently about sharing the size of these things, but it’s reasonable to expect that they’re still on the order of a few terabytes. About a year and a half ago I estimated GPT-4 to be around seven. This is a ridiculous thing that requires a few million bucks worth of gear just to boot up, but it could conceivably be trotted out of the building on a portable hard drive.

Something interesting has been happening with these models, however. Part of the reason they’re so big is that they use full-resolution floating point numbers, and it turns out that they don’t actually need to be. So that’s one dimension along which you can significantly squash these things. Another is just the dimensions of the matrices themselves. Dramatically smaller models, which fit on computers an ordinary white-collar professional can afford to buy, can be “distilled” from larger ones. So these insanely huge models trained on the entire internet (and then some) can be used as inputs for more reasonably-sized artifacts. Another way to say this is that while training is getting more expensive, inference (that is, actually using the AI) is getting cheaper. It’s still an open research question how small (“small”) you can make these models and still have them be useful.
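The size arithmetic here is nothing more than parameters times bytes per parameter. The parameter count below is a hypothetical round number, not a published figure, but it shows why precision is such a big lever—​and happens to land near my earlier GPT-4 guess at full resolution:

```python
# Rough model-size arithmetic: parameters x bytes per parameter.
def model_size_tb(parameters: float, bytes_per_param: float) -> float:
    return parameters * bytes_per_param / 1e12

params = 1.8e12  # a hypothetical ~1.8-trillion-parameter model

print(model_size_tb(params, 4))    # full fp32: 7.2 TB
print(model_size_tb(params, 2))    # fp16/bf16: half that
print(model_size_tb(params, 0.5))  # 4-bit quantized: under a terabyte
```

Quantizing from 32-bit floats down to 4 bits is an 8x shrink before you’ve touched the matrix dimensions at all, which is how "distilled" and quantized models end up on hardware a professional can actually buy.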

To Recap How We Got Here

  • Artificial intelligence, a marketing term that essentially refers to a computer that does the right thing in its situation without explicit programming. AI can be either deterministic, or statistical, or both.
  • Machine learning, a family of techniques that use statistics derived from training data, which got popular over the last couple decades due to the internet and concomitant abundance of said data.
  • Artificial neural networks, a particular strategy of machine learning that can turn a fuzzy input into a sharp output, at the risk of occasionally picking the wrong output.
  • Language models, which are just neural nets that primarily operate over text.
  • Large language models, which are just really expensive variants of the preceding, trained on vast quantities of unscrupulously-sourced data.
  • Chatbot-as-a-service, a specific category of commercial product furnished by the likes of OpenAI, Anthropic, and Google.

It was in 2017 that some eggheads at Google invented the transformer (the T in GPT), which revolutionized language models. Prior to the transformer, training had to be done in serial—​think like an old string of Christmas lights—​which, as models and data got bigger, took an unacceptably long time. What these guys did with the transformer, was figure out how to do training in parallel. This shifted the bottleneck from time, to money, and kicked off a race to see who could make the biggest one.

For a while, simply making language models that were bigger than their predecessors absolutely did make them perform dramatically better. There is some evidence, however, that these gains are starting to plateau. You might remember, after all, just how underwhelming the release of GPT-5 was last year. Now, the narrative among these guys is that if they create a big enough model, they will have made a digital god, which they can then ask how to make money. It’s not clear whether they actually believe this though, given that Sam Altman is “the Michael Phelps of being full of shit”.

Artificial Economy

The data centre buildout, however, is a truly GDP-bending endeavour. Particularly interesting about it is that these AI companies don’t actually seem to need it all. Odd Lots had an energy analyst on, whose finding was that twice as much power generation capacity is planned as there is demand from hyperscalers.

Ostensibly the AI companies themselves don’t need anything near the capacity they’re planning. The same analyst said they were drawing something like 12-15 gigawatts during their last training runs—​which is huge, like metropolis-scale energy usage—​but nothing close to the 85 or whatever gigawatts they’re expecting by 2030.

So what is it all for? My guess is video. Video is so, so, so much bigger and more expensive than text to process. There are two salient species of video to consider:

  • Surveillance: processing the throughput of every Ring doorbell in existence is hard work.
  • Personalized ads: on-demand-generated 30-second spots of a synthetic you wearing Levi’s or drinking Coca Cola or eating a Big Mac or wiping your ass with Charmin or whatever.

That is what the 180° on energy policy is buying, since, as we know, these things are powered by natural gas. Or they would be, if there wasn’t a global backlog on turbines.

Assuming they get their gas turbines somehow, the additional cost to a data centre build is a drop in the bucket. The power plant only costs them about $3k a kilowatt, whereas the compute itself is forty. It’s noteworthy that an individual NVIDIA Blackwell unit costs about that much, and draws about a kilowatt of power.
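Using those quoted per-kilowatt figures, the split works out like so. The one-gigawatt campus size is made up for the sake of round numbers; the $3k and $40k rates are the ones above:

```python
# Cost split for a hypothetical 1-gigawatt data-centre campus, using the
# per-kilowatt figures quoted above ($3k/kW generation, $40k/kW compute).
KW = 1_000_000  # 1 GW in kilowatts

power_plant = KW * 3_000   # $3B for the gas turbines
compute     = KW * 40_000  # $40B for the chips

print(power_plant / 1e9)
print(compute / 1e9)
print(power_plant / (power_plant + compute))  # generation is ~7% of the bill
```

At roughly seven cents of every capital dollar, the power plant really is a rounding error next to the silicon.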

Something that occurred to me about this situation is it’s kind of like a transmission, like in a car, but for the economy. You take a technology that was bounded originally by time and you turn it sideways so it’s bounded by money, which is kind of like “gearing down”, and from there you can continue to “gear down” until the big, dumb, lumbering, fossil fuel, construction, and real estate sectors—​who actually control the commanding heights of the economy—​can each get a piece.

Digression on Water

The notion that AI consumes some ridiculous amount of water mainly comes from reporting by the journalist Karen Hao for her book Empire of AI. The figure traces to an environmental impact assessment done in Chile (i.e., in Spanish), wherein “litres” somehow got misinterpreted as “cubic metres”, overstating the water usage by three orders of magnitude.

This little boo-boo is rather unfortunate because it gives boosters something to latch onto: “Ackchyually, AI only consumes a thousandth of the water you think it does. Take that, luddites!” The issue is that absolute consumption is not a very useful measure.

Water is not “consumed”, because it is virtually indestructible. To a first approximation, we still have as much water on this planet as we’ve ever had since the Late Heavy Bombardment some 3.8 billion years ago. The problem with water, rather, is where it is at any given instant, and what condition it’s in. Data centres compete with municipalities for water because it has to be pristine, so as not to foul or corrode the hardware.

The question isn’t how much water they use in absolute terms, but how much they use relative to the total amount of water available at that particular place and time. That’s going to be different on a case-by-case basis. I’m sure we’ve all experienced a water shortage one summer or other, and that’s before data centres existed. So the impact of a data centre will be that the shortage starts earlier in the year and lasts longer. You may not be able to shower because somebody far away needs their animé waifu. There’s also the additional load on the infrastructure to consider, and whether the companies are paying their fair share to maintain it.

Passenger-Jet Equilibrium

It’s not hard for me to imagine a situation where artificial intelligence, qua large language models and the image- and video-generating products currently on offer, gets a certain amount better, and then just stops. Any technology, at root, is a method—​a way to do things—​and every method, to the extent that it can be said to be effective, is grounded in reality.

A Boeing 737 flies at about the same speed as the first operational jet fighter, the Messerschmitt Me 262. Long-haul planes like the Dreamliner fly slightly faster. Basically, humanity has decided that around 900 kilometres per hour is the sweet spot for air travel. The ultimate reason, as always, is physics. Go much faster and the nuisances start piling up, like noise, limitations on the size and shape of the airplane, and most notably, fuel. We’ve had faster passenger jets in the past—​heck, the ultra-rich are even getting new ones. But for the vast majority of air travel, it just isn’t worth the extra cost to get to your destination incrementally faster.

One thing you can say about commercial aircraft, though, some eighty years of development onward, is that they got way more efficient. Bracketing a total overhaul in design paradigm, they get the most tonnage possible from point A to point B, where the vector representing time taken and fuel burned is at a local minimum. I suspect something analogous will happen with large language models.

Alan Kay has some quip somewhere about how if you want to transport yourself ten years into the future of computing, just add money. Well, the AI vendors have certainly done that. OpenAI and Anthropic in particular are heavily subsidizing consumer access to their products, and to the extent that they do charge, they’re collecting a nickel for every dollar they spend. The more popular AI gets—​and it is popular—​the deeper underwater these companies go.

What this means is to support their collective business model, they’re going to have to start charging more—​a lot more—​and soon. There are only so many institutions they can latch onto and burrow into. Leave future profits aside: to get back the money they’ve already spent, these industrial-scale cash furnaces are going to have to come for the public.

And the problem I see here, as have many people, is that for most of the claims that AI will disrupt this or revolutionize that, the product just doesn’t work. It can’t work, and it will never work. And to the extent that it does work, as it apparently does (ish) with code, no serious user is going to put up with a 10-20x price hike.

Musical ChAIrs

What follows is not a hard prognostication as much as it is a potential scenario given the current positions and momenta of the pieces in play.

Large language models—​and thus “smart agents”—​will continue to underperform except in matters related to code, where, as discussed, factual accuracy doesn’t matter. Software developers will coalesce around “good enough” LLMs that they can run on their own hardware. I also see similar “good enough” models occupying mundane search and discovery roles (which they are genuinely good for), like a sort of glue in user interfaces. Outside of these use cases, I think people are going to get bored dealing with slop, and once they’ve probed enough of what it is and isn’t actually good for, are going to scale back their usage of the rent-a-chatbot platforms.

I reckon Moore’s law has a little bit of juice left to squeeze, and can easily picture “good-enough” model and “good-enough” hardware converging. Even a DGX Spark is on the order of five grand, and you can even stick two of them together. It’s far from unheard of for a professional to spend that much on gear.

Image and video generators, which require by far the most computing muscle, will plateau in their capabilities. The TV and film industry—​which is also to say advertising—​will nevertheless find their uses for them. The problem with these is and always has been the weird artifacts and inconsistencies. It’s the same “distance” problem I was talking about above. The farther the model has to “travel”, the more likely it is to go off the rails. It might be possible to wring a few more seconds of reasonably plausible video out of these things—​which is fine for propaganda and engagement bait—​but I don’t see entire AI-generated TV shows happening in this current cycle of the technology. And if the goal is ads, like I suspect it is, I anticipate it won’t take too many “brand safety incidents” to turn advertisers off the idea completely.

Who knows, maybe animé waifu companions will take up the slack.

OpenAI will almost certainly implode, but not before its initial public offering. The whole point of this hype cycle is to get Scam Altman his payday. What happens after that—​probably selling as scraps to Microsoft—​doesn’t matter. Anthropic, while currently leading the pack in technology, is also unsustainable, and will probably be absorbed by one of its richer competitors. Or maybe Apple, who doesn’t have an AI play to speak of, and is richer than God, will step up.

The data centre buildout is being orchestrated by the major players off-balance-sheet. It will no doubt have its bagholders, and may prompt a mini financial crisis of its own, but a lot of those commitments, like the electricity supply, are kinda vapour anyway. A data centre, moreover, without computers, is just a concrete box, hooked up to the local drinking water supply. Perhaps they’ll get repurposed as immigrant detention centres.

Generative AI is (at least) two discriminative AIs in a trenchcoat. While the former drives hype and investment, the latter—​which actually works and can be a lot more affordable to develop—​quietly improves. It’s also been around longer, and will continue to stick around as the chatbot hype recedes. This is the technology, by the way, for surveillance, developing bioweapons, and what is euphemistically referred to as “target procurement”.

Computer vision works and will probably just keep getting better. It’s still going to have its false positives and other statistical artifacts, but if somebody with sufficient resources wants to locate you, right now, anywhere in the developed world, it will basically be possible to do that.

Brazil-esque administrative blunders will undoubtedly start to stack up, the more governments start to lean on non-deterministic systems that are guaranteed to produce statistical errors.

In one of the non-titular essays in his book Simulacra and Simulation, Jean Baudrillard takes note of the profusion of security cameras in relation to the number of people available to watch them. Well, now there’s a way to watch them. The question, going forward, is what to do with that information.
