Critiques of quantification and its social function (including by me) are not hard to come by. Some are annoying and hackneyed, but some are thoughtful and serious (you can judge for yourself where I fall). Theodore Porter in Trust in Numbers argues that quantification is a way of governing “at a distance,” without specific local knowledge of the objects of governance. “Objects” is the right term; it’s an objectifying worldview, and we need only look at the last 200 years of imperial history for a glimpse of what the corresponding forms of governance look like. Bruno Latour’s 2004 book Politics of Nature, while not about quantification per se, does touch on this objectifying function. According to Latour, institutional, capital-S Science constructs an object called “Nature” using its specialized objectifying techniques, then uses Nature to “abort politics.” His major concern is that Nature is an anti-democratic construction; he doesn’t treat “Science” and “quantification” as interchangeable. So I think Latour might actually agree with the measured defense of quantification that I want to advance here. I want to go against the somewhat (and I do mean somewhat) fashionable grain and offer that quantification and quantitative methods fulfill an important democratic function within our institutions of public science.
For a piece I’ve recently been working on about “polyvagal theory,” the pseudoscientific theory of nervous system regulation that is all the rage right now, I went and read the actual research literature on it. Because the authors of this literature reported their findings (badly) in a research journal, and because I understand the techniques and conventions of generating and reporting research results, I could quickly and easily evaluate their methods and results to reach the conclusion that the theory is wrong and the paltry therapeutic modalities based on it do not work. Quantitative methods, in other words, offer a transparent process for evaluating certain types of claims of public import. Take another example: the dreaded power calculation. I struggle to help clinical researchers with power calculations for grant or IRB applications on a daily basis. The struggle arises not because I don’t know how to do a power calculation (I do) but from a disjunction in how we each understand what a power calculation is for. Investigators tend to believe that the primary purpose of a power calculation is epistemological. They tear their hair out because there’s a section about power analysis on the IRB application and they think they need to do one no matter what in order to show the reviewers that they’re guaranteed to get an answer out of their research that is “right” or “true.” I, on the other hand, see the primary purpose of a power calculation as related to public accountability. Power calculations are often infeasible or inappropriate in reality. The actual purpose of the power analysis section of the IRB application is twofold: first, to demonstrate that you understand your data well enough to know whether a power calculation is even needed, and second, if it is appropriate, to demonstrate using transparent and easily evaluable concepts and language that you have designed your study such that it will yield a meaningful result.
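For readers who haven’t met one: the textbook version of a power calculation really is transparent, in the sense that the entire computation fits in a few lines. Here is a minimal sketch (my illustration, not anything from a specific grant or IRB application), using the standard normal approximation for comparing two group means with a standardized effect size d:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sample comparison of means,
    via the usual normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    where d is the standardized effect size (difference in means / SD)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to target power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "medium" standardized effect (d = 0.5), two-sided alpha = 0.05, 80% power
print(n_per_group(0.5))  # 63 per group under the normal approximation
```

Anyone with the right knowledge can check every step: the critical values come from the normal distribution, and the formula follows from the sampling distribution of the difference in means. That checkability, not any guarantee of a “true” answer, is what the IRB section is actually buying.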
These purposes are among the ways that public, democratic oversight of the research process is instantiated through the research process itself. Before staking a claim on research participants’ time or taxpayers’ money, you have to show – again, using transparent methodological steps that anyone with the right knowledge can evaluate – that these public resources you’re asking for won’t be wasted on something ill-reasoned, poorly designed, and destined to fail.
The critiques of the 1970s and even the 1990s could afford to be somewhat indulgent, allying techniques of quantification with sinister political and ideological purposes, and they were right as far as that went, but only that far. In our meager intellectual environment, there’s far too much transposition of older frameworks onto current events as a substitute for sustained critique. For example, reading RFK Jr. exclusively through the lens of 19th century eugenics is somewhat useful for narrow rhetorical analysis, but by imposing a coherent ideology onto an incoherent, contradictory, and opportunistic figure, this maneuver actually obscures more of the global meaning of RFK Jr.’s actions than it clarifies. Similarly, transposing poststructuralist science criticism onto current events risks rehashing well-developed critiques of quantitative practices at the expense of seeing clearly how those quantitative practices work, and why certain actors seem to hate them so much.
Let’s stay with MAHA as a first example. One very common way to talk about MAHA is as a popular movement that is anti-science in the sense of anti-intellectual and anti-elite. This is to cast MAHA as a fundamentally populist phenomenon. I think this is wrong. Attached to the institutions of government like lampreys to a fish, the MAHA movement is exsanguinating the structures of scientific evaluation and oversight, down to the administrative bodies and deliberative procedures of the health bureaucracies. Since the actual technicalities of scientific oversight involve a lot of specialized education and expertise, there is a strong temptation to identify MAHA as a populist insurgency fueled by resentment and ignorance. What, though, is the actual purpose of this centralization of technical expertise? The federal health bureaucracy works the way it does (imperfectly, to be sure) to prevent the public from being scammed by unscrupulous actors or having their tax dollars wasted on research with no demonstrable public benefit (at best) or on enriching a handful of Substack posters (at worst). The purpose, in other words, is to safeguard the public interest in the conduct of public science, and quantification is a big part of that. MAHA identifies here not with the average American patient or consumer of health information, as the populist framework would suggest, but with the scammers. They feel entitled to scam without interference, and the structure of technical oversight around public science in the United States is a major source of interference. The aesthetics of populism or class resentment are in this case, as they are in national politics, an alibi for a deeply anti-democratic, authoritarian impulse.
Tracy Beth Høeg’s purpose, for example, is to override the democratic processes of science and decide that your child, whatever your wishes, can’t get important vaccines, but that her buddies in the glucose injection affiliate marketing space can help themselves to your tax money via Medicare and Medicaid reimbursements. Subverting the processes and standards of rigorous science is a way of subverting the democratic principle that public science funding should support things that are demonstrably – through a rigorous and transparent process of demonstration – useful to the public, rather than buying ring lights for TikTok grifters. MAHA is anti-science in the specific sense of being anti-democracy.
As a second example, let’s look once again at Emily Oster. She’s such an irrelevant pick me in MAHA world that it’s easy to forget that she was a very influential person during the Biden administration. Her work represents the technocratic-liberal version of the same hostility towards the democratic aspects of science; her gloss on this hostility leans into the elitism of expertise rather than away from it. With the delusional hubris of the economist, whose mastery of folk-divination procedures like the construction of “utility curves” anoints them supreme clergy of all science, she tries to subvert the democratic processes of public science from within. Oster’s specialty is the nominally data-based “freakonomics” explanation of some phenomenon of public interest, however obviously stupid and absurd – just consider her idiotic (and career-making!) “hepatitis B and missing women” thesis from 2005, touted to high heaven by the innumerate pundit class before being completely refuted and all but officially retracted within three years of its publication. At least 2005 Emily Oster was concerned about hepatitis B, even if she was using it to argue that discrimination and violence against women surely can’t be why so many of them die. Emily Oster in 2026 is full bootlicker. After RFK Jr.’s HHS overrode decades of scientific consensus and demonstrated public benefit to reverse guidance on the universal birth dose of the hepatitis B vaccine, Oster tried to kiss the ring, writing that “shared decision-making” (i.e., making the birth dose a personal choice rather than a public good) is actually fine for women who are hepatitis B negative. Which is actually the crux of the whole issue. Oster uses her special access to quantitative techniques to deny science on its own terms, specifically the parts of science that are truly public and universal. 
Her targets, like the vaccine schedule or health guidelines for pregnancy and the peripartum period, have all been developed through a painstaking deliberative process; the methods used to arrive at the guidance are periodically updated and, being quantitative, are transparent. Transparent is not the same thing as accessible to lay people, so enter Emily Oster in her unforgiving business casual to “crunch the data” and tell you why there’s actually a secret exception lurking in there, why the risks of drinking raw milk or not vaccinating your baby for hepatitis B are low, actually, why science in the public interest isn’t really in your interest. If you kick the tires of her analyses, they deflate; because she has to live and die by the quantitative sword, her claims are vulnerable to critique in a way that MAHA claims aren’t. But in whose interest does she work? She may think CDC bureaucrats are paternalistic, telling pregnant women infantilizing things like “drinking alcohol is not a good idea,” but CDC bureaucrats work in the public interest. Emily Oster does not. She writes a newsletter on Substack, is a professor at a private university, and works as a public intellectual in various capacities that are variously supported by private foundations with right-wing politics.
Finally, let’s talk about Stephen Macedo and Frances Lee, authors of the supreme piece of shit In Covid’s Wake. I don’t pretend to have read the book and I freely admit I have no plans to read it; many people have done so and addressed its shortcomings more exhaustively than I care to. Regardless, I want to briefly talk about them. As I understand it, Macedo and Lee want to show that US Covid policy was a failure not because so many people died, not according to any concrete (quantifiable) definition of failure, but because Macedo and Lee didn’t like having to deal with the non-pharmaceutical interventions implemented during Covid as if they were common nobodies. Regular NPCs, and not dunderheaded professors of political science! They whine about members of the “laptop class” (by which they mean public health officials and researchers) not out of concern for social equality or grief for the profound social inequalities in the impacts of Covid, but out of an affronted sense of betrayal. Members of the “laptop class” betrayed themselves and (more importantly) they betrayed Macedo and Lee by – once again – following scientific procedures as if they were truly universal, making scientific recommendations in line with publicly available data in an attempt to mitigate harm to the public. All of the public. As if science policy were meant to do universal things like minimize disease and death across the board! No, for Macedo and Lee, that type of science is unsupportable, and the function of science is to manage the underclass (interestingly, implicitly coded as “patients” in their argument). Why else are they so exercised about the unfriendly reception that the Great Barrington Declaration got? They stamp their feet and cry that it should have been taken seriously, but that’s not what they mean. (It was in fact taken very seriously, it’s just that nobody with any sense liked it, because it was fucking stupid.)
What they mean is: the “experts” who cooked up the GBD (the SatSCAN guy, a dotty writer of weird erotic fiction with a sinecure at Cambridge, and disheveled jackal Jay Bhattacharya) should have been empowered to impose their recommendation of “focused protection” on the entire country, because Macedo and Lee think they would have liked that fantasy version of the pandemic response more than what we actually got (a time-limited rollout of non-pharmaceutical interventions). The public experts whose expertise was actually relevant to the issue at hand, and who were, crucially, actually accountable to the public? Their expertise, their science, doesn’t count, and our government failed at the Covid response because it failed to implement the unworkable recommendations of a Koch-financed libertarian think tank over the considered recommendations of experts who work for the public. It’s not worth wondering how two political scientists know so little about science policymaking. The book is not about science policymaking or its social aspects. The book is one of many attempts to manufacture consensus that democracy is a contaminant in science and that democratic processes embedded in science policy lead inevitably to failure.
What all three of these examples share is hostility to the democratic elements of science as such, democratic elements that I think are most clearly embodied in the quantitative framework of public science. We don’t want to overstate these elements; Latour and Porter and many others are right. But even the limited and imperfect way in which our institutions enshrine science in the public interest is intolerable to all these fucking people, and that tells us something about what is going on here. It’s not contrarianism, or inexplicable hatred of the truth. It’s perfectly explicable, perfectly predictable hatred of transparency, accountability, and democracy.
You just read issue #93 of Closed Form.