
Melodic-Ambient

July 16, 2025

Honor Studentism: Educational Underpinnings of Delusional Beliefs in the American AI Industry

Throughout my time in high school, I was under the thrall of the grading system. I graduated first in my class, but diary entries from after high school sometimes dwell on the emptiness of the system and of various academic competitions.

(2010/08/22 (summer before college)) “And that leads to math team, which I mostly did because it was fun, and something I was good at competitively (hahaha). I guess you got "problem solving" skills out of it, but their use is so hard to find in everyday life, because it's mostly a passive benefit you get from them. Not like learning to rollerskate, where you see the benefit right away. It was just a fun experience with friends”

(Aside: It’s funny to see me mention something athletic when thinking about Angeline Era’s action vs. my misgivings with number-focused games.)

I was a typical honor student, perhaps with a few more misgivings than most… Recently, I keep looking at various tech-related or ecological fiascos and having the thought “honor students are to blame!”. Of course, it’s not true, but there may be some grains of truth to it… which is the point of this post.

I was wondering where the motivation behind companies like OpenAI, DeepSeek, and the like began. Why is there a belief about Artificial General Intelligence underpinning the founding of OpenAI? Who and what's to blame? Without reaching too far back in the chain of history, I think one answer is "problems are caused by honor students produced by standardized testing-focused education".

Like any technology with a few benefits, there's a long list of downsides to ChatGPT-esque tech: on the personal level, ChatGPT-induced psychosis. Or, artistically, fake music and art crowding out opportunities for humans. Or ecologically, the increased production of hardware to train LLMs, and the energy needed to train such models. Or, socially, the ways that LLM tech is being integrated into a lot of computers — and thus, its ongoing integration into organizations from corporate to public, despite a variety of negative, anecdotal outcomes.

I read a short 1998 talk by academic Neil Postman the other day on technological change. It's an approachable and relevant read for 2025, covering the downsides of technological change from the printing press (nationalism) to standardized testing, which he views as a technology that has affected how we conceptualize learning:

The greatest impact [on American education in the 20th century] has been made by [men in Princeton, NJ, who invented the technology of the standardized test, which has] redefined what we mean by learning…

And on the College Board's website from 2024, we still have quotes like

When used in context, the SAT helps colleges fill their early pipeline with a diverse range of students who can thrive on their campus.

The way the College Board describes itself sounds like a tool for stuffing bags with fertilizer, soil, and seeds. Though some US colleges are test-optional, the stats show that test-taker numbers are again reaching pre-pandemic levels.

You can think through the pros and cons of any technology. Smartphones come to mind: compare Steve Jobs' bright-eyed keynote with the reality we have today. For its minor conveniences, we also got the downsides of addictive shortform video, doomscroll-fostering social media, and people looking at their phones during dinner. Thanks, Steve!

“I’m the smartest in the room”

Underlying this is the "move fast and break stuff"/technolibertarian/Californian-ideology ethos common among tech workers in Silicon Valley. But if you look underneath that idea, it's the belief that "I'm the smartest in the room, I'm probably right, and we always need to throw the old system out completely."

I’m sure you’ve encountered this kind of attitude in your working life. Maybe you are this person. I’ve most often met this kind of person when I was a Computer Science major at the University of Chicago. 18-22 year olds, philosophizing grandly about technology and life, despite only having just made it out of a decade of state-mandated schooling!

I bring up standardized testing (and its long history in the USA) to think about how the USA fosters this attitude.

On the personal level, this kind of “I’m right” attitude is certainly in part due to some adverse childhood experiences that have not been worked through. But plenty of people work through these issues and become reasonable people. What about the others?

The belief that one can be the smartest is problematic, because society has quantified smartness in the form of grades and test scores. People believe that smartness is measurable via these numbers, which are linked to how much information and data you can acquire. It's a definition of smartness for a computerized information age, rather than one that considers smartness through other avenues like interpersonal interaction, age, or experience.

This belief is clearly reflected in the use of AI Benchmarks (like “Humanity’s Last Exam”), which, interestingly, score various LLMs’ performance on standardized tests for computers, as if they are students in a math competition. This kind of intelligence measure is a matter of finding and cross-referencing the right data, never the kind of complicated questions that pop up immediately with interpersonal things: questions like “How do I handle this conflict between two close friends?” or “How do I navigate talking with someone who is in a new religion?” or “How do I include a newcomer into a group conversation?”.

When a child is raised in an educational system that prioritizes high test scores and STEM fields, they grow up internalizing things like “technology is the most important”. When someone’s value system and reason for existence is predicated on aspiring to help create that new technology, they may seek out opportunities to innovate for the sake of it, or without considering the implications — maybe through computers, smartphones, or LLMs — even if this may lead to a lot more problems than solutions: dead, cancelled apps, broken software, etc.

People raised in this system often have value systems and hobbies centered around maximizing points or performance, and they likewise may have trouble seeing the value in pursuits that don't have a lot of numbers or data associated with them (the arts, senior care, etc.). In viewing the world as a system of investment and return, strange habits pop up around things as innocent as tourist spots ("the best return on my time!"), viewing games as "content per hour", even friend hangouts ("not worth my time to just see one or two people!").

It is very easy for these aspirations to be co-opted by nationalistic influences. Just consider AI whiz-kid Alexandr Wang's quote to Trump: "America must win the AI War". This guy could have been my underclassman in math team or something. Man, give me a break. I find pro-USA Asian immigrants slightly fascinating, and sad. The US education system takes in Asian Americans (like myself) and outputs bizarre twists like a second-generation Chinese immigrant adopting a nationalist stance against a country he's only one generation removed from.

Being raised in US education, with its rigid tethering to a single closed community, is a kind of unreality in itself, so people (especially in STEM) end up quite vulnerable to stories about understanding the world that feel similar to military or sci-fi stories: "AI War," "AI Apocalypse." On the whole, there's perhaps not enough opportunity to let children escape the rigid shell and framework that the education system puts around them. If we view the education system and suburban upbringing as two layers of "shells" put around a kid (like a mecha), then leaving that education system is a matter of exchanging those shells for different ones (viewing the world in market terms, etc.).

Delusions and AI Industry

I believe a lot of people working in AI hold delusional beliefs. Let me expand on this use of "delusional", because I'm not using it colloquially in the more individualistic sense (e.g. schizophrenic delusions). In Lisa Bortolotti's Why Delusions Matter, she talks at length about what constitutes a delusion. Here's a paraphrase from page 26. A belief is delusional IF:

  • The belief is considered implausible by some ("AI will take over the world", "The world is a simulation", "AI will make ___ better!") (Note this implies that delusions carry a degree of subjectivity)

  • The belief is unshakeable — the delusional person may engage in debate over the idea, but will not give up the belief even when faced with counterarguments or counterevidence. E.g., when doomsday comes and nothing happens, the believers will invent a reason as to why it didn't.

  • The belief has centrality to the person's identity: that is, it's very important to the person in relation to their world.

In particular, delusions are very hard to give up when they are "shared in well-defined communities" (e.g. Tech Twitter, Silicon Valley companies), because to abandon the delusion is not only to admit one has made a mistake, but also to be forced to adopt a new way of viewing the world — which, more importantly, "has powerful implications for our social life, political and religious affiliations, and interpersonal connections" (page 69).

For example, if one of my beliefs — that videogames can be artistically powerful and can only be made by humans — were to somehow be totally disproven, I would certainly be shaken to the core. But I'm able to maintain this belief in part because of the other creators and players I see and interact with on a day-to-day basis. To accept my belief as a delusion would be to negate each and every one of those relationships to some extent.

So, to give up a belief like "AI will take over the world" — in favor of a more nuanced look at how computers should interact with other systems, or a questioning of technology's power in the world — is to give up the belief that technology is the most important thing in the world, and to accept that other forms of existing — which technology workers may have no experience in! — might be more important.

But to give up this belief is to admit mistakes. And the American culture of grading, scoring, and belief in "natural smartness" punishes making mistakes. (To bring up Alexandr Wang again, discussion of him often mentions him inheriting his intelligence from his parents.) The entire belief in scientific or artistic genius, in some ways, exists to hide the fact that people who are good at these things make mistakes all the time. (Ironically, isn't that the experimental method in science???) To speak from experience, honor students are praised for their performance all the time, but it always has the undertone of being a natural smartness, rather than the result of mistake-making and study. I have many anecdotes of people finding out I got into music-making after high school and seeming to have trouble grappling with the thousands of hours of practice that went into the skill.

There is a true irony in that ChatGPT's creators hold a fearful belief about AI damaging humanity — as if humanity were approaching some huge potential disaster! — yet the advancements with ChatGPT have led to its adoption by the general public, which now faces very real, immediate, and concrete problems (check out this report from this month on children and AI usage), not just sci-fi-esque hypotheticals.

Outside of education, one wonders: if these children were just shown more love, or praised in different ways, perhaps they would not have to turn to grand narratives to motivate their being.

Numbers as Technology

Numbers, too, are a technology. They've led to many nice things, but it's worth keeping the negatives in mind: they push us further into the realm of numberly abstraction — addictive digital worlds, gambling-esque financial abstractions leading some to ruin, simulated therapists, remote wars… For every fresh 22-year-old seeking meaning through numbers, there are plenty of adults ready to use these graduates for their own ends. The end result is that kids are raised to end up partaking in economic or technological conflicts between corporations or governments. (Perhaps the most blatant example in AI would be OpenAI vs. DeepSeek, but you can extend this generally to the American economy vs. the Chinese economy… etc.)

Fine, I'll explain this. In the anime Evangelion, a bad dad makes his son get into a scary robot to fight battles the son doesn't quite understand. The point I'm making is that whether or not a STEM major has good intentions, there's still a chance they fall into, or are forced into, working in roles that have destructive potential.

I sometimes rant on social media about numberism in games, the way so many games center around making a number go up. Or, how games often show players numbers on the screen so they can understand the state of a game (HP, EXP, etc). It would take another essay to unpack my thoughts, but numbers often impose a certain kind of abstraction of reality into a game that either bores or doesn’t sit right with me. I don’t necessarily view numberism as evil game design, but it does reflect a society in which numbers motivate us from very young ages. And at the very least numberism carries with it the idea that you can represent reality through numbers, like city simulations or fighting.

Thinking about numbers, it's no surprise that some leading AI people (Sam Altman, Liang Wenfeng of DeepSeek) come from business-heavy or finance backgrounds where they live around large numbers. It is as if this kind of numerical atmosphere, based on gambling via investment and profit, further prepares one's ideological soil for the kind of beliefs required to work in AI: beliefs about how knowledge can be measured numerically, about the models for representing knowledge to computers, about which ecological costs are "okay", about abstracting work out to large numbers of faceless people to eventually make your GPT's test performance go up.

But it's not just Americans who are going to have to face the problem of these unshakeable beliefs around AI and technology, since we've exported this culture worldwide, with similar education systems growing workers around the world. If someone has an unshakeable AI belief, can it be fixed? Theoretically, but if their beliefs are truly unshakeable to the point of delusion, then it isn't really a matter of simple debate.

Some people have just stepped into another realm of being: by this I mean they have wholeheartedly accepted the worldview provided by grades/numbers/scores-centered education, and utilize this worldview in their day-to-day work which results in concrete damage at the very end of it, from depressed ChatGPT kids to child mineral miners. Instead of working on problems (e.g. city/urban design in the USA) which lead to kids who feel they need to use ChatGPT, these people instead choose to pursue sci-fi-esque internal narratives where the goal is simple: make the numbers go up.

So how do you stop society from forming more of these people?

If you are one of those people, it’s up to you to question and think about the ways numbers enter and influence your life.

Other than that, one starting point is the work being done on educational reform. It's the education system that fosters these beliefs in the first place. Ironically, AI usage by students has laid bare the superficiality of memorization-and-test-based education. But it's not enough to let an unexpected side effect of a busted tool change education… It'll still be up to educators to structure and decide what education will look like.

Teaching to the test can be seen as equivalent to letting the Netflix/Spotify/Twitter algorithm pick what you read, watch, or listen to. In the same way that artists should encourage fans to explore the history of their media (or history in general), educators are already developing and implementing pedagogies focused more on individual students, or making lessons more relevant to the world, to history, and to the student.

In other words, I think the goal is to rescue people from flying off into the “dimension of numberly abstraction”, trying to keep them instead more tethered to the Earth, their bodies, history, and others. (There’s an irony with how obsessed with space some tech people are…)

I'll leave this with some hopeful notes: people across time have known that standardized testing and the factory model of education are problematic. Americans had misgivings in the 1930s, and in postwar Japan (the '50s and '60s), Japanese teachers and parents even led protests to prevent Japan from adopting the American model of national standardized testing. I thought this was an interesting photo — before Japan wholly adopted the American postwar model of education, there was pushback against it! Solutions are there, if we dig them out of the past (or find people already working on them in the present).

A protest against standardized testing in Hoya, Japan (near Tokyo), 1963. Credit 小林つね子家所蔵、西東京市図書館提供
