Doing their hype for them
Defeatist, second-hand hype goes to college
By Emily
I find that one of the most frustrating kinds of AI hype is when people who are actually in a position to use their own expertise to push back instead give in to the FOMO and do the hype for tech companies. Today's case in point is a recent article in The Chronicle of Higher Education ‘AI Will Shake Up Higher Ed. Are Colleges Ready?’ (February 26, 2024). CHE positions itself as "the nation’s largest newsroom dedicated to covering colleges and universities" whose "newsroom is home to top experts in higher education who contribute to the ongoing conversation on the issues that matter." And so I would expect coverage that starts from a deep understanding of what it is that educators do in the nation's colleges and universities.
Instead, this piece seems to portray the claims of AI companies and the totality of higher education practice as equally worthy of consideration ... and then finds the practice of higher ed lacking (emphasis added):
On one point, there is nearly unanimous agreement from sources The Chronicle spoke with for this article: Generative AI, or GenAI, has brought the field of artificial intelligence across an undefined yet critical threshold, and made AI accessible to the public in a way it wasn’t before. These technologies are now poised to shape broad swaths of the knowledge economy, and the wider work force.
But GenAI’s role in higher education over the long run remains an open question. The sector as a whole has yet to demonstrate that it can adapt and keep pace.
The first paragraph there is pure hype. How do we know the threshold is critical, if it's undefined? Why should we believe claims of (inevitable) reshaping of the knowledge economy and workforce?
A clearer-eyed view of what has happened in the last two years is that a few companies have amassed enormous amounts of data (mostly taken non-consensually) and the capital to buy the computing resources to train enormous models which, in turn, can be used to output facsimiles of the kinds of interactions represented in the data. So we have the ability to get text on demand that looks like legal contracts, or looks like medical diagnoses, or looks like therapeutic conversations, or looks like a news article, or looks like scientific research papers. But it actually is none of those things, because in all cases the textual artifact isn't really the point; the point is rather the thought processes and relationship-building that lead to and follow from the textual artifact. (The sort-of exception here is legal contracts, where the textual artifacts are very much the point, except that the whole task is designing a textual artifact that meets the needs of the parties entering into the contract. Those needs usually extend well beyond "a text that has some nice legalese in it and otherwise looks like a contract.")
It adds insult to injury to say that higher ed should be (and is failing at) keeping pace with this purported rapid progress. To suggest that higher ed is about training students to produce the form of all of those things fundamentally misses the point of higher ed (or all education, really). The article almost gets this, but then frames it as just a mismatch between higher ed and "AI", again as if these two kinds of human endeavors were in any way comparable:
Higher education and the field of artificial intelligence, though, are fundamentally mismatched in a number of ways. AI and GenAI technologies are maturing rapidly, while colleges are historically slow to evolve. Institutions have also traditionally tied much of their value to teaching critical thinking and problem solving — skills that, at face value, are not synonymous with AI, and that such technologies could even impede.
This article also provides a nice compact illustration of two tropes of AI hype: that "AI" will solve all problems and that if we don't jump on board, we (or here, our students) will be missing out. AI hype is both tech-solutionism and FOMO:
Proponents of the continued integration of GenAI point out that these technologies could be a lifeline for colleges. Institutions might use them to operate more efficiently as colleges are forced to do more with less. They might prove their value by training students for an economy that’s witnessing burgeoning employer demand for AI and GenAI skills.
We've said it many times on the MAIHT3k podcast and I'm sure we'll come to say it many times again: Just because you've identified a problem (here, lack of public financial support for higher ed) doesn't mean an LLM is the solution.
Also frustrating, as usual, is that the only dissenting voices quoted in the piece don't contradict any of the hype claims. The reporter doesn't seem to have asked anyone if the tech really does work as advertised. Instead, the dissents are remarks like the one from Emelia Probasco (of Georgetown University’s Center for Security and Emerging Technology), who is quoted as saying "Could this create another level of haves and have-nots? That would be the equity issue." Suggesting that the real problem is who doesn't have access to the technology is missing the point.
The most valuable comments in the article come fairly far down where the journalist quotes (1) Kofi Nyarko, director of Morgan State University's Center for Equitable AI and Machine Learning Systems, about the problems of instructors using notoriously unreliable "AI-detection" tools (to police student use of the tech) and (2) Ondrea Wolf, Director of QEP and Assessment at El Paso Community College, about data privacy concerns when students are encouraged to use ChatGPT and thus send OpenAI their data.
There's plenty more hype in the article, too, including the claim that "AI" (a term that I've said elsewhere is just marketing, but here is being used as if it legitimately describes some set of technologies) has been around since the 1950s. The article also claims, with no hedging, that some scholars doing peer review "now use GenAI tools to get reader-friendly synopses of confusing chunks of text, or to identify existing research that’s been overlooked." It is an unfortunate fact that people are using text synthesis machines in the production of peer reviews and as search engines --- an abrogation of scholarly duty and a misapprehension of what the tech can do, respectively. And there's the typical conflation of lots of different technologies with "AI" or "GenAI", including a quote about text-to-speech and speech-to-text being beneficial DEI tools. Sure. But they aren't "AI" or even "GenAI". It's galling to see the benefits of specific (well-scoped) technologies being used to color the presentation of the "everything machines".
The central claim of the tech companies selling LLMs is that any work that people do that results in text artifacts is just "text in-text out" and can therefore be replaced by their synthetic text-extruding machines. The best response to that claim is not "oh no, we can't keep up" but to take pride in one's work---the totality of one's work---and push back: Characterizing a task as a mapping from inputs to outputs might be appropriate for machine learning. But if we adopt that as a way to understand what people do, we are complicit in the dehumanization so rampant in the production and sales of so-called "AI". I expect better from people involved in higher education and people reporting on higher education, and I hope you do, too.
For more on AI hype and higher ed, check out Episode 26 and our wonderful conversation with Chris Gilliard.