Thoughts and concerns about ChatGPT and chatbots
Attention to chatbot tech has taken off in recent months, following the widespread release of ChatGPT in late November. The software has shown itself capable of producing written work at a level well above previous chatbot efforts, spurring a rush by various companies (including Google and Microsoft) to incorporate similar AI tech into their own software.
However, ChatGPT has also raised concerns about how such technology will be used. Some schools have already moved to ban chatbots, out of fear that students will use them as a shortcut for writing papers. Some scientific publications, including the publisher of “Nature,” are also placing limits or outright bans on chatbot use.
There are also concerns about how chatbots will affect writing jobs. It doesn’t help that the tech produces mediocre results compared to a human writer; some chatbot-produced articles have also contained numerous factual errors, raising concerns about amplifying misinformation.
My concerns about chatbots
Case in point: CNET, a long-running tech site, has come under heavy fire for using chatbot tech to publish multiple articles (credited under a generic “CNET Money Staff” byline) without informing readers. Said articles also contained numerous errors and instances of plagiarism, yet were green-lit by CNET anyway. CNET failed to clearly notify readers about much of this until Futurism, The Verge, and other websites started investigating. Some have placed the blame on Red Ventures, CNET’s owner since 2020. (Besides CNET, Red Ventures also owns ZDNET, Healthline, Lonely Planet, and Bankrate.) For me, all of this makes CNET hard to trust for the foreseeable future.
CNET’s troubles with chatbots haven’t slowed other companies’ interest in AI-generated content; BuzzFeed says it plans to use such software to generate its quizzes. This points to one problem with using chatbot technology to replace entry-level writers: said tech doesn’t have to be good, just “good enough,” especially if one is more concerned with raking in Google ad revenue and visitor traffic than with becoming the next New York Times.
Another concern is that ChatGPT and the like will make it easier to spread misinformation, where “good enough” is just fine. Given how much misinformation has already spread about COVID-19 vaccines, masks, etc., it’s easy to imagine chatbot tech being a boon for more of it: “According to Totally Legit Medical Site, an article by Anne Nonymous says masks and COVID-19 vaccines cause cancer!” On a related note, previous chatbots have already learned to write racist content, so I imagine bot-written articles about Black Lives Matter or the like will be “fun.” (*Sigh*.)
I’m not optimistic
I’d like to assume there are some positive uses for ChatGPT and the like. (If you know of any, please list them in the comments.) However, thanks to CNET and a few other examples, the main things that now come to mind are:
Penny-pinching companies figuring they can automate basic writing, avoid paying a bunch of entry-level writers a “princely sum” of a nickel a word, and just have a few editors polish the work until it looks presentable, or at least un-bot-like enough to mollify or fool Google’s SEO rules and any wary readers. (“No way it’s a bot… would a chatbot jokingly reference a film like ‘Dracula 2000’?”) Never mind the increased workload that would place on editors.
A boom in politically far-right articles generated by chatbot tech. Not that the Breitbarts of the online world had any problems generating content before, but having chatbots will certainly make cranking out material a lot easier.
I also wonder if chatbots will make it harder to get started in some writing careers, if companies’ use of them cuts into entry-level jobs or internships.
As for this blog, I don’t plan on using chatbot tech to create any material. One of the reasons I started this blog was to have an outlet for my own writing; using a chatbot would undermine that. Also, quality over quantity still matters.
Image by Alexandra_Koch from Pixabay