AI week Jan 28th: How the bad guys are using AI
Hi! Thanks for joining me for this week's AI week. This issue is going to highlight some of the ways bad actors are already using AI, and ways they're expected to use it more.
- Bad guys using AI: Cybercrime, Nazis, deepfakes, and the military
- AI application of the week: Scientific fraud detection
- Davos, plus Google layoffs
- Longreads
This week's issue is about a 10-minute read.
How the bad guys are using AI
1. Cybercrime
Futurama: Mugger and Andrew, partners in crime
The UK's National Cyber Security Centre put out a report last week predicting that as cybercriminals adopt AI, we'll see more frequent and more painful cyberattacks of all flavours, including ransomware attacks, phishing, etc.
We're already seeing the start of this: IT security firm Kaspersky surveyed the ways criminals are already using AI on the darkweb. Skip down to the Longreads section for the full article, but as a brief summary, ChatGPT and other LLMs are making phishing and malware easier in several ways:
- ChatGPT is helping less-expert scammers and hackers
- Cybercriminals are selling "jailbreaks" for LLMs like ChatGPT, that is, prompts that will make ChatGPT forget it's supposed to be on the side of the angels
- You can also buy hacked ChatGPT accounts, which give you stolen API access to carry out your attacks
- They're also selling access to "evil ChatGPT" -- LLMs like WormGPT, XXXGPT, or FraudGPT that don't have any guardrails, and are perfectly happy to help you with DDoS attacks, X-rated stuff, or phishing scams
TL;DR: Now that any idiot can write a phishing email with perfect English or launch a ransomware attack, expect a lot more cybercrime.
AI will increase the number and impact of cyberattacks, intel officers say | Ars Technica
Ransomware is likely to be the biggest beneficiary in the next 2 years, UK's GCHQ says.
2. Nazis
Know who else is using "evil ChatGPT"? Nazis.
Founder of Neo-Nazi Group the Base Instructs Followers to Use 'Uncensored' AI
A large language model by the same company previously produced instructions on how to kill someone or carry out ethnic cleansing.
3. Deepfakes
AI fakes are getting really easy to use, and that's a problem. Here are a couple of malevolent deepfakes I ran into this past week alone.
Fake Joe Biden says, skip the NH primary:
Robocall with artificial Joe Biden voice tells Democrats not to vote | Ars Technica
Fake Biden voice urges New Hampshire Democrats to skip tomorrow's primary.
And fake Jennifer Aniston went viral on Reddit. She wants to give you a laptop for only $10!
https://old.reddit.com/r/ChatGPT/comments/19dv1dg/the_scam_youtube_ads_are_getting_better/
Speaking of unauthorized AI-generated imitations of people:
Update: Carlin’s estate is suing over Zombie George
A couple of weeks ago, I wrote about Zombie George Carlin, an AI-voiced comedy set that used George Carlin's name, likeness, and voice. (The set purported to be AI-written, but as I said, it didn't sound like it. Thanks to the lawsuit, we got confirmation that the set was indeed written by humans pretending to be an AI.) George Carlin's daughter was not amused and his estate is suing.
George Carlin estate sues over fake comedy special purportedly generated by AI | AP News
The estate of George Carlin has filed a lawsuit over a fake hourlong comedy special that purportedly uses artificial intelligence to recreate the late standup comic’s style and material.
Relatedly, Iceland is considering laws against deepfaking dead people after their national broadcaster pissed everybody off by making a deepfake New Year's video of a beloved, but unfortunately dead, Icelandic comedian.
4. Military adoption
Whether a nation's military is a "bad actor" depends partly on whether it's being used against you. Regardless of one's view of any particular military, though, it's easy to foresee a ton of possible unintended consequences from military adoption of AI in general.
Still from the opening sequence of Terminator 2
So I was not thrilled, this week, to learn that not only has OpenAI dropped its ban on military use and warfare, but ex-Google CEO Eric Schmidt is working on a project to build military drones that use AI for visual targeting. What could go wrong?
The company has been developing a mass-producible drone that uses artificial intelligence for visual targeting and can function in zero-comms environments created by GPS jamming.
As a bit of context, the excuse here is that Russia can jam Ukraine's drones, so clearly the world needs autonomous killbots.
Eric Schmidt’s Secret Military Project Revealed: Attack Drones.
The former Google CEO has been quietly working on a military startup called White Stork with plans to design “kamikaze” attack drones.
OpenAI abandoned another promise this week as well: Wired reports that the company has walked back a long-standing transparency pledge to make its governing documents, financial statements, and conflict-of-interest rules available for public review. (By the way, there's a nice summary of all the problems OpenAI is currently juggling in Gary Marcus’ Substack.)
Extra uh-oh
This seems like a good time to remember that AI is bad at identifying non-white people. Makers of facial-recognition systems have been working on this problem, but haven't got it licked yet.
AI application of the week: Scientific fraud sleuth
Scientific fraud is a major problem that goes all the way to the top; even researchers at prestigious places like Harvard Medical School have been accused of fraud recently.
Scientific fraud that gets found is sometimes really easy to detect once anybody looks for it: images taken from other papers, images crudely manipulated with copy and paste, data manipulation, and so on. Yes, sometimes actual scientists submit actual papers that use the same image several times, just rotated, resized, and copy-pasted a bit.
Source: Proofig
But this kind of fraud only gets caught when somebody looks. Probably the best-known person-who-looks is Elisabeth Bik, a researcher-turned-scientific-consultant with an uncanny ability to spot image manipulations (check some examples out on her blog). Unfortunately, there's only one Elisabeth Bik, and there are thousands of papers published every year.
Well, now there's an AI for that. The journal Science announced this week that it's going to use Proofig, an AI-assisted tool, to scan submitted papers for duplicate images. The process isn't fully automated, because papers often contain legit duplicate images (e.g., to show detail), so every possible match has to be run past a human (which is good, because the vast majority of papers don't contain fraud).
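Proofig's actual algorithm isn't public, but here's a rough, hypothetical sketch of the general idea behind duplicate-image screening: compute a compact perceptual fingerprint of each figure, flag pairs whose fingerprints are suspiciously close, and hand those pairs to a human. The file names, threshold, and hash scheme below are made up for illustration; real tools also handle rotation, cropping, and partial overlaps, which this toy "average hash" does not.

```python
# Toy sketch of duplicate-image screening (NOT Proofig's actual method).
# Idea: shrink each figure to an 8x8 grayscale thumbnail, threshold at the
# mean brightness to get a 64-bit fingerprint, and flag pairs of figures
# whose fingerprints differ in only a few bits.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Return a 64-bit perceptual 'average hash' of the image at `path`."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits on which two fingerprints disagree."""
    return bin(a ^ b).count("1")

# Hypothetical figure files from a submitted paper.
figures = ["fig1a.png", "fig2c.png", "fig3b.png"]
hashes = {f: average_hash(f) for f in figures}

# Flag suspiciously similar pairs for *human* review. Legit duplicates
# (e.g. a zoomed-in detail panel) are common, so nothing is auto-rejected.
for i, f1 in enumerate(figures):
    for f2 in figures[i + 1:]:
        if hamming(hashes[f1], hashes[f2]) <= 5:
            print(f"Possible duplicate: {f1} vs {f2} -- flag for a reviewer")
```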
Of course, as the Ars Technica article below notes, Proofig can only detect one type of scientific fraud... so it's important for the scientific community to keep checking each other's work.
All Science journals will now do an AI-powered check for image fraud | Ars Technica
It will only catch the most blatant problems, but it's definitely overdue.
Davos: Wary about AI, but also, full steam ahead
Davos was all about the AI awesomeness last year. This year, the tone was more muted. Some quotes:
- OpenAI CEO Sam Altman: ChatGPT is "not good at sort of like a life and death situations."
- Salesforce CEO Marc Benioff: "We don't want to have a Hiroshima moment."
Global Elites Suddenly Starting to Fear AI
The Davos glitterati seem to have changed their tune about AI at this year's World Economic Forum — but they're still firing people for it.
Despite the pessimism, a quarter of the CEOs at Davos said they planned to cut staff and replace them with generative AI.
Some of those efficiency benefits appear likely to come via employee headcount reduction—at least in the short term—with one-quarter of CEOs expecting to reduce headcount by at least 5% in 2024 due to generative AI.
That's from a survey PwC conducted at Davos. (This was #4 on my list of 10 predictions for 2024, by the way.)
Google: already working on those layoffs, but do they actually have a plan?
Google seems determined to be PwC's next case study: they've laid off more than a thousand people since Jan 10th, and CEO Sundar Pichai said to expect more layoffs as they continue their pivot to AI. I believe this started late last year when Google reportedly replaced some of its advertising staff with AI.
Google is also letting go of thousands of contractors who were training its search algorithm, even though a study released earlier this month found that Google's search results have gotten worse over the last year (specifically in product ratings, where the results have been taken over by SEO spam).
Meanwhile, Google engineer Diane Theriault, who at press time still had a job at Google, said in a widely circulated LinkedIn post that Google has no vision and no particular direction for AI. Quote:
Right now, all of these boring, glassy-eyed leaders are trying to point in a vague direction (AI) while at the same time killing their golden goose. Given that they have no real vision of their own, they really need their subordinates to come up with cool stuff for them.
Google Insider Says Bosses Have No Idea What They're Doing With AI
A software engineer at Google called executives out as being "profoundly boring and glassy-eyed" while trying to pivot to AI.
Longreads
1. Darkweb deepdive
This Kaspersky story about all the ways the darkweb is using ChatGPT is really interesting and very readable:
https://dfi.kaspersky.com/blog/ai-in-darknet
2. How to make autonomous driving safer
How can you prove that a self-driving car is safe? Quantify how much it will screw up.
“Even though we don’t know exactly how the neural network does what it does,” Mitra said, they showed that it’s still possible to prove numerically that the uncertainty of a neural network’s output lies within certain bounds. And, if that’s the case, then the system will be safe. “We can then provide a statistical guarantee as to whether (and to what degree) a given neural network will actually meet those bounds.”
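To make that flavor of guarantee concrete, here's a toy, invented example of what a statistical bound on a black-box perception network can look like: test the network on held-out scenarios and use a one-sided Hoeffding bound to lower-bound, with high confidence, how often its output stays within an error tolerance the downstream controller can handle. This is just an illustration of the kind of claim the quote describes, not the method from the research; the model, numbers, and tolerance below are all stand-ins.

```python
# Toy illustration of a statistical guarantee on a black-box perception model
# (NOT the actual method from the research described above).
# We can't inspect the network's internals, but we can test it on held-out
# scenarios and bound, with high confidence, how often its error stays
# within a tolerance the downstream controller is assumed to handle safely.
import math
import random

def perception_error(scenario: int) -> float:
    """Hypothetical stand-in: run the perception network, return its error in meters."""
    return abs(random.gauss(0.0, 0.3))

TOLERANCE = 1.0   # controller assumed safe if perception error <= 1 m
DELTA = 1e-3      # allow a 0.1% chance that the guarantee itself is wrong
N = 10_000        # number of held-out test scenarios

hits = sum(perception_error(s) <= TOLERANCE for s in range(N))
p_hat = hits / N

# One-sided Hoeffding bound: with probability >= 1 - DELTA over the test set,
# the true in-tolerance probability is at least p_lower.
p_lower = p_hat - math.sqrt(math.log(1 / DELTA) / (2 * N))

print(f"Empirical in-tolerance rate: {p_hat:.4f}")
print(f"With confidence {1 - DELTA:.3f}, the true rate is at least {p_lower:.4f}")
```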