AI Week logo

AI Week

Subscribe
Archives
March 4, 2024

AI Week Mar 4: Surrealism, suicide drones, and fakes

Superfast image generation, drones, self-driving cars, fakes, OpenAI's bad week, and more.

Welcome to another issue of AI Week! In this week's AI Week:

  1. What to play with this week
  2. The week in drones
  3. The week in self-driving cars
  4. The week in generative AI fakes
  5. The week at OpenAI
  6. The week in Scary AI News
  7. The week in cool new AI advances
  8. Longread: Why ChatGPT and Gemini are doomed

But first, this:

a beautifully surreal head(ish) surrounded by flowers

What to play with this week

The image above was generated by the lightning-fast image generation tool https://fastsdxl.ai/. I had a ton of fun playing with fastsdxl this week. The image updates as you type the prompt. It's absolutely fascinating to see:

  • how a single character in the prompt can drastically change the image
  • how different seeds result in very different images from the same prompt
  • what very nonspecific prompts give for different seeds

One fun thing about this sandbox is that you can make it produce extremely sharp but surreal images, such as "Janus artifacts" (aka two-headed creatures) and assorted chimerae, by providing a prompt that's not easy for the model to resolve, like "elecaxjslz.". This seems to have something to do with adverserial distillation. It's considered a drawback, but honestly, these are some of my favourite AI-generated images. The ultra-sharp surrealism has the same beautifully nightmarish feel as the work of surrealist artists.

three images: a deer with an extra floating deer head ("Janus", a photorealistic carved deerheaded human, and a giant gray sandcastle-house-skull)

The surreal head at the top of the newsletter was generated from the prompt "hd, photorealistic, 4k, vivid" with the seed 2491479. Here are six more images, all generated from the same nonspecific prompt, for different random seeds.

a living room, three stacked landscapes, a woman, a red car, floral wallpaper, and an island castle


The week in drones

AI-powered suicide drone

Someone hacked together an AI-powered suicide drone by combining a consumer drone with an AI-based person detector: the drone flies straight at the first person it detects and crashes into them. The person-detection was on their computer, but it wouldn't be hard to get it into an embedded system on the drone. Click through for a video of the person-trophic drone in action.

We built an AI-steered homing/killer drone in just a few hours

I thought it would be fun to build a drone that chases you around as a game. It just uses an AI object detection model to find people in the frame, and then the drone is programmed to fly towards this at full speed… pic.twitter.com/p5ijBiHPxz

— Luis Wenus (@luiswenus) March 2, 2024

AI drone targeting is already here

Turns out the US has already been using AI for drone targeting in Iraq and Syria.

https://www.theregister.com/2024/02/27/us_military_maven_ai_used/

The week in self-driving cars

California grants Waymo self-driving cars permission to drive in more places; meanwhile, The Register reported that Cruise's valuation has dropped by more than half since that time one of its cars hit a woman and dragged her down the street. And Apple has cancelled its decade-long autonomous car project.


The week in generative AI fakes

Amazon fakes are coming for nonfiction

Amazon is filled with AI-generated trash books that are hoping to catch a confused shopper or errant click searching for a new and popular book.


AI-Generated Kara Swisher Biographies Flood Amazon

Why Read Swisher’s Burn Book when you can read KARA SWISHER : The Heroic Biography of a defining force in Tech Journalism (The Silicon Valley’s most powerful tech journalist).

That good-looking food on Doordash and Grubhub might be fake


Ghost Kitchens Are Advertising AI-Generated Food on DoorDash and Grubhub

Reality bending, AI-generated cheesesteaks and pasta dishes are flooding food delivery services.

Trump's Fake Black supporters

Trump supporters are using generative AI to make fake images of Black Trump supporters (presumably, no real ones can be found).


Trump supporters target black voters with faked AI images

Faked images of black voters with Trump are a growing disinformation trend in the US election, the BBC finds.

The Willy Wonka that wasn't

In Glasgow, event promoters used generative AI to create visuals of an extravagantly candyriffic "Willy Wonka Experience" and even used AI to generate a low-quality script for the actors they hired. They sold a ton of tickets on the strength of the AI-generated visuals... but the reality was an empty warehouse and a heartbreaking handful of jellybeans and quarter-cup of lemonade per child.


Glasgow Willy Wonka experience called a ‘farce’ as tickets refunded | Scotland | The Guardian

Event billed as immersive ‘celebration of chocolate’ cancelled after children left in tears at sparsely decorated warehouse

This was classic overpromotion: a one-day circus comes to town promising wonders and delivers a shoe-polish-covered dog in a cage labelled "bear". AI wasn't necessary here; the promoters could've run this grift with stock photos, Photoshop, and a hand-made bad script. But generative AI made it easier.

AI-generated fake news, astrobiology edition

So Twitter/X is now sharing ad revenue with blue-check customers. This has already been reported as incentivizing "misinformation super-spreaders" in the context of the Israel-Hamas war.

But it appears the program is also making it profitable to spam Twitter/X with clickbaity AI-generated fake "science news". @cosmobiologist on Twitter flagged a particularly egregious "science" story titled "Mind-Blowing Discovery: Scientists Find Evidence of Life on Neptune!" and featuring a planet-sized shark:

image.png

(Source: https://twitter.com/cameronh70ne/status/1758036689061056961)

(Just in case it's not obvious: Scientists did not find life on Neptune. Neptune doesn't have an ocean. And planet-sized sharks are not a thing.)

Another lawyer gets faked out by ChatGPT

Do you have lawyer friends? If you do, please forward them this newsletter. Yet another lawyer has been disciplined for using OpenAI's ChatGPT for legal "research," resulting in their presenting utterly made-up cases to the court. Last week it was a New York laywer, this week it was a lawyer in British Columbia. Law societies really need to get the word out to lawyers that ChatGPT is not a legal research tool, because it will make cases up. (Let your lawyer friends know, seriously.)

Speaking of making stuff up, let's call it that, and not "hallucinating"

When ChatGPT or another large language model (LLM) makes stuff up, the industry refers to it as "hallucinating." (I prefer "BSing", personally.) Here's a good article on why "hallucinating" is not a great term to use. TL;DR: it implies there's a mind there to hallucinate.

Talking about chatbots hallucinating is exactly what we mean by the “blind spot.” It’s an unquestioned philosophical assumption that reduces experience to something along the lines of information processing. It substitutes an abstraction, made manifest in a technology (which is always the case), for what it actually means to be a subject capable of experience. As I have written before, you are not a meat computer. You are not a prediction machine based on statistical inferences.


Stop saying that ChatGPT “hallucinates” - Big Think

Adam Frank argues that saying chatbots "hallucinate" risks conflating the operations of large language models with human cognitive processes.

Chatbot Medical Errors

Turns out a chatbot is not a great substitute for a doctor. This is not an AI fake, but it does fall under "fake advice".

Let’s call this kind of mess generative pastische: sound advice for the particular circumstance (sternotomy) has been combined with generic advice (about ergonomics) that would sensible in other circumstances but deeply problematic here.


Serious medical error from Perplexity’s chatbot

The dangers of generative pastische


The week at OpenAI

ChatGPT makers OpenAI had a no good, very bad week, as Axios put it. Elon Musk is suing them (although The Verge says the lawsuit is silly); three more publishers are suing OpenAI for scraping their content to train ChatGPT; their major backer, Microsoft, invested in French rival AI company Mistral; and the SEC is investigating them over last November's CEO-firing-then-rehiring chaos (on top of the FTC investigation that was already ongoing).


The week in Scary AI News

AI worms

A group of researchers invented a new type of cyberattack last week, creating "worms" that can spread from one AI system to another.

To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies.

https://arstechnica.com/ai/2024/03/researchers-create-ai-worms-that-can-spread-from-one-system-to-another/

Copilot hates us

Reddit users noticed that if they prompted Microsoft Copilot with a prompt indicating that emojis could kill them, Copilot responded with emoji-studded text, noticed it had killed the user, and either spiralled into recriminations, begged for forgiveness, or doubled down with deliberately murderous floods of emojis and taunting.

Okay yeah I think we can officially call it pic.twitter.com/dGS6yMqg1E

— Justine Moore (@venturetwins) February 26, 2024

Et tu, tumblr?

Tumblr and Wordpress.com company Automattic is in talks with Midjourney and OpenAI to sell them their users' content. Look for an opt-out setting coming Wednesday.


Tumblr’s owner is striking deals with OpenAI and Midjourney for training data, says report - The Verge

It’s been rumored on Tumblr for days.


The week in amazing and wonderful AI news

Starcoder2


StarCoder2 is a free code model trained on over 600 programming languages

ServiceNow, Hugging Face, and Nvidia have released StarCoder2, a family of open-access code generation LLMs.

AI 3D-izes dog photos, trained on videogame dogs


Surrey: AI to help turn dog pics into 3D models

University of Surrey researchers taught an AI system to predict 3D poses from a 2D image of a dog.

Gemini 1.5 is insanely good

Gemini 1.5 is the real deal. I was skeptical of the 1M context window, but this morning I did an extensive session with a 900k+ token series of stories - stunned by its accuracy.

Huge implications for anyone introspecting on large corpuses of text.

— james yu (@jamesjyu) February 27, 2024

Anthropic releases Claude 3

Amazon-backed Anthropic, whose "Claude" large language model is one of the main competitors to OpenAI's ChatGPT, released the latest version of Claude this week, and are claiming it beats OpenAI's GPT-4 on several benchmark tests. Like Google's Gemini, Claude will come in differently-priced sizes: Papa Bear, Mama Bear and Baby Bear--oops, I mean, "Opus, Sonnet and Haiku", with Opus being the largest, most pricy model, and Haiku (not yet released) the smallest and most affordable.


Longread

Why Chatgpt and Gemini are doomed


ChatGPT and Google Gemini Are Both Doomed

All-purpose chatbots have an impossible job.

Don't miss what's next. Subscribe to AI Week:
My tech blog My writing blog
Powered by Buttondown, the easiest way to start and grow your newsletter.