AI Week for Monday, November 12th
Hi! This is Natalka with the very first AI Week. Thanks for checking my newsletter out! Please send me your feedback, either here or anywhere else we're in contact.
Last week in interesting AI news:
- AI-generated war images getting passed off as real
- The FTC weighs in on AI
- ChatGPT for Xbox? Generative AI is coming to Xbox games & devs
1. "Adobe is selling fake AI images of the war in Israel-Gaza".
That headline is from https://www.crikey.com.au/2023/11/01/israel-gaza-adobe-artificial-intelligence-images-fake-news/.
Adobe Stock provides a stock-image marketplace. Sellers can upload AI-generated stock images as long as the use of AI is disclosed. The issue here is that while Adobe clearly marks AI-generated images as "generated with AI", those who buy the stock images are under no obligation to pass that disclosure along. The result is that a number of smaller news websites have used photorealistic AI-generated images of a missile attack that never occurred, without marking them as AI-generated fictions.
The lesson here is that we can't depend on media or users to voluntarily identify AI-generated images. One thing that would help protect against losing this information by mistake would be if stock marketplaces like Adobe embedded the "generated with AI" tag in the image file itself, the way EXIF metadata tags are incorporated into the file. Of course, that information can be edited, but having it attached to the image could help reduce the confusion caused by an editor's mistaking an AI image for a real one.
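For the technically curious, here's a minimal sketch of that idea in Python, using the third-party piexif library. The choice of the ImageDescription field (and everything else here) is my own illustration, not any official provenance standard:

```python
# Sketch: stamp "generated with AI" into a JPEG's EXIF metadata so the
# disclosure travels with the file. Uses the piexif library; writing it to
# the ImageDescription tag is an illustrative choice, not a standard.
import piexif

def tag_as_ai_generated(path: str) -> None:
    try:
        exif_dict = piexif.load(path)  # keep any existing EXIF data
    except Exception:
        exif_dict = {"0th": {}, "Exif": {}, "GPS": {}, "1st": {}, "thumbnail": None}
    # Write the disclosure into the ImageDescription field (EXIF tag 270).
    exif_dict["0th"][piexif.ImageIFD.ImageDescription] = b"generated with AI"
    piexif.insert(piexif.dump(exif_dict), path)  # rewrite the file in place

tag_as_ai_generated("stock_photo.jpg")
```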
2. FTC press release about their position on generative AI
The US Copyright Office has invited comments on "copyright law and policy issues raised by artificial intelligence systems." The FTC has weighed in with a comment of their own.
In Comment Submitted to U.S. Copyright Office, FTC Raises AI-related Competition and Consumer Protection Issues, Stressing That It Will Use Its Authority to Protect Competition and Consumers in AI Markets | Federal Trade Commission
What does the FTC have to do with AI? In their press release, they express concern that training AI on a creator's work, expression, or likeness may be a kind of unfair competition, or an unfair or deceptive practice. They also express some concerns about reputational damage to creators and devaluation of their work, and several concerns about potential harms to consumers: deception, privacy violations, automated discrimination/bias, and scams.
These concerns seem very well-founded to me. Here are a couple of recent examples of exactly that kind of thing. Regarding consumer deception and reputational damage to creators: consider the fake AI-generated books on Amazon, which slap a real author's name onto an AI-generated book-length text. Regarding privacy violations: the FTC press release notes that the agency has "been using its existing legal authorities to take action against illegal practices involving AI, citing consumer protection examples including allegations that Amazon and Ring used highly private data they collected to train their algorithms while violating consumer privacy."
As Publishers Lunch noted in its reporting on this press release (While Copyright Office Considers, FTC Is Already Concerned that Generative AI Is Unfair Competition for Creators - subscription required), the FTC also took the time to state clearly that it has both the authority and the intention to take action against "illegal practices involving AI," as in the example above.
By the way, I asked Claude, Anthropic's beta AI, to summarize the FTC's press release for me:
Here is a summary of the key points from the press release:
- The Federal Trade Commission (FTC) submitted comments to the U.S. Copyright Office regarding artificial intelligence (AI) and its implications for competition and consumer protection.
- The FTC expressed concern about potential harms to consumers, workers, and small businesses from how companies are developing and releasing AI tools.
- The FTC highlighted risks like privacy violations, discrimination, deception, and scams that could arise from irresponsible AI use.
- The FTC noted that AI copyright issues intersect with competition and consumer protection concerns. For example, creators may face unfair competition, and consumers may be deceived about AI-generated content.
- The FTC stated it can take action against illegal AI practices under existing laws. There is no "AI exemption" from current laws.
- As an example, the FTC previously alleged Amazon and Ring used private data to train AI while violating privacy.
- The FTC will monitor AI impacts and vigorously enforce laws to protect competition and consumers from deceptive or unfair AI practices.
- Overall, the FTC emphasized its role in oversight of AI systems and willingness to use its full authority to address harmful AI activities. Consumer protection and fair competition regarding AI are top priorities.
What do you think of this summary?
LLMs like ChatGPT and Claude have an unfortunate tendency to "hallucinate" (i.e. make stuff up), a problem that Anthropic admits applies to Claude too. Fortunately, I don't see anything in this summary that's flat-out wrong. However, I didn't like that it omitted the FTC's concerns about reputational damage to creators and devaluation of their work. I'll quote from the FTC's press release to counterbalance that omission:
"Conduct that may violate the copyright laws . . . may also constitute an unfair method of competition or an unfair or deceptive practice, especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works, reveals private information, or otherwise causes substantial injury to consumers[.]"
3. Generative AI coming to Xbox games & devs
Xbox and Inworld AI partner to empower game creators with the potential of Generative AI
Partnering with Inworld is an important step in our journey to empower game developers.
The press release announces a partnership between Xbox and a generative AI company I've not personally heard of, Inworld AI.
Sidebar: What is "generative AI" again?
"Generative AI" refers to the whole class of machine-learning applications that, well, generate outputs. ChatGPT, which generates text, is generative AI. Stable Diffusion, which generates images given a prompt, is generative AI.
There seem to be two totally separate goals for this partnership: (1) to allow Xbox developers to work faster, e.g. by giving the "AI co-pilot" a prompt and getting back a full script, a dialogue tree, a quest, etc.; and (2) to incorporate AI characters into the games themselves, so that players can interact with an AI-backed non-player character (NPC) the way they can interact with ChatGPT.
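To make goal #2 concrete: an AI-backed NPC is essentially an LLM wrapped in a persona prompt. I haven't used Inworld's SDK, so here's a hedged sketch of the general shape using OpenAI's chat API as a stand-in; the character, function names, and model choice are all my own invention:

```python
# Toy sketch of an LLM-backed NPC (goal #2), using OpenAI's chat API as a
# stand-in -- Inworld's actual SDK differs. Persona and names are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = ("You are Mara, a terse blacksmith in a fantasy RPG. "
           "Stay in character; never mention being an AI.")

def npc_reply(player_line: str, history: list) -> str:
    # Pin the character with a system prompt, replay the conversation so far,
    # then append the player's newest line.
    messages = [{"role": "system", "content": PERSONA}]
    messages += history
    messages.append({"role": "user", "content": player_line})
    response = client.chat.completions.create(model="gpt-3.5-turbo",
                                              messages=messages)
    reply = response.choices[0].message.content
    # Keep the exchange in history so the NPC "remembers" the conversation.
    history += [{"role": "user", "content": player_line},
                {"role": "assistant", "content": reply}]
    return reply

history = []
print(npc_reply("Can you mend a broken sword?", history))
```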
Inworld AI has built its business around goal #2, so goal #1 might be coming from somewhere else. The basic idea of using generative AI to speed up dev work reminds me of another Microsoft tool, GitHub Copilot. Microsoft bought GitHub about five years ago; GitHub is a free-to-use code-hosting service, and GitHub Copilot is a paid feature that's kind of like ChatGPT for code.
There were plenty of negative reactions to this announcement from gamers and creatives on Twitter, focusing on (1) concerns for dev jobs, especially given the recent rounds of layoffs; (2) concerns for game quality; (3) dismay at the prospect of losing the emotional connection that gamers feel for the human-written and human-voiced characters in games.
Backgrounder: "Game AI"
As an aside, the article on developer.microsoft.com opens with a claim that Ms. Pac-Man used AI:
I remember when Ms. Pac-Man was chased by her ghostly enemies, Inky, Blinky, Pinky, and Clyde. Though that was not the first AI in games, and it was a simple rule-based form of AI, those ghosts showed the world the endless fun that could be born of a reactive algorithm. Since those early days in the arcade, game creators have incorporated game AI to build living worlds, dynamic stories, and complex characters.
This felt really bizarre to me, coming from a non-gaming perspective. In academia and in the general discourse, "AI" generally refers to some kind of machine learning algorithm, such as large language models (LLMs). Ms. Pac-Man is... not that. Ms. Pac-Man used a traditional algorithm, i.e., a programmed series of if-then statements, to guide each of the ghosts around the screen. Procedurally generated worlds, stories, and music aren't "AI", either. However, and this is important, they are "game AI". I'll let Wikipedia explain for me:
The term "game AI" is used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general, and so video game AI may often not constitute "true AI" in that such techniques do not necessarily facilitate computer learning or other standard criteria, only constituting "automated computation" or a predetermined and limited set of responses to a predetermined and limited set of inputs.
TL;DR: "game AI" is not the same machine-learning AI that the article's author (Haiyan Zhang, the General Manager of Gaming AI at Xbox) is discussing in the rest of the article.
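To make the distinction concrete, here's a toy sketch of what rule-based "game AI" looks like. This is my own illustration, not Ms. Pac-Man's actual logic: there's no training or learning anywhere, just hand-written if-then rules steering a ghost toward (or away from) the player.

```python
# Toy rule-based "game AI": a ghost that steps toward the player using a
# couple of if-then rules. No machine learning involved.

def ghost_step(ghost, player, scared):
    gx, gy = ghost
    px, py = player
    # Rule 1: if the ghost is "scared", move directly away from the player.
    # Rule 2: otherwise, close the gap on whichever axis is farther off.
    direction = -1 if scared else 1
    if abs(px - gx) > abs(py - gy):
        gx += direction if px > gx else -direction
    else:
        gy += direction if py > gy else -direction
    return (gx, gy)

print(ghost_step((0, 0), (5, 3), scared=False))  # -> (1, 0): one step toward the player
print(ghost_step((0, 0), (5, 3), scared=True))   # -> (-1, 0): one step away
```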