Imperfect Reminders about Generative AI (Links Fixed, sorry)
This month's newsletter comes in two parts.
PART A: I did a lil thing
Often, in lulls of creative inspiration or paid freelance work, I try to fill my days with learning a new thing by making a new thing. This month I have been learning Svelte, a web framework, by creating an imperfect reminder app for tasks that are only sort of time-sensitive. By imperfect, I mean that it doesn't have notifications or alert sounds, and it doesn't accurately show how much time is left on each timer. What it can do is loop your reminders, so it's very good for things that happen repeatedly but don't need to be done exactly on time. Currently, I have it set as my homepage and use it to remind me to water different plants, change my bedsheets and cancel free trials. I'm not saying it's the best thing I ever made, but I learned a lot. Give it a go, and feel free to reach out if you have any problems.
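If you're curious how the looping works, here's a rough sketch of the core idea in Svelte. This is a simplified illustration, not the app's actual code, and the reminder names and intervals are made up:

    <script>
      // Each reminder has an interval in days; marking it done simply
      // pushes the due date forward by that interval, so it loops forever.
      let reminders = [
        { name: 'Water the monstera', intervalDays: 7, due: Date.now() },
        { name: 'Change bedsheets', intervalDays: 14, due: Date.now() },
        { name: 'Cancel free trial', intervalDays: 30, due: Date.now() },
      ];

      function markDone(reminder) {
        reminder.due = Date.now() + reminder.intervalDays * 24 * 60 * 60 * 1000;
        reminders = reminders; // reassignment tells Svelte to re-render
      }
    </script>

    {#each reminders as reminder}
      <p>
        {reminder.name}
        {#if reminder.due <= Date.now()}
          <button on:click={() => markDone(reminder)}>Done</button>
        {:else}
          (not due yet)
        {/if}
      </p>
    {/each}

Because the template only re-checks Date.now() when something changes, a reminder won't pop up the exact moment it's due, which is very much in the spirit of 'imperfect'.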
PART B: My concerns with Generative AI
In the last few months, there has been a lot of talk about AI and generative art. Namely, the internet has been much hyped over DALL-E 2's impressive image generation, and Twitter has been awash with DALL-E Mini's more modest images (an open-source project looking to replicate OpenAI's DALL-E model). When I tried out OpenAI's GPT-3, I was blown away by the kind of results it could create, although not always perfect and sometimes comically bad, and it seems that what GPT-3 has done for generative text, DALL-E 2 is doing for images.
Before going into some potential impacts of generative media, I want to quickly mention one important thing about how these AI models are made. They are trained on us: our writing, our drawing, our photos, our data. Largely, it's data we have chosen to give up for free, as with Creative Commons licences. A great example of this is how millions of photos uploaded to Flickr have been used to train facial recognition software (see this film). I personally feel that it's OK to use this data to train things; the internet should be a free and wild place, where images and text can be remixed and reinvented. What I have an issue with is companies using this free content to create things they sell back to us. In general, it seems most people agree that it is bad that Facebook takes your 'free' data and uses it to sell adverts. I believe we should apply the same logic to these AI companies, which take our heartfelt creative outputs, wholesome family photos and abstract Wikipedia articles and create for-profit tools that are most likely going to negatively impact our lives.
OpenAI is one of these companies, and DALL-E 2 is one of these tools. Not only is it not open source (publicly accessible: anyone can see, modify and distribute the code as they see fit) but, as Alex J. Champandard sees it, the research paper OpenAI published to legitimise the science behind their model 'is not research it's advertising', as it doesn't include the basic information needed to reproduce the work, which is the standard in the computer science community. So why doesn't OpenAI make all their tools open source?
In their Risks and Limitations paper, they say, 'It is harder to prevent an open-source model from being used for harmful purposes than one that is only exposed through a controlled interface.' Which raises the question: "If they think the thing they've invented is so dangerous that they're terrified of giving unfettered public access to it, why the fuck did they invent it?" And OpenAI are right to be terrified about the impact generative AI can, and most likely will, have.
I just started reading Peter Pomerantsev's 'This Is Not Propaganda' (only a chapter in, but I highly recommend it), and in it, he lays out how misinformation and trolling propagate online. He tells of the Russian 'Internet Research Agency' and how the people working there are instructed to post comments on local newspapers, write middle-class mystic-healing blogs or tweet at popular accounts, all laced with specific messages or sentiments handed down from management. All of which is to say: even though misinformation and propaganda online are pretty bad right now, there is still a human limitation. Now imagine troll farms using the same tool, OpenAI's GPT-3, that allowed me to generate a small dictionary of made-up words. And they won't just be generating text; with the next wave of generative AI tools, they'll be creating images to illustrate their talking points and videos to flood YouTube on any given topic. I don't think it's hyperbolic to say that right now, or at least within a few years, one person could give their generative AI tool a prompt such as "Tweets explaining why 'dinosaurs never existed'" and have 100,000 convincing and unique tweets ready to post. No human writers needed. Even if you don't believe those tweets or think they are obviously fake, we will all be censored by the noise they create.
This is obviously a very bad scenario for all of us, but on a less political level, a noticeable effect of all these generative tools will be the vast amount of shit content that invades our digital spaces. I'm talking completely AI-generated websites that don't answer the question you googled, vast numbers of creepy AI-generated kids' videos on YouTube, and many, many news articles about 'cool' art that was made with AI, most of it only labelled 'cool' or 'interesting' because it was made with AI. Kinda like a sculpture being seen as interesting because it's made with excrement, even though the sculpture's form and impact are ineffectual and a bit shit. Not to mention the odd way that a lot of AI-generated images need written language to prompt their creation, and even more written language to explain their significance. (As someone who has worked, and will continue to work, with generative AI stuff, I include myself here.)
One possible solution that came to my non-computer-science mind, for the multitude of problems around AI-generated content's creation and use (I haven't even mentioned the massive issue of bias in these systems), is to open source these tools. This may seem counterintuitive, but by encouraging the open sourcing of these tools, others can more easily create tools and systems that can spot and call out AI-generated content, or spot biases. However, I am sure there are differing opinions on this subject.
So, in conclusion, I am left wondering: is this technology going to make the world better, or is it just 'cool', like flying cars and going to Mars? And is the fact it's 'cool' still reason enough to pursue it?
That's all from me this month, thanks for reading and I hope you have a great week,
Fred
-
My Website: https://www.fredwordie.com
My Book: https://www.bigdatagirl.com
My VC: https://ventually.xyz