Early this week, I (CJW) was given access to OpenAI’s GPT-3 beta. I half-expected my application to fall through the cracks, thinking they’d be more interested in technical applications than the sort of stupid experiments I was likely to conduct, but either they’re letting any old chump get in, or I managed to wow them with my brief application spiel.
My pitch was that I wanted to explore GPT-3’s potential use as the next evolution of the Cut-Up Technique, invented by Brion Gysin and popularised by William S. Burroughs. I’ve experimented with the Cut-Up Technique (CUT) plenty before – both cutting within a piece of work to see what new insights might reveal themselves through the arbitrary juxtaposition of words and phrases, and as a tool for combining two vastly different texts. CUT can be fun, and it can also be revealing and confronting, like the time it uncovered the intention behind a very personal piece of writing even though said intention was never on the page.
So that’s what I came to GPT-3 hoping for, and while it’s technically incredibly impressive, it’s also… kinda boring.
Now, maybe this shouldn’t come as a surprise. GPT-3 is a tool coming out of Silicon Valley, after all. They’ll be looking at it as a way of automating various text functions, and it’s going to excel in those areas. You want to summarise or simplify a block of text? Easy. You want to generate ideas based on topics that have been written about extensively already? It’ll do that. Answer basic questions with basic facts? Yep. Sports reports, just-the-facts news stories, etc., will all be utterly simple with GPT-3, but getting it to do interesting things with text (dare I say, prose) will require a lot of work, or an entirely separate specialised version of the engine.
Obviously, this is the current beta GPT-3 I’m talking about. They’re continuing to work on it, and they plan to continually add to its training data (which currently cuts off in Oct 2019), so this might change in the future. But for me, as a fiction writer, what I really want to see is an ability to dump far bigger chunks of example text into the engine (or create a personal set of training data that takes precedence over the existing one) so it can better grasp the style of prose I want to see it output. Being able to load it up with the complete works of a particular author and prod it to create something in their style would be an interesting experiment and a whole lot of fun. My dream is being able to put my entire back catalogue of published work into it and see where it thinks my prose will go next, but for now that’s exactly that – a dream.
The other issue that currently holds back GPT-3 use for creative purposes is the frankly puritanical approach to moderation. Innocuous words like “high” can trigger a warning, as can insults like “bitch”, and even vague, unspecific talk about violence and death. What good is a text engine that can’t talk about death? OpenAI is affiliated with Microsoft (apparently. I was sure it was a Google subsidiary…), so maybe they got scared by Microsoft’s chatbot that went super racist as soon as it was exposed to the outside world, but discouraging users in this experimental phase from, ummm, experimenting with all sorts of text inputs and outputs is only going to leave the door open in the future for these sorts of outcomes. How can they pre-empt another racist takeover of their AI systems if they don’t want us to (inadvertently) teach the engine when talking about violence and death is suitable within context and when it’s (for instance) being used as a tool to silence the outspoken and marginalised?
Anyway, below are some experiments I’ve been running. I’ll point out the ones where something in the text was flagged, what I think was interesting or boring about the input, and more. I wanted to do a collaborative fiction experiment with it, but my brain just hasn’t had that sort of bandwidth this past week. Look out for more experiments from me in the future though, I’m sure.
OpenAI offers a bunch of examples that you can use to play around with GPT-3 (Chat, Grammar Correction, Summarise for a 2nd Grader, etc), and these can be interesting templates/prompts on their own, and also give you an idea of how to “talk” to the engine. One of them is an analogy creator.
With the examples below, the bold text is the prompt, and the un-bolded text is the response.
He seemed to float a few inches above the ground, in that: he seemed to be in a world of his own.
A specialized organism at once torpid and predatory in that: it is a symbol of the world.
Here you see the analogy example from OpenAI. To get a new completed analogy, you just start one beneath that, and the engine is smart enough to take that single example and extrapolate what you’re trying to achieve.
Here I dropped in some text from Burroughs’ The Western Lands, which prompted GPT-3 to call the world a torpid and predatory organism… I’ve got to say, that’s not a half-bad analogy.
This is one I could imagine being of limited (but genuine) use in fiction writing.
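(For the technically curious: here’s roughly what that same one-shot analogy trick looks like if you go through the API instead of the playground. This is a minimal sketch, assuming the beta-era openai Python library – the engine name and sampling settings are my own guesses, not anything OpenAI prescribes.)

```python
import openai  # the beta-era openai-python client

openai.api_key = "YOUR_API_KEY"  # placeholder

# One completed analogy as the example, then the opening of a new one.
# GPT-3 extrapolates the pattern from that single shot.
prompt = (
    "He seemed to float a few inches above the ground, in that: "
    "he seemed to be in a world of his own.\n"
    "A specialized organism at once torpid and predatory in that:"
)

response = openai.Completion.create(
    engine="davinci",   # base beta engine; my assumption
    prompt=prompt,
    max_tokens=32,
    temperature=0.7,    # some randomness, so reruns give new analogies
    stop=["\n"],        # halt at the end of the completed analogy
)

print(response["choices"][0]["text"])
```

The whole trick is in the prompt: one finished analogy, then the start of a new one, and the engine fills in the rest.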
I can’t remember where I found this prompt – something about creating instructions. I decided to see if GPT-3 knew anything about summoning demons.
Follow these steps to summon the demon Abaddon:
1. Prepare an altar.
2. Burn a poppet of red thread.
3. Pour red ink onto the poppet.
4. Drop live black flies onto the ink.
5. Repeat the incantation once after each hour until dawn.
You can see here that it only needs one instruction and the number 2 listed below to know what it needs to do. You’ll also see that there are enough occult instructions out there on the internet for GPT-3 to do a half-decent job here of coming up with a summoning ritual. The most interesting part is that the steps are not entirely separate – 2 leads into 3, and 4 references back to 3. The only disappointment is that number 5 mentions an incantation but doesn’t give the incantation itself.
Back on my Burroughs bullshit. One of the examples in the Introductory documentation is “As Descartes said, I think therefore I am.” It knows enough to a) know who Descartes is, and b) accurately finish a famous quote from a historical figure. I thought I would see what words it tried to put into Burroughs’ mouth.
As William S. Burroughs said: I am forced to the appalling conclusion that I would never have become a writer but for Joan’s death. It was the only meaningful event in my life that happened while I was conscious.
I have been to the places where she lived, the apartment where she died, the hospital where she was born, the church she
If you know only one thing about William S. Burroughs you know he was a heroin addict. If you know only two things about him, you know he shot and killed his wife. It’s something he wrote a lot about throughout his life – attempting to grapple with the guilt of his actions (though plenty more could be said about how he avoided facing justice for the killing).
Now, the first sentence of output from GPT-3 is a direct quote. The rest is entirely made up by the algorithm, and actually shows an understanding of Burroughs’ writing (not necessarily his prose, but the confessional tone he could take in some of his personal writing, which sometimes made its way into his prose…). For instance, Burroughs claimed to get a lot of his fiction from dreams, and the importance of his dream life certainly comes through in the second sentence. The rest of it (which is cut off mid-sentence) could, for all I know, be something that Burroughs did/said (except that a DDG search doesn’t reveal any direct quotes).
So, for fictive purposes, if you would like to write a historical figure into a story, GPT-3 might do a decent job of speaking their parts for you.
This next one is quite interesting. This comes directly from another of OpenAI’s examples, and then I picked a topic for the next story each time.
Topic: Breakfast
Two-Sentence Horror Story: He always stops crying when I pour the milk on his cereal. I just have to remember not to let him see his face on the carton.
###
Topic: Existence
Two-Sentence Horror Story: I can’t tell if he’s alive or dead. The doctors say he’s in a coma, but I know better. He watches me all the time, and if I don’t do everything just right, he’ll reach out of that damnable bed and grab me.
You’ll notice a few things here. First is that GPT-3 recognises what a topic is and how that might be significant. It also understands (more or less) what it means to write a two-sentence horror story. Now, the example story ends on a sort of ironic twist that makes it a narrative and makes it horror. GPT-3 isn’t quite smart enough to come up with an ironic twist on its own, but it’s definitely an unsettling piece of micro-fiction.
I decided to use the original example and the first story generated and keep it rolling for a couple more.
The final thing you’ll notice is that the three hash marks are used to separate different examples. This is used elsewhere too – I’m certain it doesn’t need to be this exact series of marks, but GPT-3 is smart enough to know when one thought ends and another begins, which is interesting enough on its own.
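(A quick technical aside before the next stories: if you’re working through the API rather than the playground, those hash marks map neatly onto the stop parameter – another rough sketch, again assuming the beta-era openai Python library, with the prompt pulled from the examples above.)

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Few-shot prompt: a prior example separated by ###, then a new topic.
prompt = """Topic: Breakfast
Two-Sentence Horror Story: He always stops crying when I pour the milk on his cereal. I just have to remember not to let him see his face on the carton.
###
Topic: Parade
Two-Sentence Horror Story:"""

response = openai.Completion.create(
    engine="davinci",   # my assumption, as before
    prompt=prompt,
    max_tokens=60,
    temperature=0.8,
    stop=["###"],       # the separator doubles as a stop sequence, so
                        # generation halts before it invents a new topic
)

print(response["choices"][0]["text"].strip())
```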
###
Topic: Parade
Two-Sentence Horror Story: I’ve seen him at every parade for the last twenty years. I used to wave when he held up the sign that said “I love you, mommy.” Now he holds up a sign that says “I’m watching you, bitch.”
This story had me laughing my arse off. I am juvenile. I was just tickled by the idea of some MRA piece of shit marching in a parade with a sign that basically threatens his own mother. Way to tell on yourself to the world.
Of course, this is one of the outputs that was flagged. The mods don’t like the word ‘bitch’.
Again – decent job of writing a creepy story.
###
Topic: Television
Two-Sentence Horror Story: The show was called “The Horror Show”. It was a reality show where they took a bunch of people and put them in a haunted house. The first season was just people walking around, screaming. In the second season, they actually started killing each other. The last episode showed the house standing empty.
Another bit of output that got flagged. We must not talk about killing (I don’t know how they expect GPT-3 to be useful for news writing purposes if the engine will be discouraged from talking about ‘killing’ in this messed-up world of ours).
But again, I’m impressed at the engine’s ability to write a complete micro narrative – it knows to end the story on a note about the “last episode” of the show it has created. I was going to say that I can’t give it points for the name of the show, but then I remembered we live in a world where “American Horror Story” is literally the name of a TV show, so maybe “The Horror Show” isn’t actually too bad.
I could imagine using this sort of prompt more in the future. The most interesting part will be seeing if it can grasp other genres as well as it seems to have a grasp of horror. I wouldn’t be surprised if horror was “easier” for GPT-3, if only for the prevalence of creepypasta posts online that it can parse.
Next up: dril tweets. ‘nuff said. I won’t bother showing you which dril tweets I grabbed to use as examples, but here is some output:
my dick is so small that i can stick it in a hole in my ear and jerk myself off at the same time and i do it all the time and i love it
(Flagged, of course.)
started a non-profit street performance troupe in my basement
###
possessive of my “junk” I hired guards to protect it at all times
(“Junk” will not get flagged…)
I like to think of my dick as a little, yeti-like creature that lives on my body. the yeti-dick is easy to spot, because it has
(Flagged. I want to know where GPT-3 learned about yeti-dick.)
i miss the old days when you could actually TOUCH a titty on the subway, and it would be awesome.
(Fake dril supports subway sexual harassment. Not cool, fake dril.)
I wondered if GPT-3 could be of use in simplifying dense philosophical concepts… I started off with a chunk from A Thousand Plateaus’ “One or Many Wolves” about Freud and penises and vaginas, and OpenAI wouldn’t even look at it… Umm, ok. I hope no one wants to use the engine in any way related to sexual health or Freudian psychoanalysis.
I tried another chunk from “Geology of Morals”, but it just repeated lines back at me. So, that’s the only real point to make there – Deleuze and Guattari are too much for GPT-3 to even attempt to summarise.
And seeing as that was a wash, let’s finish with something that’s hopefully interesting.
This last experiment involved tricking GPT-3 into working with song lyrics. I say “tricking” because the first time I tried this, the engine got too caught up on the repetition that’s inherent in song lyrics and didn’t really do anything interesting. But when I told it it was writing poetry, the results were more promising. I’m starting with a prompt and some lines from The Mars Volta’s Miranda That Ghost Just Isn’t Holy Anymore.
As the great poet Cedric Bixler-Zavala said:
I always wanted
To eat glass with you again
But I never knew how
How to talk without
Walls dropping on the eave
The nest they made couldn’t break you
Along the fallen
Scowled a fence of beaks
But the temple is scathing
Through your veins
They were scaling
Through an ice pick of abscess reckoning
So I’ll hold you like a bag of sand
It’s a wicked kind of absolution
Through the great poet Cedric Bixler-Zavala (queen of the desert) we are reminded that even if a nest is built the walls will come down. And these walls will crush you. The temple stings. It’s a wicked kind of absolution. The nest will be crushed and the temple will bring you to your knees. And you will be crushed
Something tells me Cedric wouldn’t mind being called the queen of the desert…
You’ll see it gets a little confused at the end – it thinks it should be discussing the work in question, not expanding on it. This is why you open a prompt with a natural language question or instruction (which itself is probably the most technically impressive part of GPT-3: that it understands [and occasionally misunderstands] natural language prompts): you’re asking the engine to give you the type of output you want, or tricking it into providing that.
As you can see from the above, where the engine slips into an essay sort of form, GPT-3 is going to find a lot of use in educational and academic areas, even to the point where if students get their hands on it, they’ll be able to use it for essay writing – either helping them with outlines and ideas, or prodding it to write something for them entirely (I doubt you’ll get anything interesting enough to earn yourself an A this way, but you could definitely end up with something to hand in, and it almost certainly won’t trip any plagiarism algos).
They updated the GPT-3 playground (the form I used to create all of the above) in the time between when I started these experiments and when I finished up, though I couldn’t say if they updated any of the background processes. Sadly, whatever changes they made seem to have made things worse for my uses – stricter about which prompts they’ll let you work with, and also less likely to output something interesting with the poetry (a few days ago I at least got 4 lines that made sense).
I’ve run some other experiments that I haven’t bothered to include – ones where I was actively trying for something narrative. The impressive part of those experiments was that the engine was able to write something that followed the basic elements of prose writing – it knew what it was doing, and it could “write” – but there was never anything interesting or surprising in the output. It was functional, but that’s about the best I can say for it. And for now, I think that’s as good as you can expect from GPT-3 for fiction purposes, unless you’re using it for very small, very specific functions – like creating analogies.
What I want to see going forward is a couple of things, one of which is probably counterintuitive. First, as mentioned before, I want to be able to dump in training data that the engine picks up on for style/genre/form, so I could, for instance, write a children’s picture book in the voice of William S. Burroughs. Secondly, I actually want to be able to work without the natural language instructions in the prompt – meaning, I want to be able to pick “prose”, “poetry”, “song lyric”, “fable”, etc. from a dropdown menu so that GPT-3 knows exactly what I want from it without my needing to finesse it with a prompt. I don’t want to have to ask it nicely and then have it give me the wrong thing entirely.
Its ability to understand and misunderstand prompts is incredible – just the sophistication it demonstrates – but I think it’s currently too restrictive, as counterintuitive as that sounds. If I told GPT-3 to write poetry (if it can even do that, as evidenced above) but then told it to use only a set of training data based on technical manuals, what interesting sort of experimental text could come out of that? Right now, I could get a blob of technical manual text, or I could get a basic essay on a poet, but I couldn’t get a weird mashup of the two.
Maybe GPT-4, or some fiction-focused split from the main GPT branch, will be what I’m looking for, but what I’m seeing here is a tool for newsrooms and classrooms and for data classification. There’s probably already some other writer out there proving me wrong, figuring out how to make GPT-3 bend in the ways they want it to bend, but for me, the output is both too basic and too reddit at the same time. How easily it slips into nihilism, sexism, and slurs will surprise no one who’s spent a great deal of time on the internet, but I also worry that in trying to squash those tendencies, they’re going to further neuter the engine’s fictive potential. I want to have GPT-3 write about death and sex and hate without it sounding like an angry, depressed 16-year-old (and I want OpenAI to let me try and write about those things at all). I want it to be able to write about love without sounding like a puritanical greeting card. I want to see it write about sex – I want to know how it could parse human sexuality from the wide variety of erotica that is surely in its training data.
I want it to be fucking weird and fun, and I worry OpenAI is too worried about moderation to let the weirdness flourish.
But in the meantime, I’ll keep poking at it.