Conference Highlights
Everything Is True
Ada Hoffmann's author newsletter
I'm back from the Fourteenth International Conference on Computational Creativity. It was a whirlwind of a week. My scholarly contribution was minimal, but it exists. My ability to engage - socially, professionally, critically - felt higher than it's ever been. My spoons, alas, were not infinite. I have a lot to unpack about how this went and about what I want for the next few years in my scholarly career (it still feels presumptuous to assume that I have things like a "scholarly career"), but I am tired and I don't want to unpack it now. What I want to do is jot down a few memorable moments and share them with you. Not an exhaustive account of the week, but a highlight reel.
ICCC has always been a weird little misfit of a scholarly community. It's not where people go when they are building LLMs and disrupting the world. There's a lot of discourse about what exactly the community's goals are, or should be - but in reality there are multiple goals. There are people trying to do a cognitive science and test hypotheses about creativity. There are people who are sentimental about Strong AI and who sincerely do want a computer to become an "autonomous creator in its own right." There are people just mucking around and "making things that make things" because it's fun; there are people waxing philosophical about what creativity means in the first place. I like it, personally - it feels cozy, eclectic, sometimes a little annoying in a way that endears me.
Anyway, highlights.
Monday
We have a workshop on "fictional abstracts." We were invited to submit abstracts for imaginary research papers that we envision being published 15 years from now, with a special focus on ethics and sustainability. (I submitted three, but the organizers asked me to pick just one.) We split into small groups and answer questions about the abstracts - what are the commonalities, what are the differences, what values are being expressed?
I feel like the organizers wanted more about environmental sustainability, but there's relatively little of that - just mentions. What shows up a lot more is a concern about sustaining human creativity. What will be the role of human artists and writers in 15 years? What will be the role of creativity in human experience? How will generative AI systems change those roles?
Tuesday
We have a book launch. A fellow named Rafael has been working for 25 years on a storytelling system called MEXICA, and now he has a book about what he's learned. Rafael is on the cognitive science side of things, wanting to test hypotheses about how humans tell stories by modeling them on the computer. His system's outputs aren't as surface-level fluent as an LLM's, but every single step of its reasoning is transparent and controllable.
"These kinds of systems are still important," he says, frustrated. I feel like I understand him more now than I ever did.
Wednesday
We have a paper session on language and storytelling. For two hours, people come up and show us the text-generation systems they've been working on. It's jarring because every single paper is about using one of the GPT models, fine-tuning them or combining them with another algorithm for a specific result. Everything here is consistent with the literature review in my own paper, which found that the number of researchers in this community working with LLMs and commercial image generation systems vastly outstrips the number engaging at all with the ethical issues inherent in those systems. Later there will be sessions called "Evaluation" or "Climate Change, Diversity, Equality, and Inclusion" which do engage with those issues at great length.
My paper is in the "Evaluation" session. For short papers, we only have 5 minutes to present. I've worked on boiling my message down to 5 minutes. I have 5 slides. One is a title page. One is a collage of recent headlines. One is a meme ("Are We The Baddies?"; this gets a nervous laugh.) One is a pair of charts side-by-side, showing the number of us working with LLMs vs the number of us who make any mention of their social effects. I actually get through it without going overtime.
(Here's the whole paper, if you want to read.)
Afterwards a grad student comes up to me. He says he thinks ICCC should have ethics requirements for its papers the way bigger conferences, like the ACL, do. I get very excited about this. I have no idea how to do it. I ask my more experienced co-author, Dan, how we should do it. Dan is a bit harried because he is the chair of the entire conference this year and has other shit to do. He says we should do it by writing a paper next year. I guess this is how academics do everything.
Thursday
I am very tired. I usually have a brain crash after a presentation like this. Emotionally this round of brain crash is actually not as bad as past years. I'm not, like, curled in a ball in my dorm room hating myself. Physically, though, I'm still so tired I can't think.
I've noticed there are several different cohorts of researchers at this conference. There are experienced researchers who've been the boss of this community since it started. There are grad students - quite a lot of grad students; it's been fun to meet them. There are people my age - several were grad students when I first met them, but they've somehow found tenure-track jobs and have their own little gaggles of grad students now. And then there is me, the adjunct. I find myself explaining how adjuncting works to horrified grad students at least twice. I don't know what I'm doing here.
One of the more successful researchers my age, who I've been enjoying talking to, comes up to me at breakfast and asks which systems exactly I am concerned about. He points out I was using the word "transformers" as an umbrella term and including systems that do not actually use a transformer-based architecture. He is correct, actually. This guy has done actual published work on neural network architectures. I have not had my coffee. I want to impress him but I sound like a stammering fool.
Dan tells me to get some rest.
I do manage to crawl out of my dorm room for "Climate Change, Diversity, Equality, and Inclusion" which turns out to be my favorite paper session of all. There is so much low-hanging fruit here in terms of bias and its potential remedies; I am really jazzed about it. I jot down a lot of ideas.
Friday
We have a "community meeting" centered around two questions. One is about the format of upcoming conferences. The other one is just "okay so, in light of everything happening in industry currently, what do we do."
Everyone is asking this question, but everyone means something slightly different by it, and everyone has a different answer.
A guy stands up and says that this is his first ICCC. He says he loves the people and the community and the eclectic nature of the research. But he notes that there aren't any "builders" here - by this, he means the people who are actually putting LLMs and similar systems together. He says we need builders on our side or we will be "crushed by the bulldozer."
A woman who's been in the community longer replies: he might be confused because this is his first time. There are lots of people here who've been building creative systems for a long time!
But I hear people mulling it over on the bus later. "We've already been crushed by the bulldozer," says a bigwig sitting near me. "Now we're chasing after the bulldozer saying, 'Come back! Crush me again!'" He and his friends complain that there isn't as much building as there used to be. The community used to get in fights over "mere generation" - the practice of making a generative system without evaluating it or paying attention to anything interesting about its process. Now he sees a flood of "mere evaluation" - people who talk about generative systems, but aren't making them at all.
I don't think I agree with this framing, but I see what he's getting at. Everything's different this year. Everyone's reeling. My view is that we have two viable choices - we can critically engage with the big GenAI systems (which, yes, might involve "mere evaluation") or we can build something of our own and make a case why it's meaningfully different from the big systems. We can't just sit in our corner, playing with our toys, pretending like things can be the same as before.