sef 1: sound design's past and future
sonic edge fog 1
last week, on the Friday, I watched two things. one, during the day, was a youtube video about the newly updated synplant 2. the other, late at night, was the Peter Strickland film Berberian Sound Studio.
it felt like I’d jumped into a weird warp in the electro-sonic space-time continuum, witnessing sound design’s past and future all at once. (well, a few hours apart, at least)
the past, as depicted in BBS, looked like this: horror film foley sounds sourced ingeniously, if absurdly, from market vegetables.
ripping out witches’ hair? pulling leaves from radish heads does the job. throwing someone from a building? the right marrow makes a satisfying splat (“sounds a little watery… is there any fresh marrow?”). stabbing someone in the chest? a knife thrust, repeatedly, into a cabbage heart.
turns out a lightbulb dragged delicately against a grill tray makes a convincing ufo sound, too.
but the future was perhaps more baffling still.
the just-released, souped-up synplant 2 lets you drop an existing sound into it; within seconds, it finds ways of recreating that sound in increasingly impressive, imitative synth patches. you can choose the one you like best - sometimes the ‘mistakes’ are cooler than the original - then twist and warp and play with the sound to your (cabbage) heart’s content.
synplant does this with a neural net it’s calling genopatch, which has been trained to ‘understand’ how the synth’s various parameters - its two oscillators (only two!!), two filters, envelopes, reverb etc - affect the sound produced. the algorithm runs iterative tests on hundreds, even thousands, of combinations of the synth’s available settings, evolving increasingly precise approximations of the original input. it even plays you its favourite creations as it goes :]
(crazy how many sounds you can make with a relatively limited set of parameters (no wavetables here!) - provided you have the capacity to try oodles of microscopic tweaks to those parameters. algorithms running on moderately powerful computers today have that capacity.)
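for the curious: the ‘guess, score, tweak, repeat’ loop described above looks roughly like this toy sketch in python. to be very clear, this isn’t synplant’s actual code or genopatch’s neural net - every name here, and the fake four-parameter ‘synth’, is made up purely to illustrate the idea of iteratively nudging patch settings closer to a target sound.

```python
# toy sketch of iterative patch-matching: guess a patch, score it against the
# target sound, keep it if it's closer, mutate, repeat. purely illustrative -
# the fake 'render' synth and its four parameters are assumptions, not synplant's.
import random

def render(params):
    """stand-in synth: map a few 0-1 parameters to a crude 8-bin 'spectrum'."""
    osc1, osc2, cutoff, env = params
    return [(osc1 * (i + 1)) % 1.0 * cutoff + osc2 * env / (i + 1) for i in range(8)]

def distance(a, b):
    """how far a candidate's output is from the target sound's features."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mutate(params, amount=0.05):
    """microscopic tweaks to the current best settings, clamped to 0-1."""
    return [min(1.0, max(0.0, p + random.uniform(-amount, amount))) for p in params]

target = render([0.7, 0.2, 0.9, 0.4])        # pretend this is the dropped-in sound
best = [random.random() for _ in range(4)]   # random starting patch
best_score = distance(render(best), target)

for generation in range(5000):               # thousands of iterative tests
    candidate = mutate(best)
    score = distance(render(candidate), target)
    if score < best_score:                   # keep anything that sounds closer
        best, best_score = candidate, score

print(best, best_score)                      # the winning patch, yours to warp
```

(the real thing replaces the random guessing with a trained neural net and compares actual audio rather than a made-up number list, but the spirit - lots of tiny parameter experiments, judged against the target - is the same.)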
this isn’t a direct comparison between past and future, obviously. if you want the sound of witches’ hair being ripped out, or of a reddit-rousing ufo, and you don’t want that sound to be one made or recorded and already used 1000 times by other human beans, there’s probably a text-to-music AI tool that will do it for you. (not MusicLM, although it did produce some creepy af emo rap…the first one is a beat. maybe other tools, though!)
but what's cool is that rather than just being able to take that AI-generated sound and hone or mangle it with effects, you can use another AI tool to turn the sound (or a section of it, if starting with a clip) into a completely editable synth patch, where you can toy with pretty much every aspect of the sound itself. directly editing its ‘DNA’, to use synplant terminology.
I suppose the next step is direct text-to-synth-patch. and beyond that(!), text-to-DAW project. so you don’t just get an audio file in response to your prompt, or even a track with all its component stems (which would still be cool), but all the notes and instruments and settings that come together to create the track, presented in your software of choice, within minutes. am assuming some big brains at ableton, apple and beyond are working towards this for 2030.
-
writing this as a rough, occasional (maybe fortnightly?) newsletter/blog of forays in music land. trying to keep to ~500 words. (will return to the other newsletter, Anticipant, v soon…but wanted a dedicated music space)
if you know anyone who might be interested, forward it on! thx