Devlog #1
Hi there!
My name is Duncan, I'm one half of Loud Numbers, and you're getting this email because you signed up to join the super-special Loud Numbers behind-the-scenes club on our website earlier this week.
Loud Numbers, in case you've forgotten already (and we wouldn't blame you, given The Peculiar Situation that the world is in), is a data sonification project. We've spent the last few months figuring out how to use numbers and music to tell stories, and in late 2020 we'll be releasing six of those stories to the world. You'll be able to listen in two ways - an EP of music available on all good streaming services, and a podcast that deconstructs that music and explains how it works, a bit like Song Exploder.
So here's the deal with the club: a short update will land in your inbox every Friday, talking about what we've been up to this week. You'll get a ringside seat to the creative process - the tools we're using, the roadblocks that we're coming up against, how we solve them, key sources of inspiration, and much more. In return, all we ask is that you tell people about it. Spread the word far and wide.
We'll introduce ourselves properly and talk about why we launched this project in the coming weeks. For now though, here's what we've been up to recently!
21 May, Miriam
Today I tested out a new way of sonifying some data on beer. So much data sonification represents changes over time, for example in the climate or the economy. Sound naturally unfolds over time, so this makes sense. But we wanted to do something a bit different, and use sound to highlight contrasts between things – categorical rather than continuous data, if you like. This sonification of the chemical profiles of red wines got us thinking: what if we could use sound to represent the taste of different beers, so that the sound evoked the experience of drinking them as ‘realistically’ as possible? Can we create synaesthetic mappings between sound, taste and smell? And can we make the sound last as long as the taste does?
This is our task. To get the data, we are working with a Swedish beer expert called Malin Derwinger, who has developed a rigorous system for categorising and grading the aromas, taste and appearance of different brews. Her system scores beers on various parameters: the hoppiness of their aromas, the sweetness or bitterness of their tastes, how alcoholic they are. Malin has given us scores for a range of beers from Irish stout to fruit sour, even an alcohol-free beer. Using the excellent open-source program Sonic Pi (which you can support on Patreon), I am converting the scores into sounds. For example, beers she judges as having a more malty aroma have a grainier sound (since malt comes from grains), and beers with a more fermented aroma have more pitch distortion, so sound more woozy. I’m combining these sounds in layers to make a unique ‘soundprint’ for each beer.
Today I reached a coding milestone: you enter a beer number from 0 to 10 into the Sonic Pi code and it plays back a sonification of that beer’s scores. Hooray! The result sounds a bit bleepy and rough, but you can definitely hear differences between the beers. My main challenges now are: how do I make it sound more organic and ‘musical’? How do I separate the layers of sound better so you can hear them distinctly as well as part of a whole? And how do I create maximal variety so the beers sound as different as possible from one another?
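To give a sense of what that structure looks like, here’s a heavily stripped-down sketch. The beers, scores and synth choices below are invented stand-ins for illustration, not our real data or code:

```ruby
# Invented scores per beer: [malty aroma, fermented aroma, body], each 0-10.
# The real spreadsheet from Malin has many more parameters than this.
beers = [
  [8, 2, 7],   # a made-up stout
  [3, 9, 4],   # a made-up fruit sour
  [1, 1, 2]    # a made-up alcohol-free beer
]

beer = 1                          # pick a beer number
malty, fermented, body = beers[beer]

with_fx :reverb, mix: 0.4 do
  # Maltier aroma: a grainier, noisier layer.
  synth :noise, amp: malty / 10.0, release: 2
  # More fermented aroma: more detune, so the chord sounds woozier.
  synth :dsaw, note: :c3, detune: fermented * 0.2, amp: 0.5, release: 2
  # More body: a louder low drone.
  synth :fm, note: :c2, amp: body / 10.0, release: 2
end
```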
I also hit another milestone – our first potential copyright infringement. After coming up with a nice distorted bass riff to represent the body of a beer (how heavy and robust it is) I was quite pleased with myself. Until I realised it was a dead ripoff of the bassline to Leftfield’s ‘Phat Planet’, as used in the famous 1998 Guinness ad with the surfing horses. Oops! Better change that.
22 May, Miriam
Today I worked on a short piece for our social media pages that sonifies 100 years of sunspot data from the Royal Observatory of Belgium. [Edit: You can hear the final result here.] It’s a spinoff from a bigger story we’re telling about climate change in Alaska. To be super clear, fluctuations in solar activity are not driving climate change. But the solar cycle does influence the aurora borealis in polar regions: the more sunspots, the more spectacular the northern lights. In the larger story we’re imagining the aurora as a neutral backdrop to the foreground drama of human-caused climate change.
I used Sonic Pi to map the number of sunspots each month to the amplitude of a shimmering chord. One month of data equals 0.1 seconds of music. Sonic Pi reads through the data file then, every 0.1 seconds, randomly chooses a pitch from a small array and plays it using a synth. The amplitude of this pitch is mapped to the number of sunspots that month so that the louder the sound, the more sunspots there were. I overlaid three layers of these random pitches to create the shimmering effect. Then I added a subtle echo in Logic to make it seem like the sound is coming from far away.
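Roughly, that structure looks like this. The counts, pitch pool and synth below are placeholders; the real piece reads the Royal Observatory’s monthly series from a data file:

```ruby
# Stand-in monthly sunspot counts; the real piece uses 100 years of data.
sunspots = [45, 60, 82, 110, 95, 70, 40, 20, 8, 3]
max_spots = sunspots.max
pitches = (ring :c4, :e4, :g4, :b4, :d5)   # small pool of chord tones

use_synth :blade
3.times do                                  # three overlaid shimmer layers
  in_thread do
    sunspots.each do |count|
      # Louder = more sunspots that month.
      play pitches.choose, amp: count.to_f / max_spots, release: 0.3
      sleep 0.1                             # one month = 0.1 seconds
    end
  end
end
```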
Next I found some pretty Chinese percussion samples in Logic that sit well over the top of the sunspots track and give it a nice meditative quality. (I was inspired by this sonification, which uses gongs to powerful effect.) But what data to map them to? A gong every decade, to mark time passing? A sound every solar minimum and maximum? A sound every time a solar mission is launched?
In the end I decided on a minimal approach, adding a low ‘bong’ sound every 10 years and a high Tibetan chime ‘bing’ on the intervening five-year points. It’s simply a grid, like axis tick marks in sound, against which you can hear the 11-year solar cycle fading in and out. I made two versions of the track, a long one for Twitter and a short one for Instagram. The long version covers the 100 years from January 1920 to December 2019; the 40-year version starts in January 1980.
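The grid itself is simple enough to sketch: a sound every 120 months, another on the 60-month points in between. The sample names here are stand-ins for the gong and chime I actually used:

```ruby
# Tick-mark layer, counting months from January 1920.
1200.times do |month|                       # 100 years of months
  if month % 120 == 0
    sample :drum_splash_soft, amp: 0.8      # low 'bong' on each decade
  elsif month % 60 == 0
    sample :elec_bell, amp: 0.5             # high 'bing' on the five-year points
  end
  sleep 0.1                                 # same clock as the sunspot layer
end
```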
I also tightened our Loud Numbers theme tune so it’s ready to go.
25 May, Duncan
Been trying to figure out how we should tell the world about Loud Numbers. A big fanfare-oriented “launch” when we’re done is the traditional way, but I don’t think it’s very effective for the modern web. Instead, we need to operate more like a snowball - picking up people over time. The longer the snowball rolls, the bigger it gets.
So we’re going to “work in public” as much as we can while keeping the final pieces under wraps until the actual launch. That’s going to mean sharing little snippets of audio on social and/or through a newsletter. You may even be reading this through such a newsletter, but at the time of writing it doesn’t yet exist [Edit: clearly I can see the future]. Either way, the more people willing to give us their email address in advance, the greater the splash at the actual launch.
For today, that’s meant thinking about how often we’ll need to post stuff to social media - balancing maximum audience growth against minimum extra effort on our part, above and beyond the work of getting the sonifications together. The latter is, of course, the important bit. But if no-one is there to hear our tree falling in the forest, then whether it makes a sound or not is irrelevant.
27 May, Miriam
Ok so I am beginning to realise that when I say something is ‘tightened so it’s ready to go’, that means I’ll inevitably make several more versions the following week. There’s always more to learn and improve on.
Today I learned that audio tails matter: the echo/reverb trail left by the sound after it’s finished playing can make a huge difference to its overall duration. In this case, we needed short and long versions of the sunspots sonification for social media that, when combined with our theme tune, come in under 2:20 (the Twitter video limit) and 60 seconds (the Instagram one). Turned out there were big long tails on both the sonification and theme tune that needed trimming. Doesn’t sound like much, but I spent ages faffing with the fadeouts to get them to sound just right. Tightened and ready to go!
I also made five- and 10-second versions of the Loud Numbers theme tune to sit at the end of each video. The tune is based on the Fibonacci series, where each number is the sum of the two previous ones. The melody contains notes 1, 1, 2, 3, 5, 8, 13, 21 and 34 of a diatonic major scale counting from middle C, so the first nine values of the Fibonacci series. Our theme tune is, literally, loud numbers.
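In Sonic Pi terms, the mapping is roughly this. It’s a sketch only; the real theme has its own rhythm, voicing and production:

```ruby
# First nine Fibonacci numbers as degrees of C major, counting from middle C.
fib = [1, 1, 2, 3, 5, 8, 13, 21, 34]
major = scale(:c4, :major, num_octaves: 5)  # enough notes to reach degree 34

fib.each do |degree|
  play major[degree - 1], release: 0.4      # degree 1 = middle C
  sleep 0.5
end
```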
28 May, Miriam
Today I refined some of the ways of mapping the beer scores into sounds in Sonic Pi. I used the amount of pitch bend in a chord to represent the alcohol level of a beer – so alcohol-free beers have a straight, unwavering sound and the more boozy they get, the more woozy and unstable the pitches in the chord become. Pitch bend is such an evocative way of communicating alcohol levels: it’s disorientating to listen to and, like alcohol, can be detected in small amounts.
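A minimal sketch of that mapping, assuming a 0-10 alcohol score - the synth, chord and wobble amounts are placeholders, not the real code:

```ruby
alcohol = 7                                  # 0 = alcohol-free, 10 = very boozy
use_synth :prophet

chord(:c3, :minor7).each do |n|
  in_thread do
    node = play n, sustain: 4, release: 1, note_slide: 0.4
    8.times do
      # Nudge each note by up to about half a semitone at full strength.
      control node, note: n + rrand(-1, 1) * alcohol * 0.05
      sleep 0.5
    end
  end
end
```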
In the code, the carbonation (fizz) in a beer is represented by an upwards sweep made using the play_pattern_timed() function in Sonic Pi. This is made of four scales played simultaneously (including an obscure Chinese scale I recently discovered that’s become a personal favourite, the yu scale). The scales have different note lengths: slower for the bigger bubbles, faster for the smaller bubbles.
I mapped Malin’s carbonation scores to the amplitude and pitch range and echo decay of the sweep so the fizzier the beer, the louder the sound, the wider the pitch range and the longer and more dramatic the echo. Triple coding FTW! It works pretty well, though the results for the fizziest beers are a little over the top. The Gueuze, a Belgian beer nicknamed ‘Brussels champagne’ because of its extreme carbonation, sounds like an exploding slot machine.
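Here’s roughly how that triple coding hangs together, as a sketch assuming a 0-10 fizz score and using the yu scale for all four runs rather than the real mix of scales:

```ruby
fizz = 9                                     # Gueuze territory

with_fx :echo, decay: 1 + fizz * 0.5 do      # fizzier = longer, more dramatic echo
  # Four simultaneous upward runs; shorter notes stand in for smaller bubbles.
  [0.2, 0.15, 0.1, 0.05].each do |note_len|
    in_thread do
      # Wider pitch range (more octaves) for fizzier beers.
      run = scale(:c4, :yu, num_octaves: 1 + fizz / 3)
      play_pattern_timed run, [note_len], amp: fizz / 10.0
    end
  end
end
```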
1 June, Duncan
I spent a good chunk of the last week making some nifty videos to showcase our work to a wider audience. They’re focused around Miriam’s wonderful shimmery aurora borealis chords. I used Headliner to make the waveforms, and then cut the rest of it together in Adobe Premiere Rush. It feels a bit weird to be making a visualization of a sonification, but having something to look at is useful to catch people’s attention in social media streams.
Speaking of social media streams, we soft-launched today! It felt really good to see various data visualization luminaries retweeting us and saying how excited they are. When you work on something in the dark for so long, it’s really helpful and morale-boosting to get some external validation that the idea you’re working on is still a good one.
3 June, Duncan
I've set up an Airtable to organise our social posting. I don’t necessarily want social media to be the core of our audience strategy — more of a funnel to get people to sign up for the newsletter. That’s for four main reasons.
First because social media fatigue is an ever-increasing phenomenon. Second because we want to reach people who aren’t necessarily prolific social media users. Third because competition for attention in social feeds is so damned high. And finally because with a newsletter we own the list and can take it to another platform if we want to – there’s no way of exporting your Twitter followers to Facebook, or LinkedIn to Instagram, for example.
For all those reasons, it makes sense for us to centre our community-building work around an open platform like email, at least for the time being.
4 June, Duncan
Two main Loud Numbers tasks today. The first was setting up a proper digital audio workstation for editing. I’m planning to use Reaper, and followed this fantastic guide to set things up and automate some of the most fiddly tasks. Combined with my shiny new dynamic microphone and USB audio interface (in my day we called it a sound card), that should make the audio about as top-notch as we can get without shelling out for studio time.
The second was writing the intro for our first email to our newsletter subscribers (who’ll be reading this in that very newsletter, how meta). Took me a few tries to get the tone right — resisting my British urge to be overly self-deprecatory and modest, and aiming instead for a warmer, more welcoming and open vibe. We’re proud of this project and we think we can do a good job on it, so it doesn’t make sense to talk it down out of politeness.
Phew! Congratulations on getting to the bottom of all that. It was actually a few weeks' worth of updates in one go - future devlogs will be shorter. For those of you who did make it to the end, here's a little treat - Grammy Award-winning composer Chilly Gonzales talking about musical storytelling. I sent it to Miriam and she said "Flipping between relative mediant minor and tonic major is always kind of cool but Schubert did it better." ¯\_(ツ)_/¯