So, after all the exposure from the Create Digital Music post, it is high time I added a little bit of background to the Raspberry Pi/Korg MS-20 combo. I really should have done that before I leaked the video anywhere, but oh well; the timing was also good with the Korg announcement.
So, what’s that thing, exactly?
Over time, I’ve slowly been building a new framework for writing sound tools: sequencers, synthesizers, MIDI processors... anything. You could see it as my own little SuperCollider. Within the framework you can code sound modules (oscillators/filters/effects/...), connect them together in a Nord Modular/Reaktor fashion, and map controls from those modules to MIDI. The Swiss Army knife of the trade, with an emphasis on portability, because I’m extremely interested in embedded platforms and odd/cheap units.
The synth demonstrated in this video is just one application of this framework, ported to the Raspberry Pi.
It’s a simple 2 Osc/2 Filter/1 Envelope graph with proper MIDI mapping to correspond to part of the MS-20 controller layout. It’s interesting to see how important and strong the MS-20 legacy still is. I could have done the exact same demo with an MPK mini, but I’m sure it wouldn’t have had the impact this one had. The thing about this controller, besides its heritage, is that it implies the promise of hours of fiddling and finding unique sounds. It is something you want to bond with and spend time with, much the opposite of all the generic controllers most manufacturers make now. Providing a screen-less software back-end to such a controller effectively completes that feeling of being in the presence of a living, physical machine.
Porting to the Raspberry Pi
Compiling for the RPi was the easiest thing ever. Installing the image is super easy, and after connecting the board to a wired network and enabling sshd from the install menu, it was up and running. At first I made the mistake of trying to cross-compile as I usually do, but the toolchain misses a lot of headers and libraries (SDL/ALSA/Jack), so I reverted to compiling on the board itself, which, after a few of the usual apt-gets and selecting a Debian build in CMake, went like a breeze. It’s super slow when compiling with optimisations (about 30 minutes, where my old Mac takes about 20 seconds), but other than the wait, there was no headache getting a running executable.
I was eager to test the performance of the RPi. It is a lot cheaper than the BeagleBoard, my previous target for Linux-based embedded synths, and possibly less performant, so I was wondering what would be possible to do with it. Of course, the minimum latency heavily depends on what type of synth you’re trying to run, and although the architecture of this synth is pretty simple, the two filters are quite big models, so it’s a fairly ‘heavy’ synth. For comparison, I’ve already gathered the maximum number of voices for various sample buffer settings on the BeagleBoard over here.
My first test was simply to use the default ALSA driver. It won’t accept latencies below 256 samples, which isn’t great, but a latency of roughly 6 msecs is not that bad. The results are:
- 512 samples – 4 voices
- 256 samples – 2 voices
Jack, on the other hand, accepts much smaller buffers and did better overall:
- 512 samples – 5 voices
- 256 samples – 4 voices
- 128 samples – 2 voices
- 64 samples – 1 voice
The RPi is a very decent platform and provides a great environment out of the box for developing audio applications. Sure, it won’t be able to support a full polyphonic, top-notch soft synth à la u-he, but it has plenty of resources for a very powerful monosynth, cheap polyphonic synths, or trackers (my next effort will be to fully port LittleGPTracker to it).
As such, it will surely give the Arduino/ATmega-based world some competition too. It doesn’t yet offer the Arduino’s ease of connection to external digital/analog hardware, but I guess it’s just a matter of time before ladyada & SparkFun roll out simple shields to enable it. The BeagleBoard also stays a decent contender for bigger projects, but at the price of the RPi, it’s a steal.