The Haptic Box is designed to perform a musical composition which must be sensed through touch and which responds to touch.1 It does this through a SuperCollider script which processes audio feedback limited to low frequency ranges (50-300 Hz). This page describes the process of configuring the Raspberry Pi built into the Haptic Box to run the script. Rather than being a step-by-step guide, I primarily intend to discuss what the steps do. I link to tutorials or guides that I used and provide code snippets for things I did differently.

I’m going to try to write in a high-level language when appropriate as I’d like this page to be interesting to as many people as possible. If there’s something you don’t understand or would like to learn more about (or if you found an error in my understanding!), please let me know.

The Pi I currently have mounted in the Haptic Box is the same one I used in Pathside Box, Tocatta, and 802.11, so it has been up and running for some time and that means I’m writing the first two sections from memory (though I think I did a fresh install in there at some point).


Getting the Pi up

The way I like to work with Raspberry Pis is over SSH, which lets me run commands on the Pi from my main computer. Something that’s convenient about this mode of working is that the OS installed on the Pi can be a “headless” version (it doesn’t have graphical applications or a desktop installed), ie, the “Lite” version of Raspberry Pi OS. There are plenty of guides around the internet on how to install the operating system, though unfortunately the official Quick Start Guide assumes the reader wants to install the full desktop. Mads Skjeldgaard has a very good guide which will work on the 3B(+) Pis as well as the 4 it targets.

A few additional notes and deviations from Mads’s workflow:

It’s worth adding other networks into the wpa_supplicant file. I found it especially worthwhile to add the Pi and my computer to my phone’s hotspot, which meant that I could work on the Pi regardless of what local WiFi was available (making sure to SSH by the Pi’s local network name so that the SSH communication didn’t go out through the internet and back). Other networks can be added to the file with separate network={} blocks. I set my phone’s WiFi hotspot to be the highest priority so I could be sure the Pi connected to it even while I was at home, for instance.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev

network={
    ssid="example-hotspot"
    psk="example-password"
    priority=10
    id_str="My Example Network"
}

network={
    ssid="example-home-wifi"
    psk="another-example-password"
    id_str="Another Example Network"
}

I use SSH connections fairly often, so I configured public key authentication between my computer and the Pi so I don’t need to type a password for every connection. Public key authentication is similar to an automatic login process between two computers (which can also be used to secure other transfers like email and file transfer). Again, there are lots of guides to doing this online (such as this one).
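In practice, setting this up comes down to two commands. A sketch (the key path here is an example so the snippet is safe to re-run; normally you’d accept the default ~/.ssh/id_ed25519, and the user/host names are placeholders):

```shell
# generate a keypair non-interactively (example path in a temp dir;
# in real use, run plain `ssh-keygen -t ed25519` and accept the defaults)
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/id_ed25519" -q

# then copy the public half into ~/.ssh/authorized_keys on the Pi
# (prompts for the Pi's password one last time):
#   ssh-copy-id rpiuser@mypi.local
ls "$keydir"
```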

Speaking of file transfers, once SSH is set up, file transfers are easy to do with rsync: rsync file.txt rpiuser@mypi.local:~ will transfer a file called file.txt to the home folder of the user rpiuser on the Pi called mypi.

Finally, Mads mentions that kitty users should invoke ssh with an alias to kitty +kitten ssh to avoid trouble with the remote’s recognition of the terminal type. Another way to do this is to explicitly set the TERM environment variable to something more standard for each call to ssh. (I also had no trouble aliasing ssh directly.)

# in ~/.config/bash_aliases

# set TERM=xterm-color only for ssh sessions
# default of xterm-kitty seems useful enough for local sessions but breaks things with ssh
alias ssh='TERM=xterm-color ssh'

Audio on the Pi

Audio configuration on Linux has a reputation for being convoluted, but it doesn’t need to be. There are two “servers”, PulseAudio and JACK, which each have the role of managing audio connections between user programs (eg, Firefox, SuperCollider, system notification sounds) and sending those to the kernel layer (called ALSA) which then sends the sound to and from the hardware.2 PulseAudio is designed to handle day-to-day audio tasks like listening to media and making video calls, whereas JACK is optimized for low-latency performance. The potentially complicated part is that Pulse and JACK don’t like to run simultaneously (only one can talk to ALSA at a time) and most software can only communicate with one or the other. For a desktop user who wants to work with music software without losing audio from programs like web browsers, it’s desirable to jump through the hoops to get JACK and Pulse to talk to each other, but this use-case only requires JACK.
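For reference, starting JACK on a headless system comes down to a single jackd invocation, something like the following (the soundcard name and buffer settings here are my assumptions, not a prescription; `aplay -l` lists the actual ALSA device names on your Pi):

```shell
# -P: realtime priority; -dalsa: use the ALSA backend
# backend options: device, sample rate, period (buffer) size, periods per buffer
jackd -P75 -dalsa -dhw:sndrpihifiberry -r44100 -p1024 -n3
```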

Mads’s excellent guide includes an explanation of how to configure JACK and a script for performance tuning the Pi. I didn’t know about it when I started, so I had done some of the steps and gotten by without others. Specifically, setting the system swappiness and configuring the realtime @audio group only needs to be done once, and the jackd2 package actually performs the latter as part of its installation script.3 Mads recommends installing a special realtime kernel. Historically this has been an important part of performance tuning Linux systems for audio, but my understanding is that the bulk of the code that made it such a critical step has been merged into the mainline kernel, so I don’t bother with it anymore. (Though maybe I’ll give it a try later.) Finally, the AudioInjector soundcard I used needed to be enabled by adding dtoverlay=hifiberry-dac in /boot/config.txt, and I found that I had also disabled the onboard audio in the same file as Mads recommends.
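Concretely, those two one-time settings amount to a sysctl line and a limits file (the filenames and the swappiness value are assumptions on my part; the limits file is essentially what the jackd2 package installs when you accept its realtime prompt):

```
# /etc/sysctl.d/90-swappiness.conf — prefer physical RAM over swap
vm.swappiness=10

# /etc/security/limits.d/audio.conf — realtime priority and memory
# locking for members of the @audio group
@audio - rtprio 95
@audio - memlock unlimited
```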

SuperCollider on the Pi

Debian (KXStudio?) provides a SuperCollider package, but it didn’t work on my Pi, so I needed to compile SuperCollider manually. Once again I’m indebted to Mads for providing a script to automate the build and install process. I ran into a small problem caused by GitHub having introduced a policy that broke an earlier version of the SuperCollider repository; changing the SuperCollider branch version to 3.12 fixed it. (The plugins’ version needed to remain the same, and they weren’t affected anyhow.)

SuperCollider is structured as a server-client pair and I was excited to try out running the server on the headless Pi and the client on my computer, but there were a few additional steps required.

// reference: on the Pi (server side), allow a second, remote client
// and listen on all interfaces rather than only localhost
Server.internal.options.bindAddress = "0.0.0.0";
Server.internal.options.maxLogins = 2;
Server.internal.options.protocol = \tcp;
Server.local.options.bindAddress = "0.0.0.0";
Server.local.options.maxLogins = 2;
Server.local.options.protocol = \tcp;

s.waitForBoot({ "The server has been booted. Please keep your hands and arms inside the vehicle at all times and enjoy the ride.".postln; });

// on the laptop (client side), connect to the already-running server
// with matching options
o = ServerOptions.new;
o.maxLogins = 2;
o.protocol = \tcp;
~addr = NetAddr("raspberrypi.local", 57110);
Server.default = s = Server.remote("remote", ~addr, o);

It took quite a while to figure out how to do this (which is surprising as SuperCollider’s documentation is generally quite good), but once I got it down, the performance was great. My previous headless workflow with PureData wasn’t very effective as the entire user interface needed to be continually updated over the network connection between the Pi and my laptop, which bogged down the Pi’s audio quite a bit.4 Running SuperCollider headless like this avoided that problem entirely by communicating with OSC, which is quite fast, even over WiFi. I could see this being a viable workflow for prototyping or even some live performance situations.

I ran into a brick wall, however, when, after a few days of working this way, SuperCollider refused to send my code to the Pi server because it was too big. I couldn’t find a way around this limit, so I changed my workflow to use a SuperCollider client directly on the Raspberry Pi, running NeoVim with the scnvim plugin over SSH. (Again, Mads’s guide covers the install procedure.) I had been using scnvim on my own machine anyway, so the transition was as simple as installing the software and copying over the relevant portion of my main nvim config. I also finally copied over the actual SuperCollider script and got used to running it standalone, without tweaking parameters live. The Pi continued to perform well with this setup.

tmux is a good addition to this workflow, particularly when working with a flaky network connection. tmux will maintain a terminal session through hangups until it is explicitly closed. If WiFi drops and the SSH connection fails, SuperCollider will continue to process audio and the nvim session will still be available inside the tmux session after reconnecting with SSH.
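The workflow, assuming tmux is installed on the Pi, is just this (the session name is arbitrary):

```shell
tmux new -s haptic     # start a named session on the Pi and work inside it
# ...the connection drops, or detach deliberately with Ctrl-b d...
tmux attach -t haptic  # after reconnecting over SSH, pick up where you left off
```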

Rigging up the Button

I had originally thought the button would only serve as a safe way to shut down the Pi and had implemented it with the help of this HowChoo guide, but as I worked with the box I realised I’d like to be able to start and stop the haptic process as well, so I decided to make short presses toggle the patch on and off and long presses shut down the Pi.

How long does it take to make a button work? Longer than I thought. Buttons and switches work by completing or shorting an electrical circuit, so code that handles them has to check whether or not a voltage is present due to the circuit being closed. That job is quite straightforward for simple one-shot tasks like shutting the system down, but becomes more complicated when the duration the button is held down needs to be a factor. The complication arises because the voltage flickers a little as the button is pressed and released, rather than switching cleanly from off to on (or vice versa) as it might seem to do. In the case of cheap buttons like the one I’m using, the flicker can even continue while the button is held. This flicker is known as “bounce” and the process of recovering the intended switch actions from that bouncing input is known as “de-bouncing.”

(It’s also possible to address switch bounce in hardware by bridging the button terminals with a 0.1uF capacitor, but as you’ll have noticed I prefer to do things the difficult way.)

It’s also important to understand which way the button is connected, otherwise it won’t just behave inverted, it will simply not work. Whereas switches complete or break circuits, buttons can either complete a connection (acting like a momentary switch) or short an existing connection. In my case, I used a connection to ground, which shorts an existing circuit. This meant that I needed to configure the Pi to use a pull-up resistor to ensure that the non-pressed state has a voltage. In other words, in my configuration, the button’s GPIO pin sees a high voltage when the button isn’t pressed, and zero voltage when the button is pressed.

hi  ───────────╮       ╭───────────
               │ press │
lo             ╰───────╯

Debouncing has been done in software for a long time in various contexts (I first learned about it in the context of interface design, where it’s used to reduce the number of times something happens while a user is scrolling or typing). The basic idea in this case is to interpret a voltage-rising event as the end of a full button press only if there are no voltage-falling events immediately following it. To do this, the handler waits a short settle interval after each edge and only acts on it if the voltage stays put for that whole interval.

This would-be-simple task is made a little more complicated because it needs to be done asynchronously (the interpretation of the press-start events can’t wait for the interpretation of the press-end events). Python’s async support has gone through a number of iterations and there are several different ways of using it, so there’s a lot of conflicting advice and approaches available online, much of which doesn’t work with every module (the Raspberry Pi GPIO library in my case). That made finding a working approach time-consuming, but with some perseverance I got things running.
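Stripped of the hardware and the async plumbing, the debounce logic itself can be sketched as a pure function over edge events. This is an illustrative sketch, not my actual handler; the 50 ms settle window and 3 s long-press threshold are assumptions for the example:

```python
def debounce(events, settle=0.05):
    """Collapse noisy (timestamp, level) edge events into clean presses.

    events: time-sorted (timestamp, level) pairs, where level is the pin
    reading after the edge. With pull-up wiring, 1 = released, 0 = pressed.
    Returns a list of (press_start, press_end) pairs.
    """
    presses = []
    stable = 1            # pull-up wiring: the idle pin reads high
    press_start = None
    for i, (t, level) in enumerate(events):
        # an edge only counts if the level holds for `settle` seconds
        next_t = events[i + 1][0] if i + 1 < len(events) else t + settle
        if next_t - t < settle:
            continue      # bounce: another edge follows too soon
        if level == stable:
            continue      # no net change in state
        stable = level
        if level == 0:
            press_start = t
        elif press_start is not None:
            presses.append((press_start, t))
            press_start = None
    return presses

def classify(presses, long_press=3.0):
    """Map each (start, end) press to the intended action."""
    return ["shutdown" if end - start >= long_press else "toggle"
            for start, end in presses]
```

For example, a bouncy press-and-release like `[(0.00, 0), (0.01, 1), (0.02, 0), (1.00, 1)]` collapses to a single press from 0.02 to 1.00, which `classify` then reads as a short-press toggle.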

Automating Everything

On the Python side, the remainder of the work was starting and stopping SuperCollider. Aside from some trite learning moments, this was straightforward. I was even able to take advantage of the unusual feature of Python’s pathlib that overloads the / division operator to join paths, which I hadn’t seen before:

from pathlib import Path

# PATCH (defined elsewhere in the script) is the patch's filename
patch = (Path(__file__).parent / PATCH).resolve()
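The start/stop half can be sketched with subprocess. To be clear, the class name, the bare `sclang` invocation, and the injectable command are my assumptions for illustration, not the actual script (injecting the command just makes the toggle logic exercisable without SuperCollider installed):

```python
import subprocess

class PatchRunner:
    """Sketch: toggle an sclang process running the patch on and off."""

    def __init__(self, patch_path, command=("sclang",)):
        self.patch_path = str(patch_path)
        self.command = command
        self.proc = None

    def running(self):
        # a process counts as running if it exists and hasn't exited
        return self.proc is not None and self.proc.poll() is None

    def toggle(self):
        if self.running():
            self.proc.terminate()   # short press while playing: stop the patch
            self.proc.wait()
            self.proc = None
        else:                       # short press while stopped: start it
            self.proc = subprocess.Popen([*self.command, self.patch_path])
```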

After that, it was a matter of making it run “automagically,” which is to say on boot. Just as with Python’s async, getting something to run on boot on a Linux system is a problem with many different solutions (and some very opinionated netizens). While the HowChoo guide I linked above uses /etc/init.d/, I used systemd, which offers a finer grain of control and has the additional indisputable advantage of already being familiar to me. The sum of the work on that front was creating a small .service file representing the script to systemd, and a few lines in an install script to put it in a sensible location and tell systemd to use it. (On any system, this is all installing software is: copying files to the conventional locations and running small bits of code from the system to “inform” it about the new software; uninstalling is the inverse.)
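For reference, such a unit file might look something like this (a minimal sketch: the description, paths, and filenames are my assumptions, not the actual file; %h is systemd's specifier for the user's home directory):

```
# ~/.config/systemd/user/button-handler.service (sketch)
[Unit]
Description=Haptic Box button handler

[Service]
ExecStart=/usr/bin/python3 %h/hapticbox/button-handler.py
Restart=on-failure

[Install]
WantedBy=default.target
```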

Here’s the installation script:

#! /usr/bin/env bash

# Installs the haptic box files and activates the systemd unit file

# Safety first:
set -eu

# Target locations: the install dir here is my choice; the user unit
# dir is systemd's conventional location for user services
INSTALL_DIR="$HOME/hapticbox"
UNIT_FILE_DIR="$HOME/.config/systemd/user"

# Make sure the directories exist
mkdir -p "$INSTALL_DIR"
mkdir -p "$UNIT_FILE_DIR"

# Move the files to the relevant directories
# (the handler script's filename here is an example)
cp button-handler.py "$INSTALL_DIR"
cp button-handler.service "$UNIT_FILE_DIR"

# Tell the system to use the button handler
systemctl --user daemon-reload
systemctl --user enable button-handler.service
systemctl --user start button-handler.service

# Tell the user installation has finished
echo "Haptic Box code has been installed and enabled. Have a nice day."

So that covers configuring the system and setting up the bits of glue code to get the button working. Maybe I will eventually break down the SuperCollider code that runs the haptic feedback process here as well. For now, my priority was providing a reflection on how I built this thing so that others can do so as well. Please be in touch if you have any questions or comments!

  1. The composition is a process which unfolds in real time. Its behaviour is related to algorithmic and generative music but I’m not sure those names are a good fit because in this case the process doesn’t make many discrete decisions and is rather an ongoing flow which accrues memory of its past events through their resulting effects on its current “position.”↩︎

  2. The most exciting development in Linux audio is a new “driver” called PipeWire which unifies and replaces both PulseAudio and JACK, similar to macOS’s CoreAudio. PipeWire is very usable for desktops (I use it on my main machine), but JACK can still be a little more performant and is known to work well on Raspberry Pis, so I stick with that.↩︎

  3. “Swap” is a section of hard drive space used as extra RAM, and “swappiness” is how likely the system is to use that instead of physical RAM. Swap is much slower than physical RAM and can degrade audio performance when used, so Mads’s script sets it quite low. The jackd2 package includes and installs a very similar config file to the one Mads’s script uses to configure realtime priority and memory locking, and optionally enables it post-install.↩︎

  4. Contributing to this problem is the fact that PureData is single-threaded, meaning each operation has to wait for the previous to complete. This is a well-known cause for audio interruptions when more time-dependent tasks like user interface and networking are involved.↩︎