A graphical tool for configuring Alesis V-Series MIDI controllers on Linux.

In my last post, I explained how I reverse-engineered the Alesis SysEx protocol and detailed my findings. Now, two months later, I’ve finally decided to implement a tool for configuring these controllers on Linux.

Reverse engineering the Alesis V-series SysEx protocol.

I recently got back into music production and decided to order myself a MIDI controller. I got a few recommendations for the Alesis V25, so I went ahead and ordered it. However, I was less than pleased to find that its configuration software wouldn’t run on Linux, even under Wine. Of course, this prompted me to reverse engineer the protocol that lets the software talk to the keyboard.

Using black magic to make a fast circular buffer.

Yesterday, I took a glance at the Wikipedia page for the circular buffer and was intrigued by an alleged optimization technique that I was not familiar with: a circular-buffer implementation may be optimized by mapping the underlying buffer to two contiguous regions of virtual memory. (Naturally, the underlying buffer’s length must then equal some multiple of the system’s page size.) Reading from and writing to the circular buffer may then be carried out with greater efficiency by means of direct memory access; those accesses which fall beyond the end of the first virtual-memory region will automatically wrap around to the beginning of the underlying buffer. When the read offset is advanced into the second virtual-memory region, both the read and write offsets are decremented by the length of the underlying buffer.
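The trick is easy to play with from Python via ctypes. Below is a minimal, Linux-only sketch (mine, not the post’s code): it backs a page-sized buffer with an anonymous file from os.memfd_create (Python 3.8+), maps that file into two adjacent regions of virtual memory, and checks that a write running past the end of the first region wraps around to the start. The MAP_FIXED constant is hard-coded to the Linux value, and error handling (checking for MAP_FAILED) is omitted.

```python
import ctypes
import ctypes.util
import mmap
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

SIZE = mmap.PAGESIZE  # the buffer length must be a multiple of the page size
PROT = mmap.PROT_READ | mmap.PROT_WRITE
MAP_FIXED = 0x10      # Linux value; not exposed by the mmap module

# An anonymous in-memory file backs the buffer (Linux, Python 3.8+).
fd = os.memfd_create("ringbuf")
os.ftruncate(fd, SIZE)

# Reserve 2 * SIZE of address space, then map the same file into both halves.
base = libc.mmap(None, 2 * SIZE, PROT,
                 mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS, -1, 0)
libc.mmap(base, SIZE, PROT, mmap.MAP_SHARED | MAP_FIXED, fd, 0)
libc.mmap(base + SIZE, SIZE, PROT, mmap.MAP_SHARED | MAP_FIXED, fd, 0)
os.close(fd)  # the mappings keep the memory alive

# A write that runs off the end of the first mapping shows up at the start.
view = (ctypes.c_char * (2 * SIZE)).from_address(base)
view[SIZE - 2:SIZE + 2] = b"wrap"
assert view[SIZE - 2:SIZE] == b"wr" and view[0:2] == b"ap"
```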

Proving a mathematical curiosity.

Today, a thread full of cool math facts appeared on Reddit. As usual, someone mentioned the fact that 111111111 * 111111111 = 12345678987654321. In another reply, someone pointed out that this also works in other bases. For some reason, I decided that I needed to prove that it works in all bases.
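Concretely, the general claim is that in base b, the repunit with b - 1 ones squares to the palindrome 1 2 3 … (b-1) … 3 2 1. One way to see it (my sketch; the post’s proof may differ): write the repunit as a geometric sum and expand the square, counting how many pairs of exponents sum to each power:

$$
\left(\sum_{i=0}^{b-2} b^i\right)^{2} = \sum_{k=0}^{2b-4} c_k\, b^k,
\qquad
c_k = \min(k+1,\ 2b-3-k) \le b-1,
$$

so the coefficients climb from 1 up to b - 1 and back down to 1, and since none of them reaches b, there are no carries: the coefficients are already the base-b digits of the square.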

Generating spectrograms the hard way with numpy.

A spectrogram is a convenient visualization of the frequencies present in an audio clip. Generating one involves obtaining the frequency components of each window of the audio via a Discrete Fourier Transform (DFT) of its waveform. While tools are available to both generate spectrograms and compute DFTs, I thought it would be fun to implement both myself in my language of choice, Python. In the following, I will discuss computing a DFT (the hard way), processing a WAV file, and rendering a spectrogram (all in Python). If you’re impatient and just want to see the code, you can find it on GitHub.
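The post’s actual code lives on GitHub; purely as an illustration of the pieces it describes (my sketch, with made-up names like dft and spectrogram), a naive O(N²) DFT and a windowed spectrogram in numpy can look roughly like this:

```python
import numpy as np

def dft(x):
    """Naive O(N^2) DFT: X[k] = sum_n x[n] * exp(-2j*pi*k*n / N)."""
    N = len(x)
    k = np.arange(N)
    # The outer product k*n yields every twiddle-factor exponent at once.
    return np.exp(-2j * np.pi * np.outer(k, k) / N) @ x

def spectrogram(samples, window_size=1024, hop=512):
    """Magnitude of the DFT of each Hann-windowed frame, one frame per column."""
    window = np.hanning(window_size)
    frames = [samples[i:i + window_size] * window
              for i in range(0, len(samples) - window_size + 1, hop)]
    # Real input, so keep only the non-negative frequency bins.
    return np.abs(np.array([dft(f)[:window_size // 2 + 1] for f in frames])).T
```

Samples can come from the standard-library wave module or scipy.io.wavfile, and the resulting matrix renders as a spectrogram with matplotlib’s imshow.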