The geological phonograph is a musical instrument that uses topographic data of the Earth to generate sound waves. Sounds are synthesized by placing a circle or other closed shape at any GPS location and sampling the elevations of hundreds or thousands of points along its path. The ups and downs of mountains and valleys become the ups and downs of the waveform, which gives it a unique timbre compared to other synthesis techniques.
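The core sampling idea can be sketched in a few lines. This is a simplified illustration, not the instrument's actual code: it treats latitude/longitude degrees as a flat plane, and the elevation lookup is a synthetic stand-in for real terrain data.

```rust
use std::f64::consts::PI;

/// `n` points evenly spaced around a circle of radius `radius_deg`
/// (in degrees of lat/lon) centered on a GPS coordinate.
/// Treating degrees as a flat plane is a simplification for illustration.
fn circle_points(lat: f64, lon: f64, radius_deg: f64, n: usize) -> Vec<(f64, f64)> {
    (0..n)
        .map(|i| {
            let theta = 2.0 * PI * i as f64 / n as f64;
            (lat + radius_deg * theta.sin(), lon + radius_deg * theta.cos())
        })
        .collect()
}

/// Synthetic stand-in for a real elevation lookup (meters).
fn fake_elevation(lat: f64, lon: f64) -> f64 {
    1500.0 + 800.0 * (lat * 40.0).sin() * (lon * 40.0).cos()
}

fn main() {
    // One revolution around the circle becomes one period of the waveform.
    let path = circle_points(46.0, 8.0, 0.01, 256);
    let wave: Vec<f64> = path.iter().map(|&(la, lo)| fake_elevation(la, lo)).collect();
    assert_eq!(wave.len(), 256);
    println!("first samples: {:?}", &wave[..4]);
}
```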

Read more about the process in my blog post about the instrument → Geological Phonograph


A short reflection on some of the difficulties of building such an instrument.

Turning elevation data into sound

This was not exactly straightforward. The elevation data ranges from around zero to over 3000, while sound samples must be constrained between -1 and 1. It's also best for the samples to oscillate up and down through 0, so if I simply map 0 => 3000 to -1 => 1, much of the sound ends up very quiet and makes poor use of the speakers. I solved this by continually calculating the average elevation in a certain area and mapping the elevations on either side of that average into the range -1 => 1. This might not be the best solution, as it tends to make most places sound similar regardless of their elevation; there is definitely room for more exploration in how the elevation data is converted into sound waves.
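That centering-and-scaling step might look something like the sketch below. The running average stands in for the "average elevation in a certain area," and the `half_range` scale factor is an assumed parameter, not a value from the actual instrument.

```rust
/// Convert raw elevations (meters) into audio samples in [-1.0, 1.0]
/// by centering each sample on a running average, then scaling.
/// `half_range` is how many meters of deviation map to full amplitude.
fn elevations_to_samples(elevations: &[f64], half_range: f64) -> Vec<f64> {
    let mut sum = 0.0;
    let mut out = Vec::with_capacity(elevations.len());
    for (i, &e) in elevations.iter().enumerate() {
        sum += e;
        let avg = sum / (i + 1) as f64; // running average so far
        // Center on the local average, scale, and clamp into [-1, 1].
        out.push(((e - avg) / half_range).clamp(-1.0, 1.0));
    }
    out
}

fn main() {
    // A mountain path at ~1200 m oscillates through 0 after centering,
    // even though the raw values never go near zero.
    let elevations = [1200.0, 1210.0, 1195.0, 1230.0, 1205.0];
    let samples = elevations_to_samples(&elevations, 50.0);
    assert!(samples.iter().all(|s| (-1.0..=1.0).contains(s)));
    println!("{:?}", samples);
}
```

A windowed or exponentially decaying average would track local terrain more closely than this cumulative one, which is one of the directions left to explore.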


I originally wrote the code in Python, using it to query elevation data at different places in the world, but I soon ran into issues with how long it took to generate sound. Even after experimenting with multithreading and other efficiency optimizations, the fastest I could get the program to run was ~1.2 seconds to generate one second of audio. This resulted in constant clicking as playback stopped momentarily while the next buffer was generated.
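The constraint here is simple to state: each audio buffer must be generated in less time than it takes to play, or playback underruns and clicks. A minimal sketch of that budget, with an assumed sample rate of 44.1 kHz:

```rust
const SAMPLE_RATE: f64 = 44_100.0; // assumed; the real rate may differ

/// Seconds of audio a buffer of `n` samples covers.
fn buffer_duration(n: usize) -> f64 {
    n as f64 / SAMPLE_RATE
}

/// Playback clicks when generating the buffer takes longer than it plays.
fn will_click(gen_time_s: f64, n: usize) -> bool {
    gen_time_s > buffer_duration(n)
}

fn main() {
    let n = 44_100; // one second of audio
    // ~1.2 s to generate 1 s of audio: the Python version misses the deadline.
    assert!(will_click(1.2, n));
    // Any implementation comfortably under real time avoids the clicks.
    assert!(!will_click(0.4, n));
}
```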

To solve this, I eventually decided to rewrite the code in Rust. I had never used Rust before, but I had recently learned about a creative coding framework called Nannou that I was interested in trying out. This had the added benefit of solving two problems at once: Python is not exactly stellar for creating visuals for an interface, and I wanted to avoid having to use pygame for the display.

Rust is a much lower-level language than I am used to, and the documentation for Nannou is rather lacking (in fact, the documentation for the latest version was non-existent because it failed to build), but I managed to learn enough from the example projects to figure most things out fairly easily. Maybe the biggest "programming culture shock" of switching to Rust came when I tried to use the modulus operator on two floating-point numbers, which seemingly had no effect.
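One plausible source of that surprise (an assumption on my part, not a diagnosis of what actually happened): Rust's `%` on floats is a remainder that keeps the dividend's sign, unlike Python's floor-style modulo, so wrapping a negative value into a positive range does nothing visible. `rem_euclid` gives the always-non-negative result Python users expect.

```rust
fn main() {
    let x: f64 = -0.25;
    // Rust's % keeps the sign of the dividend: no wrap into [0, 1).
    assert_eq!(x % 1.0, -0.25);
    // rem_euclid is always non-negative, matching Python's -0.25 % 1.0.
    assert_eq!(x.rem_euclid(1.0), 0.75);
    println!("{} vs {}", x % 1.0, x.rem_euclid(1.0));
}
```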

Interface Design

Designing a good interface can be a long project in itself, especially for the types of projects I tend to work on, so I didn't end up spending much time on this aspect. The interface was inspired by Teenage Engineering's OP-1, and the time I did spend designing it went mostly into how the numbers and icons would correspond to dials and circles on screen. I spent very little time considering what to actually display.

I chose a rather literal representation: topographic contours with circles layered on top, spinning around each other as one circle modulates the other. This initially made sense to me because it mirrors how the data is collected in the first place: moving circles around each other and sampling points along a circle to create sound waves. Now, however, I realize that it doesn't feel like the sound. Many of the OP-1's screens, for example, aren't literal at all: a cow, a spaceship, and so on. Yet those screens make sense because, instead of mapping to how the sound is generated, they map the interface to how we feel and experience the sound.

This completely flips my conception of interface design, particularly for synthesizers and musical instruments, though I imagine the idea will prove useful for other types of interfaces as well. What does the sound feel like, and how can the interface reflect that feeling? And how might I keep the topographic aspect of the interface, so that choosing any GPS location still makes sense to the user? I have some ideas for moving forward, but I will definitely have to sit with these questions as I continue designing the interface for this instrument.


This is all just scratching the surface of what this instrument could do. I have already mentioned wanting to use Moon or Mars elevation data to explore the sound topographies of the solar system, and there are many more possibilities beyond that.