
Using Processing for Music Visualization

In this tutorial we will cover how to use Processing for music visualization. Adding an animated component to generative artwork often relies on some form of random function, whether it be Perlin noise, the random() function, or another self-devised method. Using music as this input data instead can produce a very visually pleasing result, as it brings together two senses in a synergetic way: the highs and lows of the music are reflected in the highs and lows of the visuals.

This leads us to the question: How do we use music as our input?

Let us first define our goal and take a look at the tools needed for this project.


Our goal is to create this visualizer. As you can see, some of the lines extending from the circle in the center change length based on musical input. There is also other, unconnected movement to add more interest. The program works in real time, and we will be able to use our own song file to replicate these results.

A link to the code repository is given at the end.


Following are the tools used to create this program:

Processing - A graphical library based on the Java programming language. Some familiarity will be useful for following this tutorial.

Minim - An easily imported library that allows for the analysis of sound. It can do more besides, but we will leave that for another tutorial. To import it, click Sketch -> Import Library -> Add Library... and search for Minim.

Lastly, we need some music. An mp3 file of a song you like should work; just make sure to put it into a folder named data inside your Processing sketch folder.

Understanding our input

We can now return to the main question:

How do we use music as our input?

Music is perceived via sound waves. A speaker produces a vibration in the air at a certain frequency which hits our eardrum and is translated by our brain into a sound. The frequency of this vibration determines whether we perceive it as a low or a high sound. In real life, many different frequencies hit our eardrum in quick succession, which means that ultimately a sound can be broken down into a collection of frequencies and their loudness/amplitude.

Digitally, this can be represented by splitting up our song at a given moment into many frequency “bands”. Each band contains the amplitude information for that specific frequency.

In this video, the brighter the orange, the louder the frequency. Lower frequencies are at the bottom, higher at the top. As you can see, elements like the kick drum immediately stand out as bright, thick, orange, vertical lines. You can also see the bass as a bright, thick, horizontal line below the kick drum. This demonstrates the basic way we can deconstruct our song and use it as an input.

In Processing, there are two main ways to retrieve data from a slice of music: amplitude and FFT (Fast Fourier Transform). The first gives us a single number representing the overall loudness of the song at a given point. The second gives us the loudness of each of the individual frequency bands mentioned before. We will be using FFT for our purposes, since it allows closer control and more variety in our animation.
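To make the difference concrete, here is a self-contained plain-Java sketch (independent of Processing and Minim; all names are mine) that analyzes a synthesized 440 Hz tone both ways. RMS amplitude collapses the slice into one loudness number, while a per-band analysis, written here as a naive DFT for clarity, recovers the loudness of each frequency:

```java
public class AmplitudeVsFFT {
    // A synthesized test tone: freq Hz sampled at sampleRate, n samples long
    static double[] sineWave(int sampleRate, int n, double freq) {
        double[] s = new double[n];
        for (int i = 0; i < n; i++) {
            s[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
        }
        return s;
    }

    // Method 1: overall amplitude (RMS) -- one loudness value for the whole slice
    static double rms(double[] s) {
        double sum = 0;
        for (double v : s) sum += v * v;
        return Math.sqrt(sum / s.length);
    }

    // Method 2: naive DFT -- one amplitude per frequency band
    // (a real program uses an FFT; this O(n^2) version is just for clarity)
    static double[] bandAmplitudes(double[] s) {
        int n = s.length;
        double[] amp = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int i = 0; i < n; i++) {
                double phase = 2 * Math.PI * k * i / n;
                re += s[i] * Math.cos(phase);
                im -= s[i] * Math.sin(phase);
            }
            amp[k] = 2 * Math.sqrt(re * re + im * im) / n;
        }
        return amp;
    }

    static int loudestBand(double[] amp) {
        int best = 0;
        for (int i = 1; i < amp.length; i++) if (amp[i] > amp[best]) best = i;
        return best;
    }

    public static void main(String[] args) {
        // 440 Hz at 8192 Hz over 1024 samples falls exactly in band 55 (8 Hz per band)
        double[] tone = sineWave(8192, 1024, 440.0);
        System.out.printf("RMS loudness: %.3f%n", rms(tone));
        System.out.println("Loudest band: " + loudestBand(bandAmplitudes(tone)));
    }
}
```

Amplitude tells us only "how loud"; the band analysis also tells us "where in the spectrum", which is what makes the per-band visuals later in this tutorial possible.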


1. Imports and Global Variables

To start, we will need to import the Minim library; make sure you have it installed via the editor.

import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import ddf.minim.signals.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;

We then specify some global configuration variables which define the most important elements we want control over. The smoothingFactor controls how quickly our visuals respond to the audio: too fast and the animation will seem jittery, too slow and it will lag behind. The rest are self-explanatory, but remember to put an audio file in your data folder and change audioFileName below to your own mp3 file name.

// Configuration variables
// ------------------------
int canvasWidth = 1080;
int canvasHeight = 1080;

String audioFileName = "kingfisher.mp3"; // Audio file in data folder

float fps = 30;
float smoothingFactor = 0.25; // FFT audio analysis smoothing factor
// ------------------------
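The smoothing we will apply later in draw() is an exponential moving average: each frame, the stored value moves a fraction smoothingFactor of the way toward the new analysis value. A minimal plain-Java sketch of one such update (the helper name is mine):

```java
public class SmoothingDemo {
    // One frame of the smoothing update used later in draw():
    // sum += (|spectrum| - sum) * smoothingFactor
    static float smooth(float current, float target, float factor) {
        return current + (Math.abs(target) - current) * factor;
    }

    public static void main(String[] args) {
        float smoothingFactor = 0.25f;
        float value = 0;
        // Feed a sudden loud band (amplitude 1.0) for ten frames
        for (int frame = 1; frame <= 10; frame++) {
            value = smooth(value, 1.0f, smoothingFactor);
            System.out.printf("frame %2d: %.3f%n", frame, value);
        }
        // The value eases toward 1.0 instead of jumping there,
        // which is what keeps the animation from looking jittery.
    }
}
```

A higher factor reacts faster but jumps around more; a lower one glides but lags behind the music.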

More global variables follow with some necessary definitions and variables that will be used later on.

// Global variables
AudioPlayer track;
FFT fft;
Minim minim;  

// General
int bands = 256; // must be multiple of two
float[] spectrum = new float[bands];
float[] sum = new float[bands];

// Graphics
float unit;
int groundLineY;
PVector center;

Lastly, we define settings() to set the canvas size and set anti-aliasing to the highest smoothing level, ensuring our lines are drawn as smoothly as possible. If your program throws any related errors, try reducing this level to 4, or even 3 (the default).

void settings() {
  size(canvasWidth, canvasHeight);
  smooth(8); // Anti-aliasing; if this causes errors, reduce to 4 or even 3 (the default)
}

2. setup()

The setup() function consists of the initialization of some basic variables.

First we set the framerate and a number of variables that will affect the positioning and sizing of our drawings. The unit variable is especially useful for creating scalable visuals, as every drawn element references it: the line thickness, the circle size, and the size of each element relative to the canvas.

void setup() {
  frameRate(fps);

  // Graphics related variable setting
  unit = height / 100; // Everything else can be based around unit to make it change depending on size 
  strokeWeight(unit / 10.24);
  groundLineY = height * 3/4;
  center = new PVector(width / 2, height * 3/4);  

  minim = new Minim(this);
  track = minim.loadFile(audioFileName, 2048);
  track.loop();
  fft = new FFT(track.bufferSize(), track.sampleRate());
  fft.linAverages(bands);
  // track.cue(60000); // Cue in milliseconds
}

The Minim object is initialized to allow for track loading and playing using track.loop(). We also initialize the fft variable using the track's buffer size and sample rate. In essence, the buffer size determines how much audio the computer processes at once; a lower buffer size reduces latency, which is especially important if live input is critical. The sample rate describes how many audio samples are captured per second, which determines the range of frequencies that can be represented. I won't go into more detail here, but if you'd like to know more you can visit this link.
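For intuition, the latency implied by a given buffer size is just the buffer length divided by the sample rate. A tiny plain-Java check (the helper name is mine; 44100 Hz is an assumed sample rate for your file):

```java
public class BufferLatency {
    // Time the computer gets to process one buffer of audio, in milliseconds
    static double latencyMs(int bufferSize, float sampleRate) {
        return 1000.0 * bufferSize / sampleRate;
    }

    public static void main(String[] args) {
        // 2048 samples at 44100 Hz: about 46 ms per buffer
        System.out.printf("%.1f ms%n", latencyMs(2048, 44100));
        // Halving the buffer halves the latency -- better for live input
        System.out.printf("%.1f ms%n", latencyMs(1024, 44100));
    }
}
```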

fft.linAverages() specifies how many averaged frequency bands we would like to retrieve. For our program this is 256, which is enough for our purposes since we are not building an analysis tool.

Lastly, if you would like to cue your song to start somewhere other than the beginning, you can uncomment the last line and set the position in milliseconds.
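One detail worth noticing in the graphics setup above: since height is an int, unit = height / 100 is integer division, so a 1080-pixel canvas gives unit = 10 rather than 10.8. A small plain-Java check (the helper name is mine):

```java
public class UnitScaling {
    // Mirrors unit = height / 100 from setup(); height is an int,
    // so this is integer division: 1080 / 100 = 10 (not 10.8)
    static float unitFor(int height) {
        return height / 100;
    }

    public static void main(String[] args) {
        float unit = unitFor(1080);
        System.out.println("unit: " + unit);
        System.out.println("stroke: " + unit / 10.24f);     // roughly 1 px at 1080p
        System.out.println("groundLineY: " + 1080 * 3 / 4); // 810
        // At double the resolution, every derived size roughly doubles
        System.out.println("4K stroke: " + unitFor(2160) / 10.24f);
    }
}
```

Because everything else is derived from unit, rendering the same sketch at 2160 pixels simply scales all strokes and radii up with it.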

3. drawStatic() and drawAll()

Before we move on to the draw() function, which you might be expecting, we are going to look at the two functions that handle all the actual work: drawStatic() and drawAll(). From now on, make sure to preserve the indentation found in these code snippets, as I will be commenting on code from the middle of functions. For organisation's sake, you might also want to put these two functions and their associated global variables in a new tab named draw_all.


The drawStatic() function is responsible for drawing a number of extending lines that are not animated.

First we will need to define some more global variables. These are used to share state between the two functions and store details for faster redrawing. The extendingSphereLinesRadius array stores the target length of each line.

int sphereRadius;

float spherePrevX;
float spherePrevY;

int yOffset;

boolean initialStatic = true;
float[] extendingSphereLinesRadius;

We then start the drawStatic() function, iterating from 0 to 240 degrees in steps of 4 to initialize the length of each extending line.

void drawStatic() {
  if (initialStatic) {
    extendingSphereLinesRadius = new float[241];
    for (int angle = 0; angle <= 240; angle += 4) {
      extendingSphereLinesRadius[angle] = map(random(1), 0, 1, sphereRadius, sphereRadius * 7);
    }
    initialStatic = false;
  }

A point is created using basic trigonometry to calculate the position based on the angle.

  // More extending lines
  for (int angle = 0; angle <= 240; angle += 4) {

    float x = round(cos(radians(angle + 150)) * sphereRadius + center.x);
    float y = round(sin(radians(angle + 150)) * sphereRadius + groundLineY - yOffset);
    float xDestination = x;
    float yDestination = y;

In this for loop, we incrementally increase the length of the line while checking whether it overlaps the ground line. This matters because the ground line is a constantly moving sine wave, so we cannot simply test whether our line has passed a fixed Y value. The loop continues until the previously initialized length has been reached or the ground has been hit. It acts almost like a ray-tracing algorithm.

    // Draw lines in small increments to make it easier to work with 
    for (int i = sphereRadius; i <= extendingSphereLinesRadius[angle]; i++) {
      float x2 = cos(radians(angle + 150)) * i + center.x;
      float y2 = sin(radians(angle + 150)) * i + groundLineY - yOffset;
      if (y2 <= getGroundY(x2)) { // Make sure it doesn't go into ground
        xDestination = x2;
        yDestination = y2;
      }
    }
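The clipping loop above can be isolated in plain Java (all names and the stand-in ground function are mine; in screen coordinates y grows downward, so "above the ground" means y <= groundY(x)):

```java
public class RayMarch {
    // Stand-in for the sketch's moving sine-wave ground line
    static float groundY(float x) {
        return 810 + (float) Math.sin(x * 0.05) * 12;
    }

    // March outward from radius r0 toward rMax one unit at a time,
    // keeping the last position that is still above the ground.
    static float[] march(float angleDeg, int r0, float rMax, float cx, float cy) {
        double a = Math.toRadians(angleDeg + 150);
        float xEnd = (float) (Math.cos(a) * r0 + cx);
        float yEnd = (float) (Math.sin(a) * r0 + cy);
        for (int i = r0; i <= rMax; i++) {
            float x2 = (float) (Math.cos(a) * i + cx);
            float y2 = (float) (Math.sin(a) * i + cy);
            if (y2 <= groundY(x2)) { // still above the wavy ground line
                xEnd = x2;
                yEnd = y2;
            }
        }
        return new float[] {xEnd, yEnd};
    }

    public static void main(String[] args) {
        // A ray pointing down-right gets clipped at the wavy ground...
        float[] clipped = march(240, 150, 600, 540, 700);
        System.out.printf("clipped endpoint: (%.1f, %.1f)%n", clipped[0], clipped[1]);
        // ...while a ray pointing straight up reaches its full length
        float[] free = march(120, 150, 600, 540, 700);
        System.out.printf("free endpoint: (%.1f, %.1f)%n", free[0], free[1]);
    }
}
```

Because the ground is evaluated per step at the ray's own x position, the line always stops exactly at the wave, wherever its crest happens to be that frame.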

Knowing our beginning and ending position, we draw a line between the two positions.

    if (y <= getGroundY(x)) {
      line(x, y, xDestination, yDestination);
    }
  }
}


The drawAll() function is responsible for drawing everything including a call to drawStatic(), the circles surrounding the sphere, the visualizer lines, and the ground sine wave.

Here we set some initial variables. Note the sum array passed in. It contains the loudness/amplitude values of each frequency band analyzed at the current time. It is provided by the draw() function which we’ll look at later.

void drawAll(float[] sum) {
  // Center sphere
  sphereRadius = 15 * round(unit);

  spherePrevX = 0;
  spherePrevY = 0;

  yOffset = round(sin(radians(150)) * sphereRadius);

  // Lines surrounding
  float x = 0;
  float y = 0;
  int surrCount = 1;

Here we draw the circles that are creating movement around the sphere. They only rely on the current frame count for movement.

  boolean direction = false;
  while (x < width * 1.5 && x > 0 - width / 2) {

    float surroundingRadius;
    float surrRadMin = sphereRadius + sphereRadius * 1/2 * surrCount;
    float surrRadMax = surrRadMin + surrRadMin * 1/8;

    float surrYOffset;
    float addon = frameCount * 1.5;
    if (direction) {
      addon = addon * 1.5;
    }

    for (float angle = 0; angle <= 240; angle += 1.5) {
      surroundingRadius = map(sin(radians(angle * 7 + addon)), -1, 1, surrRadMin, surrRadMax); // Faster rotation through angles, radius oscillates
      surrYOffset = sin(radians(150)) * surroundingRadius;

      x = round(cos(radians(angle + 150)) * surroundingRadius + center.x);
      y = round(sin(radians(angle + 150)) * surroundingRadius + getGroundY(x) - surrYOffset);

      fill(map(surroundingRadius, surrRadMin, surrRadMax, 100, 255));
      circle(x, y, 3 * unit / 10.24);
    }

    direction = !direction;
    surrCount += 1;
  }

The sequence of if else statements below makes up the most important section of the code. Part of what makes this visualizer unique is that frequencies are not displayed around the circle from low to high, left to right. Instead, lower frequencies sit at the top center of the circle and higher frequencies at the left and right ends.

These if else statements determine which frequency bands to use at which angle. Values are mapped approximately between the radius minus 1/8th of itself and 1.5 times a previously defined maximum. Lows, mids, and highs, are all mapped slightly differently to adjust for volume differences of the different bands. This was done manually with a lot of trial and error.

  // Lines extending from sphere
  float extendingLinesMin = sphereRadius * 1.3;
  float extendingLinesMax = sphereRadius * 3.5; 
  float xDestination;
  float yDestination;
  for (int angle = 0; angle <= 240; angle++) {

    float extendingSphereLinesRadius = map(noise(angle * 0.3), 0, 1, extendingLinesMin, extendingLinesMax);
    // Radius are mapped differently for highs, mids, and lows - alter higher mapping number for different result (eg. 0.8 to 0.2 in the highs)
    if (sum[0] != 0) {
      if (angle >= 0 && angle <= 30) {
        extendingSphereLinesRadius = map(sum[240 - round(map((angle), 0, 30, 0, 80))], 0, 0.8, extendingSphereLinesRadius - extendingSphereLinesRadius / 8, extendingLinesMax * 1.5); // Highs
      } else if (angle > 30 && angle <= 90) {
        extendingSphereLinesRadius = map(sum[160 - round(map((angle - 30), 0, 60, 0, 80))], 0, 3, extendingSphereLinesRadius - extendingSphereLinesRadius / 8, extendingLinesMax * 1.5); // Mids
      } else if (angle > 90 && angle <= 120) {
        extendingSphereLinesRadius = map(sum[80 - round(map((angle - 90), 0, 30, 65, 80))], 0, 40, extendingSphereLinesRadius - extendingSphereLinesRadius / 8, extendingLinesMax * 1.5); // Bass
      } else if (angle > 120 && angle <= 150) {
        extendingSphereLinesRadius = map(sum[0 + round(map((angle - 120), 0, 30, 0, 15))], 0, 40, extendingSphereLinesRadius - extendingSphereLinesRadius / 8, extendingLinesMax * 1.5); // Bass
      } else if (angle > 150 && angle <= 210) {
        extendingSphereLinesRadius = map(sum[80 + round(map((angle - 150), 0, 60, 0, 80))], 0, 3, extendingSphereLinesRadius - extendingSphereLinesRadius / 8, extendingLinesMax * 1.5); // Mids
      } else if (angle > 210) {
        extendingSphereLinesRadius = map(sum[160 + round(map((angle - 210), 0, 30, 0, 80))], 0, 0.8, extendingSphereLinesRadius - extendingSphereLinesRadius / 8, extendingLinesMax * 1.5); // Highs
      }
    }
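To see how the chain assigns bands to angles, here is the same lookup in isolation (plain Java; map() is reimplemented as Processing's linear interpolation, and the helper names are mine). Both ends of the arc read the highest band, 240, while the top of the arc, at angle 120, reads the bass at band 0:

```java
public class AngleToBand {
    // Processing's map(): linearly re-map v from [a, b] to [c, d]
    static float map(float v, float a, float b, float c, float d) {
        return c + (v - a) * (d - c) / (b - a);
    }

    // Band index sampled at a given angle, following the if/else chain in drawAll()
    static int band(int angle) {
        if (angle >= 0 && angle <= 30)   return 240 - Math.round(map(angle,       0, 30, 0, 80));  // Highs
        if (angle > 30 && angle <= 90)   return 160 - Math.round(map(angle - 30,  0, 60, 0, 80));  // Mids
        if (angle > 90 && angle <= 120)  return 80  - Math.round(map(angle - 90,  0, 30, 65, 80)); // Bass
        if (angle > 120 && angle <= 150) return       Math.round(map(angle - 120, 0, 30, 0, 15));  // Bass
        if (angle > 150 && angle <= 210) return 80  + Math.round(map(angle - 150, 0, 60, 0, 80));  // Mids
        return 160 + Math.round(map(angle - 210, 0, 30, 0, 80));                                   // Highs
    }

    public static void main(String[] args) {
        // The arc's ends read the highest band; its top reads the bass
        System.out.println(band(0));    // 240
        System.out.println(band(120));  // 0
        System.out.println(band(240));  // 240
    }
}
```

This symmetry is what makes the left and right halves of the visualizer mirror each other while the bass pushes out from the top.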

Here is another implementation of the incremental ray tracer, making sure that lines don’t go past the sine wave ground line.

    x = round(cos(radians(angle + 150)) * sphereRadius + center.x);
    y = round(sin(radians(angle + 150)) * sphereRadius + groundLineY - yOffset);

    xDestination = x;
    yDestination = y;

    for (int i = sphereRadius; i <= extendingSphereLinesRadius; i++) {
      int x2 = round(cos(radians(angle + 150)) * i + center.x);
      int y2 = round(sin(radians(angle + 150)) * i + groundLineY - yOffset);
      if (y2 <= getGroundY(x2)) { // Make sure it doesn't go into ground
        xDestination = x2;
        yDestination = y2;
      }
    }

Lastly, we draw the ground line using the getGroundY() function (discussed in the next step). I hope I have not lost you in this process. Many of these calculations took hours of experimentation; they did not just magically come to mind. At its root, everything consists of simple trigonometry with some adjustments to make it all look nice.

    stroke(map(extendingSphereLinesRadius, extendingLinesMin, extendingLinesMax, 200, 255));
    if (y <= getGroundY(x)) {
      line(x, y, xDestination, yDestination);
    }
  }

  // Ground line
  for (int groundX = 0; groundX <= width; groundX++) {

    float groundY = getGroundY(groundX);

    circle(groundX, groundY, 1.8 * unit / 10.24);
  }
}

4. getGroundY()

Before we jump into the draw() function and our last step, we need to take a look at one last function: getGroundY(). It is referenced in many other places including the code just above.

getGroundY() is responsible for returning the Y position of the ground given an X position. Since the ground is a sine wave and animated, this function is used for redrawing and to ensure any extending lines do not intercept it. As you can see, the unit variable is used to ensure the ground has the same number of “waves” no matter the resolution.

// Get the Y position at position X of ground sine wave
float getGroundY(float groundX) {

  float angle = 1.1 * groundX / unit * 10.24;

  float groundY = sin(radians(angle + frameCount * 2)) * unit * 1.25 + groundLineY - unit * 1.25;

  return groundY;
}
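A quick plain-Java check of the same formula (the helper mirrors getGroundY() with unit = 10 and groundLineY = 810, the values a 1080-pixel canvas produces, and with frameCount passed in explicitly) confirms the ground oscillates between groundLineY and groundLineY minus 2.5 times unit:

```java
public class GroundLine {
    static final float unit = 10;        // 1080-pixel canvas: height / 100
    static final int groundLineY = 810;  // height * 3 / 4

    // Same formula as getGroundY(), with frameCount as a parameter
    static float getGroundY(float groundX, int frameCount) {
        float angle = 1.1f * groundX / unit * 10.24f;
        return (float) Math.sin(Math.toRadians(angle + frameCount * 2))
               * unit * 1.25f + groundLineY - unit * 1.25f;
    }

    public static void main(String[] args) {
        // The wave stays within [groundLineY - 2.5 * unit, groundLineY]
        float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
        for (int x = 0; x <= 1080; x++) {
            float y = getGroundY(x, 0);
            min = Math.min(min, y);
            max = Math.max(max, y);
        }
        System.out.printf("ground Y range: %.1f to %.1f%n", min, max);
        // Advancing frameCount shifts the phase, which animates the wave
        System.out.println(getGroundY(0, 0) != getGroundY(0, 45));
    }
}
```

Dividing groundX by unit before computing the angle is what keeps the number of "waves" across the canvas constant at any resolution.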

5. draw()

Finally we reach the draw() function. It is short and sweet: it keeps the track playing, analyzes the current position, smooths the spectrum into the sum array using our smoothingFactor, and calls drawAll().

void draw() {
  fft.forward(track.mix); // Analyze the current position of the track

  spectrum = new float[bands];
  for (int i = 0; i < fft.avgSize(); i++) {
    spectrum[i] = fft.getAvg(i) / 2;
    // Smooth the FFT spectrum data by smoothing factor
    sum[i] += (abs(spectrum[i]) - sum[i]) * smoothingFactor;
  }

  // Reset canvas
  rect(0, 0, width, height);

  drawAll(sum);
}


While there are many simpler ways to create a visualizer, I have found that these are usually much less interesting. I hope this tutorial has been both informative and interesting and that you feel ready to dive deeper into the creation of your own visualizer or to modify this one. In either case, I would love to see what you make using this tutorial so feel free to contact me on my website down below.

For the full code, with even more functionality, visit my Github repository. You can also view a fully visualized song here.

About the author

Kassian Houben (Estlin) is a musician and creative tech enthusiast from New Zealand. He made this project for his 2021 release “Imperative” to add an animated element to his cover art. You can find the EP on his website and on all music streaming/buying platforms.

Cover art for the EP:
