Generative Notation and Score with Tone.js

The Concept: A Notation System for Generative Music

For a long time, I have been trying to find the bridge between generative music, which usually relies on synthesized sound, and the playing and composition of acoustic (or amplified) music.

It felt like creating a notation system for generative music could be the right starting point.

MIDI

The effect that the invention of the MIDI protocol had on the evolution of generative (synthesized) music is well known and far beyond the scope of this blog post.

Since the MIDI protocol includes information about the notes that should be played and the way these notes should be played, it served as an inspiration behind my notation system for generative music.

My Generative Music Notation System

The notation system includes two main components:

Legend

The legend is a sheet that defines all possible notes (aka “objects”) that could be played during the generative score. Each note includes the following properties:

  • Color – A unique hex color code that will be used for visualization purposes to represent the note.
  • Frequency – The frequency of the note. Can be presented in Hz (e.g. 440) or letters (e.g. A#3).
  • Amplitude – Volume / velocity, in the range between 0 and 1 (loudest).
  • Duration – The duration of the note, including an envelope if needed.
  • Loops – The number of times the note should be repeated in a row on the score.
  • Connected_notes – This is the main difference from the MIDI protocol. The connected_notes property holds a list of notes that should be played with or after this note. Each item on the list, which refers to a connected note, should include the following properties:
    • Color/index number of the connected note according to the legend.
    • The time on which the connected note should be initiated, including maximum and minimum values for silence after the initial timestamp (e.g. if the connected note should be played after the original note, the time will be <the_original_note_duration>+<silence_if_any>).
    • A probability value that represents the chances that the connected note will be played. The probability values of all connected notes together should not exceed 1 (== a 100% chance).
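Put together, a single legend entry could be sketched as a plain JavaScript object (the property names follow the list above; the concrete values and the connected-note field names are made up for illustration):

```javascript
// a hypothetical legend entry; values are illustrative only
var legendNote = {
  color: "7D82B8",      // unique hex color used for visualization
  frequency: "A#3",     // Hz (e.g. 440) or letter notation (e.g. A#3)
  amplitude: 0.8,       // volume / velocity, 0..1 (1 = loudest)
  duration: 1.5,        // seconds, with an optional envelope
  loops: 0,             // how many times to repeat the note in a row
  connected_notes: [
    {
      index: 2,          // index of the connected note in the legend
      time: 1.5,         // when to initiate it, relative to this note
      minSilence: 0.1,   // minimum silence after the initial timestamp
      maxSilence: 0.5,   // maximum silence after the initial timestamp
      probability: 0.9   // chance that this connected note is played
    }
  ]
};
```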
Generative Music Notation: Legend
Generative Music Notation: Potential Score

What’s Missing?

Two major properties are missing from the note objects:

  • Instrument (or timbre) – The note object is a set of instructions that could be applied by any instrument. Since I believe that the process of generating music will involve computers (or digital devices), the score can be played with a variety of instruments. The decision about the sound of the piece is left in the hands of the performer.
  • Timing – Again, since the note object is a set of instructions, these instructions can be initiated and applied at any time during the score, by the performer or by the score itself. Decisions about timing also remain in the hands of the performer. The only timed notes are the connected notes, whose instructions specify whether the note will be initiated with the original note, after the original note, during the original note, etc.

Example

For example, if we use the legend above and start the score with the first two notes (7D82B8 & B7E3CC), we get the following result –

Demo

Using Tone.js, I was able to experiment with generating music based on the legend and score shown above.

The project can be seen here – http://www.projects.drorayalon.com/flickering/.

The current limitations of this demo are:

  • No instrumentation: All notes are played using the same instrument.
  • No dynamics: One of the most likable elements of a musical performance is the dynamics and tension the performer creates while playing the piece. The current implementation doesn’t support any dynamics :\
  • No probability: The current implementation presents a linear and predictable score. Notes have only one connected note each, and no code was written to support the probability factor that would utilize the notation system to its maximum potential and make this generative music more interesting (in my opinion).
  • Low-tech visualization: The notation system I described above sets up the foundation for a readable visual representation of the score. This visual representation has not been implemented yet.
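For reference, the missing probability factor could be implemented as a weighted pick over a note's connected notes. This is a sketch, assuming each connected note carries a `probability` property as defined in the legend (not part of the demo code below):

```javascript
// pick a connected note according to the legend's probabilities;
// probabilities are assumed to sum to at most 1, and any leftover
// probability mass means no connected note fires (silence)
function pickConnectedNote(connectedNotes) {
  var r = Math.random();
  var cumulative = 0;
  for (var i = 0; i < connectedNotes.length; i++) {
    cumulative += connectedNotes[i].probability;
    if (r < cumulative) {
      return connectedNotes[i]; // this connected note should be played
    }
  }
  return null; // nothing fires this time
}
```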

Some Code. Why Not

This is the code I’m using to run the demo shown above –

//-----------------------------
// play / stop procedures
//-----------------------------
var playState = false;

$("body").click(function() {
  if (playState === false) {
    play();
  } else {
    stop();
  }
});

function play(){
  playState = true;
  $("#click").html("i told you. it is now flickering really bad<br>click anywhere to stop");
  console.log('playing...');
  Tone.Transport.schedule(function(time){
    noteArray[0].trigger(time);
  }, 0.1);
  Tone.Transport.schedule(function(time){
    noteArray[1].trigger(time);
  }, 0.4);

  // Tone.Transport.loopEnd = '1m';
  // Tone.Transport.loop = true;

  Tone.Transport.start('+0.1');
  setTimeout(backColorSignal, 100);
}

function stop(){
  playState = false;
  $("#click").html("it is probably still flickering really bad, but it will stop eventually<br>click anywhere to keep it going");
  console.log('stopping...!');
  console.log(Tone.Transport.seconds);
  Tone.Transport.stop();
  Tone.Transport.cancel(0);
}

//-----------------------------
// creating an array of note objects (noteArray)
//-----------------------------

// array of manually added notes
var noteArray = [];

// note constructor
function noteObject(index, color, frequency, amplitude, duration, loops, connected_notes_arry) {
  this.index = index;
  this.color = color;
  this.frequency = frequency;
  this.amplitude = amplitude;
  this.duration = duration;
  this.loops = loops;
  this.connected_notes = connected_notes_arry;
  this.trigger = function(time, index=this.index, frequency=this.frequency, duration=this.duration, connected=this.connected_notes){
    // console.log('time: ' + time);
    // console.log('index: ' + index);
    console.log('');
    console.log('------------');
    console.log('it is ' + Tone.Transport.seconds);
    console.log('playing: ' + index);
    console.log('frequency: ' + frequency);
    console.log('duration: ' + duration);

    synthArray[index].triggerAttackRelease(frequency, duration, time);

    if (connected !== null) {
      var nextIndex = connected[0];
      var nextTime = 0.01 + Tone.Transport.seconds + connected[1] + parseFloat((Math.random() * (connected[2] - connected[3]) + connected[3]).toFixed(4));
      console.log('generated: ' + nextIndex);
      console.log('at: ' + nextTime);
      Tone.Transport.schedule(function(time){
        noteArray[nextIndex].trigger(time);
      }, nextTime);
    }
  };
}

// starting notes
noteArray.push(new noteObject(0, '7D82B8', 'c3', 1, 1.520*5, 0, [2,1.520*5,0.020*5,0.020*5,0.9]));
noteArray.push(new noteObject(1, 'B7E3CC', 'e2', 1, 6.880*5, 0, null));

// the rest of the notes
noteArray.push(new noteObject(2, 'C4FFB2', 'b2', 1, 1.680*5, 0, [3,1.520*5,0.40,0.80,1]));
noteArray.push(new noteObject(3, 'D6F7A3', 'c#2', 1, 3.640*5, 0, [4,0,0.8,1,1]));
noteArray.push(new noteObject(4, 'ADD66D', 'b2', 1, 0.650*10, 0, [5,0.650*10,0.2,0.2,1]));
noteArray.push(new noteObject(5, 'A4FF7B', 'a2', 1, 1.800*5, 0, [6,0,0,0,1]));
noteArray.push(new noteObject(6, '7BFFD2', 'f#2', 0.2, 1.800*5, 0, [0, 1.800*5, 1, 2, 1]));


//-----------------------------
// creating an array of synth objects (synthArray), based on note objects (noteArray)
//-----------------------------

var synthArray = [];

for (var i=0;i<noteArray.length;i++){
  var options = {
    vibratoAmount:1,
    vibratoRate:5,
    harmonicity:4,
    voice0:{
      volume:-30,
      portamento:0,
      oscillator:{
        type:"sine"
      },
      filterEnvelope:{
        attack:0.01,
        decay:0,
        sustain:0.5,
        release:1,
      },
      envelope:{
        attack:0.1,
        decay:0,
        sustain:0.5,
        release:1,
      },
    },
  voice1:{
    volume:-30,
    portamento:0,
    oscillator:{
      type:"sine"
    },
    filterEnvelope:{
      attack:0.01,
      decay:0,
      sustain:1,
      release:0.5,
    },
    envelope:{
      attack:0.01,
      decay:0,
      sustain:0.5,
      release:1,
    }
  }
  };
  synthArray.push(new Tone.DuoSynth(options).toMaster());
}

//-----------------------------
// low-tech visualization
//-----------------------------
var b = new Tone.Meter("signal");
synthArray[1].connect(b);
// synthArray[2].connect(b);

function backColorSignal(){
  if (b.value === 0){
    setTimeout(backColorBlue, 100);
  } else {
    var color = "rgba(0, 0, 255," + b.value + ")";
    $("html").css("background-color", color);
    setTimeout(backColorSignal, 100);
    // console.log('b.value: ' + b.value + " " + color);
  }
}

function backColorBlue(){
  var color = "rgba(0, 0, 255,1)";
  $("html").css("background-color", color);
  setTimeout(backColorSignal, 100);
}

 

MANIFESTO

Even though I’ve spent most of my time up until now creating new content — from short stories, articles, plays, songs, and drawings, to digital experiences and commercial products — I’ve never sat down to think about my manifesto. So now I did, and it felt just right.

At first, I felt that writing my manifesto could be a process of reinventing my creative self. As it turned out, writing my manifesto was all about clearing the dust off my original intentions and creative needs. It felt like a return to my inner creative studio, where all my inspirations are still hanging on the wall, and the stereo is still playing the great old CDs.

I guess that my present-day manifesto could be summarized into a single sentence — “Keep on searching for your own voice that will carry your words and your ideas across mediums.”

Having said that, here is a more detailed version of what I’ll try to achieve during this semester, and hopefully, forever, as a list of creative principles:

  • Aesthetics – Aesthetics could take many forms. It could be seen as a visual concept or heard as an idea. Aesthetics could be felt in the work process or received as an inspiration. To me, aesthetics is an invitation to look beyond it. It is like a clear glass of red wine that makes a person focus on the red tones of the wine, not on the glass. It is like a magic shower that makes you feel mentally clean after experiencing it. I would like my works to be aesthetic in a way that would invite a viewer or a listener in and would influence his / her identity and self-esteem.
  • Surprising, and sometimes unpredictable – The expectation is what leads the viewer / listener to pay attention to my work during its presentation. To keep the work ‘alive’ with the viewer / listener after its presentation, the work should be surprising. I want my work not only to do what is definitely expected from it, but also what is beyond any expectations.
  • Emotional and humorous – To me, humor is an opportunity to cross the line and to experiment with new shapes and forms. I would like my work not only to be light and humorous, but also emotional, expressive and satiric.
  • Generative, model driven – I love patterns that change repeatedly. I want to unlock the model or the system behind my work, and to utilize it to its maximum potential and beyond.
  • Open the imagination – I expect my work to present a solution, but also to shed light on new problems and possible further development.
  • Calmness and balance – I want my work to form from my inner self, my thoughts, and my own imagination. All my inspirations and previous experiences should take the form of calmness and balance during the creation of the work. This inner balance should be present in the work itself.
  • Clarity, honesty and humbleness – I would like my work to come from an honest and humble place. It should be clear and transparent. It doesn’t mean that it has to be an open source project, but its content must be understandable. The viewer / listener should be able to know what the work is doing, and possibly, how it does it and what was the process of making it.
  • Crazy storytelling – As a result of the above principles, I would want my work to tell a crazily beautiful story, in a beautifully crazy way. Such story might only exist within the context of the work, but could serve as an inspiration for the work to come.

NOMNOM 2: The Video Machine – The Programming Behind the Project

Credit: This project was developed together with Mint. Thank you :))

For my ICM final, I worked on an improved version of my mid-term pcomp project.

This time the computational challenges were even greater.
Here is the outcome after long weeks of intensive coding –

NomNom: The Video Machine

NOMNOM’s github repository can be found here – https://github.com/dodiku/the_video_machine_v2

Synching the videos

As a conclusion from the mid-term project, we wanted to give users the ability to play cohesive music. In order to do that, we knew we had to find a way to make sure that all the videos are played in sync (automatically).

There are many ways to make sure media is played synchronously, but none of them deal with videos. To work around that, we repurposed two functions from the p5.js sound library — Phrase and Part.
We used these functions to handle our playback as a loop made of bars. We can call any callback function at any point on the loop, and therefore, we can use them to time our play and stop functions (and many others), based on the user's actions.


/*********************************************
SETUP FUNCTION (P5.JS)
*********************************************/
function setup() {
  noCanvas();

  // setting up serial communication
  serial = new p5.SerialPort();
  serial.on('connected', serverConnected);
  serial.on('open', portOpen);
  serial.on('data', serialEvent);
  serial.on('error', serialError);
  serial.list();
  serial.open(portName);

  // creating a new 'part' object (http://p5js.org/reference/#/p5.Part)
  allVideosPart = new p5.Part();
  allVideosPart.setBPM(56.5);

  // adding general phrase (http://p5js.org/reference/#/p5.Phrase) to the 'part'
  var generalSequence = [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0];
  generalPhrase = new p5.Phrase('general', countSteps, generalSequence);
  allVideosPart.addPhrase(generalPhrase);

  for (var i = 0; i<16; i++){
    allVideosPart.addPhrase(new p5.Phrase(i, videoSteps, [0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0]));
  }

  // console.log(allVideosPart);
  allVideosPart.loop();

}

In the setup function, we initiate the Part, a Phrase per video, and a general Phrase that is used as a clock.

The ‘countSteps’ callback function is used to store the current step in a global variable, and the ‘videoSteps’ callback function is used to play and stop videos at the right time.
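Roughly, the two callbacks could look like this. This is a sketch: the function names come from the project, but the bodies and the `playVideo` helper are assumptions for illustration.

```javascript
var currentStep = 0; // global loop position, read when a new video joins the grid

// callback for the 'general' phrase, which fires on every 8th step of
// the 32-step sequence: keep track of where we are in the loop
function countSteps(time, value) {
  currentStep = (currentStep + 8) % 32;
}

// hypothetical stand-in for the project's actual playback call
function playVideo(videoIndex, time) {
  console.log('video ' + videoIndex + ' triggered at ' + time);
}

// callback for a per-video phrase: a non-zero value in the phrase's
// sequence means "trigger this video on this step"
function videoSteps(time, value) {
  playVideo(value, time);
}
```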

First success with the beat-sync feature – 

Improving the UI

We really wanted to make it easier for users to understand what is going on on the screen, and to provide a better sense of control over the videos.

To achieve that, we used the NexusUI JS library and added 4 graphical elements to every video, each of which indicates a different property of the video (number of repetitions, volume, speed, and trim).

The graphical elements are shown to the user only when the video is being played.

Also, we added a grayscale CSS filter to videos that are not being played. This way, it is easier for the user to focus on the videos that are playing and making sounds.
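The grayscale effect itself is a one-line CSS filter toggle; here is a minimal DOM sketch (the `setPlaybackLook` helper is made up for illustration, not taken from the project):

```javascript
// gray out a video element while it is stopped, restore it while playing
function setPlaybackLook(videoEl, isPlaying) {
  videoEl.style.filter = isPlaying ? 'none' : 'grayscale(100%)';
}
```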

Built to perform

While designing the technical architecture for the project, I faced many limitations, mostly because of the slow nature of the ASCII serial communication protocol. Therefore, I had to develop a very efficient internal communication protocol to compensate for the delay we had when pressing the buttons on the box. That was the only way to achieve a fast-responding controller that changes the video states on the screen immediately.

This was the first time I was required to write efficient code (and not just for the fun of it). After 2 weeks of rewriting the code, shaving off a few milliseconds every time, I came up with the following lines:

Reading data from controller (Arduino side) –


trellis.readSwitches();
for (uint8_t n = 0; n < numKeys; n++) {

  // a button was just pressed: mark it as "pressed and held"
  if (trellis.justPressed(n)) {
    LEDstatus[n] = 3;
    continue;
  }

  // while the button is held down, count the press duration
  // and blink the LED as visual feedback
  if (LEDstatus[n] == 3) {
    buttonPress[n]++;
    if (blinkTime >= 4) {
      if (trellis.isLED(n)) {
        trellis.clrLED(n);
      } else {
        trellis.setLED(n);
      }
      trellis.writeDisplay();
    }
  }

  // the button was just released: a long press always turns the
  // video on; a short press toggles its previous state
  if (trellis.justReleased(n)) {
    if (buttonPress[n] > 8) {
      LEDstatus[n] = 1;
      oldStatus[n] = 1;
      buttonPress[n] = 0;
      trellis.setLED(n);
      trellis.writeDisplay();
    } else {
      buttonPress[n] = 0;
      if (oldStatus[n] == 1) {
        LEDstatus[n] = 0;
        oldStatus[n] = 0;
        trellis.clrLED(n);
      } else {
        LEDstatus[n] = 1;
        oldStatus[n] = 1;
        trellis.setLED(n);
      }
      trellis.writeDisplay();
    }
  }
}

Parsing the data on the browser (JavaScript side) – 


/*********************************************
PARSER: PARSE DATA THAT ARRIVES FROM
ARDUINO, AND APPLY CHANGES IF NEEDED
*********************************************/
function parseData(data){

  // parsing the data by ','
  var newStatus = data.split(",");

  // turning strings into integers
  for (var x=0; x<newStatus.length; x++){
    newStatus[x] = Number(newStatus[x]);
  }

  // applying changes for each of the 16 videos
  for (var i=0; i<16; i++){

    // if there was no change in the video status --> CONTINUE
    if ((newStatus[i] !== 3) && (newStatus[i] === videos[i].status)){
      var vidID = i+1;
      vidID = "#video" + vidID;
      $(vidID).css('border-color', "rgba(177,15,46,0)");
      continue;
    }
    else {

      // getting the relevant phrase
      var phraseIndex = i;
      var updatedPhrase = allVideosPart.getPhrase(phraseIndex);

      if (newStatus[i] === 3){

        if (videos[i].originStep === null) {
          videos[i].originStep = currentStep;
        }

        changeColor(i, 1);
        showKnobs(i);

        videos[i].volume = vol;
        videos[i].cut = cut;
        videos[i].speed = speed;
        videos[i].steps = newStatus[16];
        changeKnobs(i);

        // making the video border blink
        var vidID = i+1;
        vidID = "#video" + vidID;
        if (newStatus[20] === 2) {
          if (($(vidID).css('border-color')) === "rgba(177, 15, 46, 0)"){
            $(vidID).css('border-color', "rgba(255,255,255,0.9)");
          }
          else {
            $(vidID).css('border-color', "rgba(177, 15, 46, 0)");
          }
        }


        // clearing the sequence
        for (var n=0; n<32; n++){
          updatedPhrase.sequence[n] = 0;
        }

        // applying steps changes, if any
        var stepNum = videos[i].originStep;
        for (var m=0; m<videos[i].steps; m++){
          updatedPhrase.sequence[stepNum] = 1;
          // spread the repetitions evenly across the 32-step loop
          stepNum = stepNum + Math.floor(32/videos[i].steps);
          if (stepNum > 31) {
            stepNum = stepNum - 32;
          }
        }

      }

      else if (newStatus[i] === 1) {
        videos[i].status = 1;
        changeColor(i, videos[i].status);
        var vidID = i+1;
        vidID = "#video" + vidID;
        $(vidID).css('border-color', "rgba(177,15,46,0)");
      }

      else if (newStatus[i] === 0) {
        videos[i].status = 0;
        hideKnobs(i);
        changeColor(i, videos[i].status);
        var vidID = i+1;
        vidID = "#video" + vidID;
        $(vidID).css('border-color', "rgba(177,15,46,0)");

        // clearing the sequence
        for (var n=0; n<32; n++){
          updatedPhrase.sequence[n] = 0;
        }

        videos[i].originStep = null;

      }
    }
  }
  serial.write(1);
}


When I review this code now, it all seems so simple (LOL!), but this is one of the pieces of code I'm most proud of.

After looong hours of coding, we are very happy with what we achieved 🙂

The MusicSystem Explained

Background: Why do artists still compose music as 3-5 minute songs?

Ever since popular music started being broadcast by radio stations (somewhere between the 1920s and the 1930s) and consumed by listeners all over the world, artists have recorded most of their music as 3-5 minute songs.

This convention was born out of a technical limitation: the Phonograph, an early version of the record players we use today, could only play 12” vinyl records. Moreover, when an artist recorded a new album or a new single, the only way to ship it to a local or national radio station was through the US Post Office. The biggest box one could send at that time, for a reasonable price, could only hold a 12” record. As you can probably guess, a 12” vinyl record can hold a tune no longer than 5 minutes.

A century later, music production, consumption, and distribution have gone completely digital. Even though most of the music we listen to today is basically bits of data that can be manipulated using simple algorithms, we still consume it in the 3-5 minute linear format. Unlike other mediums, such as text or video, which in many cases are consumed in a non-linear form, audio is still consumed (and composed) in short linear sprints.

I believe that in the age of data, we can do more than that.

Let’s Record Data

The MusicSystem will allow musicians to record their musical ideas, and will help them turn those ideas into an endless flow of music, structured around their own core concept.

The software will capture a live recording, extract its musical features, and format these features into a reusable data structure. Using this new data structure, the software will create countless versions and combinations that all capture the essence of the original piece.

The MusicSystem will use the data extracted from the original recording to compose new music. The original recording could be handled as one musical version generated from the data, or as the main piece of the entire tune.

The artist will be able to control the way the music is being interpreted and recomposed, as well as to set rules about the way the music will change according to a variety of inputs, such as sensors.

More about all of that in the sections below.

The System and Its Parts

Microphone: Recording Analog Signal

The initiator of the entire composition will be a recorded sound. An artist will play an acoustic instrument or amplify an electric one, and the analog sound will be captured by a microphone.

The microphone will be connected to a computer, which will run an analog-to-digital conversion to generate a digital file. The digital file will hold all the raw data about the analog recording (using this data, computers are able to play digital music files, such as .wav or .mp3 files).

Digital Audio Analysis

The purpose of The MusicSystem is to use the recorded sound as data in order to generate new music out of it (instead of playing back the recorded data itself).

The software will try to retrieve musical information from the recorded sound — from beat detection to musical structure, notes, tone, repetition, and any other feature that can be extracted from the file.
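As a toy illustration of what "retrieving musical information" can mean at the lowest level, even simple time-domain features — RMS energy as a loudness proxy, and zero-crossing rate as a rough brightness proxy — can be computed directly from the sample array of the digital file. This is a sketch, not the project's actual analysis:

```javascript
// compute two simple time-domain features from an array of
// audio samples in the range [-1, 1]
function extractFeatures(samples) {
  var sumSquares = 0;
  var crossings = 0;
  for (var i = 0; i < samples.length; i++) {
    sumSquares += samples[i] * samples[i];
    // count sign changes between consecutive samples
    if (i > 0 && (samples[i] >= 0) !== (samples[i - 1] >= 0)) {
      crossings++;
    }
  }
  return {
    rms: Math.sqrt(sumSquares / samples.length), // loudness proxy
    zeroCrossingRate: crossings / samples.length // brightness proxy
  };
}
```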

Using the Recording as a Practice Dataset

The captured and analyzed data will be fed into a neural network that will identify the relations within it. Using these relations, The MusicSystem will be able to generate a huge variety of compositions that encapsulate the same relations.

Since we are dealing with generative music composed by a machine learning algorithm with a small dataset to practice on, the artist and the machine will have to ‘converse’ in order to help the machine focus faster on the expected results. The feedback from the artist will be used as a second dataset that will be fed into the neural network.

Just like at the beginning, at a certain point the artist will be able to decide whether the music will be recorded and saved as a (very) long file, or saved as a set of rules and configurations. These rules and configurations will be saved as a file, which will be used by The MusicSystem player to generate music based on the artist's recordings and decisions.

Playing Infinitely

Once the data has been analyzed, The MusicSystem will generate digital sound based on this data, infinitely.

The infinite playing mode will allow artists to experiment with different aspects of the musical piece, with the effects of changes (see below) or new recordings, and to capture snippets of the infinite loop and make them permanent (played in a loop, which means that these pieces will no longer be randomly generative).

The end user will listen to the music in that exact infinite form. The artist will be able to decide where the infinite playing starts, but not where it ends.

Controlling the New Composition

If we use the recorded sound as a data feed and not as part of the desired outcome, we start to lose the connection with the original recording. The original recording only ‘inspires’ the end result; it does not strictly dictate it.

If the captured data can be interpreted and used to generate new music, we can assume that one of the outcomes could be a tune that is identical to the original recording. The probability that the software will play the original recording will be controlled by the artist, who will be able to choose how the software handles the analog recording:

  1. As a final result that will be played as recorded
  2. As data that will teach the software how to generate new music
  3. As a combination – The recorded audio will be played entirely, and the data extracted from it will be used to generate new music.

Besides that, the artist will be able to control the generative outcome in a variety of ways, such as:

  • Highlighting specific recordings – The artist will be able to decide which of the recordings will be handled as ‘major’ recordings (having more influence on the end result), and which will be handled as ‘minor’ recordings.
  • Using the generative sound as an input – The artist will be able to mark a specific part of the generative music and use it as a new input for The MusicSystem.
  • Strict vs. loose music generation – The artist will be able to decide how ‘close’ the generative music will be to the original narrative enclosed in the recorded parts.
  • Sensors – The artist will be able to use sensors to change the musical outcome. For example, when the user is walking, in a dark room, or breathing heavily, the music will be played differently.
  • 3rd party data (rules) – The artist will be able to use 3rd party APIs and datasets to affect the music. For example, the music will be heard differently on holidays, or on a night when Phoenix Suns wins a basketball game.

Recording Some More

At this point of the interaction, the cycle can start to repeat itself in order to expand the results or to focus them on a specific musical idea.

The artist will be able to record more and more analog sounds, each of which will be distilled into a new dataset that will make The MusicSystem more educated about the artist's direction.

Commits and rollbacks

To allow better communication with the musical piece, I would like the artist to feel free to make decisions, and then change them. In order to do that, I would like to implement a git-like versioning mechanism, allowing the artist to ‘commit’ changes and to roll back to an older version of the musical piece.
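A minimal sketch of this commit/rollback idea is an append-only history of configuration snapshots (the shape of the configuration object is assumed; a real implementation would track the full rule set):

```javascript
// append-only history of configuration snapshots
var history = [];

// store a deep copy so later edits don't mutate older revisions;
// returns the new "revision number"
function commit(config) {
  history.push(JSON.parse(JSON.stringify(config)));
  return history.length - 1;
}

// return a copy of an older configuration to roll back to
function rollback(revision) {
  return JSON.parse(JSON.stringify(history[revision]));
}
```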

Open Questions

This broad concept raises some unsolved questions:

Which Data Should Be Analyzed by the Software?

The software can analyze the DSP data that is generated through the analog-to-digital conversion of the recorded sound. This is the data that is used to create and play the digital music file.

On the other hand, the software can analyze the digital file itself, and retrieve information from that analysis.

It is currently unclear which of the two could be more relevant for automatically generating (new) music.

What is the Relevant Data?

Many types of data can be extracted from a digital music file. What data is relevant for this specific project? How can this data be manipulated or iterated on so that it yields data that is relevant for music creation (or music synthesis)?

How to Capture the Essence of the Original Recording?

It is critical to isolate the data that is most indicative of the ‘original essence’ of the recorded piece. The questions ‘what is an essence?’ and ‘what determines the essence of a musical piece?’ can be raised as well.

What Is the Relation Between the Software and the Composition Itself?

Let’s assume that we use data A, extracted from the original recording, to produce data B, which will be used to generate new music. Isn’t the decision to produce data B, instead of data C, a composition decision? Will the neural network make these decisions in a ‘trivial’ way, or is it the developer who is actually pulling the composition strings?

How to Create an Infinite Interaction?

In order to create an infinite piece of music, it could be assumed that an infinite creative process should be applied, or at least a procedure that allows such a creative process.

The current system design will require the musician to put the instrument down in order to interact with the software.

Inspirations

There are two major inspirations to this project:

  • The Echo Nest API – A music information retrieval API that was used to extract musical features from a recorded track. The API, which is currently closed to the public, inspired this project by demonstrating the technical possibilities in the field.
  • The Infinite Jukebox, developed by Paul Lamere – This web application demonstrated the creative applications that are currently possible using musical data, such as the data provided by the Echo Nest API.