NOMNOM 2: The Video Machine – The Programming Behind the Project

Credit: This project was developed together with Mint. Thank you :))

For my ICM final, I worked on an improved version of my mid-term pcomp project.

This time the computational challenges were even greater.
Here is the outcome after long weeks of intensive coding –

NomNom: The Video Machine

NOMNOM’s github repository can be found here –

Synching the videos

As a conclusion from the mid-term project, we wanted to give users the ability to play cohesive music. In order to do that, we knew we had to find a way to make sure that all the videos play in sync, automatically.

There are many ways to make sure media is played synchronously, but none of them deal with video. To work around that, we repurposed two classes from the p5.js sound library — Phrase and Part.
We used them to handle our playback as a loop made of bars. We can fire any callback function at any point in the loop, and therefore we can use them to time our play and stop functions (and many others) based on the user's actions.

function setup() {

  // setting up serial communication
  serial = new p5.SerialPort();
  serial.on('connected', serverConnected);
  serial.on('open', portOpen);
  serial.on('data', serialEvent);
  serial.on('error', serialError);

  // creating a new 'part' object (the playback loop)
  allVideosPart = new p5.Part();

  // adding the general phrase (the clock) to the 'part'
  var generalSequence = [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0];
  generalPhrase = new p5.Phrase('general', countSteps, generalSequence);
  allVideosPart.addPhrase(generalPhrase);

  // adding an empty phrase per video
  for (var i = 0; i < 16; i++){
    allVideosPart.addPhrase(new p5.Phrase(i, videoSteps, [0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0]));
  }

  // console.log(allVideosPart);
}

In the setup function, we initialize the Part, a Phrase per video, and a general Phrase that is used as a clock.

The ‘countSteps’ callback function stores the current step in a global variable, and the ‘videoSteps’ callback function plays and stops each video at the right time.
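As a rough sketch of how these two callbacks fit together (the names and the 32-step loop length come from our code above, but the bodies here are a simplified reconstruction, not the exact production code):

```javascript
// simplified sketch of the two p5.Part callbacks; p5.js passes each
// callback the scheduled time and the sequence value at the current step
var currentStep = 0;

// 'countSteps' runs on the general phrase and acts as the clock:
// it advances a global counter around the 32-step loop
function countSteps(time, value) {
  currentStep = (currentStep + 1) % 32;
}

// 'videoSteps' runs on each video's phrase: a non-zero value at the
// current step means the video should be (re)triggered now
function videoSteps(time, value) {
  if (value !== 0) {
    // here the real code restarts the matching <video> element
  }
}
```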

First success with the beat-sync feature – 

Improving the UI

We really wanted to make it easier for users to understand what is going on on the screen, and to give them a better sense of control over the videos.

In order to achieve that, we used the NexusUI JS library and added four graphical elements to every video, each indicating a different property (number of repetitions, volume, speed, and trim).

The graphical elements are shown to the user only when the video is being played.

We also added a grayscale CSS filter to videos that are not being played. This way, it is easier for the user to focus on the videos that are playing and making sound.
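The effect itself boils down to one CSS filter value per state; a tiny helper (the function name is mine — the real code sets the filter inline via jQuery) makes the decision explicit:

```javascript
// hypothetical helper: which CSS filter a video tile should get,
// based on whether it is currently playing
function cssFilterFor(isPlaying) {
  return isPlaying ? 'none' : 'grayscale(100%)';
}

// usage (sketch): $('#video3').css('filter', cssFilterFor(false));
```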

Built to perform

While designing the technical architecture for the project, I faced many limitations, mostly because of the slow nature of the ASCII serial communication protocol. I therefore had to develop a very efficient internal communication protocol to compensate for the delay between pressing a button on the box and seeing its effect. That was the only way to achieve a fast-responding controller that changes the video states on the screen immediately.

This was the first time I was required to write efficient code (and not just for the fun of it). After two weeks of rewriting the code, shaving off a few milliseconds each time, I came up with the following lines:

Reading data from controller (Arduino side) –

for (uint8_t n = 0; n < numKeys; n++) {
  if (trellis.justPressed(n)) {
    LEDstatus[n] = 3;
  }

  // a status of 3 means 'pressed': count the press length and blink the LED
  if (LEDstatus[n] == 3) {
    buttonPress[n]++;
    if (blinkTime >= 4) {
      if (trellis.isLED(n)) {
        trellis.clrLED(n);
      } else {
        trellis.setLED(n);
      }
    }
  }

  if (trellis.justReleased(n)) {
    if (buttonPress[n] > 8) {
      // long press: the video stays on
      LEDstatus[n] = 1;
      oldStatus[n] = 1;
      buttonPress[n] = 0;
    } else {
      // short press: toggle the video state
      buttonPress[n] = 0;
      if (oldStatus[n] == 1) {
        LEDstatus[n] = 0;
        oldStatus[n] = 0;
      } else {
        LEDstatus[n] = 1;
        oldStatus[n] = 1;
      }
    }
  }
}
Parsing the data on the browser (JavaScript side) – 

function parseData(data){

  // splitting the data by ','
  var newStatus = data.split(",");

  // turning strings into integers
  for (var x = 0; x < newStatus.length; x++){
    newStatus[x] = parseInt(newStatus[x]);
  }

  for (var i = 0; i < 16; i++){
    if ((newStatus[i] !== 3) && (newStatus[i] === videos[i].status)){
      var vidID = "#video" + (i + 1);
      $(vidID).css('border-color', "rgba(177,15,46,0)");
    } else {

      // getting the relevant phrase
      var updatedPhrase = allVideosPart.getPhrase(i);

      if (newStatus[i] === 3){

        if (videos[i].originStep === null) {
          videos[i].originStep = currentStep;
        }

        changeColor(i, 1);

        videos[i].volume = vol;
        videos[i].cut = cut;
        videos[i].speed = speed;
        videos[i].steps = newStatus[16];

        // making the video border blink
        var vidID = "#video" + (i + 1);
        if (newStatus[20] === 2) {
          if ($(vidID).css('border-color') === "rgba(177, 15, 46, 0)"){
            $(vidID).css('border-color', "rgba(255,255,255,0.9)");
          } else {
            $(vidID).css('border-color', "rgba(177, 15, 46, 0)");
          }
        }

        // clearing the sequence
        for (var n = 0; n < 32; n++){
          updatedPhrase.sequence[n] = 0;
        }

        // applying step changes, if any, wrapping around the 32-step loop
        var stepNum = videos[i].originStep;
        for (var m = 0; m < videos[i].steps; m++){
          if (stepNum > 31) {
            stepNum = stepNum - 32;
          }
          updatedPhrase.sequence[stepNum] = 1;
          stepNum += 32 / videos[i].steps;
        }

      } else if (newStatus[i] === 1) {
        videos[i].status = 1;
        changeColor(i, videos[i].status);
        var vidID = "#video" + (i + 1);
        $(vidID).css('border-color', "rgba(177,15,46,0)");

      } else if (newStatus[i] === 0) {
        videos[i].status = 0;
        changeColor(i, videos[i].status);
        var vidID = "#video" + (i + 1);
        $(vidID).css('border-color', "rgba(177,15,46,0)");

        // clearing the sequence
        for (var n = 0; n < 32; n++){
          updatedPhrase.sequence[n] = 0;
        }

        videos[i].originStep = null;
      }
    }
  }
}

When I review this code now, it all seems so simple (LOL!), but this is one of the pieces of code I'm most proud of.

After looong hours of coding, we are very happy with what we achieved 🙂

The MusicSystem Explained

Background: Why do artists still compose 3-5 minute songs?

Ever since popular music started being broadcast by radio stations (sometime between the 1920s and the 1930s) and consumed by listeners all over the world, artists have recorded most of their music as 3-5 minute songs.

This convention was born out of a technical limitation: the phonograph, an early version of the record players we use today, could only play 12” vinyl records. Moreover, when an artist recorded a new album or a new single, the only way to ship it to the local or national radio station was through the US Post Office. The biggest box one could send at that time, for a reasonable price, could hold only a 12” record. As you can probably guess, a 12” vinyl record can hold a tune no longer than 5 minutes.

A century later, music production, consumption, and distribution have gone completely digital. Even though most of the music we listen to today is basically bits of data that can be manipulated using simple algorithms, we still consume it in the 3-5 minute linear format. Unlike other mediums, such as text or video, which in many cases are consumed in a non-linear form, audio is still consumed (and composed) in short linear sprints.

I believe that in the age of data, we can do more than that.

Let’s Record Data

The MusicSystem will allow musicians to record their musical ideas, and will help them turn those ideas into an endless flow of music, structured around their own core concept.

The software will capture a live recording, extract its musical features, and format these features into a reusable data structure. Using this data structure, the software will create countless versions and combinations that all retain the essence of the original piece.

The MusicSystem will use the data extracted from the original recording to compose new music. The original recording could be handled as just one musical version generated from the data, or as the main piece of the entire tune.

The artist will be able to control the way the music is being interpreted and recomposed, as well as to set rules about the way the music will change according to a variety of inputs, such as sensors.

More about all of that in the sections below.

The System and Its Parts

Microphone: Recording Analog Signal

The initiator of the entire composition will be a recorded sound. An artist will play an acoustic instrument or amplify an electric one, and the analog sound will be captured by a microphone.

The microphone will be connected to a computer, which will run an analog-to-digital conversion to generate a digital file. The digital file will hold all the raw data about the analog recording (using this data, computers are able to play digital music files, such as .wav or .mp3 files).

Digital Audio Analysis

The purpose of The MusicSystem is to use the recorded sound as data, in order to generate new music out of it (instead of playing the recorded data itself).

The software will try to retrieve musical information from the recorded sound — from beat detection to musical structure, notes, tone, repetition, and any other feature that can be extracted from the file.
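To make one of these features concrete, here is a sketch of a spectral centroid computation — a common proxy for a sound's 'brightness' or tone — over a single FFT frame. This is an illustrative formula, not part of The MusicSystem's actual code:

```javascript
// spectral centroid: the magnitude-weighted mean of the FFT bin
// frequencies; higher values correspond to 'brighter' sounds
function spectralCentroid(magnitudes, sampleRate) {
  var binWidth = (sampleRate / 2) / magnitudes.length; // Hz per bin
  var weighted = 0;
  var total = 0;
  for (var i = 0; i < magnitudes.length; i++) {
    weighted += (i * binWidth) * magnitudes[i];
    total += magnitudes[i];
  }
  return total === 0 ? 0 : weighted / total;
}
```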

Using the Recording as a Practice Dataset

The captured and analyzed data will be fed into a neural network that will identify the relations within it. Using these relations, The MusicSystem will be able to generate a huge variety of compositions that encapsulate the same relations.

Since we are dealing with generative music, composed by a machine learning algorithm with a small dataset to practice on, the artist and the machine will have to ‘converse’ in order to help the machine converge faster on the expected results. The feedback from the artist will be used as a second dataset that will be fed into the neural network.

Just like at the beginning, at a certain point the artist will be able to decide whether the music will be recorded and saved as a (very) long file, or saved as a set of rules and configurations. These rules and configurations will be saved as a file, which will be used by The MusicSystem player to generate music based on the artist's recordings and decisions.

Playing Infinitely

Once the data has been analyzed, The MusicSystem will generate digital sound based on this data, infinitely.

The infinite playing mode will allow artists to experiment with different aspects of the musical piece, with the effects of changes (see below) or of new recordings, and to capture snippets of the infinite stream and make them permanent (played in a loop, which means that these pieces will no longer be randomly generated).

The end user will listen to the music in that exact infinite form. The artist will be able to decide where the infinite playing starts, but not where it ends.

Controlling the New Composition

If we use the recorded sound as a data feed and not as part of the desired outcome, we start to lose the connection with the original recording. The original recording only ‘inspires’ the end result, but does not strictly dictate it.

If the captured data can be interpreted and used to generate new music, we can assume that one of the outcomes could be a tune that is identical to the original recording. The probability that the software will play the original recording will be controlled by the artist. The artist will be able to control the way the software will handle the analog recording:

  1. As a final result that will be played as recorded
  2. As data that will teach the software how to generate new music
  3. As a combination – The recorded audio will be played entirely, and the data extracted from it will be used to generate new music.

Besides that, the artist will be able to control the generative outcome in a variety of ways, such as:

  • Highlighting specific recordings – The artist will be able to decide which of the recordings will be handled as a ‘major’ recording (with more influence on the end result), and which ones will be handled as ‘minor’ recordings.
  • Using the generative sound as an input – The artist will be able to mark a specific part of the generative music, and use it as a new input for The MusicSystem.
  • Strict vs. loose music generation – The artist will be able to decide how ‘close’ the generative music will be to the original narrative enclosed in the recorded parts.
  • Sensors – The artist will be able to use sensors to change the musical outcome. For example, when the user is walking, in a dark room, or breathing heavily, the music will be played differently.
  • 3rd party data (rules) – The artist will be able to use 3rd party APIs and datasets to affect the music. For example, the music will be heard differently on holidays, or on a night when the Phoenix Suns win a basketball game.

Recording Some More

At this point of the interaction, the cycle can start to repeat itself in order to expand the results or to focus them on a specific musical idea.

The artist will be able to record more and more analog sounds, each of which will be extracted into a new dataset that will make The MusicSystem more educated about the artist's direction.

Commits and rollbacks

To allow better communication with the musical piece, I would like the artist to feel free to make decisions, and then change them. In order to do that, I would like to implement a git-like mechanism that allows the artist to ‘commit’ changes and to roll back to an older version of the musical piece.
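A minimal sketch of what that commit/rollback could look like (a hypothetical shape — real git stores diffs and branches, while this just snapshots the whole settings object on every commit):

```javascript
// minimal commit/rollback sketch for the composition's settings
var history = [];

function commit(settings) {
  // deep-copy so later edits don't mutate the stored snapshot
  history.push(JSON.parse(JSON.stringify(settings)));
}

function rollback(stepsBack) {
  // returns the settings as they were 'stepsBack' commits ago
  return history[history.length - 1 - stepsBack];
}
```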

Open Questions

This broad concept raises some unsolved questions:

Which Data Should Be Analyzed by the Software?

The software can analyze the DSP data that is generated through the analog-to-digital conversion of the recorded sound. This is the data that is used to create and play the digital music file.

On the other hand, the software can analyze the digital file itself, and retrieve information from this analysis.

It is currently unclear which data would be more relevant for automatically generating new music.

What is the Relevant Data?

Many types of data can be extracted from a digital music file. Which data is relevant for this specific project? How can this data be manipulated or iterated on to generate data that is relevant for music creation (or music synthesis)?

How to Capture the Essence of the Original Recording?

It is critical to isolate the data that is most indicative of the ‘original essence’ of the recorded piece. The questions ‘what is an essence?’ and ‘what determines the essence of a musical piece?’ can be raised as well.

What is the relation between the software and the composition itself?

Let’s assume that we use data A, extracted from the original recording, to produce data B, which will be used to generate new music. Isn’t the decision to produce data B, instead of data C, a composition decision? Will the neural network make these decisions in a ‘trivial’ way, or is it the developer who is actually pulling the composition strings?

How to Create an Infinite Interaction?

In order to create an infinite piece of music, it could be assumed that an infinite creative process should be applied, or at least a procedure that allows such creative process.

The current system design will require the musician to put the instrument down in order to interact with the software.


There are two major inspirations for this project:

  • The Echo Nest API – A music information retrieval API that was used to extract musical features from a recorded track. The API, which is currently closed to the public, inspired the technical possibilities in the field.
  • The Infinite Jukebox, developed by Paul Lamere – This web application inspired the creative applications that are currently possible using musical data, such as the data provided by the Echo Nest API.



Controlling video playback features


For the first time in my ITP history, I was able to combine a home assignment for ICM with a home assignment for Pcomp.

I created a video manipulation interface that can be controlled by a physical controller.
The entire project was built with Mint for our Physical Computing class mid-term.

The Video Machine – Web Interface


The Video Machine – Web Interface from Dror Ayalon on Vimeo.


I used the JavaScript video API to do the following manipulations on the video playback:

  • Loop – Playing the video in an infinite loop.
  • Volume – Changing the volume of the sound.
  • Cut – Trimming the length of the video.
  • Speed – Changing the speed of the video playback.
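Three of these map directly onto properties of the HTML5 video element; ‘cut’ needs a little logic to rewind when playback crosses the trim point. A sketch (the function names and the shape of the `controls` object are mine):

```javascript
// sketch: applying the controller state to an HTML5 <video> element
function applyControls(video, controls) {
  video.loop = controls.loop;          // Loop: restart when the video ends
  video.volume = controls.volume;      // Volume: 0.0 (mute) to 1.0 (full)
  video.playbackRate = controls.speed; // Speed: 1.0 is normal playback
}

// Cut: meant to run on the video's 'timeupdate' event; returns the
// position to seek to (0 = rewind) or null to leave playback alone
function cutCheck(currentTime, cutSeconds) {
  return currentTime >= cutSeconds ? 0 : null;
}
```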
The Video Machine


Here is the JavaScript code I used for this project –

The Video Machine


The Video Machine is a video controller, powered by an Arduino, that controls the playback of videos presented in a web browser. When a button on the controller is pressed, the corresponding video plays on the screen and is heard through the speakers.
Videos are played in an infinite loop.
Only the videos that are being played are heard.

I was lucky enough to work on this project with the super talented Mint for our Physical Computing class mid-term.
Working with Mint not only was a great learning experience, but also a lot of fun! I hope I’ll be able to work with her again on our next project (more on that below).

The Video Machine from Dror Ayalon on Vimeo.

Many thanks to Joe Mango, our beloved resident, who assisted a lot with finding the right technologies for the project, and helped us at one critical moment when suddenly nothing worked.

The Video Machine – Split Screen from Dror Ayalon on Vimeo.

The building process

The process of building The Video Machine went through the following stages:

  • Prototyping – Once we had a broad idea about what we wanted to make, we wanted to test how hard it would be to build such an interaction, and whether the interaction felt ‘right’ to us.
  • Understanding the complications – The prototyping stage helped us understand what the possible complications of this product could be, and what its limitations might be. We analysed the limitations of the serial communication between the Arduino board and our laptop computer, and which types of video manipulations could be easily achieved using JavaScript.
    Understanding what’s possible helped us shape our final design and its features.
  • Designing the architecture – Before we started to build the final product, we talked about the technical design of the product under the hood. These decisions basically defined the way the end product would operate, and the way users would interact with it.
  • Picking the technologies – To apply our technical design, we needed to find the right tools.
    For the video manipulations, we decided to use vanilla JavaScript, because of its easy-to-use video API. The biggest discussion was around the implementation of the buttons the user presses in order to play the videos. After some research, and brainstorming with Joe Mango, we decided to use the Adafruit Trellis. That was probably the most important decision we made, and one that made this project possible given the short amount of time we had at that point (four days).
  • Building, and making changes – We started to assemble the project and write the needed code. While doing that, we changed our technical design a few times in order to overcome some limitations we learned about along the way. And then came the moment when everything worked smoothly.
The Video Machine – Final product

Some code

The entire code can be viewed on our GitHub repository.


The reactions to The Video Machine were amazing. The signals started to arrive at the prototyping stage, when people constantly wanted to check it out.

When we showed the final project to people on the ITP floor, it seemed that everyone wanted to get their hands on our box.

The Video Machine

People were experimenting, listening, looking, clicking, laughing, some of them even lined up to use our product.

The Video Machine

Further work

I hope that Mint and I will be able to continue to work on this project for our final term.
I cannot wait to see the second version of The Video Machine.
I believe that the goals for the next version would be:

  • To add more functionality that will allow better and easier video/sound manipulation.
  • To make playing very easy for people with no knowledge of music or playing live music. Beat sync could be a good start. The product should allow anyone to create nice tunes.
  • To find a new way to interact with the content using the controller. This new interaction needs to be something that allows some kind of manipulation of the video or the sound that is not possible (or less convenient) using the current, typical controller interface.
  • To improve the content so all videos will be useful for as many types of music as possible.
  • To improve the web interface to make it adjustable to different screen sizes.
The Video Machine – Controller

Domino Blueprint

For this week’s assignment, I created a sequential domino scene, as an homage to a game I used to play a lot when I was younger. I still think it is a brilliant game.

Take a look – click here

Or watch the video –

Domino Blueprint from Dror Ayalon on Vimeo.

This piece required many lines of code, but it was somewhat expected :\

btw – I’m officially in ♥ with object constructors.

Canvas Duel

The purpose of this app is to run some user testing, and to investigate how people find the experience of drawing (or designing) with a machine that tries to complete the next step in the drawing for them.

Currently, the app handles a single user. This is the first step towards implementing this concept: the user can draw on a canvas, and hit the ‘space’ key to apply his/her own changes.

You can try it out yourself here.

The code can be found here.

The ‘Loopster’ failure


This week I tried to build a project that originally seemed simple to me. I wanted to build Loopster – a web app that would allow users to record short samples of music, save them, and loop them to create a musical piece. I wanted Loopster to allow users to share their loops publicly, and to use loops by other users. Sort of a GitHub for music making.

So the goals were:

  • To build a web application that will allow users to record a live stream (basically an analog stream). The recording could be done using the laptop microphone or the laptop input jack.
  • To allow users to trim their loops (or to try to detect the beats and do it automatically).
  • To allow users to put some loops, created by them or by others, on a sequencer, and to enjoy them all together.

Bottom line: Almost none of that happened.


I decided to use the Web Audio API to implement the main functionality of the app. The Web Audio APIs are a little complicated, and it took me about 1-2 hours to be able to record sound through the laptop microphone.

Anything beyond that just failed to work.
I spent another 8-10 hours trying to make some progress on this project, but nothing seemed to work anywhere near my expectations.

Eventually, the only thing this app can do is open the laptop microphone and play what goes through it (which causes immediate feedback..).


This is the (complicated) code for this (simple) interaction:
(For some reason, WordPress took my HTML code too literally.)

var AudioContext = window.AudioContext || window.webkitAudioContext; // cross-browser compatibility
var myAudio = new AudioContext();
var record;

// get the user's audio as an input
navigator.getUserMedia = (navigator.getUserMedia ||
                          navigator.webkitGetUserMedia ||
                          navigator.mozGetUserMedia ||
                          navigator.msGetUserMedia);

// audio nodes
var gain = myAudio.createGain();
gain.gain.value = 0;

// connecting the nodes
if (navigator.getUserMedia) {
   console.log('getUserMedia supported.');
   navigator.getUserMedia(
      { audio: true },

      // Success callback
      function(stream) {
         play(stream);
         recordInput(stream);
         showButtons();
      },

      // Error callback
      function(err) {
         console.log('The following gUM error occured: ' + err);
      }
   );
} else {
   console.log('getUserMedia not supported on your browser!');
}

function showButtons(){
  // revealing the play/stop buttons once the stream is live
  buttonEvents();
}

function play(stream) {
  var source = myAudio.createMediaStreamSource(stream);
  source.connect(gain);
  gain.connect(myAudio.destination);
}

// start recording
function recordInput(stream){
  record = new MediaRecorder(stream);
  record.start();
}

function buttonEvents() {
  console.log('callback worked');
  $('#play').click(function(){ // (assumed button ids)
    gain.gain.value = 0.5;
    console.log('click on play');
  });
  $('#stop').click(function(){
    gain.gain.value = 0;
    console.log('click on stop');
  });
}

// stop recording
$(window).keypress(function(key) {
  if ((key.keyCode === 32) && (record.state === 'recording')){
    console.log('Space pressed -- stopping recording process');
    record.stop();
    // record.exportWAV( doneEncoding );
  }
});
You can try to use it via this link, but Chrome (and I’m sure this is true of any other browser) doesn’t allow a web app to open the microphone if the app’s files are not served over a secure connection.


If you still want to see it happening, you can go to the project page on GitHub, download the files, and open index.html on your beloved browser.

Learning (at least trying to..)

Ok, some lessons from this unpleasant experience:

  • I chose the wrong tools. The Web Audio API is definitely a wonderful API, but learning it was not the purpose of this project. P5.js, for example, could do most of the things I wanted to do. Also, using P5.js could have gotten me pretty quickly to the stage where I could experiment with the app and test the overall experience, the design of the interaction, and so on, which to me are way more interesting.
  • I must find ideas that I’m more passionate about. The idea behind Loopster is nice, but not nice enough. A ‘nice enough’ idea would have pushed me to cut through all the complications and build it as fast as possible.
    In Loopster’s case, the only passion I found was in learning a new API and experimenting with new tools. But to me, the tools are not the story.

I hope I’ll do better next time :\

Device motion and orientation data

Continuing from my previous post, I decided to take the first step in building my next project (more about it soon).

I followed the Web API instructions on how to detect the device’s movement (using the accelerometer sensor) and orientation (using the orientation sensor), and experimented with getting and presenting this data.

You can see the end result here.

I wrote the following code for the task (this code gets the device movement and orientation data, and appends it to the DOM’s body) –
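The code block itself didn’t survive the move to WordPress, so here is a sketch of what it does — the event names (`devicemotion`, `deviceorientation`) are the standard Web API ones, while the formatting helper and the exact DOM structure are my own reconstruction:

```javascript
// formats one sensor reading into a printable line
function formatReading(label, a, b, c) {
  return label + ': ' + a.toFixed(2) + ', ' + b.toFixed(2) + ', ' + c.toFixed(2);
}

// appends a line of text to the page body
function appendLine(text) {
  var div = document.createElement('div');
  div.textContent = text;
  document.body.appendChild(div);
}

// guarded so the helpers can also be exercised outside a browser
if (typeof window !== 'undefined') {
  // accelerometer data, fired on every movement sample
  window.addEventListener('devicemotion', function (e) {
    var acc = e.accelerationIncludingGravity;
    appendLine(formatReading('motion', acc.x, acc.y, acc.z));
  });

  // orientation angles (alpha/beta/gamma), fired on rotation
  window.addEventListener('deviceorientation', function (e) {
    appendLine(formatReading('orientation', e.alpha, e.beta, e.gamma));
  });
}
```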

And this is the result (the changes in the blue numbers occur because I rotated my phone, while most of the changes in the red numbers are caused by the Web API library, which is still in beta and doesn’t deal well enough with the device accelerometer) –

Apparently, my Macbook Pro also has an accelerometer, to protect its hard drive while the laptop is moving –

Try it yourself.

Here’s the entire code on GitHub.

ICM (+pcomp) Synthesis Workshop

A few days ago we had an ICM synthesis workshop. I must say that I had very low expectations for the workshop, which ended up being one of the most inspiring and enjoyable activities I’ve been part of since I started ITP. I truly feel that this workshop was a milestone in my personal ITP growth.

First experience with changing digital objects using physical sensors

I was lucky enough to work with the talented Scott Reitherman. We picked one of Scott’s early P5.js drawings, and manipulated it using a potentiometer.

The following video shows the final outcome, but doesn’t show exactly what’s happening (lesson learned about documentation videos).

Scott’s P5 drawing is manipulated with a physical potentiometer: the jitter speed changes when I rotate the knob.

My personal conclusions from the workshop

  • Doing is more inspiring than reading – Sounds trivial, right? Not to everyone. I tend to read a lot and cover a vast range of topics using only my eyes. For some reason, this workshop finally made me realize (or maybe reminded me, after so many years of learning and having almost zero time for experiments) what the difference between reading and doing is, and how inspiring doing can be.
    Working on this tiny project made me think of so many applications that could be interesting and challenging to make.
    Maybe it is because I just read “Sketching User Experiences: The Workbook” by Saul Greenberg and Bill Buxton from the pcomp class reading list, which reminded me how useful sketching can be, but I decided that from now on I’ll try to focus on sketching and making more than on anything else. To me, this is a very motivating and uplifting conclusion.
  • Focus on projects – From now on, I’ll try to be more focused on projects that combine subjects covered in different classes, rather than on simply learning. Sounds vague, but maybe this is how it should be.
  • P5 is not ready – P5.js doesn’t seem too stable when dealing with serial inputs. It doesn’t handle buffers very well, and tends to stop being responsive after 5-10 seconds of interaction.
    This limitation didn’t take anything away from my first ‘sensor + digital visualization’ experience, but it led me to the conclusion that I will not be able to use P5 in any of my final projects if stability is something I’m looking for (it is).

Music Canvas

The music canvas project, developed together with Sharif Hadidi, was my first step in music analysis in JavaScript.

The goal was to create a paint tool whose brush changes according to the properties of a track playing in the background.
In order to ‘translate’ audio properties into visual properties, we set up the following guidelines:

  1. The size of the brush will be set according to the beat of the song.
  2. The color of the brush will be set by the tone, and the amplitude of the song.

While searching for a way to run beat detection on the played track, I stumbled upon this GitHub gist by b2renger. I ended up not using this solution, but it led me to believe that the P5.js sound library might do what I need.

music canvas: J. Pollock Mode

P5.js sound library

The P5.js sound library is relatively new and basic, but it covers more than I actually needed.

These are the main functions that I used to get the data I needed:

function setup(){
  // some unimportant stuff here
  amp = new p5.Amplitude(); // creating a new Amplitude object
  fft = new p5.FFT(); // creating a new FFT object
  // more unimportant stuff here
}

function draw(){
  // some unimportant stuff here
  fft.analyze(); // analyzing the current frame (required before getEnergy/getCentroid)
  amplitude = amp.getLevel(); // getting the amplitude level
  beat = fft.getEnergy('lowMid'); // getting the energy at the lowMid frequencies
  color = fft.getCentroid(); // getting the brightness/tone level
  // more unimportant stuff here
}


Data normalization

In order to use the audio analysis data in P5’s fill() or ellipse() calls, we had to normalize the numbers. To do that, we used a linear scale function from d3.js.

// setting up the scales
var colorScale = d3.scaleLinear()
  .domain([0, 8000]) // expected input
  .range([0, 255]);  // normalized output

var ampScale = d3.scaleLinear()
  .domain([0, 1])
  .range([0, 255]); // assumed output range

var beatScale = d3.scaleLinear()
  .domain([0, 300])
  .range([0, 255]); // assumed output range

// using the scales
normColor = colorScale(color);
normAmp = ampScale(amplitude);
normBeat = beatScale(beat);

I must say that these scale functions are among my all-time favorites. They are useful in many different scenarios. In this project, for example, I used them to generate a scale of colors.

Building debugging tools

Since I am relatively new to the audio subject, I decided to spend some time building a debug function that prints all the data I need to the console at runtime.

This was one of the best decisions I made while working on this project. It allowed me to learn the ranges of my data, and to understand what these numbers stand for.
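The actual function lives in the project’s repo; its shape was roughly this (the value names here are illustrative):

```javascript
// sketch of the runtime debug helper: printData('help') lists the
// available keys, any other key prints (and returns) its live value
var debugValues = { amplitude: 0, beat: 0, color: 0 }; // updated in draw()

function printData(what) {
  if (what === 'help') {
    var msg = 'available: ' + Object.keys(debugValues).join(', ');
    console.log(msg);
    return msg;
  }
  console.log(what + ' = ' + debugValues[what]);
  return debugValues[what];
}
```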

You can try the debug function yourself. Just open the console and type: printData('help');
You should see something like this –

open console > printData('help');



I quickly set up a node.js server, so currently the music canvas is live 🙂

Check it out:

Automatic drawing mode (aka ‘J. Pollock mode’) –

Manual drawing mode (aka ‘the paint’) –

music canvas: Paint Mode