Final Project Proposal – The SoundSystem

Overview

Ever since popular music began to be broadcast by radio stations (sometime between the 1920s and the 1930s) and consumed by listeners all over the world, artists have recorded most of their music as 3-5 minute songs.

This convention was born out of a technical limitation: the phonograph, an early version of the record players we use today, could only play 12” records. Moreover, when an artist recorded a new album or a new single, the only way to ship it to the local or national radio station was through the US Post Office. The biggest box one could send at that time, for a reasonable price, could hold only a 12” record. As you can probably guess, a 12” record spinning at 78 rpm could hold a tune no longer than about five minutes.

A century later, music production, consumption, and distribution have gone completely digital. Even though most of the music we listen to today is basically bits of data that can be manipulated, we still consume it in the 3-5 minute linear format. Unlike other mediums, such as text and video, which are often consumed in a non-linear form, audio is still consumed in short linear sprints.

I believe that in the age of data, we can do more than that.

Inspirations

The inspiration for the problem, and for the first steps of the solution, came to me from watching and interacting with The Infinite Jukebox project, built by Paul Lamere. Lamere wrote a blog post that tells the story of how the project was made.

The Infinite Jukebox – user interface

Project proposal – The SoundSystem

I want to build a system that will liberate music creators from squeezing their musical ideas into 3-5 minute songs. Instead, artists will be able to focus on recording their musical ideas, and the system will generate an infinite, interactive, and dynamic piece of music, “conducted” by the artist.

Since I won’t be able to build the entire project for the ICM course final, I plan to build the first part of this project. The specifications of this part are described below.

This is how I imagine the interaction (at least for the prototype):

Recording and analysing the recorded sound:

  • The artist will record a short snippet of audio.
  • The system will identify the tempo of the recorded snippet (beat detection; a minimal sketch follows this list).
  • The system will analyse the recorded snippet to get frequency data, timbre, etc. (and maybe to identify notes and/or chords?).
  • The system will suggest a rhythmic pattern to go along with the snippet.
  • The system will play the recorded snippet as an infinite loop, along with that rhythmic pattern.
  • The system will try to find new ‘loop opportunities’ within the snippet, in order to play the loop in a non-linear way.
  • The artist will be able to record more musical snippets.
  • The artist will be able to choose which parts will be played constantly (background sounds), and which parts will be played periodically.
  • The system will suggest new and interesting combinations of the recorded snippets, and play these combinations infinitely.
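
To make the beat-detection step concrete, here is a minimal tempo-estimation sketch in TypeScript. It assumes the snippet has already been decoded into raw samples (for example with the Web Audio API’s decodeAudioData); the energy-envelope-plus-autocorrelation approach, and every name in it, are my own illustration rather than a committed design.

    // Estimate the tempo (in BPM) of a decoded snippet.
    // samples: mono PCM data, e.g. audioBuffer.getChannelData(0).
    function estimateTempo(samples: Float32Array, sampleRate: number): number {
      // 1. Build a coarse loudness envelope: RMS energy per ~23 ms frame.
      const frameSize = 1024;
      const envelope: number[] = [];
      for (let i = 0; i + frameSize <= samples.length; i += frameSize) {
        let sum = 0;
        for (let j = 0; j < frameSize; j++) sum += samples[i + j] * samples[i + j];
        envelope.push(Math.sqrt(sum / frameSize));
      }

      // 2. Autocorrelate the envelope over lags corresponding to 60-180 BPM:
      //    a strong correlation at some lag means the loudness pattern
      //    repeats with that period.
      const framesPerSecond = sampleRate / frameSize;
      const minLag = Math.max(1, Math.round((60 / 180) * framesPerSecond));
      const maxLag = Math.round((60 / 60) * framesPerSecond);
      let bestLag = minLag;
      let bestScore = -Infinity;
      for (let lag = minLag; lag <= maxLag && lag < envelope.length; lag++) {
        let score = 0;
        let count = 0;
        for (let i = 0; i + lag < envelope.length; i++, count++) {
          score += envelope[i] * envelope[i + lag];
        }
        score /= count; // normalise, so longer lags are not penalised
        if (score > bestScore) {
          bestScore = score;
          bestLag = lag;
        }
      }

      // 3. Convert the winning lag back to beats per minute.
      return (60 * framesPerSecond) / bestLag;
    }

A real implementation would also need onset detection and protection against octave errors (picking half or double the true tempo), but the idea is the same.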

The listener interacts with the played tune:

  • Since the tune can be played infinitely, some controls will be given to the listener. Each artist will be able to configure these controls differently. For example, one could decide that the controls include two knobs: one changes the tune from ‘dark’ to ‘bright’, and the other changes the tune from ‘calm’ to ‘noisy’. The artist will decide what happens when each of these knobs is turned.
  • For the ICM final, a generic user interface will be provided to the listener. The interface will include a visual representation of the played tune, and will allow the listener to change the tempo. A sketch of one possible knob follows this list.
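
As an illustration of how one of these knobs might work, here is a sketch of a ‘dark’-to-‘bright’ control built on the Web Audio API’s BiquadFilterNode. Routing the tune through a low-pass filter, and the exact cutoff mapping, are my assumptions for illustration, not a defined part of the proposal.

    // The listener's knob position (0..1) drives a low-pass filter that the
    // tune is routed through: fully 'dark' sounds muffled, fully 'bright'
    // lets the whole spectrum pass.
    const ctx = new AudioContext();
    const brightness = ctx.createBiquadFilter();
    brightness.type = "lowpass";
    brightness.connect(ctx.destination);
    // source.connect(brightness); // the looped tune would feed the filter

    function setBrightness(knob: number): void {
      // Map 0..1 exponentially from a muffled 200 Hz up to a bright 12 kHz.
      const cutoff = 200 * Math.pow(12000 / 200, knob);
      brightness.frequency.setValueAtTime(cutoff, ctx.currentTime);
    }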

Applying machine learning algorithms:

  • The system will try to generate new music, based on the recorded snippets and earlier decisions by the same user. This new music will stretch the length of the recorded tune (one naive approach is sketched below).
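
The proposal does not commit to a specific algorithm, but as a sketch of the simplest, most naive approach, a first-order Markov chain over analysed segment labels can already stretch a short idea into an endless stream. Every name here is my own illustration.

    // Learn which segment tends to follow which, from the sequences the
    // artist has already recorded or approved.
    function trainTransitions(sequence: string[]): Map<string, string[]> {
      const transitions = new Map<string, string[]>();
      for (let i = 0; i + 1 < sequence.length; i++) {
        const followers = transitions.get(sequence[i]) ?? [];
        followers.push(sequence[i + 1]);
        transitions.set(sequence[i], followers);
      }
      return transitions;
    }

    // Walk the chain indefinitely, yielding one segment label at a time.
    function* generate(transitions: Map<string, string[]>, start: string): Generator<string> {
      let current = start;
      while (true) {
        yield current;
        const options = transitions.get(current);
        if (!options || options.length === 0) return; // dead end: stop
        current = options[Math.floor(Math.random() * options.length)];
      }
    }

    // Example: a short recorded idea becomes an endless stream of segments.
    const chain = trainTransitions(["intro", "riff", "riff", "bridge", "riff"]);
    const stream = generate(chain, "intro");
    console.log(stream.next().value, stream.next().value, stream.next().value);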

Modifying the system’s decisions:

  • The artist will be able to affect the system’s decisions about the looped tune, and about the new music it generates. For example, the artist will be able to decide when a specific part enters, or which algorithmic rules may not be used to generate new music (a sketch of such overrides follows).
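
One way such overrides might be represented (every name below is hypothetical) is as declarative rules that the playback engine consults before acting on its own plan:

    interface ArtistOverride {
      snippetId: string;
      enterAfterSeconds?: number; // force when a part enters the mix
      allowGeneration?: boolean;  // opt a snippet out of generated material
    }

    const overrides: ArtistOverride[] = [
      { snippetId: "bass-line", enterAfterSeconds: 30 },
      { snippetId: "vocal-take-2", allowGeneration: false },
    ];

    // The playback engine checks the list before generating from a snippet.
    function mayGenerateFrom(snippetId: string): boolean {
      const rule = overrides.find(o => o.snippetId === snippetId);
      return rule?.allowGeneration ?? true;
    }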

Applying sensors and automation:

  • The artist will be able to set rules based on third-party data or sensors. For example, the tune could be played differently if it is rainy on the first day of the month, if it is currently Christmas, if it is exactly 5:55 am, or if the light in the room was dimmed to a certain level. These rules will apply to each tune separately (a sketch follows).
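
These rules might be represented as condition/action pairs that the player evaluates while the tune runs. The sketch below reuses the 5:55 am example from above; all names are hypothetical.

    interface AutomationRule {
      when: () => boolean; // condition on sensors or third-party data
      apply: () => void;   // adjustment to the playing tune
    }

    function playCalmVariation(): void {
      // Placeholder: the real system would re-route or re-mix the tune here.
      console.log("switching to the calm variation");
    }

    const rules: AutomationRule[] = [
      {
        // Play a calmer variation at exactly 5:55 am.
        when: () => {
          const now = new Date();
          return now.getHours() === 5 && now.getMinutes() === 55;
        },
        apply: playCalmVariation,
      },
    ];

    // The player re-evaluates the rules once a second while the tune runs.
    setInterval(() => {
      for (const rule of rules) if (rule.when()) rule.apply();
    }, 1000);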

Formatting

  • There should be a new music format that can hold the tune (or the snippets) together with the data necessary for playing it correctly. Likewise, a new player should be introduced to read that data and play the tune as intended (a sketch of such a format follows this list).
  • This format should allow the artist to update the tune’s configuration or the musical snippets at any time, even after the tune has been distributed to listeners.
  • For the ICM final (and probably for the end product as well), the tune will be played in the web browser.
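
As a sketch only, such a format might bundle the raw audio with its analysis data and the artist’s configuration. Every field below is a hypothetical illustration, not a spec.

    // One possible shape for the proposed format (all fields hypothetical).
    interface SoundSystemTune {
      version: number;
      tempoBpm: number;
      snippets: {
        id: string;
        audioUrl: string;                // the recorded audio itself
        role: "background" | "periodic"; // the artist's arrangement choice
        beats: { start: number; duration: number }[]; // analysis results
      }[];
      listenerControls: { name: string; min: number; max: number }[];
      updatedAt: string; // tunes stay editable after they are distributed
    }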


Published by

Dror Ayalon

@drorayalon
