NOMNOM 2: The Video Machine – The Physical Computing Aspects of the Project

 

NOMNOM: The Video Machine

Intent

The purpose of this project was to allow users to play music (or a DJ set) using videos from YouTube.

NOMNOM is an advanced version of The Video Machine presented for the mid-term. It controls the playback of videos presented on a web browser.
By pressing a button on the controller, the corresponding video plays on the screen and is heard through the speakers. The videos play in sync with one another, and only the videos that are playing are heard.

In the new version, The Video Machine controller offers four functions that change the way the videos are played:

  • Repetition – Affects the number of times a video is played during a single loop (1-4 times).
  • Volume – Affects the volume of the selected video.
  • Speed – Changes the speed of the selected video.
  • Trim – Trims the length of the selected video.
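
As a rough illustration of how these knob values might drive playback in the browser, here is a hypothetical sketch. The function and property names are mine, not the project's actual code; I only assume the knobs report 0-100, as in the Arduino code further down.

```javascript
// Hypothetical sketch: mapping controller knob values (0-100) to
// HTML5-video-style playback properties. Ranges are illustrative.
function knobsToPlayback(volumeKnob, speedKnob, trimKnob, videoDuration) {
  return {
    volume: volumeKnob / 100,                          // 0.0 - 1.0
    playbackRate: 0.5 + (speedKnob / 100) * 1.5,       // 0.5x - 2.0x
    trimmedDuration: videoDuration * (trimKnob / 100)  // play only a portion
  };
}
```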

The first prototype of the new version in action –

NomNom: The Video Machine

Main Objectives

The goal was to make a few critical improvements over the previous versions of the product. After brainstorming possible improvements, and reviewing the feedback we had received, the following objectives were chosen:

  • To keep it simple, while introducing more functionality – One of the major strengths of the original version was its simplicity. We were able to achieve a design that allowed a simple, self-explanatory interaction that was enjoyable both for experienced DJs and for users with zero experience.
    For the new version, new features were added, such as a consistent and predictable playback sequence, an automatic beat-sync between the played videos, the ability to change the number of times a video is played over a single loop, and the ability to change the playback properties of each video while it is being played.
    The new features allow the user to achieve great results more easily, using the same simple controls as the old version. A total of 6 new features and improvements were added to the product, while adding only a single rotary switch to the previous layout.

  • To make it feel solid – The first impression a user has of a product comes from looking at it. NOMNOM was built from solid materials in order to make the user feel free to physically interact with it. The solidity of the controls freed users from thinking about the physical interaction, letting them concentrate on the content (the video and the sound).

NOMNOM: The new version

  • To smoothen the controls – An enjoyable interaction cannot be achieved only by providing a fast and easy way to complete a task. The time the user spends using the product should be enjoyable as well.
    In order to build a smooth and fun tangible interaction, research was done on different potentiometers, buttons, and switches. Eventually, the controls that provided the best ‘feel’, and that were the most accurate, were chosen.

  • To take further development into consideration – In most cases, the ability to innovate comes from a deep understanding of the way a system works. To allow further development, the product had to be built in a way that makes it easy to learn and understand, both for us and for future contributors. Therefore, an effort was made to design and build the inner parts of the box in a way that would be very understandable for anyone who opens it.
    The design of the structure of the internal electronic parts not only allowed clarity during the debugging stages, but also fast analysis and understanding of the implications of any change or addition.
NOMNOM: Designing the inner structure

NOMNOM: In the making of

Decision-Making and Challenges

Design Overview

Leaning on the design of the previous version, we made a few improvements to our electronic circuits, and a few major improvements to our physical interface design.

NOMNOM: Schematic

Doing More With the Same Buttons

One of the major limitations of the first version was that in order to change the playback mode (properties / attributes) of a video, the user had to stop the playback, make the changes using the knobs, and start the playback again. Therefore, one of the most important features of the new version was the ability to change the playback mode of a single video while the video is being played.

To avoid adding a series of knobs for each one of the videos, the existing buttons are used for two functions:

NOMNOM: A single press to start / stop
NOMNOM: Press & hold to make changes to the video playback

The component that was used for the buttons is the Adafruit Trellis, a single PCB that connects 16 press buttons.

The Trellis PCB and its Arduino library support two modes:

MOMENTARY – A mode in which a button press event is detected only while a button is being held down.
LATCHING – A mode in which a button press event changes the state of the button (e.g. from ON to OFF).

NOMNOM: One of the challenges was to make the Trellis PCB support both of its different modes at the same time

One problem was that, by default, the Trellis can operate in only one of these modes at a time.
Another challenge was to find an efficient way (in terms of performance) to read the button states, so that the controller would be very responsive to user actions: the changes on the screen, and on the controller's LEDs, should be immediate.

After 3-4 weeks of researching the way the Trellis PCB works and coding different experiments, the following Arduino code allowed both modes to be supported simultaneously.


#include <Wire.h>
#include "Adafruit_Trellis.h"

Adafruit_Trellis matrix0 = Adafruit_Trellis();
Adafruit_TrellisSet trellis =  Adafruit_TrellisSet(&matrix0);

#define NUMTRELLIS 1
#define numKeys (NUMTRELLIS * 16)
#define INTPIN A2

int LEDstatus[16] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
int blinkStatus = 1;
int blinkTime = 0;
int buttonPress[16] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
int oldStatus[16] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};

void setup() {
  Serial.begin(9600);
  pinMode(INTPIN, INPUT);
  pinMode(5, INPUT);
  pinMode(6, INPUT);
  pinMode(7, INPUT);
  pinMode(8, INPUT);
  digitalWrite(INTPIN, HIGH);

  trellis.begin(0x70);  // only one trellis is connected

  // light up all the LEDs in order
   for (uint8_t i = 0; i < numKeys; i++) {
     trellis.setLED(i);
     trellis.writeDisplay();
     delay(50);
   }

  // then turn them off
  for (uint8_t i = 0; i < numKeys; i++) {
    trellis.clrLED(i);
    trellis.writeDisplay();
    delay(50);
  }
  while (Serial.available() <= 0) {
    Serial.println("hello"); // send a starting message
    delay(300);              // wait 1/3 second
  }
}

void loop() {
  delay(80); // a short delay is required, don't remove me!


  /*************************************
  // SENDING DATA TO P5.JS
  *************************************/
  if (Serial.available() > 0) {

      // reading serial from p5.js
      int incoming = Serial.read();

      // print current status
      for (int i = 0; i < 16; i++) {
        Serial.print(LEDstatus[i]);
        Serial.print(",");
      }

      // step knob
      int pot1Value = 0;
      if (digitalRead(5) == HIGH) {
        pot1Value = 4;
      } else if (digitalRead(6) == HIGH) {
        pot1Value = 3;
      } else if (digitalRead(7) == HIGH) {
        pot1Value = 2;
      } else if (digitalRead(8) == HIGH) {
        pot1Value = 1;
      }
      Serial.print(pot1Value);
      Serial.print(",");

      // volume knob
      int pot2Value = analogRead(A1);
      int pot2ValueMapped = map(pot2Value, 0, 1020, 0, 100);
      Serial.print(pot2ValueMapped);
      Serial.print(",");

      // speed knob
      int pot3Value = analogRead(A0);
      int pot3ValueMapped = map(pot3Value, 0, 1020, 0, 100);
      Serial.print(pot3ValueMapped);
      Serial.print(",");

      // cut knob
      int pot4Value = analogRead(A3);
      int pot4ValueMapped = map(pot4Value, 0, 1020, 0, 100);
      Serial.print(pot4ValueMapped);
      Serial.print(",");

      // blink data
      Serial.print(blinkTime);

      Serial.println("");
  }

  /*************************************************
  // CHANGING BUTTON STATES BASED ON BUTTON PRESSES
  **************************************************/
  blinkTime = blinkTime + 1;
  if (blinkTime == 5) {
    blinkTime = 0;
  }

  trellis.readSwitches();
  for (uint8_t n = 0; n < numKeys; n++) {
    if (trellis.justPressed(n)) {
      LEDstatus[n] = 3;

      continue;
    }

    if (LEDstatus[n] == 3) {
      buttonPress[n]++;
      if (blinkTime >= 4) {
        if (trellis.isLED(n)) {
          trellis.clrLED(n);
          trellis.writeDisplay();
        } else {
          trellis.setLED(n);
          trellis.writeDisplay();
        }
      }
    }

    if (trellis.justReleased(n)) {
      if (buttonPress[n] > 8) {
        LEDstatus[n] = 1;
        oldStatus[n] = 1;
        buttonPress[n] = 0;
        trellis.setLED(n);
        trellis.writeDisplay();
      } else {
        buttonPress[n] = 0;
        if (oldStatus[n] == 1) {
          LEDstatus[n] = 0;
          oldStatus[n] = 0;
          trellis.clrLED(n);
          trellis.writeDisplay();
        } else {
          LEDstatus[n] = 1;
          oldStatus[n] = 1;
          trellis.setLED(n);
          trellis.writeDisplay();
        }
      }
    }
  }
}

At first glance, this code looks simple, but it includes a fast and efficient protocol to read the different states (“ON”, “OFF”, and “being pressed”, a state that is used to make changes to the video playback) from the Trellis board using a single read command (trellis.readSwitches()), and to communicate them to the web browser using ‘handshaking’.
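
For illustration, the browser side could parse each incoming serial line like this. This is a hedged sketch: the function and field names are my own, and only the line format follows the Arduino code above (16 LED statuses, then the step, volume, speed, and cut knob values, then the blink counter, comma-separated).

```javascript
// Hypothetical sketch of parsing one serial line from the controller.
function parseControllerLine(line) {
  const values = line.trim().split(",").map(Number);
  return {
    ledStatus: values.slice(0, 16), // 0 = off, 1 = on, 3 = being pressed
    steps: values[16],              // 1-4 repetitions
    volume: values[17],             // 0-100
    speed: values[18],              // 0-100
    cut: values[19],                // 0-100
    blinkTime: values[20]
  };
}
```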

More about the programming behind NOMNOM can be found on this blog post, and on the project’s GitHub repository.

Finding the Right Potentiometers

As satisfying as the Trellis board was for our press buttons, the movement of the potentiometers needed an upgrade. Lengthy research and multiple experiments with different types of potentiometers and knobs (mostly from Adafruit and DigiKey) were made. It turned out that the knobs and potentiometers offered by Mammoth Electronics were the smoothest to turn, were built from the highest-quality materials, and fit best with our design vision.

Fabrication

One of the major objectives for the new version was to make the physical interface feel as stable as the software that supports it. The desire was to build the box from more solid materials, which do not feel breakable like wood or delicate like thin acrylic. Therefore, a solid metal enclosure was used to add a sense of strength and stability to the overall interaction.

To avoid any ‘shaky’ feeling when interacting with the product, the holes drilled in the enclosure had to be very accurate and tight around the electronic components.

NOMNOM: Design sketch before the drilling process

User Testing

After building the first fully functional prototype, a user testing phase shed some light on the strengths and weaknesses of the product.

Luckily, the physical interaction worked well and was largely understood by the users. A few changes were made to the terminology – the term “Steps”, which described the number of times a video is played within a single loop, was changed to “Repetitions”, and the term “Cut”, which described the ability to trim the video, was changed to “Trim”.

The rest of the changes, based on the users’ feedback, were made to the graphical user interface, which now includes much simpler and more straightforward indications of each video’s status.

Presenting the Project to a New Audience

As part of the process, I presented the product to a new audience, outside of the ITP community. This experience allowed us to get feedback from people closer to our target audience, and helped us be better prepared for the (intense) presentation at the ITP Winter Show.


The ITP Winter Show

NOMNOM: The Video Machine was presented at the ITP Winter Show 2016.

NOMNOM: The Video Machine @ ITP Winter Show 2016

Final Project Proposal – The SoundSystem

Overview

Ever since popular music was first broadcast by radio stations (somewhere between the 1920s and the 1930s), and consumed by listeners all over the world, artists have recorded most of their music as 3-5 minute songs.

This convention was born out of a technical limitation – the Phonograph, an early version of the record players we use today, could only play 12” vinyl records. Moreover, when an artist recorded a new album or a new single, the only way to ship it to a local or national radio station was through the US Post Office. The biggest box one could send at that time, for a reasonable price, could hold only a 12” record. As you can probably guess, a 12” vinyl record can hold a tune no longer than 5 minutes.

A century later, music production, consumption, and distribution have gone completely digital. Even though most of the music we listen to today is basically bits of data that can be manipulated, we still consume it in the 3-5 minute linear format. Unlike other mediums, such as text or video, which in many cases are consumed in a non-linear form, audio is still consumed in short linear sprints.

I believe that in the age of data, we can do more than that.

Inspirations

The inspiration for the problem, and for the first steps of the solution, came to me from watching and interacting with The Infinite Jukebox project, built by Paul Lamere. Lamere posted a blog post that tells about the process of making this project.

The Infinite Jukebox - user interface


 

Project proposal – The SoundSystem

I want to build a system that will liberate music creators from composing their musical ideas into 3-5 minute songs.
Instead, artists will be able to focus on and record their musical idea, and the system will generate an infinite, interactive, and dynamic piece of music, “conducted” by the artist.

Since I won’t be able to build the entire project for the ICM course final, I plan to build the first part of this project. The specifications of this part are highlighted in the text.

This is how I imagine the interaction (at least for the prototype):

Recording and analysing the recorded sound:

  • Artist will record a short snippet of audio.
  • The system will identify the tempo of the recorded snippet (beat detection).
  • The system will analyse the recorded snippet to get frequency data, timbre, etc. (and maybe in order to identify notes and / or chords?).
  • The system will suggest a rhythmic tempo to go along with the snippet.
  • The system will play the recorded snippet as an infinite loop, along with the rhythmic tempo.
  • The system will try to find new ‘loop opportunities’ within the snippet, in order to play the loop in a non-linear way.
  • The artist will be able to record more musical snippets.
  • The artist will be able to choose which parts will be played constantly (background sounds), and which parts will be played periodically.
  • The system will suggest new and interesting combinations of the recorded snippets, and play these combinations infinitely.
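
The ‘loop opportunities’ step above can be sketched as finding pairs of positions in the snippet whose audio features are nearly identical, so playback can jump between them seamlessly. This is only a rough illustration under my own assumptions (per-frame feature vectors and a plain Euclidean distance); real implementations, like The Infinite Jukebox, compare per-beat timbre and pitch data.

```javascript
// Rough sketch: find candidate loop points by comparing per-frame
// feature vectors. Two non-adjacent frames closer than `threshold`
// form a jump point. All names here are illustrative.
function findLoopOpportunities(frames, threshold) {
  const distance = (a, b) =>
    Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
  const opportunities = [];
  for (let i = 0; i < frames.length; i++) {
    for (let j = i + 2; j < frames.length; j++) { // skip adjacent frames
      if (distance(frames[i], frames[j]) < threshold) {
        opportunities.push([i, j]); // playback can jump from j back to i
      }
    }
  }
  return opportunities;
}
```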

The listener interacts with the played tune:

  • Since the tune can be played infinitely, some controls will be given to the listener. Each artist will be able to configure these controls differently. For example, one can decide that the controls will include 2 knobs, one of which changes the tune from ‘dark’ to ‘bright’, and the other from ‘calm’ to ‘noisy’. The artist will decide what happens when each of these knobs is turned.
  • For the ICM final, a generic user interface will be provided to the listener. The interface will include a visual representation of the played tune, and will allow the listener to change the rhythmic tempo.

Applying machine learning algorithms:

  • The system will try to generate new music, based on the recorded snippets, and earlier decisions by the same user. This new music will stretch the length of the recorded tune.

Modifying the system’s decisions:

  • The artist will be able to affect the system’s decisions about the looped tune, and about the new music it generates. For example, the user will be able to decide when a specific part enters, or which algorithmic rules won’t generate new music.

Applying sensors and automations:

  • The artist will be able to set rules based on 3rd party data or sensors. For example, the tune can be played differently if it is rainy on the first day of the month, if it is currently Christmas, if it is exactly 5:55am, or if the light in the room was dimmed to certain level. These rules will apply to each tune separately.

Formatting:

  • There should be a new music format that could hold the tune (or the snippets) and the data necessary for playing it correctly. In the same way, a new player should be introduced in order to read the data and to play the tune correctly.
  • This format should allow the artist to update the tune configuration or the musical snippets at any time, after the tune was distributed to the listeners.
  • For the ICM final (and probably for the end product as well), the tune will be played in the web browser.
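
To make the format idea concrete, such a ‘tune file’ could be a structured document bundling the snippets with their playback rules. This is purely speculative; every field name below is invented for illustration.

```javascript
// Speculative sketch of what a SoundSystem "tune file" could hold.
// All field names are invented for illustration.
const exampleTune = {
  title: "Untitled Idea #1",
  bpm: 92,
  snippets: [
    { id: "pad", role: "background", audioUrl: "pad.wav", loop: true },
    { id: "lead", role: "periodic", audioUrl: "lead.wav", loop: false }
  ],
  listenerControls: [
    { name: "dark-bright", affects: "filterCutoff" },
    { name: "calm-noisy", affects: "density" }
  ]
};

// A player reading this format would first pick the constantly
// playing (background) parts:
function backgroundSnippets(tune) {
  return tune.snippets.filter(s => s.role === "background").map(s => s.id);
}
```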

 

The Video Machine

Overview

The Video Machine is a video controller, powered by an Arduino, that controls the playback of videos presented in a web browser. By pressing a button on the controller, the corresponding video plays on the screen and is heard through the speakers.
Videos play in an infinite loop.
Only the videos that are playing are heard.

I was lucky enough to work on this project with the super talented Mint for our Physical Computing class mid-term.
Working with Mint not only was a great learning experience, but also a lot of fun! I hope I’ll be able to work with her again on our next project (more on that below).

The Video Machine from Dror Ayalon on Vimeo.

Many thanks to Joe Mango, our beloved resident, who assisted a lot with finding the right technologies for the project, and helped us at one critical moment, when suddenly nothing worked.

The Video Machine – Split Screen from Dror Ayalon on Vimeo.

The building process

The process of building The Video Machine went through the following stages:

  • Prototyping – Once we had a broad idea about what we wanted to make, we wanted to test how hard it would be to build such an interaction, and whether the interaction felt ‘right’ to us.
  • Understanding the complications – The prototyping stage helped us understand the possible complications of this product, and what its limitations might be. We analysed the limitations of the serial communication between the Arduino board and our laptop computer, and which types of video manipulation could be easily achieved using JavaScript.
    Understanding what’s possible helped us shape our final design and its different features.
  • Designing the architecture – Before we started to build the final product, we talked about the technical design of the product under the hood. These decisions basically defined the way the end product would operate, and the way users would interact with it.
  • Picking the technologies – To apply our technical design, we needed to find the right tools.
    For the video manipulations, we decided to use vanilla JavaScript, because of its easy-to-use video API. The biggest discussion was around the implementation of the buttons, which the user needs to press in order to play the videos. After some research, and brainstorming with Joe Mango, we decided to use the Adafruit Trellis. That was probably the most important decision we made, and one that made this project possible, given the short amount of time we had at that point (four days).
  • Building, and making changes – We started to assemble the project and write the needed code. While doing that, we changed our technical design a few times, in order to overcome some limitations we learned about along the way. And then came the moment when everything worked smoothly.
The Video Machine - Final product

Some code

The entire code can be viewed on our GitHub repository.

Reactions

The reactions to The Video Machine were amazing. The signals started to arrive at the prototyping stage, when people constantly wanted to check it out.

When we showed the final project to people on the ITP floor, it seemed that everyone wanted to put a hand on our box.

The Video Machine

People were experimenting, listening, looking, clicking, laughing, some of them even lined up to use our product.

The Video Machine

Further work

I hope that Mint and I will be able to continue to work on this project for our final term.
I cannot wait to see the second version of The Video Machine.
I believe that the goals for the next version would be:

  • To add more functionality, that will allow better and easier video/sound manipulation.
  • To make playing very easy for people with no knowledge of music or playing live music. Beat sync could be a good start. The product should allow anyone to create nice tunes.
  • To find a new way to interact with the content, using the controller. This new interaction needs to be something that allows some kind of manipulation of the video or the sound that is not possible (or less convenient) using the current, typical controller interface.
  • To improve the content so all videos will be very useful for as many types of music as possible.
  • To improve the web interface to make it adjustable for different screen sizes.
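
The beat-sync goal above can be reduced to quantizing each video's start to the next beat boundary. A minimal sketch, with names and units (milliseconds) of my own choosing:

```javascript
// Sketch: quantize a "play" press to the next beat boundary so that
// all videos start in sync. `now` and `beatLength` are in milliseconds,
// `loopStart` is the timestamp the master loop began.
function nextBeatTime(now, loopStart, beatLength) {
  const elapsed = now - loopStart;
  const beatsPassed = Math.ceil(elapsed / beatLength);
  return loopStart + beatsPassed * beatLength;
}
```

A press arriving mid-beat is deferred to the next boundary; a press landing exactly on a boundary starts immediately.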
The Video Machine - Controller

First step towards the mid-term project

For the mid-term project, Mint and I decided to work on a game.
Without exposing too much about the project – we are working on a game that will combine a physical cannon and digital targets.

The cannon is sort of a shaker. The user needs to shake the cannon to a certain point in order to shoot it. Needless to say, shaking and aiming at the same time will make the game challenging, and hopefully interesting and enjoyable.

We used the built-in accelerometer on the Arduino 101, and we wrote Arduino code that identifies when the user shakes the cannon, and when the cannon has been shaken long enough and is ready to fire.

It sounds a little abstract, but I’m sure it will be very clear when the game is ready.

This is what we have so far:

The making of ‘the pcomp shooter’: part 1 from Dror Ayalon on Vimeo.

 

This is the Arduino code we used (the examples on www.arduino.cc were very helpful):

#include "CurieIMU.h"
#include "pitches.h"

float oldx, oldy, oldz, newx, newy, newz = 0;   //scaled accelerometer values
float axp, ayp, azp;
float bullet = 0;   // float, so the 0.5 decrement in loop() isn't truncated

int bang[] = {
 NOTE_B3, NOTE_C4
};

int noteDurations[] = {
 2, 2
};

void setup() {
 Serial.begin(9600); // initialize Serial communication
 while (!Serial);    // wait for the serial port to open

 // initialize device
 Serial.println("Initializing IMU device...");
 CurieIMU.begin();

 // Set the accelerometer range to 2G    -2 to 2
 CurieIMU.setAccelerometerRange(2);
}

void loop() {

 // read accelerometer measurements from device, scaled to the configured range
 CurieIMU.readAccelerometerScaled(newx, newy, newz);

 // percentage
 axp = 100 - (oldx / newx) * 100;
 ayp = 100 - (oldy / newy) * 100;
 azp = 100 - (oldz / newz) * 100;

 // display tab-separated accelerometer x/y/z values
 Serial.print("axp: ");
 Serial.print(axp);
 Serial.print("%");
 Serial.print("\t");

 Serial.print("ayp: ");
 Serial.print(ayp);
 Serial.print("%");
 Serial.print("\t");

 Serial.print("azp: ");
 Serial.print(azp);
 Serial.print("%");
 Serial.print("\t");

 Serial.print("bullet: ");
 Serial.print(bullet);
 Serial.print("%");
 Serial.print("\t");

 if ((axp > -100) && (axp < 100)) {
   bullet = bullet - 0.5;
 } else {
   bullet = bullet + 4;
   Serial.print("SHAKING!!!");
   Serial.print("\t");
 }

 if (bullet >= 100) {
   Serial.print("BANG!!!!!");
   Serial.print("\t");

   int noteDuration = 1000 / noteDurations[0];
   tone(8, bang[0], noteDuration);
   int pauseBetweenNotes = noteDuration * 0.5;
   Serial.print(bang[0]);
   delay(pauseBetweenNotes);
   noTone(8);

   noteDuration = 1000 / noteDurations[1];
   tone(8, bang[1], noteDuration);
   pauseBetweenNotes = noteDuration * 0.5;
   Serial.print(bang[1]);
   delay(pauseBetweenNotes);
   noTone(8);

   bullet = 0;
 }

 Serial.println();

 oldx = newx;
 oldy = newy;
 oldz = newz;

 delay(5);
}

What’s next?

  • To build the graphical part of the game in p5.js.
  • To solve the sound problem: We couldn’t find a way to make the sound louder from the speaker connected to the Arduino.
  • To do some fabrication work and to make sure that the cannon has the ‘right’ feeling in the users’ hands.
  • To play the game over and over, make some changes, and make sure that it’s fun.

ICM (+pcomp) Synthesis Workshop

A few days ago we had an ICM synthesis workshop. I must say that I had very low expectations of the workshop, which ended up being one of the most inspiring and enjoyable activities I’ve been part of since I started ITP. I truly feel that this workshop was a milestone in my personal ITP growth.

First experience with changing digital objects using physical sensors

I was lucky enough to work with the talented Scott Reitherman. We picked up one of Scott’s early P5.js drawings, and manipulated it using a potentiometer.

The following video shows the final outcome, but doesn’t show exactly what’s happening (lesson learned about documentation videos).

Scott’s P5 drawing is being manipulated with a physical potentiometer: The jitters’ speed is changing when I rotate the knob.

My personal conclusion from the workshop

  • Doing is more inspiring than reading – Sounds trivial, right? Not to everyone. I tend to read a lot and cover a vast range of topics using only my eyes. For some reason, this workshop finally made me realize (or maybe reminded me, after so many years of learning and having almost zero time for experiments) what the difference between reading and doing is, and how inspiring doing can be.
    Working on this tiny project made me think of so many applications that could be interesting and challenging to make.
    Maybe it is because I just read “Sketching User Experiences: The Workbook” by Saul Greenberg and Bill Buxton from the pcomp class reading list, which reminded me how useful sketching can be, but I decided that from now on I’ll try to focus on sketching and making more than on anything else. To me, this is a very motivating and uplifting conclusion.
  • Focus on projects – From now on, I’ll try to be more focused on projects that combine subjects covered in different classes, more than on simply learning. Sounds vague, but maybe this is how it should be.
  • P5 is not ready – P5.js doesn’t seem too stable when dealing with serial inputs. It doesn’t handle buffers very well, and tends to stop being responsive after 5-10 seconds of interaction.
    This limitation didn’t take anything away from my first ‘sensor + digital visualization’ experience, but it led me to the conclusion that I won’t be able to use P5 in my final projects if stability is something I’m looking for (it is).
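
A common way around flaky serial reads is to buffer incoming bytes yourself and only act on complete lines. A minimal sketch, independent of any particular serial library:

```javascript
// Minimal line buffer for serial data: accumulate chunks and emit only
// complete, newline-terminated lines. Useful when a serial library
// delivers data in arbitrary-sized pieces.
function makeLineBuffer() {
  let pending = "";
  return function feed(chunk) {
    pending += chunk;
    const parts = pending.split("\n");
    pending = parts.pop(); // keep the trailing partial line
    return parts;          // complete lines, ready to parse
  };
}
```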

The Arduino drinking game

Today, I used the analog input lab to build an Arduino drinking game 🙂 .
Players need to press the pushbutton. If the serial input value is a multiple of 7, the player who pressed the pushbutton last must drink🍹.
Geeky, but I love it.

The Arduino drinking game
If the serial input value is a multiple of 7, the player who pressed the pushbutton last must drink.

This is how it looks (sorry about the weird ratio) –

And this is the code I used –

void setup() {
  pinMode(2, INPUT);
  pinMode(3, OUTPUT);
  pinMode(4, OUTPUT);
  Serial.begin(9600);
}

int play = 1;
boolean drink = false;

void loop() {
  if (digitalRead(2) == HIGH) {
    play = play + 1;
  }

  if ((play % 7) == 0) {        // detect if the number is a multiple of 7
    digitalWrite(3, HIGH);      // turn LEDs ON
    digitalWrite(4, HIGH);
    drink = true;               // change 'drink' state
  }
  else {
    digitalWrite(3, LOW);
    digitalWrite(4, LOW);
    drink = false;
  }

  if (drink == true) {
    Serial.print("play: ");
    Serial.print(play);
    Serial.print("\t");
    Serial.println("DRINK!!! :P ");
  } else {
    Serial.print("play: ");
    Serial.println(play);
  }
}

Improvement

I modified the code a little in order to avoid the blinking-LEDs phenomenon. Now, I check whether the value is a multiple of 7 only after a digitalRead(2) == LOW event –

void setup() {
  pinMode(2, INPUT);
  pinMode(3, OUTPUT);
  pinMode(4, OUTPUT);
  Serial.begin(9600);
}

int play = 1;
boolean drink = false;

void loop() {
  if (digitalRead(2) == HIGH) {
    play = play + 1;
  }

  // run number check-up on button release 
  if (digitalRead(2) == LOW) {
    // detect if the number is a multiple of 7
    if ((play % 7) == 0) {
      digitalWrite(3, HIGH);      // turn LEDs ON
      digitalWrite(4, HIGH);
      drink = true;               // change 'drink' state
    }
    else {
      digitalWrite(3, LOW);
      digitalWrite(4, LOW);
      drink = false;
    }
  }

  if (drink == true) {
    Serial.print("play: ");
    Serial.print(play);
    Serial.print("\t");
    Serial.println("DRINK!!! :P ");
  } else {
    Serial.print("play: ");
    Serial.println(play);
  }
}

 

The result is that the LEDs are turned on only in cases where the value is a multiple of 7 and the pushbutton is not pressed –

Usability case study: The NYC subway turnstiles

This week I went out for an observational study about the interaction between people and the NYC subway turnstiles.

NYC subway turnstile

The study was made at two busy subway stations: the Bedford Avenue station in Brooklyn, and the Union Square station in Manhattan. The observations, and the conclusions, were similar at both stations. Therefore, I will present my findings in a unified form.

Findings

  1. A few meters before they get to the turnstiles, people reach into their pockets, wallets, or bags to get a paper card, shaped like a credit card. One face of the card is painted yellow, and the other is painted white.
    It is very unusual to see people get to the machine empty-handed.

    Getting the cards in advance
  2. People swipe the card on the right side of the machine.

    Swiping the metro cards
  3. Usually, a ‘click’ sound is heard after the swipe, and the ‘swiper’ moves through the gate to enter the subway platform.
  4. Occasionally, a ‘beep’ sound was heard after people swiped their cards. After the ‘beep’, people were seen swiping their cards again. This scenario repeated itself until the ‘click’ sound was heard, after which the swiper entered the platform.
    A closer look shows an ‘error’ message on the small display, placed just above the card-swiping deck.

    Errors
  5. On rare occasions, the swiper swiped the card, a ‘beep’ sound was heard, and the swiper seemed angry.
    This situation made other people, lined up behind the swiper, angry as well. A few of the waiting people tried to move to another turnstile line.
    The swiper then turned back and went to the ‘refill machine’, after which the swiper repeated the process from step 2.
  6. The turnstile was used, by different people, both for entering the platform and for exiting it.
    On many occasions, people tried to exit the platform through a turnstile that was occupied by other people who were trying to enter it.
    After 1–2 seconds of confusion, the ‘exiter’ (it was never the one entering) turned to another turnstile to exit the platform. Some of the exiters took the liberty of exiting the platform through the emergency exit, which was open at all times.

    Commuters go in and out the platform through the turnstile, or through the emergency exit door

Major insights

  1. The NYC subway turnstile is a very understandable device. Whether it is because people use it often, or because its design is very simple, people rarely get confused while using it.
  2. At first sight, the fact that people use the same turnstile both to enter and to exit the platform looked like a design flaw, one that might become a major problem during rush hours. A longer observation of the turnstiles reveals that people tend to work around this limitation pretty easily, spending only a second or two doing so.
  3. Since there is no indication on the card of the amount of funds attached to it, people tend to block the turnstile in cases of ‘insufficient funds’ while repeating previous steps of the interaction.

Conclusions

Be careful of over-engineering – Intuitively, I was surprised that the engineers who designed the turnstile were not concerned by the fact that people would have to cross the turnstile from both of its sides at the same time (to enter and to exit the platform). I assumed that a better design would allow entering and exiting the platform from different locations.
But engineering and design are all about decision making, and compromises. The design I had in mind requires more space, and is ultimately less effective. Apparently, people do not care much about this ‘limitation’, and address it very easily.
There might be an optimal design that could solve this situation, but sometimes, looking for complex solutions to simple situations is just over-engineering.

An indication of the amount of funds on the card could be helpful – Since people tend to get their subway (metro) cards ready a few steps before they reach the turnstile, an indication of the amount of funds on the card could save time and frustration in a situation of ‘insufficient funds’.
Also, it could be useful if people were able to fill their cards with ‘rides’ instead of plain money. For example, one would be able to fill a card with 1–10 rides instead of $20–$60. This could make it easier for people to track and remember the number of rides they still have in their credit (currently, the cost per ride is $2.75, which makes it relatively hard to calculate the number of rides in $20, and presumably leads to people losing track).
Here are a few suggestions for such indications on the NYC metro card:
Suggestion for NYC metro card design

Building with analog and digital I/O

This week I focused on two (very related) objectives:

  1. To make sure I understand the concept of digital input and output, and to write some Arduino code to play with it. — that went pretty well.
  2. To make sure I understand how voltage is affected by different components of an analog circuit, such as resistors and transistors. — that didn’t go well…

Playing with a sensor and digital input

Following the instructive videos and articles on the subject, I decided to take on a simple task to test my understanding of digital input.

I connected my Arduino board to a circuit that included 2 LEDs and a photocell sensor. Using the following code, I was able to control the power output of the board:

void setup() {
  pinMode(8, OUTPUT); // power to LEDs
  Serial.begin(9600);
}

void loop() {
  int photoCellState = analogRead(A2);
  Serial.print("photocell: ");
  Serial.print(photoCellState);
  Serial.print("\t");
  Serial.print("LEDs: ");

  if (photoCellState < 380) { // light the LEDs if the photocell senses low light
    digitalWrite(8, HIGH);
    Serial.println("ON");
  } else {
    digitalWrite(8, LOW);
    Serial.println("OFF");
  }
}

Questions about voltage changes within an analog circuit

I wanted to understand what is really happening at each point of my analog circuit. In other words, I wanted to know the voltage at each point of the circuit, and how it changes when I change the components on the circuit.

In order to have a constant voltage measurement, I followed the instructions in Tom Igoe’s instructional video, connected my Arduino to a certain point on the circuit, and printed the voltage the Arduino received to the console. The following code was used to monitor and print the voltage on my A0 Arduino input:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(A0);
  float voltage = reading * (5.0 / 1024.0);
  Serial.print("voltage: ");
  Serial.print(voltage);
  Serial.print("\t");
  Serial.print("reading: ");
  Serial.println(reading);
}

Scenario 01: 100 ohm resistor

I set up the following circuit, using this useful tool to check the resistance of each of my resistors –

Analog circuit with 100 ohm resistor: 0.06–0.46V was detected on A0.

Once the pushbutton was pressed, the voltage printed to the console was between 0.06V and 0.46V (depending on the position of the wire on the breadboard).

I was surprised by the results. I suspected that the 100 ohm resistor would make the voltage drop at the point where my wire was connected to A0 on my Arduino, but I didn’t expect it to drop that much…

The circuit I used for Scenario 01: 100 ohm resistor.

I tried to run some calculations using the V = I * R formula, but that was not a big success 🙁

I kept on measuring the circuit at different points: on one side of the resistor (+), the voltage was near 5V, and on the other side of the resistor (-), the voltage was low, as described above. That made sense to me: the resistor drops a large share of the voltage. Also, given the numbers I saw on the console, I assumed that a 100 ohm resistor drops about 4V in a circuit (although this assumption seems broad, generic, and pretty wrong. The amount of voltage that a resistor drops might depend on the structure of the circuit and its components. But how can I calculate that in advance??).

Scenario 02: 100 ohm resistor + 1k ohm resistor

Things got really confusing when I replaced the wire that connected my circuit to ground with a 1k resistor.

Analog circuit with 100ohm resistor & 1k ohm resistor: ±4.20 voltage was detected on A0.

I expected that, since there were more resistors in the circuit, the voltage would be reduced. But instead, the console started to show ±4.20V.

When I moved the A0 wire to different points on the circuit, things started to make sense: after the second resistor (the 1k one), the voltage was 0. The resistance was too much for the 5V power source.
Somehow, the bigger 1k resistor draws more ‘electric pressure’ (I’m sure there is a better term for that), which actually raises the voltage at every part of the circuit before it. But again, the big question that remained open to me is –

The circuit I used for Scenario 02: 100 ohm resistor + 1K ohm resistor.

HOW CAN I CALCULATE AND / OR PREDICT THE AMOUNT OF VOLTS, AT DIFFERENT POINTS OF THE CIRCUIT, BASED ON THE ORDER OF THE DIFFERENT COMPONENTS?

Simple switch circuit

To gain better understanding of electronic circuits, I built a simple LED circuit.

LED circuit

Later, I added a force-sensing resistor, and used it manually as a switch to close the circuit.


Closing a LED circuit with a force-sensing resistor

One thing is very clear to me now: whether the road goes through some dead LEDs or a few burned-out DC motors, I learn better when I try things hands-on. Since I started using the digital multimeter, I have been able to test my understanding of circuits on live, working components, and more importantly, I have started to feel more confident about my knowledge. Can’t wait to build a more challenging piece.

Physical Interaction: Interpretation and Thoughts

What is physical interaction?

Interaction: a cyclic process

In order to answer this question, I believe that I should first describe what an interaction is. According to Chris Crawford (“The Art of Interactive Design: A Euphonious and Illuminating Guide to Building Successful Software”, 2003), an interaction is:

“a cyclic process in which two actors alternately listen, think, and speak.”

Crawford claims that interactivity must be cyclic, and that it must include a process of listening, thinking, and speaking. But what if an interaction includes only a single cycle of a question and an answer? What if one actor doesn’t listen, and instead says something unrelated? What if the unrelated reply makes the first actor go for another cycle (to process what he/she heard, and to say another thing in reply)? Would that be considered an interaction?

Number of cycles: an interactivity strength scale

Crawford mentions that, from his perspective, there are degrees of interactivity. Therefore, interactions can be strong or weak, based on the level of listening, thinking, and speaking by those who participate in the interaction.

To add to Crawford’s degrees-of-interactivity model, I would say that since an interaction is a cyclic process, the number of cycles could be used as a measure of the strength of an interaction, and that the minimum strength of an interaction is a single cycle.

I agree with Crawford’s determination that when an occurrence does not lead to (at least) a single full cycle of interactivity, the occurrence should be considered a case of cause and reaction.

For example, when two people engage in a long conversation about the theory of Repeated Games With Incomplete Information, they experience an interaction that consists of many cycles of questions and answers, each leading to many other cycles. This conversation would be considered a very strong interaction.
In contrast, when two people stand in an elevator and greet each other with ‘good morning’, I would say that, even though their interaction went through a single cycle (a single exchange), this single cycle satisfies the minimum requirement for an interaction. Obviously, the strength of this interaction is very weak (some would say that this is the weakest interaction possible, but I always consider an interaction between people as one that includes an exchange of information that surpasses the words being said vocally).

In a case where one actor greets the second actor with ‘good morning’, and the second actor embraces the greeting but does not reply, I would consider that a case of cause and reaction.

Interaction: an exchange of information

I would also add that an interaction must be a cyclic process in which actors listen, think, speak, and exchange valuable information. Information is valuable to an interaction when it is related to the previous cycle and has an effect on the next one.

For example, let’s have a look at the following conversation:

John: “What is the time now?”
Dana: “The leaves are falling because winter is coming.”

If John considers Dana’s answer unrelated to his question, he might stop the conversation and not say anything else. John might as well assume that Dana did not reply to his question, and did not even talk directly to him. In this case, I would argue that Dana did not interact with John, and therefore there was not even a single cycle of interaction, meaning that there was no interactivity at all.

This situation is very similar to the highly frustrating experience of operating a computer and getting a repeated error message that doesn’t seem to have any connection to the operation, and that doesn’t lead the user toward the next step.

Error code. [taken from https://xkcd.com/1024/]

What if John asks his question again, assuming that Dana misheard him? What if, after Dana’s reply, John forgets about the time and starts to gaze at the falling leaves? Similarly to the previous case, here I would say that both actors are reacting to one another, but do not perform any real interaction.

But what if, somehow, John understands what time it is (or at least thinks he understands) from Dana’s answer? In this case, even though the content of the conversation doesn’t seem to be of any value, I would consider it an interaction, since Dana’s reply was valuable in completing a single cycle. In other words, regardless of the content of Dana’s reply, if the reply pushed John to try to re-interact with Dana, it affected the next step in the interaction, and therefore it was valuable.

To summarize it all, I would argue that an interaction is:

at least a single exchange of valuable information that leads to a cyclic process in which two actors alternately listen, think, and speak.

Physical interaction

In my opinion, a physical interaction is one that includes a physical object, and in which the physical object plays a major role. I would consider a physical object to be anything that has a tangible attribute. Therefore, humans, for example, are physical objects. This conclusion led me to a disagreement with Bret Victor’s article about the future of interaction design. Even though the future, as presented in Microsoft’s concept video, does not fulfil the potential of physical interaction, humans play a major role in it.

Physical interaction with digital interfaces. [taken from http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign]

For example, an exchange of information between two servers would not be considered a physical interaction. But in my opinion, any interaction between a human and a machine is a physical interaction.

We can say that a physical interaction is

a cyclic process in which a physical object plays a major role, and which includes at least a single exchange of valuable information.

What makes for good physical interaction?

By now, it is clear to me that a good physical interaction must include a good use of a physical object. By ‘good use’ I mean that it should be clear that the interaction could not be as strong, or could not occur at all, without the physical object.

Having said that, and without quoting Don Norman’s book The Design of Everyday Things, I would say that a good physical interaction would also be one that fulfils its potential in the following ways:

It is sensitive

Remember the times when you got a new shirt, or a new haircut, and no one said anything about it, as if it wasn’t happening? Have you ever experienced a situation where someone important to you forgot about a day that was important to you?

Computers have the potential to impact our lives with minimal effort. Think about a scenario where your desktop lamp changes its tone of light according to your mood, as if to say that it feels you. Wouldn’t that be nice? What if your home stereo lowered its volume, or just its treble, when you have a headache? What if your refrigerator poured a glass of water when its sensors heard you coughing?

It is surprisingly smart, but not arrogant

Continuing from the previous topic, a good (physical) interaction is one that has a sense of surprise in it. Repeating the same interaction over and over would probably kill this effect; therefore, a good interaction should be one that evolves, and gets smarter over time.

The best interactions are those that do not leave the user with a feeling of “that was great, but I have no idea what happened.” I cannot stand watching my father admire new features on his phone while secretly admitting that he is nowhere near understanding these features and how they actually work. Such an interaction leaves my father (and many others) with the feeling that he cannot play a major part in the new world of computer-based interactions. In other words, although my father appreciates these interactions, they tend to leave him with some frustration, which leads me to the next topic –

It is satisfying

Rolling back to the earlier parts of this post, I can say that a strong interaction, one that includes many cycles, is surely satisfying. But the feeling of satisfaction can be achieved even in weak interactions, and should not be overlooked. A satisfying interaction leads users to be more engaged in it, and pushes them to explore new ones.