As part of the final year of my Digital Arts degree, I created a Digital Interactive Sound Installation, which generates an array of visualised LED sequences from ambient and intentionally directed sound.
As a digital artist I am captivated by sound, and I strive to find ways of using its huge interactive potential to engage users.
The starting point for this project was to experiment with how people use and interact with a space in relation to sound. The project was an opportunity to create a piece that could unimposingly affect people within its range: analysing the sound being created around it, and using that as a basis for exploring how the piece in turn affects the space.
The Production Process Documentation.
Here I have documented the process of developing and building an interactive LED sound piece. I hope you find it interesting. If you have any questions, tips, or praise, let me know via my contact page.
These are the tools I used to make the project. Out of the picture are the iCube and the MIDI converter.
iCube Interface.
I had to make this in order to trigger the LED sequences from software. The simplest approach would have been a relay to throw the switch on each sequence, but the relays I had access to would have blown the iCube. So instead I used LEDs to trigger a light dependent resistor, hooked up to a transistor, to throw the switch.
1 of 8 LED Sequence boards.
I managed to salvage 8 LED sequence circuits from fancy birthday cards; each card was butchered to create one LED sequence.
Making it work.
As I mentioned before, I couldn’t use a relay to trigger the circuit, so I had to use the LEDs coming from the iCube, paired with a light dependent resistor and a transistor, instead.
I then placed the light dependent resistors next to the LEDs on the iCube interface, and the light from those LEDs triggered the circuit.
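The idea behind the LDR trigger can be sketched with some rough numbers. The supply voltage, resistor values and LDR resistances below are assumptions for illustration, not measurements from my actual circuit: the point is that light from the iCube’s LED drops the LDR’s resistance, pushing the divider output past the transistor’s base turn-on voltage.

```python
# Rough sketch of the LDR trigger, with assumed (not measured) values:
# the LDR sits in the top leg of a voltage divider; light lowers its
# resistance, raising the divider output past the transistor's ~0.6 V
# base-emitter threshold, which switches the LED sequence on.

VCC = 5.0            # supply voltage (assumed)
R_FIXED = 10_000.0   # fixed divider resistor in ohms (assumed)
V_BE_ON = 0.6        # typical BJT base-emitter turn-on voltage

def divider_out(r_ldr):
    """Voltage across the fixed resistor, with the LDR on the top leg."""
    return VCC * R_FIXED / (R_FIXED + r_ldr)

# Typical LDR resistances: a few kilohms in bright light, around a
# megohm in darkness (again, assumed rather than measured).
v_lit = divider_out(5_000.0)       # ~3.3 V: above V_BE_ON, transistor on
v_dark = divider_out(1_000_000.0)  # ~0.05 V: below V_BE_ON, transistor off
```

In other words, the LED/LDR pair behaves like a home-made optocoupler: the iCube’s low-current output only has to light an LED, while the transistor handles switching the sequence circuit.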
Wiring in the LED sequences.
This bit was really hard: so many wires, so many LEDs – 96 to be exact!
The piece had a microphone and LEDs, so it was already interactive, but I wanted to make it more so. I made a little speaker box to put inside, so the piece has the capability to speak and interact back!
In order to get the iCube to speak to the LEDs, I needed some serious wiring: 8 sequences requiring 2 cables each, plus 2 for the speaker box, each about 2–3 metres long!
I am getting there! Finally.
Setting it all up.
There is my PowerBook, hooked into the MIDI controller, which is coupled to the iCube. A nightmare to set up, but well worth it. The black box is the iCube.
Setting it all up 2.
Now the computer was set up, I needed somewhere to hang the piece so it was easy to interact with.
With the light behind the piece, you can see all the circuit boards and wires.
A picture of how the interactive LED lantern looks when it is running.
Demonstration – Video.
This is a short video of the piece reacting to a Mylo track and people whistling at it. (Update: how things have changed – this was taken with my old Sony Ericsson K750i.)
Interactive LED lantern interface – Rationale
So at the start of the term I wanted to create a sound interface, and as the term developed I concluded that an interface that creates light from a user’s input would work well. On my own degree course, a student, Ben Hanbury, made a sound-to-light project, but his intentions were slightly different to mine. His piece “generated rhythms using pitch and amplitude readings from the sounds around it” [Hanbury, 2005]. I wanted something a little different: something unimposing, almost passively interactive.
Work by Sue Webster and Tim Noble, e.g. British Rubish, and Bruce Nauman’s many contributions to media art, have been influential in how I got to the stage of developing my own digital art piece.
Before starting to build the project, the main issues surrounding the implementation must be addressed. These are:
- Representing sound through light.
- The impact on a space.
- How will people use the piece, and will it change the space and the sounds created within it?
Examples of sound to light projects:
- James Clar
- Montreal “Artificel”
- Graphic equalisers
- Disco lights and Disco balls in night clubs
- Ben Hanbury
What is hoped to be achieved by this project? What are its meaning and outcomes?
To generate more meaning in a space by invoking interaction, and to document that; and to give a visual representation of the pitches we communicate in. The frequency range we speak in is limited, and that limit is illustrated through interaction.
By creating a piece like this, interaction creates a visual performance using found sound. The “Found Sound Movement” is a large movement in sound art in which all sorts of sounds are used, manipulated and changed to create soundtracks, among many other variations within the movement.
Manifestation: how would something like this look?
There are so many aesthetic variations this piece could have had, but ultimately I was influenced by disco balls. They are strange-looking objects, yet when we see them we are accustomed to them as well as mesmerised by the effects they have on the surrounding objects in a room. I wanted an extension of a child’s toy – orb-like – where the link between action and intent is a logical one. From this I thought a sphere or orb-like object would be something a user could associate with communication, as well as not look out of place. The perfect solution, within my student budget, was a Chinese lantern: the LEDs shine through the thin paper easily, and it is decorative, cheap and unimposing.
Visualising sound in a room through pitch division.
I got the patch which visualises pitch working in Max/MSP. For more information on this, read the Max/MSP patch section.
So from there it is important to understand how people are going to use the piece – what set of expected rules is associated with it. I did not construct the piece so that it could be moved like a ball; I think if the piece had that capability, its meaning could change dramatically.
Can I document the reactions as a mapping of interactive design? How do people react to an interactive digital representation of sound, and what will people of different ages do? So far only students have had access to the piece. I also wanted to see how people learn to use a piece such as this one. I think if the piece were to make noise reciprocal to the sounds in the room, a more spontaneous interaction would begin, but this would be more imposing.
Improvements in portability would have opened up much more potential for unique interaction. If the sphere could have been moved wirelessly, many more possibilities for varying interaction would have become available. I would also have liked to use the 8 digital outputs from the iCube to control an array of individually addressable LEDs, rather than 8 fixed LED sequences; this would have multiplied the possibilities for interaction, sequencing and interface experimentation many times over.
I set out to make an interactive sound piece that is unimposing and fun to play with and look at, and I have succeeded in doing just that. Feedback from my presentation suggested that it provided great stimulation for people using it for short periods, that it was easy to figure out, and that it worked well. Users wanted to see what it could do, and it was interesting to see how each user applied a different method of learning what the object did.
Downloading the MAX/MSP Patch
This page briefly documents the work I had to do in software (Max/MSP) to make the pitch sequencer lantern work.
The patch works by taking the Core Audio signal from the line in; this can be changed in the system settings to whatever input you want. The audio input, in my case, is multiplied by 3 to get a more sensitive reading, which gives a better chance of an accurate sound reading.
The amplified sound is then fed into the fiddle object (download fiddle here). Fiddle takes a reading of the audio and samples its amplitude and pitch. I then took a frequently outputted value for pitch and used if statements to divide the audible range into 8 separate pitch sectors. When a pitch is recorded, the if statement whose bounds it falls between triggers its event, so the pitch dictates which 1 of the 8 events is triggered.
I then fed these triggered events into the iCube. The iCube has 8 outputs, so when 1 of the 8 events is triggered, the iCube turns on the corresponding output.
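The chain of if statements can be sketched outside Max as a simple binning function. Fiddle reports pitch as a MIDI-style note number; the 36–96 range and the even sector widths below are my assumptions for illustration, not the exact bounds used in the patch.

```python
# Sketch of the pitch-to-sector logic from the Max patch, assuming
# pitch arrives as a MIDI note number and the tracked range (36-96,
# an assumed range) is split into 8 equal sectors.

LOW, HIGH = 36.0, 96.0   # assumed usable pitch range (MIDI note numbers)
SECTORS = 8

def pitch_to_sector(midi_pitch):
    """Return which of the 8 sectors (0-7) a pitch falls in,
    or None if it is outside the tracked range."""
    if not (LOW <= midi_pitch < HIGH):
        return None
    width = (HIGH - LOW) / SECTORS          # 7.5 notes per sector here
    return int((midi_pitch - LOW) // width)

def icube_outputs(sector):
    """One-hot list of the 8 iCube outputs: only the triggered
    sector's output is on, mirroring '1 of 8 events'."""
    return [i == sector for i in range(SECTORS)]
```

Each incoming pitch reading thus lights exactly one of the 8 outputs, which is what drives one LED sequence at a time.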
I then connected 8 LEDs to the 8 iCube outputs, to trigger the 8 LED sequences. Learn more about how I made the sequences.
The Patch: iCube Test
iCube Help Patch: LED Pitch Sequencer
To document the project I also created a microsite which lived on my portfolio website. This has since been merged into this WordPress installation. Below are the screenshots of the website, as it was seen on Makezine.com – http://blog.makezine.com/2006/02/14/interactive-sound-led-seq/ and by my peers at University.
This was the homepage
This was the page where you could download the patches for MAX/MSP
This was the page which documented the production process.