I am a recent graduate of the Master's program at Stanford University's Center for Computer Research in Music and Acoustics. My research interests include audio software design and development, physical computing, new music controllers, and sound spatialization techniques. I aim to design and build playful interactions that allow users and gallery participants to engage with sound and visuals in new and unexpected ways. Please browse through the following subsections to learn more about my work, and certainly feel free to get in touch if you have any questions. You can reach me at carlsonc (at) ccrma.stanford.edu. Thank you for your interest!
Borderlands is a new interface for composing and performing with granular synthesis, a technique that involves the superposition of small time fragments, or grains, of sound to create completely new textures and timbres. This software enables flexible, real-time improvisation. It is designed to allow users to engage with sonic material on a fundamental level, breaking free of traditional interaction paradigms such as knobs and sliders. The user is envisioned as an organizer of sound, simultaneously assuming the roles of curator, performer, and listener. The following video documents a recent live performance with this software.
An academic paper, recently accepted to the 2012 conference on New Interfaces for Musical Expression, is available here. Source code, technical details, usage information, and a list of future features may be found at the project website. The following screen capture provides a closer look at the software.
While the laptop version provides a large canvas and a great deal of RAM for storing audio files at runtime, the iPad's multitouch capabilities offer a much richer interactive experience. I extended the instrument to take advantage of these resources and added a social/networking component to allow users to share their "scenes" and download the work of others.
Detailed descriptions of the user interaction flow and technical features are provided here. This project will hopefully be available through the App Store soon.
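For the curious, the core of granular synthesis is simple enough to sketch in a few lines. The Python fragment below is an illustrative toy, not the app's actual engine: it overlap-adds short Hann-windowed fragments ("grains") of a source signal, and slowing the grain read rate stretches time without shifting pitch.

```python
import math

def hann(n, size):
    """Hann window value at sample n of a grain of length size."""
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * n / (size - 1)))

def granulate(source, grain_size, hop, rate=1.0):
    """Overlap-add small windowed grains of the source.

    grain_size: grain length in samples
    hop:        spacing between grain onsets in the output
    rate:       read-position increment per grain (1.0 = normal speed)
    """
    out_len = len(source) + grain_size
    out = [0.0] * out_len
    read_pos = 0.0   # where in the source the next grain is taken from
    write_pos = 0    # where in the output the next grain is mixed in
    while (write_pos + grain_size < out_len
           and int(read_pos) + grain_size <= len(source)):
        start = int(read_pos)
        for n in range(grain_size):
            out[write_pos + n] += source[start + n] * hann(n, grain_size)
        read_pos += hop * rate   # rate < 1 stretches time, pitch unchanged
        write_pos += hop
    return out

# Granulate a short 440 Hz test tone at 44.1 kHz, roughly 2x time stretch
sr = 44100
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 10)]
stretched = granulate(tone, grain_size=1024, hop=256, rate=0.5)
```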
This interactive sound and light sculpture is inspired by the behavior of synchronous fireflies, which interact through a process called pulse-coupled oscillation. A network of tuned electronic chirping and blinking entities is suspended from a large sculpture. These creatures initially pulse at random rates, out of phase with each other. Each entity is aware of its neighbors, however. When a single creature "fires," its neighbors move a tiny step forward or backward in their cycles to try to match phase with the firing oscillator. Eventually, synchrony emerges across the entire group, only to be disturbed by the sudden presence of an onlooker.
The oscillator circuits consist of two 555 timers, one of which modulates the other. Each circuit is tuned differently using a few potentiometers and different capacitor values. Photoresistors are also built in so that the sonic character of the installation changes slightly under different lighting conditions. The pulse-coupled oscillation algorithm runs in Processing on a Texas Instruments BeagleBoard when installed (the demo video above shows the algorithm running on a MacBook Pro). A Parallax Ping ultrasonic range sensor monitors the presence of an onlooker under the sculpture. Arduino microcontrollers provide the interface between the oscillator circuits, the rangefinder, and the Processing code. All of the circuitry sits on top of a shapely wooden platform that is suspended from a wall in the gallery.
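The pulse-coupled oscillation algorithm itself is compact. The Python sketch below is a simplified stand-in for the Processing code, and its details (all-to-all coupling, identical natural rates, a multiplicative phase nudge) are illustrative assumptions: each oscillator advances its phase, and whenever one fires, the others take a small step toward firing, until the group locks together.

```python
import random

def step(phases, rates, coupling=0.1, dt=0.01):
    """Advance every oscillator by one time step.

    When an oscillator fires (its phase wraps past 1.0), every other
    oscillator nudges its own phase a small step toward firing -- here a
    multiplicative kick, capped at 1.0 so laggards get absorbed into the
    firing group.
    """
    fired = []
    for i in range(len(phases)):
        phases[i] += rates[i] * dt
        if phases[i] >= 1.0:
            phases[i] -= 1.0
            fired.append(i)
    for i in fired:
        for j in range(len(phases)):
            if j != i:  # in this sketch every oscillator hears every other
                phases[j] = min(phases[j] + coupling * phases[j], 1.0)
    return fired

random.seed(0)
n = 8
phases = [random.random() for _ in range(n)]  # start out of phase
rates = [1.0] * n                             # identical natural frequencies
for _ in range(30000):
    step(phases, rates)

# Circular spread of the group (it may straddle the wrap point);
# this shrinks toward zero as synchrony emerges.
linear = max(phases) - min(phases)
spread = min(linear, 1.0 - linear)
```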
The Sound Flinger (a.k.a. Sound Lobber) is an interactive sound spatialization device that allows users to touch and move sound. Users record audio loops from an mp3 player or another external source. By manipulating four motorized faders, users can control the locations of two virtual sound objects around a circle corresponding to the perimeter of a quadraphonic sound field. Physical models that simulate a spring-like interaction between each fader and the virtual sound objects generate haptic and aural feedback that allows users to literally touch, wiggle, and fling sound around the room. This instrument was a final project for Music 250 - Physical Interaction Design for Music at Stanford University's Center for Computer Research in Music and Acoustics.
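The spring-like interaction can be sketched as a small mass-spring-damper simulation. The Python fragment below is illustrative only; the constants and the single-fader setup are assumptions, not the instrument's actual haptics code. The spring force pulls the virtual sound object toward the fader, and the reaction force is what would be sent back to the fader's motor as haptic feedback.

```python
def spring_step(obj_pos, obj_vel, fader_pos, k=40.0, damping=2.0, dt=0.001):
    """One semi-implicit Euler step of a spring joining a virtual sound
    object (unit mass) to a fader position.

    Returns the new object state plus the reaction force, which is what a
    motorized fader would render so the user can feel the object tug back.
    """
    force = k * (fader_pos - obj_pos) - damping * obj_vel
    obj_vel += force * dt
    obj_pos += obj_vel * dt
    return obj_pos, obj_vel, -force  # -force: reaction felt at the fader

# Flick the fader to position 0.8 and let the sound object settle onto it
pos, vel = 0.0, 0.0
for _ in range(20000):  # 20 seconds at dt = 1 ms
    pos, vel, haptic = spring_step(pos, vel, fader_pos=0.8)
```

Mapping the object's position onto the circle of loudspeakers (and its velocity onto amplitude or pitch) is what turns this toy into an audible "fling."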
This project was published in the Proceedings of the 2011 Conference on New Interfaces for Musical Expression in Oslo, Norway. The final paper is available here.
Credits: Embedded audio programming by Chris Carlson, haptics interface by Hunter McCurry, box construction and design by Eli Marschner
The Motor Garden is an art installation that ran for 3 weeks in April 2007 at James Madison University. The primary objective of this project was to create a simple, yet engaging interactive piece that responded to the presence of visitors within the gallery. Eight motorized "plants" are positioned around the floor of the installation space. As gallery participants approach the sculpture, the entire garden whirs to life. If no additional motion is detected within a short period of time, the garden falls back to sleep.
Each plant consists of a bright green cardboard "root," a PVC-pipe "stalk," and a motorized "flower" made out of a colorful print by fellow artist Ben Nicholson. The garden is monitored by a webcam mounted near the ceiling. A computer running custom video detection and motor control software written in Processing analyzes the camera feed, checking for motion within pre-specified regions around the space. Four regions exist, each centered on two plants (one "primary" and one "secondary"). If motion is detected within a region, the software prepares a packet of data indicating the required state of all motors. This data is sent over a serial connection to an Arduino microcontroller, which, in turn, sets the appropriate voltages at a series of H-bridge motor drivers. The "primary" flower in the region corresponding to the motion begins to spin.
In order to bring the entire garden to life in a cascading fashion, the installation has an embedded feedback loop. Each "primary" plant has a "secondary" clone located in a separate region. As a primary plant begins to spin, its clone also spins, triggering the neighboring primary plant and its associated clone, and so on around the garden.
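The control logic amounts to following the cascade to a fixed point before packing the motor states. The Python sketch below is a simplified illustration: the region-to-motor wiring it assumes (region r's clone lives in the next region around the circle) is hypothetical, and the real installation unfolds the cascade over time rather than all at once.

```python
def update_motors(motion_regions, num_regions=4):
    """Compute which flowers spin, given the regions where motion was seen.

    Assumed (hypothetical) layout: region r holds its primary flower at
    motor index 2r, and the clone of region r's primary lives in region
    r+1 at motor index 2(r+1)+1. Motion in a region spins its primary; a
    spinning primary spins its clone in the next region, which counts as
    activity there and wakes that region's primary -- the cascade.
    """
    active = set(motion_regions)
    # Follow the cascade until no new regions wake up
    while True:
        new = {(r + 1) % num_regions for r in active}
        if new <= active:
            break
        active |= new
    # Pack the motor-state "packet" sent over serial to the Arduino
    motors = [False] * (2 * num_regions)
    for r in active:
        motors[2 * r] = True                              # primary flower
        motors[2 * ((r + 1) % num_regions) + 1] = True    # its clone
    return motors
```

With this wiring, motion in any single region eventually wakes the whole garden, and an empty motion set leaves every motor off.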
The Motor Garden was featured on the Make Magazine blog in April 2007.
Sampling, MIDI-controlled loop manipulation, and multiband spatialization are explored in this project. This work was inspired by the idea of splitting an incoming sound into distinct spectral threads and weaving them around a multichannel sound field. After realizing this idea in the ChucK programming language, a live looping system was also implemented to allow the performer to build more complex sonic environments in real time.
All code for this project was written in the ChucK programming language. In performance, a single channel of audio input is processed first through the multi-track looping engine and then through the eight-channel filtered feedback delay spatialization effect. A Korg nanoKONTROL MIDI controller is tightly coupled to the looping engine, allowing the performer to trigger recordings of up to seven unique loops, manipulate playback rates and loop directions, and adjust both dry and wet output levels.
Signals sent to the multiband spatialization effect are passed through a bank of bandpass filters, separating out low, mid, and high frequency ranges. These filtered inputs are then processed separately through unique feedback delay lines for each of the eight output channels. The outputs of each delay line are randomly cross-faded with each other, resulting in echoes and resonances that glide through the eight-channel sound field in real time.
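The signal flow can be sketched compactly. The Python fragment below is an offline, simplified stand-in for the ChucK code: one-pole filters approximate the bandpass bank, each band feeds its own feedback delay line per output channel, and the random cross-fading between delay outputs is omitted for brevity.

```python
import math
import random

class FeedbackDelay:
    """A delay line of `length` samples with feedback on its output."""
    def __init__(self, length, feedback=0.6):
        self.buf = [0.0] * length
        self.i = 0
        self.fb = feedback
    def tick(self, x):
        y = self.buf[self.i]
        self.buf[self.i] = x + self.fb * y
        self.i = (self.i + 1) % len(self.buf)
        return y

def split_bands(sig, sr=44100, lo=200.0, hi=2000.0):
    """Split a signal into complementary low/mid/high bands using one-pole
    lowpass filters (a rough stand-in for the bandpass bank)."""
    def lowpass(sig, fc):
        a = math.exp(-2.0 * math.pi * fc / sr)
        y, out = 0.0, []
        for x in sig:
            y = (1.0 - a) * x + a * y
            out.append(y)
        return out
    low = lowpass(sig, lo)
    lowmid = lowpass(sig, hi)
    mid = [m - l for m, l in zip(lowmid, low)]
    high = [x - m for x, m in zip(sig, lowmid)]
    return low, mid, high

# Each band gets a uniquely tuned delay per channel; sum bands per channel
random.seed(1)
sr = 44100
impulse = [1.0] + [0.0] * (sr // 10 - 1)   # test input: a single click
bands = split_bands(impulse, sr)
channels = []
for ch in range(8):
    delays = [FeedbackDelay(random.randrange(500, 4000)) for _ in bands]
    channels.append([sum(d.tick(x) for d, x in zip(delays, xs))
                     for xs in zip(*bands)])
```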
Source code, usage instructions, and additional demo footage are available here. This project was built for Music 220b - Compositional Algorithms, Psychoacoustics, and Spatial Processing at Stanford, Winter 2011.
This prototype noise machine was built for a lab assignment in Music 250a - Physical Interaction Design for Music at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). The task was to construct a mini-instrument that is "off the breadboard." My general goal in this class and in my personal work is to make devices that are deeply expressive and allow performers to explore new and unconventional sounds. Since I have always loved feedback, I took this lab as an opportunity to test out some ideas for generating and controlling it. The materials in the instrument include: four piezo discs connected to rings of aluminum foil, a mini amplifier (hacked to enable toggle-switch control for power), an aluminum foil pad connected to the hot lead of the audio input to the speaker, and the aluminum enclosure (with holes drilled for the wiring from each piezo and to the user's hands).
The instrument was featured on the Make Magazine blog in November 2010.
Inspired by the beautiful, warm sounds of certain strains of minimal Japanese electronic music, this piece consists of layers of processed guitar, synthesizer, and Rhodes piano.
This recording is based on elements of my sound design work for RE_, a modern dance collaboration at Stanford, Spring 2012.
This album collects a variety of material that I have been developing over the past five years under the name Cloud Veins, including beat-driven, ambient, noise, and algorithmic works (self-released in June 2011).
This piece is one of a series of sound poems composed for the New York Public Library in its centennial year. Recordings from the library are mixed with processed readings and throbbing textures wrought from a custom granular synth built in Max/MSP. This work is intended to evoke the image of books being invisibly transported through the bowels of the library.
This is a recent studio recording of an older guitar/vocal piece augmented by sounds from Max/MSP, walkie talkie feedback, and a borrowed Roland RE-201 Space Echo.
Please visit this site for additional compositions.
Choreography by Katherine Akemi Disenhof and Ali McKeon. Live Video Processing by Hunter McCurry. Sound by Chris Carlson.
Throughout the process of developing this work, we found ourselves consistently using words containing the prefix "re-": react, reflect, rewind, recount, retrace, reflex, reverberate, etc. We came to realize that, by definition, the prefix "re-" attaches the theme of memory to nearly any word that it is paired with. Thus, we titled our project Re- to echo the work's multidimensional commentary on the theme of memory. We began by developing our components separately. Drawing from our own personal experiences, Ali and Katherine choreographed a combination of solos and duets which were put into a sequence, fragmented, and combined. Hunter began writing computer code to create a variety of projection effects that would harmonize with the choreography. These effects included images such as distorted home videos, still image capture, motion trails, and digital distortion. In composing the music, Chris crafted a set of distinctive yet cohesive sound worlds that complemented the various section divisions within the piece.
This installation deconstructs the 88 miles of books and 10,382,600 cubic feet of space housed by the New York Public Library's Stephen A. Schwarzman Building into a minimalist, participatory sound and video installation. It is the result of a collaboration with two students at New York University's Interactive Telecommunications Program (ITP).
Carlin Wragg was the project lead, handling concept development, user experience design, and interfacing with the New York Public Library. Kevin Bleich engineered the motion tracking and custom Processing code. I worked with found sounds from the library, produced all of the musical material, and contributed to the concept development. Volumes of Voices was exhibited at the NYU ITP Winter Show in December 2011.
This excerpt is an example of the sound world experienced by gallery visitors. Viola by Hunter McCurry.
Participants wear a set of wireless headphones and move through a space delineated by hanging card catalog fragments. The motion of the listener is tracked by a Microsoft Kinect and custom software written in Processing. Open Sound Control messages relay the proximity of viewers to various "hot spots" within the space. This information is used to dynamically mix and trigger various sound materials in Ableton Live, which are transmitted back to the wireless headset. In this manner, viewers are able to explore the sound universe of the New York Public Library and create their own unique sonic experience by choosing their path through the space. Volumes of Voices uses physical and digital media to expose modern library collections as it examines the link between traditional and innovative approaches to librarianship.
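The mapping from motion data to mix levels can be illustrated with a toy proximity function. The Python sketch below is hypothetical (the hot-spot layout and the linear fade are assumptions, not the installation's actual code); in the installation, values like these travel as Open Sound Control messages into Ableton Live.

```python
import math

def hot_spot_gains(listener, hot_spots, radius=1.5):
    """Map listener proximity to a gain (0..1) for each hot spot's layer.

    listener:  (x, z) position from the depth camera, in meters
    hot_spots: list of (x, z) hot-spot positions
    radius:    distance at which a layer fades to silence

    Gains like these could be sent as OSC messages (e.g. an address such
    as "/hotspot/<i>/gain") to drive clip volumes in a live mixer.
    """
    gains = []
    for hx, hz in hot_spots:
        d = math.hypot(listener[0] - hx, listener[1] - hz)
        gains.append(max(0.0, 1.0 - d / radius))
    return gains

spots = [(0.0, 1.0), (2.0, 2.0), (-1.5, 3.0)]  # hypothetical layout
gains = hot_spot_gains((0.0, 1.0), spots)      # standing on the first spot
```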
On its first day of operations, May 24, 1911, between 30,000 and 50,000 New Yorkers toured Carrère and Hastings' Beaux-Arts library on Manhattan's Fifth Avenue (Source: History of the Stephen A. Schwarzman Building, www.nypl.org). The New York Public Library's flagship building is at the head of a system of branches funded in part by bequests from two robber barons and a former governor. It was envisioned by preeminent librarian Dr. John Shaw Billings as a place where everyone, from the newly arrived immigrant to the expert scholar, could access one of the world's most important learning collections. After a three-year, $50 million restoration and preservation project, the iconic marble structure has emerged from a suit of scaffolding to celebrate its centennial anniversary. In the words of NYPL President Paul LeClerc, the building's "magnificence is a visual reminder of how centrally important reading, learning, and creating are to a vibrant and democratic society" (Source: Historic, Three-Year Preservation Project Restores The Landmark Façade of the Library On 42nd Street, www.nypl.org). On this, the 100th birthday of one of New York's iconic public spaces, Volumes of Voices takes visitors on an immersive musical journey that reveals the library's collections, treasures, and hidden sonic character.
More information about this project is available at www.volumes-tour.info
I collaborated with choreographer Cynthia Thompson to assemble the score for this piece. The source material is a recording of Chopin's Nocturne No. 7 in C-sharp minor and selections of shortwave numbers stations culled from The Conet Project (1997). This dance was performed at James Madison University in December 2005. Video editing by Rachel Burt.
Performed at the CCRMA Spring Concert, May 26, 2011.
This piece is derived from a collaboration with Carlin Wragg, a student at the NYU Interactive Telecommunications Program, and the New York Public Library. Together, we have been building a narrative sound tour that exposes library visitors to the history and architecture of each room, rare texts housed in the collections, and, perhaps most importantly, the rich world of sound that lives within the walls of the building. The piece being performed in this video is a heavy abstraction of our work, weaving source recordings from the library and composed material into an evolving tapestry of noise.
This performance uses a custom-developed looper and multiband spatializer described here. Prerecorded samples from the library (first and middle sections of the piece) are mixed and panned. Live input is captured from an Akai reel-to-reel deck through a Pro Co RAT distortion pedal and a Mackie mixer, processed through the filtered feedback delay spatializer, and sampled/looped.
Live at CCRMA's annual Modulations concert in San Francisco, April 2, 2011. The event was held at the SOMArts gallery in the Mission. Beautiful visuals by Peter Nyboer.