Thoughts on computer-based instrument paradigms

Note: This was originally posted on Scratch My Brain, but seemed appropriate to this space, so I have posted it here as well.

Over the past couple of years, I have been thinking about computer music instrument design, or how to turn my laptop into a musical instrument. Much of this is due to my participation in the Laptop Orchestra of Louisiana (the LOLs). The process of writing a piece for the LOLs often involves designing an instrument, and in my thinking on the subject, I have been putting these instruments into two broad categories. Direct control instruments are those in which an action of the performer maps directly to a sound from the instrument: pull the trigger and sound comes out, move the joystick forward and the pitch changes, and so on. The other category is code/process controlled instruments, in which the sound is produced by a process that the performer launches (or possibly live codes), but the performer does not control individual musical events once the process is set into motion.

I have tended towards direct control instruments in my own work. I think this is largely due to my trombone player DNA: I am used to playing an acoustic instrument (direct control), and much of my performance world view has been formed by that experience. One of the difficulties with designing new direct control instruments is that it often takes a significant amount of time to learn to play them well. As with any instrument, one must spend time with it to develop technique or a sense of musical connection.

On the other hand, process controlled instruments allow for the creation of highly complex musical expressions with little or no time spent learning technique, but they lack the intimacy of control, especially in terms of timing, that one gets from direct control.

Tonight I was reading an article (from 1991) by David Wessel called “Improvisation with Highly Interactive Real-Time Performance Systems.” In it, he describes what seems to be a direct process control system: he launches the processes (I use the term process to be consistent with my categories; I don’t know that he would use that word) from a direct control instrument. This returns control of low-level timing to the performer, yet still lets the performer take advantage of what the computer processes have to offer. He also talks about mapping expressive gestures to entire phrases as opposed to single notes.

These ideas have started some wheels turning about my next computer instrument.

I love it when I discover that someone solved my current dilemma twenty years ago. That’s why we should always be attentive in history class.

Fall 2011 Conference Presentations

On September 9, 2011, I will present “Improvisation as Tool and Intention: Organizational Practices in Laptop Orchestras and Their Effect on Personal Music Approaches” at the Guelph Jazz Festival Colloquium in Guelph, Ontario, Canada. The panel is called “Improvising Communities: Telematics and Technology” and will also include Michael Kaler from York University and Jason Robinson from Amherst College.

On October 29, 2011, Nick Hwang, J. Corey Knoll, and I will present a paper/performance on GUA at the Electroacoustic Barn Dance at the University of Mary Washington in Fredericksburg, VA. GUA is a laptop-based instrument that was developed at LSU and has been used extensively by the LOLs, as well as by Nick, Corey, and me. Nick has posted more info on GUA on his website.

I am also expecting to have performances of electroacoustic pieces in Baton Rouge, LA and Austin, TX. I will post more info on those as the details are confirmed.


Electroacoustic Pieces from 2009 & 2010

i have nothing to say

i have nothing to say (mp3)

“i have nothing to say” was composed in the spring of 2010. The original source sounds for the piece are recordings of my children speaking. The title comes from one of the recordings, which is my daughter saying, “I have nothing to say.” The original three pieces of audio were granulated in Csound, and the melodies created in Csound were further manipulated in Sound Hack, Audacity and Logic. The piece was assembled in Logic.

Click here to download a zip file that contains the Csound files and the three audio files used to create this piece. Not all of the Csound files survived, but the .orc files are here.


Shell Game

Shell game (mp3)

I composed “Shell Game” in the fall of 2009. All of the sounds used in “Shell Game” are presentations or manipulations of recordings of me playing a conch shell. This piece was assembled in Logic. Some of the events were created with Cecilia or Spear.


Napkin Shreds

Napkin Shreds (mp3)

I composed “Napkin Shreds” in the spring of 2009. It combines synthesized sounds and samples from the Jeff Albert Quartet recording of “(It Could Have Been a) Napkin.” This piece was done entirely in Logic.


Jimbo and Ella Go For a Walk

Jimbo and Ella Go For a Walk (mp3)

“Jimbo and Ella Go For a Walk” was composed in the spring of 2009 as a class assignment. It was written in ChucK. Jimbo is Dr. James P. Walsh, a composer friend from whom I stole one of the ideas used in the piece, and Ella is my family’s miniature dachshund, whose collar jingle is heard in the piece.

Click here to download a zip file that contains all of the ChucK files and the one audio file used to create this piece. There is some randomization in the ChucK files, so each realization of the ChucK code will be slightly different. Place all of the files in the same directory and run “play.ck”.

Notes on CCRMA MIR Workshop, Summer 2011

These are my personal notes on the MIR Workshop that I attended at CCRMA at Stanford University in the summer of 2011. There is a course wiki that has much more detailed info. https://ccrma.stanford.edu/wiki/MIR_workshop_2011

Note: I am not going to link to much in my notes, as all of those links are in the wiki.

The Knoll – home of the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University

Day 1

Today’s lectures were presented by Jay LeBouef and Rebecca Fiebrink. The early lecture dealt with the three main components of any MIR process: segmentation, feature extraction, and analysis/decision making. My main takeaway from this lecture was that selecting the segmentation method and the types of features to extract are important decisions driven by the goal of the project; different segmentation methods and features work best for different types of applications. My note on segmentation was that the segments should have musically relevant boundaries and be separated by some perceptual cue.
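
To keep the shape of that pipeline straight in my head, here is a bare-bones Python skeleton of the three stages. The segment, extract_features, and classify functions are placeholders of my own, not anything from the workshop labs.

def mir_pipeline(audio, sr, segment, extract_features, classify):
    """Run the three MIR stages: segmentation, feature extraction, decision."""
    results = []
    for start, end in segment(audio, sr):               # 1. segmentation: (start, end) sample indices
        feats = extract_features(audio[start:end], sr)  # 2. feature extraction: segment -> feature vector
        results.append(classify(feats))                 # 3. analysis/decision: vector -> label or score
    return results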

We looked at five different features to extract. Zero crossing rate is pretty easy to calculate and seemed more useful in certain contexts than I had imagined. We also looked at the spectral moments: centroid, bandwidth/spread, skewness, and kurtosis. We did some Matlab tutorials using custom toolboxes that extracted these features. The labs are on the wiki.
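
For my own reference, here is a rough numpy sketch of those frame-level features (zero crossing rate plus the four spectral moments). It is my own approximation of the ideas, not the Matlab toolbox code from the labs.

import numpy as np

def frame_features(frame, sr):
    """ZCR and spectral moments for one windowed audio frame."""
    frame = np.asarray(frame, dtype=float)
    # Zero crossing rate: fraction of adjacent samples that change sign.
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    # Magnitude spectrum, normalized so it can be treated as a distribution.
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = mag / (mag.sum() + 1e-12)
    centroid = np.sum(freqs * p)                             # 1st moment: spectral "center of mass"
    spread = np.sqrt(np.sum(((freqs - centroid) ** 2) * p))  # 2nd moment: bandwidth
    skewness = np.sum(((freqs - centroid) ** 3) * p) / (spread ** 3 + 1e-12)
    kurtosis = np.sum(((freqs - centroid) ** 4) * p) / (spread ** 4 + 1e-12)
    return zcr, centroid, spread, skewness, kurtosis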

Prof. Fiebrink talked about Weka and her program based on Weka, the Wekinator. She explained some supervised learning algorithms, specifically k-NN (k-nearest neighbor). The Wekinator looks to be a very useful tool that may prove crucial to my dissertation project.
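
To remind myself how k-NN actually works, here is a toy numpy version of the classification step. It is only an illustration of the idea, not code from Weka or the Wekinator.

import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Label a query feature vector by majority vote of its k nearest neighbors."""
    train_X, train_y = np.asarray(train_X), np.asarray(train_y)
    dists = np.linalg.norm(train_X - query, axis=1)   # Euclidean distance to every training example
    nearest = np.argsort(dists)[:k]                   # indices of the k closest examples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # most common label among the neighbors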

Day 1 has made me confident that I will find the tools I need for my dissertation project, and that I will be able to understand them enough to use them.

posted Monday, 27 June 2011, 10 PM PDT

Day 2

The Day 2 lectures were by Stephen Pope and Leigh Smith. Most of today’s code examples were in C/C++, and the general topic was a more in-depth look at feature extraction. We started with a demo of MusicEngine, which is a similarity filter that uses content-derived metadata. My big takeaway from the demo was that the selection and weighting of feature vectors are among the most significant factors determining the effectiveness of a program. Knowing which perceived qualities you want to sort by, and which feature vectors best discriminate those qualities, is crucial. I think developing a sense of those correlations is one of the keys to being good at MIR.
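
A tiny sketch of what I mean by weighting, assuming we already have one feature vector per track: the same features rank “similar” tracks very differently depending on the per-feature weights. Nothing here comes from MusicEngine itself; the weights are just made-up knobs.

import numpy as np

def rank_by_similarity(features, query_idx, weights):
    """Rank tracks by weighted Euclidean distance to one query track."""
    features, weights = np.asarray(features), np.asarray(weights)
    diff = features - features[query_idx]                 # (n_tracks, n_features)
    dists = np.sqrt(np.sum(weights * diff ** 2, axis=1))  # weighted distance per track
    return np.argsort(dists)                              # closest (most similar) first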

In the later lecture Leigh Smith talked about onset detection and beat mapping. It was an excellent presentation but pretty complicated. Lots of Day 2 lecture slides on the wiki.
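
To make sure I followed the general idea, I sketched a generic spectral-flux onset detector in Python. This is one common recipe, pieced together on my own, and not necessarily the method Leigh presented.

import numpy as np

def spectral_flux_onsets(x, sr, n_fft=1024, hop=512, threshold=0.1):
    """Return indices of frames where the positive spectral change spikes."""
    window = np.hanning(n_fft)
    frames = np.array([x[i:i + n_fft] * window
                       for i in range(0, len(x) - n_fft, hop)])
    mags = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)  # keep only increases in energy
    flux /= flux.max() + 1e-12
    # Naive peak picking: local maxima above a fixed threshold.
    return [i for i in range(1, len(flux) - 1)
            if flux[i] > threshold and flux[i - 1] < flux[i] >= flux[i + 1]]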

Much of the lab time was spent installing libraries that are used in the C/C++ code. Going through that process a number of times was good for me. I am much more confident with it now. I got the examples to work, and I think some tweaking of those examples will end up being part of my final project this week.

Day 3

The Day 3 lectures were by Stephen Pope and Steve Tjoa. In the morning, Pope talked about second-stage processing: once we have a huge set of feature data, we need to prune it and/or smooth it. The bottom line is determining which features give us information gain.
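
Here is a small toy version of that pruning step: bin each feature and rank it by the information gain it provides over the class labels. It is my own sketch, not anything from the workshop code.

import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, n_bins=4):
    """Entropy reduction in the labels from splitting on one binned feature."""
    feature, labels = np.asarray(feature), np.asarray(labels)
    edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(feature, edges)
    gain = entropy(labels)
    for b in np.unique(bins):
        mask = bins == b
        gain -= mask.mean() * entropy(labels[mask])  # weighted entropy of each bin
    return gain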

In the later lecture Tjoa explained Non-negative Matrix Factorization, which is used for source separation in polyphonic audio files. He explained the math to us, and we ran the code in Matlab, and it still seems like magic. In the simpler examples the amount of source separation is really astounding. Check out his slides from Day 3 on the wiki.
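
So I remember the shape of the algorithm, here is a minimal NMF sketch using the standard multiplicative updates; it is just the bare idea of factoring a magnitude spectrogram into spectral templates and activations. Tjoa’s actual Matlab code is on the wiki.

import numpy as np

def nmf(V, n_components=4, n_iter=200, eps=1e-9):
    """Factor a nonnegative spectrogram V (freq x frames) as V ~= W @ H."""
    n_freq, n_frames = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n_freq, n_components)) + eps    # spectral templates (one per source/note)
    H = rng.random((n_components, n_frames)) + eps  # per-frame activations of each template
    for _ in range(n_iter):                         # multiplicative updates for the Euclidean cost
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H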

NB- John Chowning was hanging out at CCRMA today. We spoke briefly. He remembered me from his visit to LSU, and we had a fun conversation about subwoofers. What a nice man.

Updated Wednesday 29 June 2011, 10 PM PDT

Days 4 & 5

As the week went on, we moved to some higher-level concepts. Day 4’s lecturer was George Tzanetakis. He started the day talking about hearing and pitch perception, and symbolic musical representations (like notation, MIDI, etc.). Most of the technical aspects of his lecture dealt with pitch detection, and the idea of mapping chroma (pitch class) as opposed to actual pitch. In the afternoon we looked at Marsyas, which is a powerful and flexible (and fairly complex) MIR toolset that George wrote.
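
Here is a rough sketch of the chroma idea: fold a magnitude spectrum onto the 12 pitch classes. Marsyas does this far more carefully; this is just to remind myself of the mapping.

import numpy as np

def chroma_from_frame(frame, sr, fmin=55.0, fmax=4000.0):
    """Sum spectral energy into 12 pitch-class bins (C = 0, C# = 1, ...)."""
    frame = np.asarray(frame, dtype=float)
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    chroma = np.zeros(12)
    for f, m in zip(freqs, mag):
        if fmin <= f <= fmax:
            midi = 69 + 12 * np.log2(f / 440.0)  # frequency -> MIDI note number (A4 = 69)
            chroma[int(round(midi)) % 12] += m   # fold onto pitch class, ignore octave
    return chroma / (chroma.sum() + 1e-12)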

The Day 5 lecture was by Douglas Eck, from Google Research. He talked about music recommendation methods based on various models of both human- and machine-generated data. He made a great argument for the inclusion of content-generated data in the recommender algorithms. In the lab we did some comparisons of various ways of looking at and making recommendations from the CAL500 dataset.

We closed the day with a nice tour of CCRMA. It is quite a facility. There is a 3D listening room with ambisonic capabilities. We listened to some music by Fernando Lopez-Lezcano that was made for that space and took great advantage of the speakers above and below the listening position. We also got a bit of a history lesson, hearing about some of the seminal work done at CCRMA and seeing one of the late Max Matthews’ radio batons.

All in all, it has been a great week. I learned a lot, and feel like I have found a starting point with some of the tools for my dissertation project. If you are considering attending a CCRMA workshop (or studying at CCRMA as a degree seeking student), I highly recommend it.

Updated Friday 1 July 2011, 10:40 PM PDT

Improvisational practices in laptop orchestras


Below is a link to the pdf of a survey that I used for my paper “Improvisation as Tool and Intention: Organizational Practices in Laptop Orchestras and Their Effect on Personal Music Approaches” that I will present at the Guelph Jazz Festival Colloquium on September 8, 2011. I’ll post more on the paper itself after the presentation.

Survey of Improvisational Practices of Laptop Ensembles