root system (for Harry Bertoia)

One of Harry Bertoia's "sonambient" sound sculptures sits on display at Bowling Green State University. However, its resonant tines remain sonically isolated in a courtyard outside the Dorothy Uber Bryan Gallery lobby. This installation imagines that the sculpture has sonic “roots” extending through the window. As the roots creep further into the lobby, a branching generative system stretches recordings of the sculpture in time and pitch.

Process (each iteration lasting from ~30 seconds to 6 minutes):

An audio file is compiled from three recordings of the sculpture. The recordings vary in length, most lasting less than 10 seconds, so the resulting "source" can sometimes be unusually long or short.

A "depth" is chosen, that is, how many times the iterative process branches.

A path down the “roots” is devised: for each level of the root system, there is a branch to either the left or right, recording a new manipulated version of the original sound.

If the path branches left, the recording is stretched in time (without affecting pitch); if it branches right, it is stretched in pitch (without affecting its length).

So if a path is chosen that branches all the way to the right, the pitch of the final version will be much lower but it will still be the same speed as the original; conversely, if the process branches all the way to the left, it will be the same pitch as the original but very slow.

Each level manipulates the sound played in the level just before it (à la Alvin Lucier’s “I am sitting in a room”). Two iterations of the process with equal depth might have the same number of branches to the right and left, but swapping directions at different levels can produce very different final iterations.
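The branching scheme can be sketched in a few lines of Python. The stretch factor per level is a hypothetical placeholder, and the real installation operates on audio buffers; this only models the path and the cumulative time/pitch factors:

```python
import random

def root_path(depth, stretch=1.5):
    """Walk one random path down the 'roots': at each level, branch
    left (time stretch, pitch unchanged) or right (pitch stretch,
    duration unchanged). The per-level factor of 1.5 is illustrative,
    not a value from the piece."""
    time_factor, pitch_factor = 1.0, 1.0
    path = []
    for _ in range(depth):
        branch = random.choice(["L", "R"])
        if branch == "L":
            time_factor *= stretch     # slower, same pitch
        else:
            pitch_factor /= stretch    # lower, same speed
        path.append(branch)
    return path, time_factor, pitch_factor
```

A path of all "L"s leaves the pitch factor at 1.0 but multiplies duration at every level; all "R"s does the reverse, matching the extremes described above.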

Though each process follows the same "form," there is infinite variation in the sonic result. 

The final drone that fades in is created from dynamically cross-fading 4 manipulated versions of a single recording of the tines’ resonance. 

Over the course of each process, the original recording (which only appears in the front two speakers by the window) fades away, and the manipulated playback moves to the back two speakers, augmenting the spatial extension of the roots of the sculpture and its tree companion pushing through the ground into the lobby gallery.

reversible reaction

reversible reaction takes its inspiration from the chemical phenomenon in which reactants and products can form each other, oscillating between chemical states. Contrasting sonic and visual environments create an abstracted microscopic world in this installation: molecular bonds join and break, atoms float in suspension, and the environment changes states when “catalyst” participants disturb the system’s equilibrium.

reversible reaction contains several interactive elements. The most salient is visitor motion tracking with a Kinect infrared camera. Visitors in the space directly affect the installation, switching it from state to state depending on criteria related to their positions, both relative to each other and to the space itself. Depending on the number of visitors within the space, the criteria adjust to keep the installation oscillating at a fairly consistent rate.

Example criteria that cause the installation to switch states:

·  The space has the same number of visitors for a certain amount of time.

·  The space has no visitors for a certain amount of time.

·  The number of visitors exceeds a certain threshold.

·  A visitor steps into a randomly designated area in the space (the area changes after it is triggered).

·  The distance between a pair of visitors exceeds a certain threshold.

A GUI for this component allows for tweaking of these elements in real time (including the visitor number thresholds, distance tolerances, and a “speed limit” on the rate of change). The goal is for the visitor to be aware they are influencing the installation’s state without being certain of its mechanics, allowing them to experiment to uncover the conditions of change.
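As a rough sketch, the criteria above could be checked like this in Python. All the thresholds and the position format are hypothetical stand-ins for the values that are tweakable from the GUI:

```python
import math

def should_switch(visitors, prev_count, stable_seconds, max_visitors=6,
                  stable_limit=30.0, empty_limit=45.0, pair_limit=4.0):
    """Return True if any example criterion triggers a state switch.
    `visitors` is a list of (x, y) positions; all limits are
    placeholder values, not the installation's actual settings."""
    n = len(visitors)
    if n == 0 and stable_seconds >= empty_limit:
        return True                      # empty for too long
    if n == prev_count and stable_seconds >= stable_limit:
        return True                      # same head count for too long
    if n > max_visitors:
        return True                      # crowd threshold exceeded
    for i in range(n):                   # any pair too far apart?
        for j in range(i + 1, n):
            dx = visitors[i][0] - visitors[j][0]
            dy = visitors[i][1] - visitors[j][1]
            if math.hypot(dx, dy) > pair_limit:
                return True
    return False
```

In the installation itself, a check like this would run continuously against the Kinect tracking data, with the randomly designated trigger area handled separately.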

The rest of the interactivity occurs among the sonic and visual elements of the piece. The audio for each state is algorithmically layered from prerendered soundfiles in real time, creating a non-repeating aural component. Gradual volume changes, alternation and phasing of sonic events, and harmonic changes create “concentric” time scales that add variety over time to the general sonic atmosphere of each state.

2.1 channel audio
Max/MSP (environment control)
Processing, After Effects (animations)
MadMapper (projection mapping)
Arduino (LED lanterns)
Kinect and KVR Tracker (motion tracking)
Logic (sound design)

Reconstructing Ligeti's "Poème Symphonique" for 100 metronomes

This post is adapted from the paper presentation I gave at the György Ligeti and the Future of Maverick Modernity conference in Maccagno, Italy in July 2014. 

An iteration of the piece from the digital reconstruction environment, with animation by Josh Simmons.

The "analog" realization.

György Ligeti’s 1962 composition for 100 metronomes, Poème Symphonique, owes much of its success to its presentation as a ridiculous spectacle. But no piece in Ligeti’s catalogue better distills the composer’s fascination with chaos, order, and broken systems. The piece, notated as a short text score, lasts as long as it takes one hundred mechanical metronomes, all set in motion at the same time, to unwind and stop ticking. Thus, the shape and energy of the piece, if not the duration, is always the same: a tendency towards sparser texture and eventually silence as the metronomes unwind.

However, with the obsolescence of the mechanical metronome, gathering enough instruments for a performance proves difficult. Even in 1962 when the "instruments" were certainly more common, Ligeti devoted a large portion of the score to informing the presenter about how one hundred metronomes may be acquired – also going so far as specifying that each performance be dedicated to those who contributed their personal instruments or helped procure them:

On each [performance] the work is dedicated to the person (or persons) who have helped to bring about the performance through the contribution of instruments, by any means whatsoever, whether it be executive council of a city, one or more of the music schools, one or more businesses, one or more private persons.

This project, a reconstruction programmed in Max/MSP, attempted to model Ligeti’s famous piece. Though it certainly loses the absurd theatricality of the intended realization, it is simple to recreate the piece in real time and experience a simulation of the world that fascinated Ligeti throughout his musical career. Because Poème Symphonique is dependent on the eventual release of mechanical tension in its “instruments,” a main goal of the digital model was to imitate the behavior of mechanical metronomes as closely as possible.


Ligeti specifies a few conditions that provided a good starting point for programming the reconstruction.

The work is performed by 10 players under the leadership of a conductor . . . Each player operates 10 metronomes . . . The metronomes must be brought onto the stage with a completely run-down clockwork . . . the players wind up the metronomes . . .  at a sign from the conductor, all the metronomes are set in motion by the players.

Perfect: Max functions best as a modular programming environment, so each “player” of ten metronomes was treated as a replicable unit, controlled by a main “conductor” module. Here is a general schematic of the architecture of the program:

The “starter” module, analogous to the conductor, designates the length of the piece and one of the two performance modes, sending information to each of the players and their metronomes. The player modules then select a tempo for each metronome in the group and distribute them to the instruments depending on the performance mode selected. Each impulse of a metronome in tempo is sent to a synthesizer, which creates an audible “clack.”


A detail from the random tempo module for each "player" group of 10 metronomes.

The most variable element then is the tempo of each metronome. Ligeti specifies two slightly different modes of performance in his score, each presenting unique programming challenges, discussed below. In both performance modes, each metronome is assigned a tempo randomly selected from the thirty-nine standard metronome markings available, ranging from 40 to 208 beats per minute. Many mechanical metronomes have grooves in the length of the pendulum’s arm that preclude tempi in between these markings, and so this limitation was implemented.

Fortunately, more refined mechanical metronomes do not decelerate as their wound spring loses tension. Therefore, there was no need to account for this variable in imitating a mechanical metronome: the piece could be simulated by assigning a set number of impulses to each metronome – “higher tension” is simulated as a greater number of impulses. When each metronome reaches the specified number of impulses, it shuts itself off. Credit goes to Dan Tramte for this particular idea – he made his own reconstruction in Max which he was gracious enough to send to me.
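For reference, the 39 standard Maelzel markings can be generated programmatically, along with a minimal impulse-counting metronome along the lines described above. This Python class is my own sketch of the idea, not the actual Max patch:

```python
def standard_markings():
    """The 39 standard Maelzel metronome markings, 40-208 BPM."""
    marks, bpm = [], 40
    while bpm <= 208:
        marks.append(bpm)
        if bpm < 60:
            bpm += 2
        elif bpm < 72:
            bpm += 3
        elif bpm < 120:
            bpm += 4
        elif bpm < 144:
            bpm += 6
        else:
            bpm += 8
    return marks

class Metronome:
    """A metronome with a fixed impulse budget ('tension'): it ticks
    until the budget is spent, then shuts itself off."""
    def __init__(self, bpm, impulses):
        self.bpm = bpm
        self.remaining = impulses

    def tick(self):
        """Consume one impulse; return True while still running."""
        if self.remaining > 0:
            self.remaining -= 1
        return self.remaining > 0

    def seconds_to_run_down(self):
        """Reading the tempo marking as impulses per minute."""
        return 60.0 * self.remaining / self.bpm
```

The "grooves" limitation falls out naturally: tempi are always drawn from `standard_markings()` rather than any arbitrary BPM.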

Performance Modes

There are two possible ways of winding the metronomes specified in the score. The first was pretty simple:

"All metronomes are wound equally tightly. In this version the chosen metronome numbers (oscillation speeds) wholly determine the time it will take for the several metronomes to run down: those which swing faster will run down faster, the others more slowly."

Equal tension was simulated by assigning the same number of impulses to each metronome, regardless of tempo. A module randomly selects a length for the piece (between 1 and 5 minutes in the original implementation). The smallest number of impulses possible is 40 (the slowest possible tempo setting multiplied by number of minutes), so the given “minutes” value is multiplied by 20 (since one cycle of a metronome has two impulses) and distributed to every metronome. Therefore, the metronomes set at faster tempi run down more quickly than those at a slower tempo since they all have to complete the same number of impulses. 
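A minimal sketch of the first mode's arithmetic, following the minutes × 20 scheme above and reading the tempo marking as impulses per minute (the function names are mine, not the patch's):

```python
import random

def equal_tension_impulses(minutes=None):
    """Mode 1: one impulse budget shared by every metronome,
    following the post's minutes * 20 scheme."""
    if minutes is None:
        minutes = random.randint(1, 5)   # random piece length
    return minutes * 20

def run_down_seconds(bpm, impulses):
    """With an equal budget, a faster metronome runs down sooner."""
    return 60.0 * impulses / bpm
```

Comparing `run_down_seconds` at 208 BPM against 40 BPM for the same budget confirms the behavior described: the fast metronomes drop out first.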

The second performance option Ligeti specifies is a bit more complicated:

"The several metronomes of a group are wound unequally: the first of the 10 metronomes the tightest, the second a little less, the tenth, the least tightly. Care must be taken, however, that the winding and the regulation of the speeds of the several metronomes are carried out completely independently of each other. Thus the metronome in each group which has been most lightly wound must not be the fastest or the slowest in its oscillation."

In this case, both tempo and tension, or number of impulses, are variable.

"Tightness" values - the maximum number of clicks is read from this table and randomly distributed.

"Tightness" values - the maximum number of clicks is read from this table and randomly distributed.

When this mode is selected, the maximum number of impulses is randomly chosen from a table (between 40 and 100 clicks) and sorted in descending order. If these sorted numbers were distributed to the metronomes in the same way as the first mode, the last metronomes playing in each group would still be the ones with the slowest tempo.

Therefore, the sorted list of tension values is rotated 5 positions to the right, so the median value leads. The sorted list of tempi for each group of 10 metronomes is rotated 3 positions to the right. This ensures that the longest-lasting metronomes – those with the most tension – are neither the fastest nor the slowest of each group.
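The rotation trick is easy to sketch in Python. The tension and tempo values below are illustrative placeholders, not numbers taken from the patch:

```python
def rotate_right(xs, k):
    """Rotate a list k positions to the right."""
    k %= len(xs)
    return xs[-k:] + xs[:-k]

# Illustrative values for one group of 10 metronomes:
# tensions sorted descending, then rotated 5 (the median value leads);
# tempi sorted ascending, then rotated 3.
tensions = rotate_right(sorted([95, 88, 80, 72, 66, 60, 55, 50, 45, 40],
                               reverse=True), 5)
tempos = rotate_right(sorted([40, 52, 66, 80, 96, 112, 132, 152, 176, 208]), 3)
pairs = list(zip(tempos, tensions))   # (BPM, max clicks) per metronome
```

Pairing the two rotated lists index-by-index keeps the highest-tension metronomes away from both extremes of the tempo list.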

Delaying and Clacking

There is a performance limitation in the original piece – the players cannot start all ten of their metronomes at once. Ligeti anticipated this and offered a bit of advice:

"To [start the metronomes] quickly as possible, it is recommended that several fingers of each hand be used at the same time. With a sufficient amount of practice, the performers will find that they can set 4 to 6 instruments in motion simultaneously."

When the piece is initiated, 4, 5, or 6 metronomes are started, and after a delay of 2-4 seconds, the rest of the group begins:

The delayer module.

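The staggered start is simple to sketch. This Python version (my own approximation of the delayer module) returns a start time in seconds for each metronome in a group:

```python
import random

def staggered_start(group_size=10):
    """Start times (seconds) for one player's group of metronomes:
    4-6 start immediately, the rest after a 2-4 second delay,
    mimicking a player's two hands."""
    first_batch = random.randint(4, 6)
    delay = random.uniform(2.0, 4.0)
    return [0.0] * first_batch + [delay] * (group_size - first_batch)
```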

Each metronome impulse triggers a pulse, output through a resonant filter and summed with a burst of filtered noise for color.


An individual metronome synthesizer.

The center frequency of this filter varies from one metronome to another, accounting for the different-sounding brands of metronomes that one would have to acquire for an actual performance! These synthesized impulses could certainly be replaced with samples of mechanical metronomes – and indeed, when trying to model physical metronomes in other ways, why exclude their sound? I wanted something a little closer to what we might hear in a digital metronome, as a sort of bridge between the mechanical and digital worlds. But the sound is not as important as the texture, and could easily be replaced with playback of sound files.


This project is simply an exercise in trying to capture the variable elements specified in the score so that one may recreate the piece dynamically with ease. Of course, the patch's output (in terms of the timing of the metronomes) is still a bit too polished to completely represent the chaotic conditions present in a performance with mechanical metronomes. What if the players, despite operating under the first mode, do not quite wind all the instruments equally? What if one player is quite slow in starting all the metronomes, or one of the arms on a donated instrument is a bit rusty and slows down quickly?

A primary assumption – and certainly a contributing factor to the piece’s traditional realization – is the imperfection of mechanical and human systems. A digital reconstruction by design cannot account for these sorts of microscopic but nonetheless significant variations.

Yet its concept is more relevant than ever to a current generation of sound artists. The limitations of technology and the flaws in its systems guide the whole concept of Poème Symphonique, as they do the work of many artists today. Artists like Jeff Carey, Tristan Perich, and Toshimaru Nakamura deliberately use digital artifacts, extremely limited digital sound, or noise created by signal chains to craft music. I often wonder how the ever-playful Ligeti would have experimented in this vein had he spent time with digital technology.

Upcoming Events

The events page has been updated with a few new performances:

Pratītya at the New York City Electroacoustic Music Festival in early June.

Gavin Goodwin will be premiering Foreign Masonry for baritone saxophone and live electronics at the ClaZel Theater in downtown Bowling Green on 29 April.

lenticule for amplified string quartet will receive a public reading by the Toledo Symphony String Quartet at BGSU on 30 April.

Pitching and Yapping


Ranjit Bhatnagar put out a call for works for an installation at QuBit's Machine Music event, calling specifically for 26-beat microworks that could be triggered on a Disklavier by each of the barks coming from the little toy chihuahua to the right.

He stipulated that notes could only occur on the yapped beats (quarter note = 344). Without having to worry about the dimension of rhythm, I thought it would be a perfect opportunity to explore algorithmically generated pitch! First though, some background on the progress I've already made with pitch selection in JALG.

Choosing Pitch Collections


I have made several modules that allow me to create pitch collections to feed into other modules. The simplest is [jalg_manualchord]. This allows me to either peck out a chord on the onscreen keyboard, or enter notes via MIDI keyboard. Another module, [jalg_randomrange], outputs a random value within a selectable range (0-127, more useful for velocity or control data, though it could be scaled to create a range of pitches).

Being able to input notes is great if I'm improvising at the keyboard, but it could take a long time to enter all the notes I want to use. A bounded range is simple enough, but it would allow for any chromatic series of pitches. And what about storing the pitch collections I like for later use? I decided to tackle these questions early on, with a module called [jalg_chordstore]. Using [jalg_chordstore], I can add multiple chords to a list, and then save it as an indexed collection. This is great if I'm coming up with chords to enter in on the fly via [jalg_manualchord] or something. But it can recall existing chords too!

I went to an old, undeveloped sketch in my notebook that was simply a sequence of 12 chords, deciding to use them to build the pitch material for some yap iterations. It was built in two phrases that were the same except for their last chord:

I entered the notes in Sibelius, exported the whole sequence as a MIDI file, and ran it through a little utility I made called [jalg_seqtochord] that takes the individual chords and stores them in a text file readable by [jalg_chordstore]. An example of how it works is below.

The original progression...mmm, tasty (click to expand)

[jalg_seqtochord] logic and an example of its output

Now for some yapping.

I've loaded the chords into a [jalg_chordstore]. I can feed it a number 0-11 (corresponding to each of the chords), and get the corresponding chord out. Here's an example of the sequence in order, triggered by a [counter] at the tempo specified by the yapping dog (repeated to fill up the entire 26 beats):

Yap 1 - the sequence played in order (click to enlarge)

I made some velocity adjustments to the notes so the whole thing wouldn't sound so robotic. I like the contracting nature of the chords in the original sequence, but nothing is really happening, simply playback. Maybe I can preserve some of that nice, contracting voicing in the original sequence, but add a little chance to it. The [drunk] object is perfect for this! [drunk] outputs a random number within a "step range." For example, if the initial number is 6 and the step range is 3, the next number [drunk] outputs can be anywhere from 3 to 9 (±3 steps). The next output is then ±3 from there.

Since I want to preserve voice leading, I'll make sure the module can move only up or down by one step. Setting an initial value randomly, away we go for 26 more yaps:

Drunk Yap - the sequence played in stepwise, random order (click to enlarge)
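Outside Max, the same single-step drunk walk over the twelve chord indices might look like this in Python (clamping at the ends of the range, which is one of several ways the bounds could be handled):

```python
import random

def drunk(start, steps, lo=0, hi=11, step=1):
    """Single-step drunk walk over chord indices: each output moves
    at most `step` from the previous one, clamped to [lo, hi]."""
    idx, out = start, []
    for _ in range(steps):
        idx = max(lo, min(hi, idx + random.randint(-step, step)))
        out.append(idx)
    return out

sequence = drunk(random.randint(0, 11), 26)   # one index per yap
```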

Whoops! See those big leaps in the second system? That's because there is a large leap in the original sequence - the repetition of the phrase. We started on chord 2, but there's a leap from chord 6 to chord 7. This is just a function of the input material, though; I could thin the list down to only 7 chords (accounting for the different last chord), instead of keeping duplicates of 5 of them. This is something I'm going to have to keep in mind when inputting sequences into [jalg_chordstore]. Unless I want repetition, I have to make the source list as lean as possible.

Before the next iteration, I removed duplicate chords from the list - now only 7 chords to choose. Throwing voice-leading out the window, I used an [urn] object to select every chord before repeating any of them.

I randomized velocities of every note to give it a little variety. 26 beats allows for all seven chords 3 times (with 5 extras). Repetition of the sequence, in part or in full, is very likely from such a small number of chords.

Urn Yap - the sequence played randomly, no repeating. (click to enlarge)
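[urn]'s draw-without-repetition behavior is easy to approximate in Python; refilling when the urn empties covers all 26 beats from the 7 chords:

```python
import random

def urn_sequence(n_chords=7, total=26):
    """Like Max's [urn]: draw chord indices without repetition until
    the urn empties, then refill it and keep drawing, for `total`
    draws in all."""
    out, bag = [], []
    while len(out) < total:
        if not bag:
            bag = list(range(n_chords))
            random.shuffle(bag)
        out.append(bag.pop())
    return out
```

With 7 chords and 26 draws, the full set appears three times with 5 extras, exactly as described above.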

Octave Displacement

Every iteration so far has been in the middle of the piano. What if I want to preserve the chords, but distribute them across the keyboard? The Disklavier isn't bound by the limits of the human hand, so why not get a chord going across the whole piano at once? The concept of octave equivalency is natural to our perception and practice of music, so in effect, notes displaced by octaves are really articulating the same harmony. The module [jalg_octave] exploits this phenomenon to create textural variety.

[jalg_octave] takes a note in, and adds or subtracts random multiples of 12 to the incoming note: e.g. -12 = 1 octave lower; +24 = 2 octaves higher. The possible multiples added or subtracted can be adjusted by a range slider - up to four octaves above or below. If no range is defined (the slider has a difference of 0 between its upper and lower bounds), the note will always be transformed by the single amount shown on the slider.

Example of [jalg_ranger] connected to [jalg_octave].

However, these transformations are always kept within a range defined by an input object such as [jalg_ranger]. If the transformed note is outside of the range, octaves are added or subtracted to the pitch within [jalg_octave] until it falls within the range specified. So the note that ultimately comes out of [jalg_octave] is not necessarily the transformation specified by the "random octave ops." This constraint can be a feature, allowing great control: for example, I could keep the notes in one module within only a few octaves, ensuring that a generated melody stays in an instrument's idiomatic register. But with [jalg_octave], pitches are not restricted to their original register!
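The displace-then-fold logic can be sketched in Python, using the piano's MIDI range (21-108) as a placeholder for [jalg_ranger]'s bounds:

```python
import random

def octave_displace(note, oct_lo=-4, oct_hi=4, range_lo=21, range_hi=108):
    """Shift a MIDI note by a random whole number of octaves, then
    fold it back by octaves until it sits inside [range_lo, range_hi].
    The bounds here are placeholders for [jalg_ranger]'s settings."""
    shifted = note + 12 * random.randint(oct_lo, oct_hi)
    while shifted < range_lo:
        shifted += 12
    while shifted > range_hi:
        shifted -= 12
    return shifted
```

Because the folding moves only by octaves, the output always keeps the input's pitch class, so the harmony survives however wild the displacement.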

The whole contraption. I just chose arbitrary octave operation ranges for each chord member.

An unfortunate limitation of [jalg_octave] so far is that it can only take one note at a time. For this particular chord collection, I'll need to make six parallel versions of it, each taking one pitch output from the [jalg_chordstore]. I'm using [zl scramble] to shuffle the outgoing pitches, and then [cycle] to distribute each member to its own octave module. Each module will be able to output a pitch across the full keyboard (one [jalg_ranger] controls the range for all six). To really hear the effects of octave displacement, I'm going to play the chords in sequence again, like in the first iteration, but now with octave displacement.

Woah. That is some crazy stuff. It seems chaotic, but I can definitely hear the chord progression despite all the wild displacement. I think I might end up with something more palatable if I were to limit the ranges individually.

Jarring. Yappy. (click to enlarge)

I got 99 problems and a pitch ain't one

There's a lot more that will go into choosing pitch, let alone beginning to put pitch and rhythm choices together. [jalg_chordstore] and [jalg_octave] won't be the only modules I use to select pitches. But even just with them and a few other simple objects, there'd be more rules I could add to the mix, even for this simple chord progression. Off the top of my head:

-Chord 7 in the unduplicated list can only be played after chord 6 has been played once. This would shuffle the chords but still end with the differing final chords of the original phrases.
-Chord 1 is never transposed.
-Chord 3 is twice as likely to move to Chord 4 as to any other chord.

If you happen to be in NYC later this week, you can catch some of the results from this blog post along with a few other iterations, and many other micro-pieces by other composers (not to mention many other cool Disklavier things)! Brought to you by a yapping chihuahua and an automated piano.