reversible reaction

reversible reaction takes its inspiration from the chemical phenomenon in which reactants and products can form each other, oscillating between chemical states. Contrasting sonic and visual environments create an abstracted microscopic world in this installation: molecular bonds join and break, atoms float in suspension, and the environment changes states when “catalyst” participants disturb the system’s equilibrium.

reversible reaction contains several interactive elements. The most salient is visitor motion tracking with a Kinect infrared camera. Visitors in the space directly affect the installation, switching it from state to state depending on criteria related to their position – in relation to each other and the space itself. Depending on the number of visitors within the space, the criteria adjust to keep the installation oscillating at a fairly consistent rate.

Example criteria that cause the installation to switch states:

·  The space has the same number of visitors for a certain amount of time.

·  The space has no visitors for a certain amount of time.

·  The number of visitors exceeds a certain threshold.

·  A visitor steps into a randomly designated area of the space (which is reassigned after it is triggered).

·  The distance between a pair of visitors exceeds a certain threshold.
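Criteria like these are easy to prototype outside the installation itself. Below is a minimal Python sketch of how such checks might be combined into a single state-switch test; every function name and threshold value here is a hypothetical placeholder, not the installation's actual Max/MSP logic.

```python
import math

def should_switch(visitors, prev_count, stable_seconds, empty_seconds,
                  max_visitors=6, stable_limit=30.0, empty_limit=20.0,
                  distance_limit=4.0):
    """Evaluate example state-switch criteria for a list of (x, y) positions.

    All thresholds here are illustrative placeholders, not the
    installation's actual values.
    """
    # Criterion: occupancy unchanged for too long.
    if visitors and len(visitors) == prev_count and stable_seconds >= stable_limit:
        return True
    # Criterion: the space has been empty for too long.
    if not visitors and empty_seconds >= empty_limit:
        return True
    # Criterion: too many visitors in the space.
    if len(visitors) > max_visitors:
        return True
    # Criterion: any pair of visitors too far apart.
    for i, (x1, y1) in enumerate(visitors):
        for x2, y2 in visitors[i + 1:]:
            if math.hypot(x2 - x1, y2 - y1) > distance_limit:
                return True
    return False
```

In the installation, thresholds like these are exactly the sort of values exposed for real-time tweaking in the GUI.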

A GUI for this component allows for tweaking of these elements in real time (including the visitor number thresholds, distance tolerances, and a “speed limit” on the rate of change). The goal is for the visitor to be aware they are influencing the installation’s state without being certain of its mechanics, allowing them to experiment to uncover the conditions of change.

The rest of the interactivity occurs among the sonic and visual elements of the piece. The audio for each state is algorithmically layered from prerendered soundfiles in real time, creating a non-repeating aural component. Gradual volume changes, alternation and phasing of sonic events, and harmonic changes create “concentric” time scales that add variety over time to the general sonic atmosphere of each state.

2.1 channel audio
Max/MSP (environment control)
Processing, After Effects (animations)
MadMapper (projection mapping)
Arduino (LED lanterns)
Kinect and KVR Tracker (motion tracking)
Logic (sound design)

Reconstructing Ligeti's "Poème Symphonique" for 100 metronomes

This post is adapted from the paper presentation I gave at the György Ligeti and the Future of Maverick Modernity conference in Maccagno, Italy in July 2014. 

An iteration of the piece from the digital reconstruction environment, with animation by Josh Simmons.

The "analog" realization.

György Ligeti’s 1962 composition for 100 metronomes, Poème Symphonique, owes much of its success to its presentation as a ridiculous spectacle. But no piece in Ligeti’s catalogue better distills the composer’s fascination with chaos, order, and broken systems. The piece, notated as a short text score, lasts as long as it takes one hundred mechanical metronomes, all set in motion at the same time, to unwind and stop ticking. Thus, the shape and energy of the piece, if not the duration, is always the same: a tendency towards sparser texture and eventually silence as the metronomes unwind.

However, with the obsolescence of the mechanical metronome, gathering enough instruments for a performance proves difficult. Even in 1962 when the "instruments" were certainly more common, Ligeti devoted a large portion of the score to informing the presenter about how one hundred metronomes may be acquired – also going so far as specifying that each performance be dedicated to those who contributed their personal instruments or helped procure them:

On each [performance] the work is dedicated to the person (or persons) who have helped to bring about the performance through the contribution of instruments, by any means whatsoever, whether it be executive council of a city, one or more of the music schools, one or more businesses, one or more private persons.

This project, a reconstruction programmed in Max/MSP, attempted to model Ligeti’s famous piece. Though it certainly loses the absurd theatricality of the intended realization, it is simple to recreate the piece in real time and experience a simulation of the world that fascinated Ligeti throughout his musical career. Because Poème Symphonique is dependent on the eventual release of mechanical tension in its “instruments,” a main goal of the digital model was to imitate the behavior of mechanical metronomes as closely as possible.


Ligeti specifies a few conditions that provided a good starting point for programming the reconstruction.

The work is performed by 10 players under the leadership of a conductor . . . Each player operates 10 metronomes . . . The metronomes must be brought onto the stage with a completely run-down clockwork . . . the players wind up the metronomes . . .  at a sign from the conductor, all the metronomes are set in motion by the players.

Perfect: Max functions best as a modular programming environment, so each “player” of ten metronomes was treated as a replicable unit, controlled by a main “conductor” module. Here is a general schematic of the architecture of the program:

The “starter” module, analogous to the conductor, designates the length of the piece and one of the two performance modes, sending information to each of the players and their metronomes. The player modules then select a tempo for each metronome in the group and distribute them to the instruments depending on the performance mode selected. Each impulse of a metronome in tempo is sent to a synthesizer, which creates an audible “clack.”


A detail from the random tempo module for each "player" group of 10 metronomes.

The most variable element then is the tempo of each metronome. Ligeti specifies two slightly different modes of performance in his score, each presenting unique programming challenges, discussed below. In both performance modes, each metronome is assigned a tempo randomly selected from the thirty-nine standard metronome markings available, ranging from 40 to 208 beats per minute. Many mechanical metronomes have grooves in the length of the pendulum’s arm that preclude tempi in between these markings, and so this limitation was implemented.
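For illustration, that tempo constraint might be sketched in Python as follows. The table below is the conventional set of 39 Maelzel markings (40–208 BPM, with the standard step sizes between detents); the function name is my own.

```python
import random

# The 39 standard markings on a traditional Maelzel metronome, built from
# the conventional step sizes: 2 BPM (40-58), 3 (60-69), 4 (72-116),
# 6 (120-138), and 8 (144-208).
MARKINGS = (list(range(40, 60, 2)) + list(range(60, 72, 3)) +
            list(range(72, 120, 4)) + list(range(120, 144, 6)) +
            list(range(144, 209, 8)))

def random_tempi(n=10, rng=random):
    """Assign each of n metronomes a tempo snapped to a standard marking."""
    return [rng.choice(MARKINGS) for _ in range(n)]
```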

Fortunately, more refined mechanical metronomes do not decelerate as their wound spring loses tension. Therefore, there was no need to account for this variable in imitating a mechanical metronome: the piece could be simulated by assigning a set number of impulses to each metronome – “higher tension” is simulated as a greater number of impulses. When each metronome reaches the specified number of impulses, it shuts itself off. Credit goes to Dan Tramte for this particular idea – he made his own reconstruction in Max which he was gracious enough to send to me.

Performance Modes

There are two possible ways of winding the metronomes specified in the score. The first was pretty simple:

"All metronomes are wound equally tightly. In this version the chosen metronome numbers (oscillation speeds) wholly determine the time it will take for the several metronomes to run down: those which swing faster will run down faster, the others more slowly."

Equal tension was simulated by assigning the same number of impulses to each metronome, regardless of tempo. A module randomly selects a length for the piece (between 1 and 5 minutes in the original implementation). The smallest number of impulses possible is 40 (the slowest possible tempo setting multiplied by number of minutes), so the given “minutes” value is multiplied by 20 (since one cycle of a metronome has two impulses) and distributed to every metronome. Therefore, the metronomes set at faster tempi run down more quickly than those at a slower tempo since they all have to complete the same number of impulses. 
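As a Python sketch of that equal-tension arithmetic, treating each tempo marking as an impulse rate (impulses per minute), as the post does; the function name is illustrative:

```python
def equal_tension_runtimes(tempi, minutes):
    """Equal-tension mode: every metronome gets the same impulse budget.

    Per the post, the chosen length in minutes is multiplied by 20 to get
    the shared impulse count; treating a tempo marking as impulses per
    minute, each metronome's run-down time is impulses / tempo, so faster
    metronomes finish sooner.
    """
    impulses = minutes * 20
    return {tempo: impulses / tempo for tempo in tempi}
```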

The second performance option Ligeti specifies is a bit more complicated:

"The several metronomes of a group are wound unequally: the first of the 10 metronomes the tightest, the second a little less, the tenth, the least tightly. Care must be taken, however, that the winding and the regulation of the speeds of the several metronomes are carried out completely independently of each other. Thus the metronome in each group which has been most lightly wound must not be the fastest or the slowest in its oscillation."

In this case, both tempo and tension, or number of impulses, are variable.

"Tightness" values - the maximum number of clicks is read from this table and randomly distributed.


When this mode is selected, the maximum number of impulses is randomly chosen from a table (from between 40 and 100 clicks) and sorted in descending order. If these sorted numbers were distributed to the metronomes in the same way as the first mode, the last metronomes playing in each group would still be the ones with the slowest tempo.

Therefore, the sorted list of tension values rotates 5 positions to the right, so the median value leads. The sorted list of tempi for each group of 10 metronomes rotates 3 positions to the right. This ensures that the longest-lasting metronomes – those with the most tension – are neither the fastest nor the slowest in each group.
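The rotation trick can be sketched in a few lines of Python; the function names and example values are hypothetical, not taken from the Max patch.

```python
def rotate_right(values, k):
    """Rotate a list k positions to the right (last k items move to the front)."""
    k %= len(values)
    if k == 0:
        return list(values)
    return values[-k:] + values[:-k]

def assign_unequal_tension(tensions, tempi):
    """Unequal-tension mode, per the post: sort tensions descending, rotate
    the tension list 5 places and the sorted tempo list 3 places, then pair
    them off so the longest-lasting metronomes are neither the fastest nor
    the slowest in the group."""
    tensions = rotate_right(sorted(tensions, reverse=True), 5)
    tempi = rotate_right(sorted(tempi), 3)
    return list(zip(tempi, tensions))
```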

Delaying and Clacking

There is a performance limitation in the original piece – the players cannot start all ten of their metronomes at once. Ligeti anticipated this and offered a bit of advice:

"To [start the metronomes as] quickly as possible, it is recommended that several fingers of each hand be used at the same time. With a sufficient amount of practice, the performers will find that they can set 4 to 6 instruments in motion simultaneously."

When the piece is initiated, 4, 5, or 6 metronomes are started, and after a delay from 2-4 seconds, the rest of the group begins:

The delayer module.

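A sketch of that staggering in Python: the 4-to-6 split and the 2–4 second delay come from the text, while using a single shared delay for the remainder of the group is my simplification.

```python
import random

def staggered_start_times(rng=random):
    """Start times (in seconds) for one player's 10 metronomes: 4-6 start
    at time zero, and the rest follow after a single 2-4 second delay."""
    first_batch = rng.randint(4, 6)
    delay = rng.uniform(2.0, 4.0)
    return [0.0] * first_batch + [delay] * (10 - first_batch)
```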

Each metronome impulse triggers a pulse, output through a resonant filter and summed with a burst of filtered noise for color.


An individual metronome synthesizer.

The center frequency of this filter varies from one metronome to another, accounting for the different-sounding brands of metronomes that one would have to acquire for an actual performance! These synthesized impulses could certainly be replaced with samples of mechanical metronomes – and indeed, when trying to model physical metronomes in other ways, why exclude their sound? I wanted something a little closer to what we might hear in a digital metronome, as a sort of bridge between the mechanical and digital worlds. But the sound is not so important as the texture, and could easily be replaced with playback of sound files.


This project is simply an exercise in trying to capture the variable elements specified in the score so that one may recreate the piece dynamically with ease. Of course, the patch's output (in terms of the timing of the metronomes) is still a bit too polished to completely represent the chaotic conditions present in a performance with mechanical metronomes. What if the players, despite operating under the first mode, do not quite wind all the instruments equally? What if one player is quite slow in starting all the metronomes, or one of the arms on a donated instrument is a bit rusty and slows down quickly?

A primary assumption – and certainly a contributing factor to the piece’s traditional realization – is the imperfection of mechanical and human systems. A digital reconstruction by design cannot account for these sorts of microscopic but nonetheless significant variations.

Yet its concept is more relevant than ever to a current generation of sound artists. The limitations of technology and the flaws in their systems guide the whole concept of Poème Symphonique, as they do the work of many sound artists today. Artists like Jeff Carey, Tristan Perich, and Toshimaru Nakamura deliberately use digital artifacts, extremely limited digital sound, or noise created by signal chains to craft music. I often wonder how the ever-playful Ligeti would have worked in this vein if he had spent time experimenting with digital technology.

Upcoming Events

The events page has been updated with a few new performances:

Pratītya at the New York City Electroacoustic Music Festival in early June.

Gavin Goodwin will be premiering Foreign Masonry for baritone saxophone and live electronics at the ClaZel Theater in downtown Bowling Green on 29 April.

lenticule for amplified string quartet will receive a public reading by the Toledo Symphony String Quartet at BGSU on 30 April.

Pitching and Yapping


Ranjit Bhatnagar put out a call for works for an installation at QuBit's Machine Music event, calling specifically for 26-beat microworks that could be triggered on a Disklavier by each of the barks coming from the little toy chihuahua to the right.

He stipulated that notes could only occur on the yapped beats (quarter note = 344). Without having to worry about the dimension of rhythm, I thought it would be a perfect opportunity to explore algorithmically generated pitch! First though, some background on the progress I've already made with pitch selection in JALG.

Choosing Pitch Collections


I have made several modules that allow me to create pitch collections to feed into other modules. The simplest is [jalg_manualchord]. This allows me to either peck out a chord on the onscreen keyboard, or enter notes via MIDI keyboard. Another module, [jalg_randomrange], outputs a random value within a selectable range (0-127, more useful for velocity or control data, though it could be scaled to create a range of pitches).

Being able to input notes is great if I'm improvising at the keyboard, but it could take a long time to enter all the notes I want to use. A bounded range is simple enough, but it would allow for any chromatic series of pitches. And what about storing the pitch collections I like for later use? I decided to tackle these questions early on, with a module called [jalg_chordstore]. Using [jalg_chordstore], I can add multiple chords to a list, and then save it as an indexed collection. This is great if I'm coming up with chords to enter in on the fly via [jalg_manualchord] or something. But it can recall existing chords too!

I went to an old, undeveloped sketch in my notebook that was simply a sequence of 12 chords, deciding to use them to build the pitch material for some yap iterations. It was built in two phrases that were the same except for their last chord:

I entered the notes in Sibelius, exported the whole sequence as a MIDI file, and ran it through a little utility I made called [jalg_seqtochord] that takes the individual chords and stores them in a text file readable by [jalg_chordstore]. An example of how it works is below.

The original progression...mmm, tasty (click to expand)

[jalg_seqtochord] logic and an example of its output

Now for some yapping.

I've loaded the chords into a [jalg_chordstore]. I can feed it a number 0-11 (corresponding to each of the chords), and get the corresponding chord out. Here's an example of the sequence in order, triggered by a [counter] at the tempo specified by the yapping dog (repeated to fill up the entire 26 beats):

Yap 1 - the sequence played in order (click to enlarge)

I made some velocity adjustments to the notes so the whole thing wouldn't sound so robotic. I like the contracting nature of the chords in the original sequence, but nothing is really happening; it's simply playback. Maybe I can preserve some of that nice, contracting voicing in the original sequence, but add a little chance to it. The [drunk] object is perfect for this! [drunk] outputs a random number within a "step range." For example, if the initial number is 6 and the step range is 3, the next number [drunk] outputs can be anywhere from 3 to 9 (±3 steps). The next output is then ±3 from there.

Since I want to preserve voice leading, I'll make sure the module can move only up or down by one step. Setting an initial value randomly, away we go for 26 more yaps:

Drunk Yap - the sequence played in stepwise, random order (click to enlarge)
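As a rough Python analogue of the bounded walk above: the one-step limit and the 0-11 index range come from the text, while allowing a step of 0 (staying on the same chord) is my assumption.

```python
import random

def drunk_walk(start, steps, lo=0, hi=11, rng=random):
    """A [drunk]-style walk over chord indices: each output moves at most
    one position from the last, clamped to the lo..hi range."""
    path = [start]
    for _ in range(steps):
        step = rng.choice([-1, 0, 1])  # down one, stay, or up one position
        path.append(max(lo, min(hi, path[-1] + step)))
    return path

# A randomly chosen starting chord, then 26 more yaps.
sequence = drunk_walk(random.randint(0, 11), 26)
```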

Whoops! See those big leaps in the second system? It's because there is a large leap in the original sequence: the repetition of the phrase. We started on chord 2, but there's a leap from chord 6 to chord 7. This is just a function of the input material, though; I could thin the list down to only 7 chords (accounting for the different last chord), instead of duplicating 5 of them. This is something I'm going to have to keep in mind when inputting sequences into [jalg_chordstore]. Unless I want repetition, I have to make the source list as lean as possible.

Before the next iteration, I removed duplicate chords from the list - now only 7 chords to choose. Throwing voice-leading out the window, I used an [urn] object to select every chord before repeating any of them.

I randomized velocities of every note to give it a little variety. 26 beats allows for all seven chords 3 times (with 5 extras). Repetition of the sequence, in part or in full, is very likely from such a small number of chords.

Urn Yap - the sequence played randomly, no repeating. (click to enlarge)
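For reference, [urn]-style selection without repetition might be sketched like this in Python (function names mine): every chord index is drawn exactly once, in random order, before the urn refills.

```python
import random

def urn_sequence(n_chords, draws, rng=random):
    """[urn]-style selection: exhaust all chord indices in random order
    before any index repeats, refilling the urn as needed."""
    out, urn = [], []
    while len(out) < draws:
        if not urn:
            # Refill and reshuffle once every index has been used.
            urn = list(range(n_chords))
            rng.shuffle(urn)
        out.append(urn.pop())
    return out

# Seven deduplicated chords, drawn for all 26 yapped beats.
yaps = urn_sequence(7, 26)
```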

Octave Displacement

Every iteration so far has been in the middle of the piano. What if I want to preserve the chords, but distribute them across the keyboard? The Disklavier isn't bound by the limits of the human hand, so why not get a chord going across the whole piano at once? The concept of octave equivalency is natural to our perception and practice of music, so in effect, notes displaced by octaves are really articulating the same harmony. The module [jalg_octave] exploits this phenomenon to create textural variety.

[jalg_octave] takes a note in, and adds or subtracts random multiples of 12 to the incoming note: e.g. -12 = 1 octave lower; +24 = 2 octaves higher. The possible multiples added or subtracted can be adjusted by a range slider – up to four octaves above or below. If no range is defined (the slider has a difference of 0 between upper and lower bounds), the note will always be transformed by the single amount shown on the slider.

Example of [jalg_ranger] connected to [jalg_octave].

However, these transformations are always kept within a range defined by an input object such as [jalg_ranger]. If the transformed note falls outside the range, octaves are added or subtracted within [jalg_octave] until it lands inside the range specified. So the note that ultimately comes out of [jalg_octave] is not necessarily the transformation specified by the "random octave ops." This allows for fine control: for example, I could have notes in one module stay within only a few octaves, ensuring that a generated melody stays in an instrument's idiomatic register. But with [jalg_octave], pitches are not restricted to their original register!
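Here is a rough Python model of that displace-then-fold behavior. The MIDI range defaults (21-108, the 88-key span) and the function name are my own assumptions, not values from the patch.

```python
import random

def octave_displace(note, op_lo=-4, op_hi=4, range_lo=21, range_hi=108, rng=random):
    """Displace a MIDI note by a random whole number of octaves
    (op_lo..op_hi), then fold the result back into range_lo..range_hi by
    octaves, preserving pitch class, as [jalg_octave] does."""
    shifted = note + 12 * rng.randint(op_lo, op_hi)
    # Fold back inside the allowed range one octave at a time.
    while shifted < range_lo:
        shifted += 12
    while shifted > range_hi:
        shifted -= 12
    return shifted
```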

The whole contraption. I just chose arbitrary octave operation ranges for each chord member.

An unfortunate limitation of [jalg_octave] so far is that it can only take one note at a time. For this particular chord collection, I'll need to make six parallel versions of it, each taking one pitch output from the [jalg_chordstore]. I'm using [zl scramble] to shuffle the outgoing pitches, and then [cycle] to distribute each member to its own octave module. Each module will be able to output a pitch across the full keyboard (one [jalg_ranger] controls the range for all six). To really hear the effects of octave displacement, I'm going to play the chords in sequence again, like in the first iteration, but now with octave displacement.

Woah. That is some crazy stuff. It seems chaotic, but I can definitely hear the chord progression despite all the wild displacement. I think I might end up with something more palatable if I were to limit the ranges individually.

Jarring. Yappy. (click to enlarge)

I got 99 problems and a pitch ain't one

There's a lot more that will go into choosing pitch, let alone beginning to put pitch and rhythm choices together. [jalg_chordstore] and [jalg_octave] won't be the only modules I use to select pitches. But even just with them and a few other simple objects, there'd be more rules I could add to the mix, even for this simple chord progression. Off the top of my head:

-Chord 7 in the unduplicated chord can only be played after chord 6 has been played once. This would shuffle the chords but still end with the differing final chords of the original phrases.
-Chord 1 is never transposed.
-Chord 3 is twice as likely to move to Chord 4 as to any other chord.

If you happen to be in NYC later this week, you can catch some of the results from this blog post along with a few other iterations, and many other micro-pieces by other composers (not to mention many other cool Disklavier things)! Brought to you by a yapping chihuahua and an automated piano.

Rhythmic Considerations


There's a simplicity about rhythm in a line of monophonic music. There's no "up or down" in contour, no loud or soft, no infinitesimal shades of timbre. Discounting articulation and note length, a note simply attacks or doesn't. There is a one-dimensional interonset interval between one attack and the next. 

The complicated and expressive thing about rhythm is its context: expectation and familiarity. Certain genres are instantly recognizable by distinctive rhythms. What makes a rhythm make sense in the context of other rhythms? In a single line of music, rhythm happens linearly, and our ears can definitely pick out things that aren't "grammatical" to it.

This makes sense:

from Rimsky-Korsakov's Procession of the Nobles


But this doesn't:

A little bit different.

Nor does this:

(screenshot)

In the first example, the quintuplet that finishes the phrase is wildly out of place among the regular divisions of the beat that precede it, though it preserves its melody. In the second example, the available note durations are preserved, but beginning in the second measure, there seem to be no rules governing the progression of one kind of note value to another. Of course, in a larger context, these abnormalities could be points of articulation or recurring motives. But in general, the language of rhythm in a piece depends on a limited number of note values, and a limited range of movement between those values.

The first rhythm module I've built for JALG uses the highly flexible [prob] object to make decisions about rhythm. [Prob] creates weighted probability tables that move from one numbered "stage" to another when they receive an impulse. It takes messages in groups of three integers: stage 1, stage 2, weight. The "weight" is not a percentage; rather, it is divided by the total number of probability values for a stage. For example, the full probability table:

1:1 25
1:2 25

means that there is an equal chance of stage 1 repeating or going from stage 1 to stage 2.

So [prob] is quite flexible and values can easily be changed within each stage.  

Each one of these stages corresponds to a particular rhythmic value. Max allows the [metro] object to be tempo-synced, outputting an impulse to a globally-synced tempo. By defining the possible progressions of each stage, I can create a grammar for the rhythm modules to follow.
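The [prob] mechanics described above amount to a weighted Markov chain, which can be sketched in Python using the 1:1 25 / 1:2 25 table from earlier; the function names are mine.

```python
import random

def make_prob_table(triples):
    """Build a [prob]-style transition table from (stage, next_stage, weight)
    triples; weights are relative within each stage, not percentages."""
    table = {}
    for stage, nxt, weight in triples:
        table.setdefault(stage, []).append((nxt, weight))
    return table

def next_stage(table, stage, rng=random):
    """Choose the next stage with probability weight / total for its row."""
    choices, weights = zip(*table[stage])
    return rng.choices(choices, weights=weights)[0]

# The table from the post: stage 1 repeats or moves to stage 2, 50/50.
table = make_prob_table([(1, 1, 25), (1, 2, 25)])
```

Each impulse from the tempo-synced [metro] corresponds to one call of `next_stage`.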

How do I populate the list of rhythms? I could do it more or less randomly (how I've been operating up until now), or create a set of templates for rhythm. I'm going to make my first ones based on musical rhythms that I like from other composers. Here's a sample process for creating a probability table for a specific rhythm.

Steve Reich's Clapping Music...


I'm going to choose Steve Reich's iconic Clapping Music, famously built off of one rhythm (in a bout of characteristic nerdiness, I've also made this rhythm my custom vibration pattern for phone calls).

...with only interonset values.


Though this passage is notated with rests, it makes more sense to the rhythm module to treat an eighth note (8n) plus a rest as a quarter note (4n), leaving only interonset values. Pretty low rhythmic variety.

With the quarter note as the beat, there are eight distinct attacks. It's important to consider what I choose as the "controlling metronome," that is, a metronome synced with the global tempo that forces the [prob] to choose the next stage for the note impulse. If this value is set at (4n), a new note value can only be chosen every (4n), for example. I'll want to assign it to (8n), eighth notes, in this case.

The possible rhythm transition values are shown to the right. This corresponds to a permutation of 2 rhythmic values, taken 2 at a time, with repetition.

(2 * 2 = 4 possible transitions)

How does this manifest in the rhythmic pattern?

(screenshot)

Since each step moves linearly, I looked at the proportion of steps from one value to another and the proportion of each possible path a particular note could take.

Every possible ordering is represented in the music, though this wouldn't necessarily be the case as we'll soon see. It's three times more likely for (8n) to proceed to a quarter note than for (8n) to repeat. But it is equally likely for either (8n) or (4n) to repeat. Taking this rhythmic template, I ran the module for 8 bars of 4/4:

(screenshots)

The breakdown of note transitions is shown to the right. The model rhythm doesn't happen to be duplicated in these results.

Hmm, it doesn't quite match the distribution seen in Clapping Music, especially because of the subsequent strings of continuous eighth and quarter notes. This is to be expected from such a small sample size. If I were to let the module run for even a few minutes, the distribution would probably look much closer to its model rhythm. But even its first iteration is recognizably Reichian! (-ien?) I find myself craving the first or second half of the cell when its counterpart appears in the output.

Ok, how about for a slightly more complicated rhythm? Let's take a tiny bit of a great melody from Nikolai Rimsky-Korsakov's Procession of the Nobles:

(screenshot)

There is a greater variety of rhythmic values in this excerpt: quarter notes (4n), eighth notes (8n), dotted eighth notes (8nd), and sixteenth notes (16n). In the above example I eliminated all the rhythm progressions that didn't occur to save space (though there are 4 × 4 = 16 possible progressions). Some notes about this set:

  • (8n) is the most promiscuous of the note values, pulled by the gravity of any other note value. This makes sense given that it lies somewhere in the middle of the other possible durational values. In this music, notes never more than double or halve in durational value from one to the next, so (8n) is a bridge between them.
  • (8nd) are always followed by (16n), and (4n) are always followed by (8nd). The first of these rules is natural to the rhythmic language of the piece from which it is excerpted – syncopation never happens at so fine a rhythmic level as (16n) in this piece. Otherwise, the stately and steady meter could become disrupted very easily. The second is probably a function of how short an excerpt I've taken, though Rimsky-Korsakov DOES save the quarter notes for the gravitational pull of every sixth beat.
Here are a few bars of running the Procession blueprint through the rhythm module:
(screenshots)

And a breakdown of note...processions.

Still widely variant in a short excerpt, but much more consistent within steps. The sixteenth-note behavior is almost exactly the same as in the model rhythm. It's hard to compare the others quantitatively, because not every note combination occurred. Perhaps I should have waited for an excerpt that also created (4n), but this omission highlights the quarter note's plight: it can only be approached from (8n), so it only has an 8% chance of ever occurring. The output of this module doesn't recall the model at all, but at least the rhythms coming out of it would be grammatical to Rimsky-Korsakov.

And now because I just *have* to do it: a combination of both probability tables.

Procession of the Clapping Music

Procession of the Clapping Music

Now, I realize this data isn't normalized: there are a lot more rules in Procession's blueprint than in Clapping Music's, so it has undue influence on the total weights for each stage. That's one thing to account for in future attempts, and something I'll need to keep in mind regularly as I enhance the module's capabilities. A few other problems on the horizon:

-Tuplets – Max can handle triplet divisions but nothing else. I'll have to build a separate tuplet module that converts beat/tempo information into milliseconds so that beats and half-beats can be evenly divided into groups of 5, 7, 9, etc.
-Polyrhythm – I'll have to figure out how to configure rhythm control modules so they can reference divisions internally but still remain synced to a global clock. I could get some very cool polyrhythmic content out of this, with very strange divisions, if I could run parallel but referenced rhythm modules.
-Phrasing – I really don't want just an uninterrupted stream of notes. Though figuring out the possibilities of one rhythmic value moving to another is important, I need to extend the profile beyond one note: longer strings of notes, like words forming phrases, become probabilistically triggered. Perhaps this module could average and learn from its own output: the primacy of the first interpretation populates its future function, birthed and spun out in one direction by chance.
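For the tuplet case, the beat-to-milliseconds conversion is simple arithmetic; a hedged sketch (the function name is mine):

```python
def tuplet_ms(tempo_bpm, beats=1.0, divisions=5):
    """Convert a span of beats at a given tempo into the millisecond
    duration of each tuplet division, e.g. quintuplets over one quarter
    note at quarter = 120 give 100 ms per note."""
    beat_ms = 60000.0 / tempo_bpm  # one beat in milliseconds
    return beat_ms * beats / divisions
```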

Here's what one of these rhythm control modules looks like so far:

(screenshot)

I imagine it's going to take up a little more real estate on the screen soon.