Concat Performer (2024)
The Concatenative Performer is a MaxMSP device I built using the
FluCoMa library. The instrument lets me analyse a longer piece of
sound or music and resynthesise it based on audio descriptors. It
does so by slicing the source into small segments, each of which is
analysed for properties such as pitch, loudness, timbral colour and
spectral shape. The segments are then sorted and projected onto a 2D
plane, where proximity roughly corresponds to similarity.
This plane can then be navigated (at the moment I use an iPad running
TouchOSC); the instrument selects a certain number of points close to
the cursor and plays them back with a granular synthesiser. The
distance from the cursor to each point determines that point's
playback parameters (speed, filter, length).
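The nearest-neighbour lookup and the distance-to-parameter mapping can be sketched in a few lines of Python. The parameter names and scaling curves below are illustrative assumptions, not the mapping used in the actual patch:

```python
import math

def k_nearest(points, cursor, k):
    """Return the k corpus points closest to the cursor position."""
    return sorted(points, key=lambda p: math.dist(p, cursor))[:k]

def voice_params(point, cursor, max_dist=1.0):
    """Map a point's distance from the cursor to per-voice playback
    parameters. Names and ranges are hypothetical."""
    d = min(math.dist(point, cursor), max_dist) / max_dist  # normalise to [0, 1]
    return {
        "speed": 1.0 - 0.5 * d,                  # farther points play slower
        "cutoff_hz": 8000 * (1 - d) + 500 * d,   # farther points sound darker
        "grain_len_ms": 50 + 200 * d,            # farther points use longer grains
    }
```

Moving the cursor therefore changes both which voices sound and how each of them sounds, since every selected point gets its own parameter set from its own distance.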
As each point is a single voice, the voices can be mapped to a
multichannel sound system. Superimposed onto this corpus is another
map, which interpolates between different loudspeaker configurations,
effects, filters and so forth. This allows me to tie perceptual
gradients in the sounds to specific loudspeaker set-ups or to
interesting acoustic properties of a particular room. I use the
instrument in the studio when composing my fixed-media pieces, but
also to improvise in live settings, in solo concerts or other
constellations, often together with my other instrument, the Room.
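The superimposed map is essentially a preset interpolator. A minimal sketch, assuming simple linear crossfading between per-channel gain presets (the real device also interpolates effects, filters and so on, and the channel names here are invented):

```python
def interpolate_config(cfg_a, cfg_b, t):
    """Linearly interpolate between two loudspeaker gain presets,
    t = 0.0 gives cfg_a, t = 1.0 gives cfg_b."""
    return {ch: (1 - t) * cfg_a[ch] + t * cfg_b[ch] for ch in cfg_a}

# Two hypothetical presets for a quad set-up: all energy front vs. all rear.
front = {"fl": 1.0, "fr": 1.0, "rl": 0.0, "rr": 0.0}
rear  = {"fl": 0.0, "fr": 0.0, "rl": 1.0, "rr": 1.0}
```

Driving `t` from the same cursor that navigates the corpus is what lets a perceptual gradient in the sounds (say, dark to bright) coincide with a spatial gradient in the room.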
Above is the latest, reduced version of the performer: fewer functions, but faster, more stable and less CPU- and RAM-intensive, which makes it possible to run multiple performers at once, each with a different dataset loaded.
Below is an earlier version with more capabilities, but it is more CPU-intensive; my computer kept crashing. This version is still a work in progress.