Computer-Aided Composition of Musical Processes

Dynamic Music Generation with Formal Specifications

This example takes place in the context of automatic music generation systems that combine formal specifications of temporal structures with interactivity. Such systems find applications, for instance, in computer improvisation. The objective is to embed agents generating musical material in high-level time structures that are both formal and interactive. We consider the generation engine of ImproteK, an interactive music system dedicated to guided or composed human-computer improvisation. This system generates improvisations by guiding the navigation through an online or offline musical “memory” using a “scenario” structure. It consists of a chain of modular elements: a guided music generation model; a reactive architecture handling the rewriting of musical anticipations in response to dynamic controls; and synchronization mechanisms that adapt MIDI or audio rendering to a non-metronomic pulse during a performance.

Musical agents are implemented as visual programs (patches) in OpenMusic. The ImproteK generation model produces musical content (here, a sequence of notes) by navigating through a musical memory (here, a collection of recorded jazz solos) to collect sub-sequences matching the specification given by the scenario (here, a chord progression) and satisfying secondary constraints. Each agent embeds a reactive handler and is able to produce such musical sequences on demand. The generation process is computed by the engine when the score playhead reaches an agent. It performs two operations (a simplified sketch follows the list below):

  1. It generates some data (a MIDI sequence) according to the scenario (chord progression) and other generation parameters; the scheduling of this data is automatically handled by the system.
  2. It launches a new computation that generates another agent in the other voice.
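
As a rough illustration of the scenario/memory mechanism, the Python sketch below walks a chord-progression scenario and collects, for each scenario label, a matching slice of the memory, preferring contiguous continuations. The event structure, the generate function and the continuity heuristic are simplified assumptions made for this example, not the actual ImproteK generation algorithm.

  # Illustrative sketch only: scenario-guided navigation through a musical memory.
  from dataclasses import dataclass
  from typing import List, Optional

  @dataclass
  class Event:
      label: str        # harmonic label of this memory slice (e.g. "Dm7")
      content: object   # the musical material itself (e.g. MIDI notes)

  def generate(memory: List[Event], scenario: List[str]) -> List[Event]:
      """Collect memory events matching the scenario labels, favouring
      contiguous continuations of the previously selected position."""
      output: List[Event] = []
      last: Optional[int] = None
      for label in scenario:
          candidates = [i for i, e in enumerate(memory) if e.label == label]
          if not candidates:
              continue  # no match: this toy sketch simply skips the step
          if last is not None and (last + 1) in candidates:
              chosen = last + 1       # prefer continuing the memory linearly
          else:
              chosen = candidates[0]  # otherwise jump to a matching place
          output.append(memory[chosen])
          last = chosen
      return output

  # Toy memory and a ii-V-I scenario segment:
  memory = [Event("Dm7", "lick-1"), Event("G7", "lick-2"),
            Event("Cmaj7", "lick-3"), Event("G7", "lick-4")]
  print([e.content for e in generate(memory, ["Dm7", "G7", "Cmaj7"])])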

The preliminary evaluation of the control patch builds the two instances of the ImproteK generation engine, embeds them within the two interconnected agents, and adds the first agent on track 1 to start the sequence. The rest of the process unfolds automatically at rendering time, computing the sequences alternately from the two improvisation agents. To increase the musicality of this example, an accompaniment track is pre-computed (using a third instance of the generative model), and each agent informs the other of its generated data to add continuity to the traded solos.
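
The alternation itself can be pictured with the following hypothetical sketch: two agents share a toy generation function, each one generating over its segment (taking the partner's last output into account for continuity) and then handing over to the other voice. In the actual example this logic is expressed graphically in the OpenMusic control patch and maquette rather than in code, and the names used here (Agent, toy_engine, play) are illustrative assumptions.

  # Hypothetical sketch of two improvisation agents trading solos in alternation.
  class Agent:
      def __init__(self, name, engine, partner=None):
          self.name = name
          self.engine = engine      # a generation engine instance (see sketch above)
          self.partner = partner    # the agent playing the other voice
          self.last_output = None

      def play(self, segment):
          # 1. Generate (and, conceptually, schedule) a sequence for this segment,
          #    informed by what the partner played last.
          context = self.partner.last_output if self.partner else None
          self.last_output = self.engine(segment, context)
          print(f"{self.name}: {self.last_output}")
          # 2. Hand over to the agent on the other voice for the next segment.
          return self.partner

  def toy_engine(segment, context):
      return f"solo over {segment}" + (f", answering '{context}'" if context else "")

  agent1 = Agent("track 1", toy_engine)
  agent2 = Agent("track 2", toy_engine, partner=agent1)
  agent1.partner = agent2

  current = agent1
  for segment in ["chorus 1", "chorus 2", "chorus 3", "chorus 4"]:
      current = current.play(segment)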

Control of the auditory distortion product synthesis software

In this example, we consider a score driving the long-term control of an external system synthesizing sound in real time. It is based on Alex Chechile’s compositional research on synthesis techniques for evoking auditory distortion products, which are sounds generated within the listener’s ears by combinations of acoustic primary tones. In this context, multiple oscillators must be controlled simultaneously at specific frequencies, with precise ratios between the voices.
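
As a reminder of the psychoacoustics involved, the sketch below computes, for a pair of primary tones defined by a base frequency and a frequency ratio, the two combination tones classically reported as most prominent: the quadratic difference tone f2 - f1 and the cubic difference tone 2*f1 - f2. The function and the example values are purely illustrative and are not taken from the actual patch.

  # Illustrative only: predicted distortion products for two primary tones.
  def distortion_products(f1: float, ratio: float) -> dict:
      """Return the primaries and the classic combination tones for f2 = f1 * ratio."""
      f2 = f1 * ratio
      return {"f1": f1,
              "f2": f2,
              "quadratic difference tone (f2 - f1)": f2 - f1,
              "cubic difference tone (2*f1 - f2)": 2 * f1 - f2}

  # Example: two primaries a just major third apart (ratio 5/4) around 2 kHz.
  for name, freq in distortion_products(2000.0, 1.25).items():
      print(f"{name}: {freq:.1f} Hz")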

In the control patch, an initial object is set up with a list of frequencies, a duration, a tempo, and a port and address allowing OpenMusic to communicate with Max, which runs custom software and a bank of oscillators. During playback, mouse input (implemented using the interface instantiated in the control patch) triggers the generation and scheduling of additional curves in the maquette, each of which is a multiple of the original sequence. The duration of each additional track is determined by the time elapsed between the start of playback and the trigger input, and decreases with each subsequent track.

Through OSC, OpenMusic sends frequency information to the Max oscillators, and Max sends timing information to OpenMusic. In addition to the sequences of frequency data sent to Max, the clock object examines the number of currently active tracks in the maquette and outputs a variable used in Max to determine the ratio between the additional pitches. Conversely, four continuous streams of EEG data coming from Max change the positions of tracks 2 to 5 in the maquette through four similar agents, resulting in sequences that move ahead of and behind time during playback.
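
The OSC exchange can be sketched as follows, here using Python and the python-osc library purely for illustration (the actual example relies on OpenMusic’s OSC communication with Max); the port number and the OSC addresses "/freqs" and "/ratio" are hypothetical.

  # Hedged sketch of the OpenMusic -> Max direction of the OSC exchange.
  # Port and OSC addresses are assumptions; the real patch defines its own.
  from pythonosc.udp_client import SimpleUDPClient

  client = SimpleUDPClient("127.0.0.1", 7400)   # host and port of the Max patch (assumed)

  base_freqs = [200.0, 300.0, 500.0]            # the initial list of frequencies
  active_tracks = 3                             # as counted by the clock object

  # Send the current frequency list to the oscillator bank in Max...
  client.send_message("/freqs", base_freqs)
  # ...and the variable derived from the number of active tracks, which Max
  # uses to set the ratio between the additional pitches.
  client.send_message("/ratio", active_tracks)

The reverse direction (the timing information and the EEG streams coming from Max) would be handled symmetrically, with an OSC server listening on the OpenMusic side.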

 

