Antescofo workshop @PAW 2018

Context

The Antescofo system couples machine listening and a specific programming language for compositional and performative purposes. It allows real-time synchronization of human musicians with computers during live performance, especially in the context of mixed music (the live association of acoustic instruments played by human musicians and electronic processes run on computers).

During a live performance, musicians interpret the score with precise and personal timing: score time (in beats) is mapped onto physical time (measured in seconds). For the same score, different interpretations lead to different temporal deviations, and the musicians' actual tempo can vary drastically from the nominal tempo marks, depending on the individual performers and the interpretative context. To be executed in a musical way, the electronic processes should follow the temporal deviations of the human performers.

Achieving this goal starts with score following, the task of aligning a performance (usually through its audio stream) with the music score, automatically and in real time. However, score following is only the first step toward musician-computer interaction: it enables such interactions but gives no insight into the nature of the accompaniment or the way it is synchronized.

Antescofo is thus built on the strong coupling of two components: a listening machine that tracks, in real time, the performer's position and tempo in the score, and a reactive engine that schedules and executes the electronic part accordingly.

This way, the programmer/composer describes the interactive scenario in an augmented score, where musical objects stand next to computer programs, specifying the temporal organization of their live coordination. During each performance, the human musicians “implement” the instrumental part of the score, while the system evaluates the electronic part, taking into account the information provided by the listening module.
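As a minimal sketch (the receivers play_file, harm_level and reverb are hypothetical Max/PD receivers, not part of Antescofo itself), an augmented score interleaves the instrumental events to be followed with the electronic actions to be scheduled:

    ; instrumental part: events tracked by the listening machine
    NOTE C4 1.0
        play_file "intro.aif"        ; fired when the note is detected

    NOTE E4 2.0
        0.5 harm_level 0.7           ; sent half a beat after the event, in score time

    CHORD (G4 B4) 2.0
        group fx {                   ; a group of actions forming one electronic voice
            0.0 reverb 1
            1.0 reverb 0
        }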

Content

The presentation will focus on the Antescofo real-time programming language. This language is built on the synchrony hypothesis, where atomic actions are instantaneous; Antescofo extends this approach with durative actions. This approach, and its benefits, will be compared with other approaches in the field of mixed music and audio processing.
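A sketch of the distinction (the level receiver is an assumption): sending a message is an atomic action that takes no time, whereas a curve is a durative action whose effect unfolds over several beats:

    NOTE A4 4.0
        level 0.0                    ; atomic: evaluated instantaneously
        curve fadeIn
              @grain  := 0.05s,
              @action := { level $y }
        {
            $y {
                { 0.0 }
                4.0 { 1.0 }          ; ramp from 0 to 1 over 4 beats of score time
            }
        }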

In Antescofo, as in many modern languages, processes are first-class values. This makes it possible to program complex temporal behaviors in a simple way, by composing parameterized processes. Beyond processes, Antescofo actors are autonomous, parallel objects that respond to messages; they are used to implement parallel electronic voices. Temporal patterns can enhance these actors so that they react to the occurrence of arbitrary logical and temporal conditions.
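For instance (a sketch: the led receiver and the $energy variable are assumptions), a parameterized process is defined once with @proc_def and can be instantiated several times, each instance running in parallel; a whenever reacts each time its condition becomes true:

    ; a parameterized process definition
    @proc_def ::pulse($period)
    {
        loop $period {               ; repeats every $period beats until aborted
            led 1
            ($period / 2) led 0      ; switch off halfway through the period
        }
    }

    ::pulse(1.0)                     ; two parallel instances of the same process
    ::pulse(0.25)

    whenever ($energy > 0.8) {
        print "energy peak at " $NOW
    }

Actors are declared in a similar spirit with @obj_def, and temporal patterns with @pattern_def.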

During this lecture, we will explain how Antescofo pushes the recognition/triggering paradigm, currently predominant in mixed music, toward the more musically expressive paradigm of synchronization, where “time-lines” are aligned and synchronized following performative and temporal constraints.

Synchronization strategies are used to create a specific time-line that is “aligned” with another time-line. Primitive time-lines include the performance of the musician on stage, as tracked by the listening machine, but they may also include any kind of external process that uses a dedicated API to inform the reactive engine of its own passing of time.
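A sketch over hypothetical receivers (pad_chord, pluck): a @loose group is scheduled following the tempo inferred by the listening machine, while a @tight group additionally re-aligns each of its delayed actions on the nearest instrumental event:

    NOTE C4 4.0
        group pad @loose {           ; follows the inferred tempo only
            0.0  pad_chord 1
            2.0  pad_chord 2
        }

    NOTE G4 2.0
        group arpeggio @tight {      ; each delayed action snaps to the closest event
            0.0  pluck 67
            2.0  pluck 69            ; re-aligned on the NOTE A4 below
        }
    NOTE A4 2.0
    NOTE B4 2.0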

Material

Antescofo on the IRCAM forum (register, it’s free):