

Antescofo workshop @PAW 2018

Context

The Antescofo system couples machine listening and a specific programming language for compositional and performative purposes. It allows real-time synchronization of human musicians with computers during live performance, especially in the context of mixed music (the live association of acoustic instruments played by human musicians and electronic processes run on computers).

During live performance, musicians interpret the score with precise and personal timing: score time (in beats) is mapped onto physical time (measurable in seconds). For the same score, different interpretations lead to different temporal deviations, and the musicians' actual tempo can vary drastically from the nominal tempo marks. These deviations depend on the individual performers and on the interpretative context. To be executed in a musical way, electronic processes should follow the temporal deviations of the human performers.

Achieving this goal starts with score following, a task defined as the real-time automatic alignment of a performance (usually captured as an audio stream) onto the music score. However, score following is only the first step toward musician-computer interaction; it enables such interactions but gives no insight into the nature of the accompaniment or the way it is synchronized.

Antescofo is built on the strong coupling of machine listening and a specific programming language for compositional and performative purposes:

  • The Listening module of Antescofo software infers the variability of the performance, through score following and tempo detection algorithms.
  • And the Antescofo language
    • provides generic and expressive support for the design of complex real-time interaction scenarios between human musicians and computer media
    • makes explicit the composer's intentions on how computers and musicians are to perform together (for example, should they play in a “call and response” manner, should the musician take the lead, etc.).

This way, the programmer/composer describes the interactive scenario with an augmented score, where musical objects stand next to computer programs, specifying temporal organizations for their live coordination. During each performance, human musicians “implement” the instrumental part of the score, while the system evaluates the electronic part taking into account the information provided by the listening module.
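As a minimal sketch (not taken from the workshop material), an augmented score fragment could look as follows; the receiver name "synth", the pitches and the velocity are only placeholders:

  BPM 60

  ; instrumental part: a C4 lasting one beat, tracked by the listening module
  NOTE C4 1.0
     ; electronic part: actions triggered when the event is detected
     print "C4 detected"
     synth 60 100                ; message sent to a Max/PD receiver named "synth"

  NOTE D4 1.0
     0.5 print "half a beat after the D4"   ; actions may be delayed, in beats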

Content

The presentation will focus on the Antescofo real-time programming language. This language is built on the synchrony hypothesis, where atomic actions are instantaneous; Antescofo extends this approach with durative actions. This approach, and its benefits, will be compared to other approaches in the field of mixed music and audio processing.
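To illustrate the distinction, here is a sketch (assuming a Max/PD receiver named "level"): the print message is atomic and logically instantaneous, while the curve is a durative action that unfolds over two beats of score time.

  NOTE E4 2.0
     print "attack"                          ; atomic action: takes no logical time
     curve fade @grain := 0.1, @action := level $v
     {
        $v
        {
           { 0.0 }
           2.0 { 1.0 }                       ; $v is interpolated from 0 to 1 over 2 beats
        }
     }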

In Antescofo, as in many modern languages, processes are first-class values. This makes it possible to program complex temporal behaviors in a simple way, by composing parameterized processes. Beyond processes, Antescofo actors are autonomous parallel objects that respond to messages and are used to implement parallel electronic voices. Temporal patterns can enhance these actors so that they react to the occurrence of arbitrary logical and temporal conditions.
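A sketch of two of these constructs, with invented names (the "synth" receiver and the $energy variable are placeholders): a parameterized process that plays a small arpeggio, and a whenever that reacts each time a condition becomes true.

  ; a process is a first-class value: defined once, instantiated with parameters
  @proc_def ::arpeggio($base, $delta)
  {
     synth $base
     0.5 synth ($base + $delta)
     0.5 synth ($base + 2 * $delta)
  }

  NOTE G4 1.0
     ::arpeggio(67, 4)             ; instantiate the process on this event

  ; reaction triggered each time the condition becomes true
  whenever ($energy > 0.8)
  {
     print "threshold crossed at" $NOW
  }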

During this lecture, we will explain how Antescofo extends the recognition/triggering paradigm, currently predominant in mixed music, toward the more musically expressive paradigm of synchronization, where “time-lines” are aligned and synchronized following performative and temporal constraints.

Synchronization strategies are used to create specific time-lines that are “aligned” with another time-line. Primitive time-lines include the performance of the musician on stage, followed by the listening machine, but may also include any kind of external process using a dedicated API to inform the reactive engine of its own passing of time.
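For instance, a synchronization strategy can be attached to a group as an attribute. In the following sketch ("harmonizer" is an invented receiver), @tight re-aligns each action on the nearest instrumental event, whereas @loose would only follow the extracted tempo.

  NOTE A4 4.0
     group accompaniment @tight
     {
        harmonizer 69
        1.0 harmonizer 72      ; delays are in beats, stretched by the detected tempo
        1.0 harmonizer 76
     }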

Material

Antescofo on the IRCAM forum (register, it’s free):

  • download page for the Max and PD objects. The non-real-time standalone is not distributed there, but can be found in the additional material of the dedicated repository listed below.
    • source of the examples "paw examples.zip"
    • slides of an Antescofo tutorial at SMC 2017
    • quick start in Antescofo
    • latest non-real-time executable: standalone.zip
    • latest Debian docker-compiled executable: standalone_docker.zip (does not include compilation or differential curves)
    • Antescofo for PD-Linux, version 0.92
    • latest Max object (includes differential curves and compilation): antescofo~.mxo.zip
    • documentation on differential curves (not in the online documentation): Differential Curve - AntescofoDoc.pdf
    • documentation on compilation (not in the online documentation): Compilation - AntescofoDoc.pdf
    • seven ways (by José Echeveste) to do Piano Phase (by Steve Reich): piano_phase.zip
      1. with 2 nested loops; the time shift for the second loop is computed.
      2. same, but the time shift is obtained through the tempo (see the sketch after this list).
      3. a recursive process is used to play a note; the notes are stored in a tab and the tempo is inherited.
      4. Antescofo plays the second voice following an external musician. 4bis: same, but the shift is controlled by a variable updated with the space bar.
      5. SuperVP (a phase vocoder) is used to modulate the playback. The curve is linked to the first round in the audio file.
      6. hybrid approach between 4bis and 5.
      7. as in 4bis and 6, the shift is defined by the space bar. The audio of the first voice is recorded in a buffer and SuperVP reads this buffer for the second voice. A NIM is used to record the mapping between position in beats and time in seconds for the buffer playback.
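Below is a very reduced sketch of the two-voice idea behind the first versions (it is not the distributed code): two loops play the same abbreviated pattern, the second one with a slightly faster local tempo, so that it progressively drifts out of phase. The receivers "piano1" and "piano2" and the 4-note pattern are placeholders for Reich's 12-note pattern.

  NOTE C4 1.0 start
     loop first 2.0
     {
        piano1 64
        0.5 piano1 66
        0.5 piano1 71
        0.5 piano1 73
     }
     loop second 2.0 @tempo := 74     ; local tempo slightly faster than the followed one
     {
        piano2 64
        0.5 piano2 66
        0.5 piano2 71
        0.5 piano2 73
     }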
 

