More videos can be found by searching Antescofo on GoogleVideo!
A documentary movie on Antescofo, produced by Inria and Pierre-Olivier Gaumin, describing how the system works and featuring three world premiere pieces using Antescofo by composers Philippe Manoury, Ichiro Nodaira and Marco Stroppa.
A general lecture on musical synchrony and Antescofo, given during the 7 Keys to the Digital Future lecture series curated by Prof. Gérard Berry of the Collège de France and co-organized by The Royal Society of Edinburgh.
This one-hour lecture preceded one by Prof. Berry on Synchronous Programming, which can be viewed directly from the event website.
This lecture situates computer music programming within the realtime synchronous programming paradigms of computer science, and discusses Antescofo's current state and future perspectives with regard to the challenges that music poses to synchronous programming.
This page documents the evolution of Score Following at Ircam since 1983, focusing on its artistic use since its inception. Entries appear in reverse chronological order:
Following the relative robustness of Antescofo in live detection, the musical goals of such systems became increasingly explicit. This led to the extension of the score following paradigm to Synchronous Programming. Electronic events are now polyphonic programs, running concurrently and in parallel with the performer, and written in relative time. The aim is thus to bridge the gap between the performative and compositional aspects of computer music. Antescofo's synchronous programming language has since evolved to address more composers' demands.
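The relative-time idea above can be illustrated with a toy sketch (this is not Antescofo's actual language, and all names are hypothetical): electronic actions are written in beats, and the follower's live tempo estimate maps them to wall-clock time, so the same score synchronizes with any performer.

```python
# Toy sketch of relative-time scheduling (not Antescofo's actual language):
# actions are written in beats; the decoded live tempo maps beats to seconds.

def schedule(actions_in_beats, tempo_bpm):
    """Map (name, beat offset) pairs to (name, seconds) at a given tempo."""
    seconds_per_beat = 60.0 / tempo_bpm
    return [(name, beats * seconds_per_beat) for name, beats in actions_in_beats]

# Hypothetical electronic actions, in beats relative to a cue:
actions = [("harmonizer_on", 0.0), ("sample_trigger", 1.5), ("fade_out", 4.0)]
print(schedule(actions, tempo_bpm=120))  # 0.5 s per beat
print(schedule(actions, tempo_bpm=60))   # 1.0 s per beat: same score, slower
```

The point of writing the score in beats rather than seconds is exactly the one made in the paragraph above: the composition stays fixed while the performance time of each action is decided live.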
The musical and scientific goals of score following differ: the scientific goal requires exact alignment, whereas the musical one demands access to live interpretation parameters so that electronic actions can be undertaken in synchrony with live performers. In late 2007, in collaboration with composer Marco Stroppa, score following moved to an anticipatory paradigm, decoding both position and tempo in realtime and anticipating performance parameters as musicians do. These considerations led to the development of Antescofo, which has become the standard score following platform in many pieces involving live electronics.
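A highly simplified sketch of the anticipatory idea follows. Real tempo decoding in Antescofo is far more sophisticated; this toy merely shows the principle of updating a tempo estimate from observed note onsets and using it to predict when the next event should occur. The function names, smoothing scheme and parameter values are all illustrative assumptions.

```python
# Toy illustration of anticipatory following: update a tempo estimate from
# inter-onset intervals, then predict the time of the next score event.

def update_tempo(tempo_bpm, ioi_seconds, beats_elapsed, alpha=0.3):
    """Blend the current tempo estimate with the tempo implied by the
    latest inter-onset interval (simple exponential smoothing)."""
    observed_bpm = 60.0 * beats_elapsed / ioi_seconds
    return (1.0 - alpha) * tempo_bpm + alpha * observed_bpm

def predict_next_onset(last_onset_time, tempo_bpm, beats_to_next):
    """Anticipate when the next score event should occur at this tempo."""
    return last_onset_time + beats_to_next * 60.0 / tempo_bpm

# A quarter note arrived after 0.6 s instead of the 0.5 s expected at 120 bpm:
tempo = update_tempo(120.0, ioi_seconds=0.6, beats_elapsed=1.0)
print(round(tempo, 1))                       # → 114.0 (performer slowed down)
print(predict_next_onset(10.0, tempo, 1.0))  # next beat anticipated ~10.53 s
```

The key design point, as in the paragraph above, is that the system does not merely react to detected events: it extrapolates the performer's tempo forward so that electronic actions can be launched in synchrony rather than slightly late.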
The late 1990s saw the advent of probabilistic methods for speech and audio processing. An ideal score follower should take into account uncertainties due to performance or machine perception, thus favoring probabilistic methods for robustness. This led to a new generation of score followers based on Hidden Markov Models (HMMs), begun in 2000 at Ircam by Nicola Orio and Diemo Schwarz, which led to the suivi~ module, enhanced with an artificial training system. This system was first employed in a concert situation in 2005 for a performance of Pierre Boulez's piece …explosante fixe… for flute, orchestra and electronics.
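To make the HMM idea concrete, here is a minimal self-contained sketch (not the suivi~ implementation): score events form the states of a left-to-right HMM, noisy per-frame pitch detections are the observations, and the Viterbi algorithm decodes the most likely score position for each frame. The pitch values, transition and emission probabilities are all illustrative assumptions.

```python
import math

def viterbi_follow(score, frames, p_match=0.8, p_stay=0.5):
    """Decode the most likely score position for each audio frame.

    score  : expected pitches (MIDI numbers), one per score event
    frames : detected pitch per frame (possibly noisy)
    Left-to-right HMM: from state i you may stay at i or advance to i+1.
    """
    n = len(score)
    NEG = float("-inf")

    def emit(i, pitch):  # crude emission model: match vs. mismatch
        return math.log(p_match if pitch == score[i] else (1.0 - p_match) / 11)

    # The performance starts at the first score event.
    dp = [emit(0, frames[0])] + [NEG] * (n - 1)
    back = []
    for pitch in frames[1:]:
        new, ptr = [NEG] * n, [0] * n
        for i in range(n):
            stay = dp[i] + math.log(p_stay)
            adv = dp[i - 1] + math.log(1.0 - p_stay) if i > 0 else NEG
            if stay >= adv:
                best, ptr[i] = stay, i
            else:
                best, ptr[i] = adv, i - 1
            new[i] = best + emit(i, pitch)
        dp, back = new, back + [ptr]
    # Backtrack from the most likely final state.
    pos = max(range(n), key=lambda i: dp[i])
    path = [pos]
    for ptr in reversed(back):
        pos = ptr[pos]
        path.append(pos)
    return path[::-1]

score = [60, 62, 64, 65]               # C D E F
frames = [60, 60, 62, 99, 64, 65, 65]  # one mis-detected frame (99)
print(viterbi_follow(score, frames))   # → [0, 0, 1, 2, 2, 3, 3]
```

Note how the mis-detected frame is absorbed rather than derailing the follower: the probabilistic model trades off emission evidence against transition structure, which is exactly the robustness argument made above for moving to HMMs.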
With the advent of dedicated audio processing hardware (such as the 4X and the ISPW at Ircam), the first pitch-based score followers emerged, this time taking audio as input and following pitches in the music score. Many early real-time electronic pieces used this technology for performances of live electronics with musicians playing a music score. Historical examples of this development are Philippe Manoury's Jupiter for flute and live electronics, composed originally for MIDI flute and ported to audio in 1992, and En Echo for voice and live electronics (left video). Jupiter is considered the first realtime piece composed in MaxMSP.
The video on the right shows the natural next step from this paradigm, using stochastic methods pioneered by Dannenberg and Grubb at Carnegie Mellon.
With the advent of the MIDI standard in the 1980s and its integration into commercial musical instruments, score followers were adapted to accept symbolic input in MIDI format. Many early musical examples of score following used the MIDI version. A historical musical example of this development is Philippe Manoury's Pluton for piano and live electronics. Today, polyphonic score followers working directly on live audio tend to be used for such setups.
Score Following research was initiated at Ircam by Barry Vercoe in 1983, and also by Roger Dannenberg at CMU. Due to computing limitations at the time, inputs were monophonic. In the case of Vercoe's synthetic performer (video on the left), Larry Beauregard, flutist of the Ensemble Intercontemporain, fitted sensors on his instrument at Ircam to provide symbolic input to the machine, enhancing audio pitch detection based on a filter bank. Dannenberg demonstrates his 1985 software (right) on a Commodore Amiga, where an external device from IVL Technology converts monophonic audio to MIDI. Notice that the output can jump both forwards and backwards in the score to synchronize in extreme cases.