We are seeking a strong candidate for a PhD starting in September 2013 at Ircam, within the Musical Representations team and the INRIA team Mutant.

Title: Formal methods for the static analysis of real-time interactive music systems

Research context: This PhD project will be hosted in a joint project-team between INRIA (Paris-Rocquencourt research unit), Ircam (Musical Representations Group) and CNRS, as part of the STMS Mixed Research Unit (UMR 9912) between Ircam, CNRS and University Pierre & Marie Curie (UPMC).

The computer music research project of our team is centered on two distinct themes: real-time music information processing and retrieval, and synchronous reactive languages for computer music. Coupling the two is a necessity in a musical context. The first incarnation of this coupling is Antescofo, an award-winning software and programming language used worldwide today for the real-time synchronous composition and performance of mixed electronic music.

Computer Music is a research discipline dealing with various domains of computer science related to audio: signal processing, object recognition, machine learning, computer languages, physical modeling, and perception and cognition, to name a few. In its practice, it brings together various disciplines with the aim of making music with the aid of (or for) computers. Among the long practical traditions in computer music, interactive music systems have gained tremendous attention in both the research and artistic communities during the past ten years. Within this paradigm, the computer is brought into the cycle of musical creation as an intelligent performer, equipped with a listening machine capable of analyzing, coordinating and anticipating its own actions and those of other musicians within a musically coherent and synchronous context. A simple, commercial facet of such systems is the automatic accompaniment system, where the computer handles machine listening and the rendering of the instrumental/orchestral part while the musician(s) play the vocal/solo sections. In modern practices, the computer's role goes beyond rendering pre-recorded accompaniments: these are replaced by concurrent, synchronous, real-time programs defined during the compositional phase by artists and programmers.

Our team integrates and federates the latest findings of the real-time processing and synchronous programming literature within the computer music community, thanks to the internationally recognized platform provided by Ircam in the field. Ircam is a center for research, development and artistic production in the domain of sound and music. As the largest center in the world dedicated to music research in coordination with science and technology, Ircam has set several standards for the computer music industry, such as the Max system, used worldwide, and OpenMusic, an environment for computer-assisted composition.

PhD research work description: The theme of this project is the use of formal models and methods developed for the verification of complex systems (in transportation, production, communication…) in the context of systems involving a complex interaction between humans and computers driven by a temporal scenario.

Context. The target application is the interaction between humans and computers in mixed music performance, defined as the live association of human performers with computers. This application, although specific, is a very good example of the complex relationships that must be managed in a system involving feedback and timely interactions with "humans in the loop". This type of application is gaining importance with the development of ubiquitous computing, augmented reality, smart spaces (smart buildings, smart cities) and, more generally, all cyber-physical systems, which are based on a strong coupling of information-processing units with physical components and humans.

The purpose of interactive music systems is the live execution of computerized actions along with a human performance. Besides live execution, such systems are responsible for the real-time synchronization of computer processes with human musicians. The central problem in this context is the coupling of two important research themes concerning multimedia systems:

  1. artificial perception (such as listening machines: the recognition and extraction of data in real time from a musical audio signal);
  2. the specification of complex temporal scenarios, here through the development of a language for authoring mixed music scores that defines the real-time interaction between musicians and machines, using concepts originally developed in the field of synchronous programming of industrial real-time systems (a score fragment is sketched below).
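
To give a concrete idea of such a language, here is a hypothetical score fragment in an Antescofo-like notation (the event names, delays, action names and the @loose attribute are illustrative assumptions, not the definitive syntax of the language):

    NOTE C4 1.0                     ; expected instrumental event (pitch, nominal duration in beats)
        harmonizer-on               ; electronic action fired when the event is detected
    NOTE E4 0.5
        GROUP @loose                ; delayed actions re-aligned on subsequent detections
            0.25 set-cutoff 800     ; action fired a quarter of a beat after the event
            0.25 set-cutoff 400
    NOTE G4 2.0
        harmonizer-off

Such a score is a reactive program: each detected event may trigger immediate or delayed electronic actions, and synchronization attributes select how the electronic timeline is realigned with the performer's actual tempo.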

Interactive music systems share two essential characteristics with industrial systems: a critical dimension (insofar as they are used in concerts and must not fail) and real-time constraints. Regarding the latter, for these systems as for many critical industrial systems, time is not simply a measure of performance but a semantic property: it is not a mere accident of implementation, and timeliness must be a strong property of both the system and its execution. In addition, these are open and reactive systems: during a performance they react to events coming from an unpredictable environment (the musician). Indeed, by definition, the musician has a degree of freedom (usually not quantified) in the interpretation of durations. Predicting and reasoning about the possible performances and interactions in such a mixed live medium is challenging.

Objectives. Compared to traditional polyphonic scores, where all events are specified in extenso, mixed music scores are real programs, containing all the instructions for the control of the electronic parts, synchronization with the instrumental parts, error handling… Such problems are usually left implicit when writing acoustic music. One objective of this project is to contribute to the design of a domain-specific language with appropriate constructs for writing complex temporal scenarios in mixed music scores, and to the development of techniques for assisting artists using this language.

More precisely, we shall develop and experiment with techniques for the static analysis and testing of mixed music scores, in order to explore all the possible live behaviors and to ensure that they are consistent with what is expected in all circumstances. A related problem is to give a quantitative estimate of the robustness of a score against the various possible instrumental interpretations, taking into account in particular the temporal variations and errors of the interpreter and of the artificial perception module. Our goal is not only to provide formal proofs of the correctness of a system, but also to give users feedback on the outcome of the static analysis that is relevant in the context of music authoring and the expected performance. In other words, we should give composers appropriate information about the critical aspects of their mixed music pieces, especially those related to the interaction between the performers and the machine, so that they can improve them from a functional point of view.
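As a toy illustration of what exploring all possible live behaviors can mean, the following sketch (entirely ours: the three-event score, the deviation bound and the discretization grid are assumptions, not Antescofo's actual timed model) enumerates discretized performances and computes the worst-case misalignment between delayed electronic actions and the events they are meant to coincide with:

    # Minimal sketch in Python: each expected instrumental event may deviate
    # from its nominal onset by at most DEV seconds (the performer's temporal
    # freedom); each electronic action fires at a fixed delay after the event
    # that triggers it, but is meant to coincide with a later event.  We
    # enumerate all discretized performances and report the worst misalignment.
    from itertools import product

    NOMINAL = {"e1": 0.0, "e2": 1.0, "e3": 2.5}   # nominal onsets (seconds)
    # (triggering event, delay after trigger, event to coincide with, name)
    ACTIONS = [("e1", 1.0, "e2", "chord_pad"),
               ("e2", 1.5, "e3", "cutoff_sweep")]
    DEV, STEP, TOL = 0.2, 0.1, 0.3   # assumed deviation bound, grid, tolerance

    def grid():
        # all discretized deviations in [-DEV, DEV]
        n = round(DEV / STEP)
        return [k * STEP for k in range(-n, n + 1)]

    worst, witness = 0.0, None
    for devs in product(grid(), repeat=len(NOMINAL)):
        # one deviation choice per event = one possible performance
        onset = {e: t + d for (e, t), d in zip(NOMINAL.items(), devs)}
        for trig, delay, target, name in ACTIONS:
            gap = abs(onset[trig] + delay - onset[target])
            if gap > worst:
                worst, witness = gap, name
    print(f"worst misalignment: {worst:.2f}s on {witness} "
          f"({'within' if worst <= TOL else 'exceeds'} tolerance {TOL}s)")

A real analysis would replace this brute-force enumeration by symbolic exploration of a timed model (e.g. timed automata), but the output already has the shape of the feedback we aim at: a quantitative robustness bound together with a witness action that the composer can inspect.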

This work will be conducted in cooperation with other research teams specialized in the formal verification of software and systems. It will lead to the development of verification tools based on existing systems, and to experiments with Antescofo in the context of creation projects at Ircam.

Required skills and background:

  • Some basic knowledge of formal languages and automata theory is expected, as well as a taste for formal methods.
  • Musical knowledge would be appreciated (although it is not required).
  • The applicant must have good programming skills and be able to implement theoretical proposals.

Contact: Florent Jacquemard florent.jacquemard@inria.fr

 

