Synchronous Realtime Processing & Programming of Music Signals
The MuTant Real-time Multimedia Computing Seminar series aims to revive and identify open research issues in real-time audio and video computing systems by inviting world leaders from academia and industry.
28-05-2013, 12h, Salle Stravinsky, Ircam-Centre Pompidou.
Abstract: Music systems demand innovations in real-time programming, software architecture and programming languages. Music programming has taught us principles that offer practical guidelines for designing complex real-time interactive systems. I will describe and illustrate some principles that form the foundation of many successful music systems. Looking to the future, many-core computers introduce new challenges for musicians, programmers, languages, and system architecture. I will offer some suggestions that go against conventional wisdom: functional programming is problematic; if hardware offers shared memory, just say "no"; and limit the number of threads to get higher performance.
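One common way the last two suggestions play out in practice (this example is mine, not from the talk) is to keep all audio computation on a single dedicated thread and feed it from the control thread through a lock-free single-producer/single-consumer queue, so the audio thread never waits on a lock and shares no other mutable state. The C sketch below illustrates the pattern with a hypothetical NoteEvent message type.

```c
/* Sketch only (not from the talk): a lock-free single-producer /
 * single-consumer queue. The control thread pushes messages, the audio
 * thread pops them; neither side blocks, and no other state is shared.
 * NoteEvent and the queue size are illustrative; the queue must be
 * zero-initialized before use. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_SIZE 256                          /* must be a power of two */

typedef struct { float freq, amp; } NoteEvent;  /* hypothetical message */

typedef struct {
    NoteEvent buf[QUEUE_SIZE];
    atomic_size_t head;                         /* advanced only by the producer */
    atomic_size_t tail;                         /* advanced only by the consumer */
} EventQueue;

/* Producer (control/UI thread). Returns false if the queue is full. */
bool queue_push(EventQueue *q, NoteEvent ev)
{
    size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head - tail == QUEUE_SIZE)
        return false;                           /* full: drop or retry later */
    q->buf[head & (QUEUE_SIZE - 1)] = ev;
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}

/* Consumer (audio callback). Non-blocking; returns false if empty. */
bool queue_pop(EventQueue *q, NoteEvent *out)
{
    size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail == head)
        return false;                           /* nothing pending */
    *out = q->buf[tail & (QUEUE_SIZE - 1)];
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}
```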
Speaker: Dr. Roger B. Dannenberg is Professor of Computer Science, Art, and Music at Carnegie Mellon University. Dannenberg is well known for his computer music research, especially in real-time interactive systems. His pioneering work in computer accompaniment led to three patents and the SmartMusic system, now used by over 100,000 music students. He designed and implemented Nyquist, a mostly-functional programming language for music with a unique temporal semantics. He also played a central role in the development of Audacity, the audio editor with millions of users. Other innovations include the application of machine learning to music style classification and the automation of music structure analysis. As a trumpet player, he has performed in concert halls including the Apollo Theater in Harlem, and he is active in performing jazz, classical, and new works. His compositions have been performed by the Pittsburgh New Music Ensemble, the Pittsburgh Symphony, and at festivals such as the Foro de Musica Nueva, Callejon del Ruido, Spring in Havana, and the International Computer Music Conference.
21-11-2012, 12h, Salle Stravinsky, Ircam-Centre Pompidou.
Abstract: SuperCollider is an audio synthesis environment with a client-server architecture, which presents some problems in dealing with timing. This talk will cover the various ways that time is handled in SuperCollider, on both the language (client) side and the synthesis engine (server) side. Issues discussed will include Open Sound Control time stamps and NTP synchronization, coordination between real-time and non-real-time threads, synchronization of multiple SC servers, drift between network time and sample time, accounting for latency when sending commands to the server, and timing trade-offs between sample-by-sample and block processing.
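For context on two of these issues (a sketch of my own, not drawn from SuperCollider's source): OSC bundles carry 64-bit NTP-format time tags, and a client typically compensates for scheduling and network jitter by stamping each bundle a fixed latency into the future. A minimal C routine to build such a time tag, assuming a POSIX wall clock, might look like this:

```c
/* Sketch only (not SuperCollider code): building an OSC/NTP time tag for a
 * bundle meant to take effect `latency_seconds` in the future, the usual way
 * a client absorbs network and scheduling jitter. */
#include <stdint.h>
#include <sys/time.h>

#define NTP_UNIX_OFFSET 2208988800ULL   /* seconds from 1900-01-01 to 1970-01-01 */

/* Returns a 64-bit OSC time tag: upper 32 bits are whole seconds since 1900,
 * lower 32 bits are the fractional second scaled to 2^32. */
uint64_t osc_timetag(double latency_seconds)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);                     /* wall-clock (NTP-style) time */
    double when = tv.tv_sec + tv.tv_usec * 1e-6 + latency_seconds;
    uint64_t secs = (uint64_t)when + NTP_UNIX_OFFSET;
    uint32_t frac = (uint32_t)((when - (uint64_t)when) * 4294967296.0);
    return (secs << 32) | frac;
}
```

Note that stamping bundles ahead of time does not by itself address the drift between the network (NTP) clock and the audio hardware's sample clock, which is a separate topic in the abstract.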
Speaker: James McCartney is the author of the audio synthesis and algorithmic composition programming environment named "SuperCollider". He studied computer science and electronic music at the University of Texas at Austin, composed music for local theater, modern dance and music performances, and performed with the group "Liquid Mice", which explored the boundaries of what one could get away with performing in Austin bars in the 1980s and '90s. He was a member of the Austin Robot Group, which explored robotics, cybernetics and the arts. He worked for the NASA Astrometry Science team on the Hubble Space Telescope project. He now lives in San Jose, California and continues exploring sound.
28-03-2012, Ircam-Centre Pompidou.
Abstract: Computer music researchers have been concerned at least since the 1970s with a fundamental problem: how to build systems that can simultaneously reach high levels of computation throughput, get things done at very short latencies, and offer a clear and consistent programming model (and even, perhaps, a decent user interface). This talk will address the choices and tradeoffs that beset the computer music system designer: how to use multiprocessors efficiently; how the memory model constrains scheduling; how to manage tasks with multiple, different latency requirements; the costs and benefits of making systems run deterministically; and the interface between sporadic event-driven processes and ones running at fixed sample rates.
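As a rough illustration of one of these tradeoffs (a hypothetical sketch, not taken from the talk or from Max/Pd): a fixed-size block loop amortizes per-call overhead, which helps throughput, but it adds block_size / sample_rate seconds of latency and quantizes sporadic control events to block boundaries.

```c
/* Sketch only (hypothetical, not from the talk): a fixed-block processing
 * loop. Larger blocks improve throughput by amortizing per-call overhead,
 * but they raise the minimum latency to BLOCK_SIZE / SAMPLE_RATE seconds and
 * round sporadic control events to block boundaries. The three hooks are
 * assumed to be provided by the host application. */
#include <stddef.h>

#define SAMPLE_RATE 48000
#define BLOCK_SIZE  64                         /* 64 / 48000 ≈ 1.3 ms per block */

void handle_pending_events(void);              /* sporadic, event-driven work */
void synthesize(float *out, size_t nframes);   /* fixed-rate DSP */
void write_to_device(const float *out, size_t nframes);

void audio_loop(volatile int *running)
{
    float block[BLOCK_SIZE];
    while (*running) {
        handle_pending_events();               /* control changes land here, so   */
        synthesize(block, BLOCK_SIZE);         /* their timing is rounded to one  */
        write_to_device(block, BLOCK_SIZE);    /* block: the tradeoff noted above */
    }
}
```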
Speaker: Miller Puckette is the author of the Max and Pure Data real-time programming languages and teaches Computer Music at the University of California, San Diego. His website: http://crca.ucsd.edu/~msp/