MuTant Real-time Multimedia Computing Seminars

The MuTant Real-time Multimedia Computing Seminar series aims to revive the field and identify open research issues by inviting world leaders from academia and industry who work on real-time audio and video computing systems.

Each entry below includes the abstract, a speaker biography, and, where available, a video.

Christoph KIRSCH: Principles of Real-Time Programming

11-04-2014, 14h30, Salle Stravinsky, Ircam-Centre Pompidou.

Abstract: Real-time programming is a software engineering discipline that has been around ever since the dawn of digital computing. The dream of real-time programmers is to unlock the virtually unlimited potential of software for embedded computer systems: digital computers that are supposed to behave like analog devices. The perfect embedded computer system is invisibly hybrid: it works according to the largely unidentified laws of embedded software but acts according to the laws of physics. The critical interface between embedded software and physics is real time, and yet, while physical processes evolve in real time, software processes do not. Only the embedded computer system as a whole (embedded software and hardware) determines a complex notion of so-called soft-time to which the software processes adhere: mapping soft-time to real-time is the art of real-time programming. We discuss various real-time programming models that support the development of real-time programs based on different abstractions of soft-time. We informally introduce a real-time process model to study (1) the compositionality of the real-time programming models and (2) the semantics of real-time programs developed in these models.
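
To make the soft-time idea concrete, here is a minimal sketch (an illustration under assumptions, not code from the talk) of a logical-execution-time style periodic task in C++, in the spirit of the Giotto model: input is read at the logical start of each period and output is released exactly at its logical end, so the program's observable timing is independent of how long the computation actually takes in between. All names and constants are illustrative.

<code cpp>
// Sketch of a logical-execution-time (LET) style periodic task.
// Observable I/O happens at logical period boundaries, decoupling
// "soft-time" (program-visible time) from actual execution time.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto period = std::chrono::milliseconds(10);

    auto release = clock::now();     // logical start of the current period
    double input = 0.0, output = 0.0;

    for (int i = 0; i < 100; ++i) {
        double in = input;           // inputs are read at the logical period start
        double out = 0.5 * in + 1.0; // the computation may finish anywhere inside the period

        release += period;           // the logical period end (the LET deadline)
        std::this_thread::sleep_until(release);
        output = out;                // outputs become visible exactly at the deadline
        input = output;              // feedback stands in for reading a sensor
    }
    std::cout << "final output after 100 periods: " << output << "\n";
    return 0;
}
</code>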

Speaker: Christoph Kirsch is a full professor and holds a chair at the Department of Computer Sciences of the University of Salzburg, Austria. Since 2008 he has also been a visiting scholar at the Department of Civil and Environmental Engineering of the University of California, Berkeley. He received his Dr.-Ing. degree from Saarland University, Saarbruecken, Germany, in 1999 while at the Max Planck Institute for Computer Science. From 1999 to 2004 he worked as a postdoctoral researcher at the Department of Electrical Engineering and Computer Sciences of the University of California, Berkeley. His research interests are in concurrent programming and systems, virtual execution environments, and embedded software. Dr. Kirsch co-invented the Giotto and HTL languages and led the JAviator UAV project, for which he received an IBM faculty award in 2007. He co-founded the International Conference on Embedded Software (EMSOFT), served as ACM SIGBED chair from 2011 until 2013, and is currently an associate editor of ACM TODAES.

http://www.cs.uni-salzburg.at/~ck/

Roger Dannenberg: Principles for Effective Real-Time Music Processing Systems

28-05-2013, 12h, Salle Stravinsky, Ircam-Centre Pompidou.

Download Video

Abstract: Music systems demand innovations in real-time programming, software architecture, and programming languages. Music programming has taught us principles that offer practical guidelines for designing complex real-time interactive systems. I will describe and illustrate some principles that form the foundation of many successful music systems. Looking to the future, many-core computers introduce new challenges for musicians, programmers, languages, and system architecture. I will offer some suggestions that go against conventional wisdom: functional programming is problematic; if hardware offers shared memory, just say "no"; and limit the number of threads to get higher performance.
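
As one hedged illustration of the "say no to shared memory" principle (my sketch, not code from the talk), a control thread can hand parameter changes to an audio thread through a wait-free single-producer/single-consumer ring buffer, so the audio thread never blocks on a lock and the number of threads stays small and fixed. All names are illustrative.

<code cpp>
// Sketch: passing control messages to an audio thread without locks or
// shared mutable state, in the spirit of message passing over shared memory.
// Single producer (control thread) / single consumer (audio thread).
#include <array>
#include <atomic>
#include <cstddef>
#include <iostream>
#include <thread>

struct Msg { int param; float value; };

template <std::size_t N>
class SpscQueue {
    std::array<Msg, N> buf_{};
    std::atomic<std::size_t> head_{0};   // advanced by the consumer
    std::atomic<std::size_t> tail_{0};   // advanced by the producer
public:
    bool push(const Msg& m) {            // called from the control thread only
        auto t = tail_.load(std::memory_order_relaxed);
        auto next = (t + 1) % N;
        if (next == head_.load(std::memory_order_acquire)) return false; // full
        buf_[t] = m;
        tail_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(Msg& m) {                   // called from the audio thread only
        auto h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false;    // empty
        m = buf_[h];
        head_.store((h + 1) % N, std::memory_order_release);
        return true;
    }
};

int main() {
    SpscQueue<64> q;
    std::thread control([&] {            // stands in for a GUI or network thread
        for (int i = 0; i < 8; ++i) q.push({i, i * 0.1f});
    });
    control.join();
    Msg m;
    while (q.pop(m))                     // an audio callback would drain this per block
        std::cout << "param " << m.param << " = " << m.value << "\n";
    return 0;
}
</code>

If the queue is full the producer simply retries later, which keeps the worst-case cost on the audio thread bounded.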

Speaker: Dr. Roger B. Dannenberg is Professor of Computer Science, Art, and Music at Carnegie Mellon University. Dannenberg is well known for his computer music research, especially in real-time interactive systems. His pioneering work in computer accompaniment led to three patents and the SmartMusic system now used by over 100 thousand music students. He designed and implemented Nyquist, a mostly-functional programming language for music with a unique temporal semantics. He also played a central role in the development of Audacity, the audio editor with millions of users. Other innovations include the application of machine learning to music style classification and the automation of music structure analysis. As a trumpet player, he has performed in concert halls including the Apollo Theater in Harlem, and he is active in performing jazz, classical, and new works. His compositions have been performed by the Pittsburgh New Music Ensemble, the Pittsburgh Symphony, and at festivals such as the Foro de Musica Nueva, Callejon del Ruido, Spring in Havana, and the International Computer Music Conference.

James McCartney: SuperCollider and Time

21-11-2012, 12h, Salle Stravinsky, Ircam-Centre Pompidou.

Download Video

Abstract: SuperCollider is an audio synthesis environment with a client-server architecture, which presents some problems in dealing with timing. This talk will cover the various ways that time is handled in SuperCollider, on both the language (client) side and the synthesis engine (server) side. Issues discussed will include Open Sound Control time stamps and NTP synchronization, coordination between real-time and non-real-time threads, synchronizing multiple SC servers, drift between network time and sample time, accounting for latency when sending commands to the server, and trade-offs in timing between sample-by-sample and block processing.
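
For flavor, here is a small sketch of the latency-compensation idea the abstract mentions (an illustration of OSC time tags in general, not SuperCollider's actual client code): the client stamps a bundle with "now plus a safety margin" in NTP format, so the server can apply the commands at a precise time despite network and scheduling jitter. The 200 ms margin is an arbitrary example value.

<code cpp>
// Sketch: computing an OSC time tag for "now + latency", as a client might
// when bundling commands so the server applies them at a precise time.
// OSC time tags use NTP format: seconds since 1900-01-01 in the high
// 32 bits, fractional seconds in the low 32 bits.
#include <chrono>
#include <cstdint>
#include <cstdio>

constexpr uint64_t kNtpUnixEpochDelta = 2208988800ULL; // 1900 -> 1970, in seconds

uint64_t oscTimeTag(std::chrono::system_clock::time_point when) {
    using namespace std::chrono;
    auto sinceEpoch = when.time_since_epoch();
    uint64_t secs  = duration_cast<seconds>(sinceEpoch).count() + kNtpUnixEpochDelta;
    uint64_t nanos = duration_cast<nanoseconds>(sinceEpoch % seconds(1)).count();
    uint64_t frac  = (nanos << 32) / 1000000000ULL;    // nanoseconds -> 32-bit fraction
    return (secs << 32) | frac;
}

int main() {
    auto latency = std::chrono::milliseconds(200);     // illustrative scheduling margin
    uint64_t tag = oscTimeTag(std::chrono::system_clock::now() + latency);
    std::printf("OSC time tag: 0x%016llx\n", static_cast<unsigned long long>(tag));
    return 0;
}
</code>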

Speaker: James McCartney is the author of the audio synthesis and algorithmic composition programming environment named "SuperCollider". He studied computer science and electronic music at the University of Texas at Austin, composed music for local theater, modern dance, and music performances, and performed with the group "Liquid Mice", which explored the boundaries of what one could get away with performing in Austin bars in the 1980s and '90s. He was a member of the Austin Robot Group, which explored robotics, cybernetics, and the arts. He worked for the NASA Astrometry Science Team on the Hubble Space Telescope project. He now lives in San Jose, California and continues exploring sound.

Miller Puckette: Timeless problems in real-time audio software design

28-03-2012, Ircam-Centre Pompidou.

Download Video

Abstract: Computer music researchers have been concerned at least since the 1970s with a fundamental problem: how to build systems that can simultaneously reach high levels of computation throughput, get things done at very short latencies, and offer a clear and consistent programming model (and even, perhaps, a decent user interface). This talk will address the choices and tradeoffs that beset the computer music system designer: how to use multiprocessors efficiently; how the memory model constrains scheduling; how to manage tasks with multiple, different latency requirements; the costs and benefits of making systems run deterministically; and the interface between sporadic event-driven processes and ones running at fixed sample rates.
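
One common way to bridge sporadic events and fixed-rate processing (a generic sketch, not necessarily the design of any system discussed in the talk) is to render each audio block in segments, splitting at the timestamps of pending events so that control changes land on the exact sample where they are due. All names and values below are illustrative.

<code cpp>
// Sketch: reconciling sporadic, timestamped control events with fixed-rate
// block processing by splitting each block at event boundaries.
#include <cstdio>
#include <queue>
#include <vector>

struct Event { long sampleTime; float newGain; };

struct Synth {
    float gain = 1.0f;
    void render(std::vector<float>& out, long n) {
        for (long i = 0; i < n; ++i) out.push_back(gain); // stand-in DSP: a constant signal
    }
};

int main() {
    const long blockSize = 64;
    Synth synth;
    std::queue<Event> events;            // assumed sorted by due time
    events.push({30, 0.5f});
    events.push({100, 0.25f});

    std::vector<float> out;
    for (long blockStart = 0; blockStart < 128; blockStart += blockSize) {
        long t = blockStart, blockEnd = blockStart + blockSize;
        while (t < blockEnd) {
            // apply every event due exactly now, before rendering further
            while (!events.empty() && events.front().sampleTime <= t) {
                synth.gain = events.front().newGain;
                events.pop();
            }
            long next = blockEnd;        // render either to the block end...
            if (!events.empty() && events.front().sampleTime < blockEnd)
                next = events.front().sampleTime;  // ...or only up to the next event
            synth.render(out, next - t);
            t = next;
        }
    }
    std::printf("sample 29: %.2f  sample 30: %.2f  sample 100: %.2f\n",
                out[29], out[30], out[100]);       // gain changes land sample-accurately
    return 0;
}
</code>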

Speaker: Miller Puckette is the author of the Max and Pure Data real-time programming languages and teaches computer music at the University of California, San Diego. His website: http://crca.ucsd.edu/~msp/

 

