Analytical Listening through Interactive Visualization

Elaine Chew and Alexandre François,
    Radcliffe Institute for Advanced Study / University of Southern California

Seminar on People, Computers, and Design
Stanford University, February 29, 2008

This talk introduces the project of our research cluster at the Radcliffe Institute for Advanced Study.  Our goal is to make analytical listening to music accessible by offering interactive visualizations of musical structures, captured and analyzed from music streams in real time.  There are two components to the project: the mathematical model and algorithms for tonal analysis, and the underlying software architecture that enables the real-time interaction.

Our tonal analysis and visualization system, MuSA.RT, is based on Chew's Spiral Array model, a geometric model with algorithms to identify and track evolving tonal contexts.  The system displays the pitches played, and the closest triad and key, as the piece unfolds in performance.  The pitch spelling, chord, and key are computed by a nearest neighbor search in the Spiral Array, using two centers of effect (CEs), which summarize the current short-term and long-term contexts.  The three-dimensional model dances to the rhythm of the music, spinning smoothly so that the current triad forms the background for the CE trails.
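The paragraph above can be illustrated with a small sketch of the Spiral Array idea. The helix radius, rise, and candidate representations below are illustrative assumptions, not the published parameterization: pitches ascend a helix in fifths, a CE is a weighted average of pitch positions, and the closest triad is found by nearest-neighbor search.

```python
import math

# Illustrative constants (NOT the published Spiral Array parameters):
# pitches sit on a helix, one quarter turn per fifth step, rising by H.
R, H = 1.0, 0.4

def pitch_position(k):
    """Position of the pitch k fifth-steps from a reference pitch (C = 0)."""
    return (R * math.sin(k * math.pi / 2),
            R * math.cos(k * math.pi / 2),
            k * H)

def center_of_effect(weighted_pitches):
    """CE: weighted average of pitch positions; weights sum to 1."""
    return tuple(sum(w * pitch_position(k)[i] for k, w in weighted_pitches)
                 for i in range(3))

def nearest(ce, candidates):
    """Nearest-neighbor search: name of the candidate point closest to the CE."""
    return min(candidates, key=lambda c: math.dist(ce, c[1]))[0]

# Illustrative use: a C major triad (C=0, G=1, E=4 in fifth-steps, equal
# weights) matched against two hypothetical triad representations, each
# stood in for here by the CE of its own pitches.
ce = center_of_effect([(0, 1/3), (1, 1/3), (4, 1/3)])
triads = [("C major", center_of_effect([(0, 1/3), (1, 1/3), (4, 1/3)])),
          ("G major", center_of_effect([(1, 1/3), (2, 1/3), (5, 1/3)]))]
print(nearest(ce, triads))  # → C major
```

In MuSA.RT the short-term and long-term CEs are updated continuously from the incoming stream, so the same search yields the current chord and key as the performance unfolds.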

A challenge of building a system like MuSA.RT is that a human performer can never play a piece the same way twice.  Apart from natural perturbations in timing from one performance to the next, expert performers can deliberately use expressive devices, such as pedaling or tempo variations, to highlight different structures so as to produce different interpretations of the same piece.  A system for identifying and tracking evolving tonal structures must be robust to, yet flexible enough to capture, such performance variations.

MuSA.RT was designed using François' Software Architecture for Immersipresence (SAI), a general formalism for the design, analysis, and implementation of complex software systems.  Based on a concurrent asynchronous processing model, SAI defines primitives and organizing principles that bridge the gap between mathematical models and natural interaction.  From its underlying principles to its graphical notation and derived tools, SAI embraces a human-centered approach to the design of computing artifacts.


Elaine Chew is an Associate Professor of Industrial and Systems Engineering and of Electrical Engineering at the University of Southern California (USC) Viterbi School of Engineering. She was the first honoree of the Viterbi Early Career Chair. She earned PhD and SM degrees in Operations Research from MIT, and a BAS in Mathematical and Computational Sciences and Music Performance from Stanford University.  Professor Chew also holds diplomas and degrees in piano performance from Trinity College, London, and Stanford University.

Her research interests center on the computational modeling of music and its performance. She founded and heads the Music Computation and Cognition Laboratory at USC, where she conducts and directs research on music and computing. She received the US National Science Foundation Career Award and Presidential Early Career Award for Scientists and Engineers for her research and education activities at the intersection of music and engineering.

Professor Chew is on the founding editorial boards of the Journal of Mathematics and Music, the Journal of Music and Meaning, and ACM Computers in Entertainment. She has served on numerous program committees for conferences in music and computing; this year, she is Program Co-Chair for the International Conference on Music Information Retrieval.

Professor Chew is on sabbatical in 2007-2008, during which she is the Edward, Frances, and Shirley B. Daniels Fellow at the Radcliffe Institute for Advanced Study.  At Radcliffe, she and her collaborator Alexandre François form a research cluster on Analytical Listening through Interactive Visualization.



Alexandre R.J. François is a 2007-8 Fellow of the Radcliffe Institute for Advanced Study at Harvard University.  He is on leave from the University of Southern California, where he currently holds an appointment as a Research Assistant Professor of Computer Science in the USC Viterbi School of Engineering. From 2001 to 2004 he was a Research Associate with the Integrated Media Systems Center and with the Institute for Robotics and Intelligent Systems, both at USC.

His research has focused on the modeling and design of complex dynamic (software) systems, as an enabling step towards the understanding of perception, cognition, and interaction. He is the creator of the Software Architecture for Immersipresence (SAI), a general formalism for the design, analysis, and implementation of complex software systems. His Modular Flow Scheduling Middleware (MFSM) provides an open-source implementation of SAI's abstractions.  Leveraging the SAI/MFSM framework, his experimental courses in software development, graduate and undergraduate, pool the efforts of the entire class on a single, ambitious collaborative project.

François received the Diplôme d'Ingénieur from the Institut National Agronomique Paris-Grignon (France), the Diplôme d'Études Approfondies (MS) from the University Paris IX - Dauphine (France), and the MS and PhD degrees in Computer Science from USC.

View this talk online at CS547 on Stanford OnLine or using this video link.
