Abstract

Musical notation can be described as an abstract language that composers use so that performers may interpret a score. Such notation precedes interpretation, is reproducible, and although it contains hints for the performer on how to interpret it, the actual performance is uniquely situated in time and space. The only way to record and re-experience a sound performance is to use microphones, which transform acoustic waves into an electrical signal and usually lose at least some spatial dimension in the process. This can be a hindrance in the field of experimental music, where the physical limits of sound material may be put to the test. In this short paper, we discuss how motion capture could serve as an alternative to, or an expansion of, the acoustic recording of a performance involving movement. By recording the performer's movements, some of the dimensions that make their interpretation singular (i.e. character, accentuation, phrasing, and nuance) are retained. A method capturing sound through movement may be of interest in the context of sound synthesis with deep learning, and it holds potential advantages over current methods using MIDI or acoustic recordings, which either lack dimensions or are very sensitive to noisy data. We briefly discuss the rationale and the practical and theoretical foundations for the development of potentially innovative outputs.
