
    AniMorph: Animation Driven Audio Mosaicing

    Electronic Visualisation and the Arts (EVA 2015)

    London, UK, 7 - 9 July 2015

    AUTHORS

    Augoustinos Tsiros and Grégory Leplâtre

    ABSTRACT

    This paper describes AniMorph, a system for animation-driven Concatenative Sound Synthesis (CSS). Two main application domains of CSS can be distinguished in the context of music technology: target sound re-synthesis and free sound synthesis. The difference between these two categories is that target sound re-synthesis aims to re-create a sound, or a sound's characteristics, from audio examples (see Schwarz & Schnell 2010, Stevens et al. 2012), whereas free sound synthesis focuses on exploring the audio corpus in order to synthesise novel sounds that do not necessarily resemble another sound; for examples see Comajuncosas (2011), Navab et al. (2014) and Schwarz & Hackbarth (2012).
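
    To make the unit-selection step shared by both application domains concrete, the following Python sketch selects corpus units by nearest-neighbour matching in a descriptor space and concatenates them to follow a target trajectory. The feature set (loudness, spectral centroid), the synthetic corpus and the distance-based selection rule are illustrative assumptions, not the descriptors or matching strategy used by the authors.

    # Minimal CSS sketch: a corpus is assumed to be pre-segmented into short units,
    # each described by two hypothetical features (loudness, spectral centroid).
    import numpy as np

    rng = np.random.default_rng(0)

    # Corpus: one row per unit, columns = [loudness, spectral centroid], normalised 0..1.
    corpus_descriptors = rng.random((500, 2))
    corpus_units = [rng.standard_normal(1024) for _ in range(500)]  # stand-in audio frames

    def select_unit(target_descriptor, descriptors):
        """Return the index of the corpus unit closest to the target in feature space."""
        distances = np.linalg.norm(descriptors - target_descriptor, axis=1)
        return int(np.argmin(distances))

    def synthesise(target_sequence, descriptors, units):
        """Target re-synthesis: concatenate the best-matching unit for each target step."""
        return np.concatenate([units[select_unit(t, descriptors)] for t in target_sequence])

    # A short target trajectory: loudness rises while the spectral centroid falls.
    target = np.stack([np.linspace(0.1, 0.9, 20), np.linspace(0.9, 0.1, 20)], axis=1)
    output = synthesise(target, corpus_descriptors, corpus_units)
    print(output.shape)  # concatenated audio buffer built from 20 selected units

    In free sound synthesis the same selection machinery is driven by an exploratory trajectory through the descriptor space rather than by the features of an existing sound.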

    The main motivations for the present investigation are to (i) develop appropriate models of interaction for efficient exploration of the audio corpus, and (ii) develop perceptually meaningful mappings that enable practitioners to create novel sounds using CSS by specifying, in visual terms, the perceptual characteristics of the sound they want to synthesise. The present research considers an intuitive mapping to be of paramount importance for enabling interaction with concatenative synthesis for creative purposes (e.g. sound design, electroacoustic composition, live performance); a sketch of such a mapping follows this paragraph. AniMorph builds on the software developed for an earlier system, Morpheme, which uses sketching as a model of interaction (Tsiros 2013). To expand upon this work, we modified the existing interface so that it accepts animation as user input.
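
    The sketch below illustrates one way such a visual-to-auditory mapping could work: per-frame animation features are scaled into the same normalised descriptor space used by the unit selector above, so each animation frame yields a corpus query. The animation features (speed, size, brightness) and their pairing with audio descriptors are hypothetical assumptions for illustration, not AniMorph's published mapping.

    # Hedged sketch: map hypothetical animation features to a [loudness, centroid] query.
    import numpy as np

    def animation_frame_to_descriptor(speed, size, brightness):
        """Map animation features (all normalised 0..1) to a [loudness, spectral centroid]
        query vector for the unit selector in the previous sketch."""
        loudness = np.clip(size, 0.0, 1.0)                             # larger shapes -> louder units
        centroid = np.clip(0.5 * speed + 0.5 * brightness, 0.0, 1.0)   # faster/brighter -> brighter timbre
        return np.array([loudness, centroid])

    # One query per animation frame; these become targets for select_unit() above.
    frames = [(0.2, 0.8, 0.3), (0.5, 0.6, 0.5), (0.9, 0.4, 0.9)]
    targets = np.stack([animation_frame_to_descriptor(*f) for f in frames])
    print(targets)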

    PAPER FORMATS

    PDF version of this paper (384 KB)