• DocumentCode
    2174756
  • Title
    Acoustic-to-articulatory inversion using an episodic memory
  • Author
    Demange, S.; Ouni, S.
  • Author_Institution
    LORIA, Vandoeuvre-les-Nancy, France
  • fYear
    2011
  • fDate
    22-27 May 2011
  • Firstpage
    4620
  • Lastpage
    4623
  • Abstract
    This paper presents a new acoustic-to-articulatory inversion method based on an episodic memory, which is an interesting model for two reasons. First, it does not rely on any assumption about the mapping function; rather, it relies on real synchronized acoustic and articulatory data streams. Second, the memory structurally embeds the naturalness of the articulatory dynamics. In addition, we introduce the concept of generative episodic memory, which enables the production of unseen articulatory trajectories according to the acoustic signals to be inverted. The proposed memory is evaluated on the MOCHA corpus. The results demonstrate its effectiveness and are encouraging, as they are comparable to those of recently proposed methods. (An illustrative exemplar-based inversion sketch follows this record.)
  • Keywords
    acoustic signal processing; speech recognition; MOCHA corpus; acoustic signals; acoustic-to-articulatory inversion; articulatory data streams; articulatory dynamics; episodic memory; mapping function; speech inversion; Acoustics; Correlation; Hidden Markov models; Speech; Speech processing; Synchronization; Trajectory; Episodic memory; acoustic-to-articulatory inversion; electromagnetic articulography (EMA)
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on
  • Conference_Location
    Prague
  • ISSN
    1520-6149
  • Print_ISBN
    978-1-4577-0538-0
  • Electronic_ISBN
    1520-6149
  • Type
    conf
  • DOI
    10.1109/ICASSP.2011.5947384
  • Filename
    5947384
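
The abstract describes an exemplar-based ("episodic") approach in which synchronized acoustic and articulatory streams are stored as-is and articulatory trajectories for new acoustic input are recovered from the stored data rather than from a fitted mapping function. The Python sketch below illustrates that general idea only, via frame-level k-nearest-neighbour lookup followed by moving-average smoothing; the class name, the value of k, and the smoothing step are assumptions of this sketch, not the paper's generative episodic memory.

    import numpy as np

    class ExemplarInversion:
        """Toy exemplar-based acoustic-to-articulatory inversion sketch.

        Stores synchronized acoustic/articulatory frame pairs and, at
        inversion time, maps each input acoustic frame to the average of
        the articulatory frames attached to its k nearest acoustic
        neighbours. Illustrative only; not the episodic-memory algorithm
        of the paper above.
        """

        def __init__(self, k=5):
            self.k = k
            self.acoustic = None      # (N, d_a) acoustic feature frames
            self.articulatory = None  # (N, d_x) synchronized EMA frames

        def store(self, acoustic, articulatory):
            # Memorize the training streams verbatim; no mapping function is fitted.
            self.acoustic = np.asarray(acoustic, dtype=float)
            self.articulatory = np.asarray(articulatory, dtype=float)

        def invert(self, acoustic_seq, smooth=5):
            acoustic_seq = np.asarray(acoustic_seq, dtype=float)
            trajectory = np.empty((len(acoustic_seq), self.articulatory.shape[1]))
            for t, frame in enumerate(acoustic_seq):
                # Euclidean distance to every memorized acoustic frame.
                dists = np.linalg.norm(self.acoustic - frame, axis=1)
                nearest = np.argsort(dists)[: self.k]
                trajectory[t] = self.articulatory[nearest].mean(axis=0)
            # Moving-average smoothing as a crude stand-in for the natural
            # articulatory dynamics that an episodic memory embeds structurally.
            kernel = np.ones(smooth) / smooth
            for dim in range(trajectory.shape[1]):
                trajectory[:, dim] = np.convolve(trajectory[:, dim], kernel, mode="same")
            return trajectory

    # Usage with random stand-in data (MFCC-like 13-dim acoustics, 14-dim EMA):
    rng = np.random.default_rng(0)
    memory = ExemplarInversion(k=3)
    memory.store(rng.normal(size=(1000, 13)), rng.normal(size=(1000, 14)))
    ema_hat = memory.invert(rng.normal(size=(200, 13)))
    print(ema_hat.shape)  # (200, 14)

An actual episodic memory as described in the abstract would match whole episodes (segments of the stored streams) rather than isolated frames, which is part of what lets it preserve natural articulatory dynamics without a smoothing heuristic.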