• DocumentCode
    2601005
  • Title
    Speaker independent continuous voice to facial animation on mobile platforms
  • Author
    Feldhoffer, Gergely
  • Author_Institution
    Pazmany Peter Catholic Univ., Budapest
  • fYear
    2007
  • fDate
    12-14 Sept. 2007
  • Firstpage
    155
  • Lastpage
    158
  • Abstract
    In this paper, a speaker-independent training method is presented for continuous voice-to-facial-animation systems. An audiovisual database containing multiple voices but only one speaker's video information was created using dynamic time warping, so that the single speaker's video information is aligned to multiple speakers' voices. The quality of the fit is measured with subjective and objective tests, and the suitability of implementations on mobile devices is discussed.
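    The dynamic time warping mentioned in the abstract can be sketched as follows. This is a minimal, generic DTW cost computation in plain Python, not the authors' implementation; the feature sequences and the absolute-difference distance are illustrative assumptions (the paper aligns audiovisual feature streams).

    ```python
    def dtw(a, b, dist=lambda x, y: abs(x - y)):
        """Return the minimal cumulative alignment cost between sequences a and b."""
        n, m = len(a), len(b)
        INF = float("inf")
        # cost[i][j] = cheapest way to align a[:i] with b[:j]
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = dist(a[i - 1], b[j - 1])
                # a step may advance either sequence or both (stretch/compress time)
                cost[i][j] = d + min(cost[i - 1][j],      # repeat frame of b
                                     cost[i][j - 1],      # repeat frame of a
                                     cost[i - 1][j - 1])  # advance both
        return cost[n][m]

    # Identical content spoken at different tempos aligns with zero cost:
    print(dtw([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
    ```

    Tracking back through the `min` choices (not shown here) yields the warping path itself, which is what maps one speaker's video frames onto another speaker's audio timeline.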
  • Keywords
    audio databases; computer animation; neural nets; speaker recognition; video coding; visual databases; MPEG-4; audiovisual database; continuous voice; dynamic time warping; facial animation systems; mobile platforms; neural network; speaker independent training method; video information; data mining; deafness; facial animation; feature extraction; principal component analysis; speech; testing; video compression; video recording; DTW
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    ELMAR, 2007
  • Conference_Location
    Zadar
  • ISSN
    1334-2630
  • Print_ISBN
    978-953-7044-05-3
  • Type
    conf
  • DOI
    10.1109/ELMAR.2007.4418820
  • Filename
    4418820