• DocumentCode
    518625
  • Title
    Speech-to-visual speech synthesis using Chinese visual triphone
  • Author
    Zhao, Hui ; Shen, Yamin ; Tang, Chaojing

  • Author_Institution
    Coll. of Electron. Sci. & Eng., Nat. Univ. of Defense Technol., Changsha, China
  • Volume
    2
  • fYear
    2010
  • fDate
    27-29 March 2010
  • Firstpage
    241
  • Lastpage
    245
  • Abstract
    A visual speech synthesis approach based on Chinese visual triphones is presented. Following Mandarin Chinese pronunciation principles and the mapping between phonemes and visemes, a “Chinese visual triphone” model is constructed, and a hidden Markov model (HMM) is trained for each visual triphone. Joint features combining visual and audio features are used in the training stage. In the synthesis stage, a sentence HMM is constructed by concatenating triphone HMMs, and visual speech is synthesized from the features extracted from the sentence HMM. Subjective and objective evaluation scores show that the synthesized video is realistic and satisfactory.
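    The synthesis stage described in the abstract, concatenating per-triphone HMMs into a sentence model and reading off the visual part of the joint features, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `TriphoneHMM` structure, the feature dimensions, and the toy triphone names are all hypothetical, and a real system would train state parameters on joint audio-visual data rather than hard-code them.

    ```python
    from dataclasses import dataclass
    from typing import List

    # Assumed split of each joint feature vector (hypothetical dimensions).
    AUDIO_DIM = 2   # audio part of the joint feature
    VISUAL_DIM = 3  # visual part of the joint feature

    @dataclass
    class TriphoneHMM:
        """One visual-triphone HMM, reduced here to its per-state mean
        joint feature vectors (audio part followed by visual part)."""
        name: str
        state_means: List[List[float]]  # each of length AUDIO_DIM + VISUAL_DIM

    def concatenate(triphones: List[TriphoneHMM]) -> List[List[float]]:
        """Build a sentence-level state sequence by chaining triphone HMMs,
        mirroring the sentence-HMM construction step."""
        sentence_states: List[List[float]] = []
        for hmm in triphones:
            sentence_states.extend(hmm.state_means)
        return sentence_states

    def visual_trajectory(sentence_states: List[List[float]]) -> List[List[float]]:
        """Keep only the visual part of each joint feature vector; these
        frames would drive the synthesized video."""
        return [state[AUDIO_DIM:] for state in sentence_states]

    # Toy usage: two 2-state triphone HMMs yield a 4-state sentence model.
    t1 = TriphoneHMM("sil-a+n", [[0.1, 0.2, 1.0, 1.1, 1.2],
                                 [0.3, 0.4, 2.0, 2.1, 2.2]])
    t2 = TriphoneHMM("a-n+sil", [[0.5, 0.6, 3.0, 3.1, 3.2],
                                 [0.7, 0.8, 4.0, 4.1, 4.2]])

    states = concatenate([t1, t2])
    frames = visual_trajectory(states)
    print(len(states), len(frames), frames[0])  # 4 4 [1.0, 1.1, 1.2]
    ```

    The sketch keeps only state means; the paper's method additionally involves HMM training on joint features, which is omitted here for brevity.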
  • Keywords
    hidden Markov models (HMM); speech synthesis; Chinese visual triphone; Mandarin Chinese pronunciation principle; objective estimation; speech-to-visual speech synthesis; synthesized video; Acoustic noise; Auditory system; Chaos; Educational institutions; Feature extraction; Natural languages; Signal synthesis; Working environment noise; joint features; visual speech synthesis
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Advanced Computer Control (ICACC), 2010 2nd International Conference on
  • Conference_Location
    Shenyang
  • Print_ISBN
    978-1-4244-5845-5
  • Type
    conf
  • DOI
    10.1109/ICACC.2010.5486681
  • Filename
    5486681