• DocumentCode
    2437185
  • Title
    Relative Speech Emotion Recognition Based Artificial Neural Network
  • Author
    Fu, Liqin ; Mao, Xia ; Chen, Lijiang
  • Author_Institution
    Sch. of Electron. & Inf. Eng., Beihang Univ., Beijing
  • Volume
    2
  • fYear
    2008
  • fDate
    19-20 Dec. 2008
  • Firstpage
    140
  • Lastpage
    144
  • Abstract
    Artificial neural network (ANN) models based on a static feature vector as well as a normalized temporal feature vector were used to recognize emotional state from speech. Moreover, relative features, obtained by computing the changes of the acoustic features of emotional speech relative to those of neutral speech, were adopted to weaken the influence of individual differences between speakers. The methods for relativizing static features and temporal features are introduced separately, and experiments on a German database and a Mandarin database were conducted. The results show that, overall, relative features outperform absolute features for emotion recognition. In the speaker-independent case, the hybrid of the relative static feature vector and the normalized relative temporal feature vector achieves the best results.
  • Keywords
    emotion recognition; feature extraction; neural nets; speech recognition; artificial neural network; normalized temporal features vector; speech emotion recognition; static features vector; Acoustic distortion; Artificial neural networks; Automatic speech recognition; Emotion recognition; Hidden Markov models; Humans; Loudspeakers; Shape; Spatial databases; Speech recognition; ANN; relative features; speech emotion recognition;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Computational Intelligence and Industrial Application, 2008. PACIIA '08. Pacific-Asia Workshop on
  • Conference_Location
    Wuhan
  • Print_ISBN
    978-0-7695-3490-9
  • Type
    conf
  • DOI
    10.1109/PACIIA.2008.355
  • Filename
    4756752