• DocumentCode
    80608
  • Title
    Speech Emotion Verification Using Emotion Variance Modeling and Discriminant Scale-Frequency Maps
  • Author
    Jia-Ching Wang; Yu-Hao Chin; Bo-Wei Chen; Chang-Hong Lin; Chung-Hsien Wu
  • Author_Institution
    Dept. of Comput. Sci. & Inf. Eng., Nat. Central Univ., Jhongli, Taiwan
  • Volume
    23
  • Issue
    10
  • fYear
    2015
  • fDate
    Oct. 2015
  • Firstpage
    1552
  • Lastpage
    1562
  • Abstract
    This paper presents an approach to speech-based emotion verification built on emotion variance modeling and discriminant scale-frequency maps. The proposed system consists of two parts: feature extraction and emotion verification. In the first part, important atoms are selected from a Gabor dictionary for each sound frame using the matching pursuit algorithm. The scale, frequency, and magnitude of the selected atoms are extracted to construct a nonuniform scale-frequency map, which supports auditory discriminability through critical-band analysis. Next, sparse representation transforms the scale-frequency maps into sparse coefficients, enhancing robustness against emotion variance and improving error tolerance. In the second part, emotion verification, two scores are calculated. A novel sparse representation verification approach based on Gaussian-modeled residual errors is proposed to generate the first score from the sparse coefficients; this classifier minimizes emotion variance and improves recognition accuracy. The second score is computed from the same coefficients using the emotional agreement index (EAI). The two scores are combined to obtain the final detection result. Experiments conducted on an emotional speech database indicate that the proposed approach achieves an average equal error rate (EER) as low as 6.61%. A comparison with other approaches shows that the proposed method outperforms them and confirms its feasibility.
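
    As an illustration of the feature-extraction stage summarized in the abstract (selecting Gabor atoms per frame with matching pursuit and keeping each atom's scale, frequency, and magnitude for the scale-frequency map), a minimal sketch follows. The dictionary grid, frame length, sampling rate, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): greedy matching pursuit
# over a Gabor dictionary for one speech frame, recording each selected atom's
# scale, frequency, and magnitude, as used to populate a scale-frequency map.
import numpy as np

def gabor_atom(n, scale, freq, fs):
    """Unit-norm Gabor atom: Gaussian envelope times a cosine carrier."""
    t = (np.arange(n) - n / 2) / fs
    g = np.exp(-np.pi * (t / scale) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / (np.linalg.norm(g) + 1e-12)

def matching_pursuit(frame, fs, scales, freqs, n_atoms=10):
    """Greedy matching pursuit; returns a (scale, freq, magnitude) triple per atom."""
    residual = frame.astype(float)
    atoms = []
    for _ in range(n_atoms):
        best = None
        for s in scales:
            for f in freqs:
                g = gabor_atom(len(frame), s, f, fs)
                c = np.dot(residual, g)          # correlation with the residual
                if best is None or abs(c) > abs(best[0]):
                    best = (c, s, f, g)
        c, s, f, g = best
        residual = residual - c * g              # remove the chosen atom's contribution
        atoms.append((s, f, abs(c)))
    return atoms

if __name__ == "__main__":
    fs = 16000
    frame = np.random.randn(400)                 # stand-in for a 25 ms speech frame
    scales = [0.002, 0.005, 0.01, 0.02]          # assumed scale grid, in seconds
    freqs = np.linspace(100, 4000, 40)           # assumed frequency grid, in Hz
    for s, f, m in matching_pursuit(frame, fs, scales, freqs, n_atoms=5):
        print(f"scale={s:.3f}s  freq={f:7.1f}Hz  magnitude={m:.3f}")
```

    In the paper, the resulting (scale, frequency, magnitude) triples are binned into a nonuniform scale-frequency map whose frequency axis follows critical bands; the sketch above stops at atom selection.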
  • Keywords
    Gaussian processes; emotion recognition; feature extraction; signal classification; signal representation; speech recognition; time-frequency analysis; EAI; EER; Gabor dictionary; Gaussian-model residual error; auditory discriminability; discriminant scale-frequency map construction; emotion variance modeling; emotional agreement index; equal error rate; error-tolerance improvement; feature extraction; matching pursuit algorithm; sparse representation verification approach; speech emotion verification; speech recognition; Atomic clocks; Dictionaries; Feature extraction; Indexes; Matching pursuit algorithms; Speech; Speech processing; Emotional speech recognition; Gaussian-modeled residual error; scale-frequency map; sparse representation;
  • fLanguage
    English
  • Journal_Title
    IEEE/ACM Transactions on Audio, Speech, and Language Processing
  • Publisher
    IEEE
  • ISSN
    2329-9290
  • Type
    jour
  • DOI
    10.1109/TASLP.2015.2438535
  • Filename
    7114224