DocumentCode :
3721312
Title :
Deep emotion recognition using prosodic and spectral feature extraction and classification based on cross validation and bootstrap
Author :
Ayush Sharma;David V. Anderson
Author_Institution :
Georgia Institute of Technology, Atlanta, GA 30332-0250, United States of America
fYear :
2015
Firstpage :
421
Lastpage :
425
Abstract :
Despite the existence of robust models for identifying basic emotions, the ability to classify a large group of emotions reliably is yet to be developed. Hence, the objective of this paper is to develop an efficient technique for identifying emotions with an accuracy comparable to humans. The array of emotions addressed in this paper goes far beyond those present on the circumplex diagram. Because emotions are correlated and ambiguous in nature, both prosodic and spectral features of speech are considered during feature extraction. Feature selection algorithms are then applied to retain a subset of relevant features. Owing to the low dimensionality of the feature space, several cross-validation methods are employed in combination with different classifiers and their performances are compared. In addition to cross-validation, the bootstrap error estimate is also calculated, and a combination of the two is used as an overall estimate of the model's classification accuracy.
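Example :
The following is a minimal sketch (not the authors' code) of the evaluation idea described in the abstract: k-fold cross-validation and an out-of-bag bootstrap error estimate computed for the same classifier, then combined into one overall accuracy figure. The feature matrix X (rows = utterances, columns = prosodic/spectral features such as MFCC statistics), the label vector y, the RBF-kernel SVM, the number of bootstrap rounds, and the equal-weight combination are all illustrative assumptions, not details taken from the paper.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.utils import resample

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))        # placeholder features (e.g., MFCC statistics)
    y = rng.integers(0, 6, size=200)      # placeholder emotion labels

    clf = SVC(kernel="rbf", C=1.0)

    # k-fold cross-validation accuracy
    cv_acc = cross_val_score(clf, X, y, cv=10).mean()

    # bootstrap error estimate: train on a bootstrap sample, score on out-of-bag points
    boot_accs = []
    for b in range(100):
        idx = resample(np.arange(len(y)), replace=True, random_state=b)
        oob = np.setdiff1d(np.arange(len(y)), idx)
        if oob.size == 0:
            continue
        clf.fit(X[idx], y[idx])
        boot_accs.append(clf.score(X[oob], y[oob]))
    boot_acc = float(np.mean(boot_accs))

    # overall estimate as a simple combination of the two estimates
    # (the paper's exact weighting is not specified here; an equal-weight
    # average is used purely for illustration)
    overall_acc = 0.5 * cv_acc + 0.5 * boot_acc
    print(cv_acc, boot_acc, overall_acc)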
Keywords :
"Feature extraction","Support vector machines","Signal processing","Speech","Principal component analysis","Mel frequency cepstral coefficient","Emotion recognition"
Publisher :
ieee
Conference_Title :
Signal Processing and Signal Processing Education Workshop (SP/SPE), 2015 IEEE
Type :
conf
DOI :
10.1109/DSP-SPE.2015.7369591
Filename :
7369591