DocumentCode :
1756556
Title :
Wearable Audio Monitoring: Content-Based Processing Methodology and Implementation
Author :
Bin Gao ; Wai Lok Woo
Author_Institution :
Sch. of Autom. Eng., Univ. of Electron. Sci. & Technol. of China, Chengdu, China
Volume :
44
Issue :
2
fYear :
2014
fDate :
April 2014
Firstpage :
222
Lastpage :
233
Abstract :
Developing audio processing tools for extracting social-audio features is just as important for determining human behavior as analyzing conscious content. Psychologists speculate that these features may have evolved as a way to establish hierarchy and group cohesion, because they function as a subconscious discussion about relationships, resources, risks, and rewards. In this paper, we present the design, implementation, and deployment of a wearable computing platform capable of automatically extracting and analyzing social-audio signals. Unlike conventional research, which concentrates on data recorded under constrained conditions, our data were recorded in completely natural and unpredictable situations. In particular, we benchmarked a set of integrated algorithms (sound and speech detection and classification, sound level meter calculation, voice and nonvoice segmentation, and speaker segmentation and prediction) to obtain speech and environmental-sound social-audio signals using an in-house-built wearable device. In addition, we derive a novel method that incorporates a recently published audio feature extraction technique based on power-normalized cepstral coefficients and gap statistics for speaker segmentation and prediction. The performance of the proposed integrated platform is robust to natural and unpredictable situations. Experiments show that the method segments natural speech with 89.6% accuracy.
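To illustrate the gap-statistic step mentioned in the abstract, the following is a minimal, generic sketch (not the authors' exact algorithm) of estimating the number of speakers from per-frame audio feature vectors such as PNCCs. The function names, parameters, and the use of scikit-learn k-means are assumptions for illustration only; feature extraction is assumed to happen elsewhere.

```python
# Hypothetical sketch: estimate the number of speakers (clusters) from a
# matrix X of per-frame audio feature vectors (e.g., PNCC frames) using the
# gap statistic (Tibshirani et al.). Names and defaults are illustrative.
import numpy as np
from sklearn.cluster import KMeans


def within_dispersion(X, k, seed=0):
    """Sum of squared distances of samples to their nearest cluster centre."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    return km.inertia_


def estimate_num_speakers(X, k_max=6, n_refs=10, seed=0):
    """Return an estimated cluster count via the gap statistic."""
    rng = np.random.default_rng(seed)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    gaps, sks = [], []
    for k in range(1, k_max + 1):
        log_wk = np.log(within_dispersion(X, k, seed))
        # Reference dispersions from data drawn uniformly over the bounding box.
        ref_logs = np.array([
            np.log(within_dispersion(rng.uniform(mins, maxs, size=X.shape), k, seed))
            for _ in range(n_refs)
        ])
        gaps.append(ref_logs.mean() - log_wk)
        sks.append(ref_logs.std() * np.sqrt(1.0 + 1.0 / n_refs))
    # Pick the smallest k with Gap(k) >= Gap(k+1) - s_{k+1}.
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sks[k]:
            return k
    return k_max
```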
Keywords :
audio signal processing; content-based retrieval; feature extraction; mobile computing; pattern classification; speaker recognition; wearable computers; audio processing tools; content-based processing methodology; feature extraction; integrated algorithms; nonvoice segmentation; social-audio signals; sound level meter calculation; sound speech classification; sound speech detection; speaker prediction; speaker segmentation; voice segmentation; wearable audio monitoring; Biomedical monitoring; Feature extraction; Mel frequency cepstral coefficient; Speech; Speech recognition; Standards; Training; Audio detection and classification; social signal analysis; speaker segmentation; wearable device;
fLanguage :
English
Journal_Title :
IEEE Transactions on Human-Machine Systems
Publisher :
IEEE
ISSN :
2168-2291
Type :
jour
DOI :
10.1109/THMS.2014.2300698
Filename :
6732901