DocumentCode :
1665965
Title :
Towards a universal representation for audio information retrieval and analysis
Author :
Jensen, Brian Sveistrup ; Troelsgaard, Rasmus ; Larsen, Jan ; Hansen, Lars Kai
Author_Institution :
DTU Compute, Tech. Univ. of Denmark, Lyngby, Denmark
fYear :
2013
Firstpage :
3168
Lastpage :
3172
Abstract :
A fundamental and general representation of audio and music which integrates multi-modal data sources is important for both application and basic research purposes. In this paper we address this challenge by proposing a multi-modal version of the Latent Dirichlet Allocation model which provides a joint latent representation. We evaluate this representation on the Million Song Dataset by integrating three fundamentally different modalities, namely tags, lyrics, and audio features. We show how the resulting representation is aligned with common 'cognitive' variables such as tags, and provide some evidence for the common assumption that genres form an acceptable categorization when evaluating latent representations of music. We furthermore quantify the model by its predictive performance in terms of genre and style, providing benchmark results for the Million Song Dataset.
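The abstract's pipeline (fit a topic model over several modalities, then use the per-song topic proportions as a latent representation for genre prediction) can be sketched as follows. This is a hypothetical illustration, not the paper's method: it uses scikit-learn's single-modality LatentDirichletAllocation over a shared prefixed vocabulary as a stand-in for the proposed multi-modal LDA, with invented toy data for tags, lyrics, and genres.

```python
# Hedged sketch: approximate the paper's setup by concatenating two
# "modalities" (tags, lyrics) into one bag-of-words document per song,
# fitting ordinary LDA, and classifying genre from the topic proportions.
# All data below is invented for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

tags   = ["rock guitar loud", "jazz sax smooth", "rock drums loud", "jazz piano smooth"]
lyrics = ["night road fire", "rain blue moon", "fire night run", "moon slow blue"]
genres = ["rock", "jazz", "rock", "jazz"]

# Prefix terms so the two modalities keep disjoint vocabularies.
docs = [" ".join(f"tag_{w}" for w in t.split()) + " " +
        " ".join(f"lyr_{w}" for w in l.split())
        for t, l in zip(tags, lyrics)]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)  # per-song topic proportions: the joint latent representation

# Evaluate the representation by genre prediction with an SVM,
# mirroring the abstract's predictive-performance benchmark.
clf = LinearSVC().fit(theta, genres)
predictions = clf.predict(theta)
```

A faithful multi-modal LDA would instead tie one shared topic assignment distribution per song to separate per-modality topic-word distributions; the concatenation above only approximates that coupling.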
Keywords :
audio signal processing; information retrieval; music; speech processing; Latent Dirichlet allocation model; audio information analysis; audio information retrieval; cognitive variables; joint latent representation; multimodal data sources; music; universal representation; Computational modeling; Music; Mutual information; Resource management; Standards; Support vector machines; Vocabulary; Audio representation; Million Song Dataset; genre classification; multi-modal LDA
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Vancouver, BC
ISSN :
1520-6149
Type :
conf
DOI :
10.1109/ICASSP.2013.6638242
Filename :
6638242