DocumentCode :
2344394
Title :
Multimodal object categorization by a robot
Author :
Nakamura, Tomoaki ; Nagai, Takayuki ; Iwahashi, Naoto
Author_Institution :
Univ. of Electro-Commun., Tokyo
fYear :
2007
fDate :
Oct. 29 - Nov. 2, 2007
Firstpage :
2415
Lastpage :
2420
Abstract :
In this paper, unsupervised object categorization by robots is examined. We propose an unsupervised multimodal categorization method based on audio-visual and haptic information. The robot uses its physical embodiment to grasp and observe an object from various viewpoints, as well as to listen to the sound the object makes during observation. The proposed categorization method is an extension of probabilistic latent semantic analysis (pLSA), a statistical technique. At the same time, the proposed method provides a probabilistic framework for inferring object properties from limited observations. The validity of the proposed method is demonstrated through experimental results.
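Addendum: the abstract names pLSA as the statistical core of the method. As a point of reference, the following is a minimal sketch of standard pLSA fitted by EM over a co-occurrence count matrix; it is not the authors' multimodal extension. The count matrix, topic count, and variable names are illustrative assumptions. In a multimodal setting, one could (as an assumption, not a claim about the paper) concatenate quantized visual, audio, and haptic features along the "word" axis so each object acts as a document over a joint vocabulary.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Fit plain pLSA by EM.

    counts   : (D, W) array of co-occurrence counts n(d, w)
               (objects x quantized multimodal features, in this context).
    Returns  : p_w_z (n_topics, W) = P(w|z), p_z_d (D, n_topics) = P(z|d).
    """
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    # Random normalized initialization of the model distributions.
    p_w_z = rng.random((n_topics, W))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((D, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) proportional to P(z|d) * P(w|z).
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]   # (D, K, W)
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate from expected counts n(d,w) * P(z|d,w).
        weighted = counts[:, None, :] * joint           # (D, K, W)
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_z, p_z_d
```

With a block-structured count matrix (two groups of objects using disjoint feature sets), the learned P(z|d) assigns each group a distinct dominant topic, which is the unsupervised-category behavior the paper builds on.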
Keywords :
audio-visual systems; haptic interfaces; intelligent robots; object detection; probability; robot vision; audio-visual; haptic information; multimodal object categorization; probabilistic latent semantic analysis (pLSA); robot; unsupervised object categorization; Grasping; Haptic interfaces; Intelligent robots; Knowledge engineering; Natural languages; Object recognition; Training data; Unsupervised learning; Object categorization; multimodal; pLSA; unsupervised learning;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007)
Conference_Location :
San Diego, CA
Print_ISBN :
978-1-4244-0912-9
Electronic_ISBN :
978-1-4244-0912-9
Type :
conf
DOI :
10.1109/IROS.2007.4399634
Filename :
4399634