DocumentCode :
399423
Title :
Integration of spatial and temporal contexts for action recognition by self organizing neural networks
Author :
Shimozaki, Moriaki ; Kuniyoshi, Yasuo
Author_Institution :
Sch. of Inf. Sci. & Technol., Tokyo Univ., Japan
Volume :
3
fYear :
2003
fDate :
27-31 Oct. 2003
Firstpage :
2385
Abstract :
We present a neural architecture that learns to recognize object-directed actions by visually observing examples. The architecture learns to extract spatial contexts (e.g., object relationships) and movement contexts, self-organizes symbolic representations of them, and integrates them in a temporal context, producing self-organized symbolic action classes. Each of these functions is realized by a self-organizing neural network module. A preprocessing module takes video input and feeds object and movement features to the other modules. The system can learn to recognize simple grasp-transfer-place actions performed by a human hand in 2D scenes simply by observing example performances. Intermediate- and top-level categorical representations are self-organized without explicit external supervisory signals.
Keywords :
computer vision; gesture recognition; learning (artificial intelligence); neural net architecture; self-organising feature maps; temporal databases; action recognition; computer vision; external supervisory signals; gesture recognition; human hand; neural architecture; self organizing neural networks; spatial context; temporal contexts; visual observation; Biological neural networks; Data mining; Hidden Markov models; Information science; Intelligent robots; Intelligent systems; Layout; Neural networks; Organizing; Robot programming;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003)
Print_ISBN :
0-7803-7860-1
Type :
conf
DOI :
10.1109/IROS.2003.1249227
Filename :
1249227