DocumentCode :
1511667
Title :
Capture and representation of human walking in live video sequences
Author :
Cheng, Jia-Ching ; Moura, José M.F.
Author_Institution :
Dept. of Electr. & Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, USA
Volume :
1
Issue :
2
fYear :
1999
fDate :
June 1999
Firstpage :
144
Lastpage :
156
Abstract :
Extracting human representations from video has a wide range of applications. In this paper, we present a knowledge-based framework for capturing metarepresentations from real-life video of human walkers. The system models the human body as an articulated object and human walking as a cyclic activity with highly correlated temporal patterns. For each body part, we extract its motion, shape, and texture. Once available, this structural information can be used to manipulate or synthesize the original video sequence, or to animate the walker with a different motion in a new synthesized video.
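Illustrative sketch (not from the paper): the abstract's two modeling ideas, an articulated body and walking as a cyclic activity, can be sketched as a small kinematic chain whose joint angles are periodic functions of a gait phase. All segment lengths, angle amplitudes, and function names below are assumed for illustration only.

# Minimal sketch, assuming a 2-link leg (thigh + shank) and hand-picked
# periodic joint-angle profiles; this is not the authors' implementation.
import math

def leg_joint_angles(phase):
    """Hip and knee angles (radians) as simple periodic functions of gait phase in [0, 1)."""
    hip = 0.4 * math.sin(2 * math.pi * phase)              # thigh swings cyclically about the hip
    knee = 0.6 * max(0.0, math.sin(2 * math.pi * phase))   # knee flexes only during the swing half-cycle
    return hip, knee

def leg_joint_positions(phase, hip_xy=(0.0, 1.0), thigh=0.5, shank=0.5):
    """Forward kinematics: 2D positions of hip, knee, and ankle for a given gait phase."""
    hip_angle, knee_angle = leg_joint_angles(phase)
    hx, hy = hip_xy
    # Angles are measured from the downward vertical; positive swings the limb forward.
    kx = hx + thigh * math.sin(hip_angle)
    ky = hy - thigh * math.cos(hip_angle)
    ax = kx + shank * math.sin(hip_angle - knee_angle)
    ay = ky - shank * math.cos(hip_angle - knee_angle)
    return (hx, hy), (kx, ky), (ax, ay)

if __name__ == "__main__":
    # Sample one gait cycle at four phases to show the cyclic joint trajectory.
    for i in range(4):
        phase = i / 4.0
        print(phase, leg_joint_positions(phase))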
Keywords :
image motion analysis; image sequences; virtual reality; human representations; human walking; knowledge-based framework; live video sequences; metarepresentations; real-life video; temporal patterns; video sequence; Animation; Biological system modeling; Cameras; Collaboration; Data mining; Humans; Legged locomotion; Shape; Video sequences; Videoconference;
fLanguage :
English
Journal_Title :
IEEE Transactions on Multimedia
Publisher :
IEEE
ISSN :
1520-9210
Type :
jour
DOI :
10.1109/6046.766736
Filename :
766736