DocumentCode
3748940
Title
Temporal Perception and Prediction in Ego-Centric Video
Author
Yipin Zhou;Tamara L. Berg
Author_Institution
Univ. of North Carolina at Chapel Hill, Chapel Hill, NC, USA
fYear
2015
Firstpage
4498
Lastpage
4506
Abstract
Given a video of an activity, can we predict what will happen next? In this paper we explore two simple tasks related to temporal prediction in egocentric videos of everyday activities. We provide both human experiments to understand how well people can perform on these tasks and computational models for prediction. Experiments indicate that humans and computers can do well on temporal prediction and that personalization to a particular individual or environment significantly improves performance. Developing methods for temporal prediction could have far-reaching benefits, allowing robots or intelligent agents to anticipate what a person will do before they do it.
Keywords
"Context","Predictive models","Visualization","Computers","Footwear","Data collection","Computational modeling"
Publisher
IEEE
Conference_Titel
2015 IEEE International Conference on Computer Vision (ICCV)
Electronic_ISBN
2380-7504
Type
conf
DOI
10.1109/ICCV.2015.511
Filename
7410868
Link To Document