Abstract:
In wearable computing environments, continuous recording with a wearable video camera makes it possible to digitize personal experiences, which could lead to an "automatic life-log application". However, the resulting amount of video content is evidently enormous. Accordingly, to retrieve and browse desired scenes, this vast quantity of video data must be organized using structural information. We are developing a "context-based video retrieval system for life-log applications". Our life-log system captures video, audio, acceleration sensor, gyro, and GPS data, together with annotations, documents, web pages, and emails, and it provides functions that enable efficient video browsing and retrieval by using data from these sensors, databases, and various documents.