DocumentCode
2180795
Title
Multi-camera networks: eyes from eyes
Author
Fermüller, C. ; Aloimonos, Y. ; Baker, P. ; Pless, R. ; Neumann, J. ; Stuart, B.
Author_Institution
Comput. Vision Lab., Maryland Univ., College Park, MD, USA
fYear
2000
fDate
2000
Firstpage
11
Lastpage
18
Abstract
Autonomous or semi-autonomous intelligent systems, in order to function appropriately, need to create models of their environment, i.e., models of space-time: descriptions of objects and scenes, and descriptions of changes of space over time, that is, events and actions. Despite the large amount of research on this problem, as a community we are still far from developing robust descriptions of a system's spatiotemporal environment from video input (image sequences). Undoubtedly, some progress has been made in understanding how to estimate the structure of visual space, but it has not led to solutions for specific applications. There is, however, an alternative approach which is in line with today's “zeitgeist.” The vision of artificial systems can be enhanced by providing them with new eyes. If conventional video cameras are put together in various configurations, new sensors can be constructed that have much more power, and the way they “see” the world makes it much easier to solve problems of vision. This research is motivated by examining the wide variety of eye designs in the biological world and obtaining inspiration for an ensemble of computational studies that relate how a system sees to what that system does (i.e., relating perception to action). This, coupled with the geometry of multiple views, which has flourished in terms of theoretical results in the past few years, points to new ways of constructing powerful imaging devices which suit particular tasks in robotics, visualization, video processing, virtual reality and various computer vision applications better than conventional cameras. This paper presents a number of new sensors that we built using common video cameras and shows their superiority with regard to developing models of space and motion.
Keywords
computer vision; video cameras; virtual reality; image sequences; intelligent systems; Biosensors; Cameras; Eyes; Image sequences; Intelligent systems; Layout; Machine vision; Robot vision systems; Robustness; Spatiotemporal phenomena;
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the IEEE Workshop on Omnidirectional Vision, 2000
Conference_Location
Hilton Head Island, SC
Print_ISBN
0-7695-0704-2
Type
conf
DOI
10.1109/OMNVIS.2000.853797
Filename
853797