Title :
Video-Based Rendering using Feature Point Evolution
Author :
Zhang, Wensheng; Chen, T.
Author_Institution :
Carnegie Mellon Univ., Pittsburgh, PA, USA
Abstract :
We propose a novel video-based rendering algorithm that uses a single moving camera. We reconstruct a dynamic 3D model of the scene with a feature point set that "evolves" over time. As the scene's appearance changes due to camera and object motion, some existing feature points disappear while new feature points appear relative to the camera. The 3D positions and motions of the newly generated feature points are initialized from the positions and motions of nearby existing feature points. When incorporated into standard tracking and 3D reconstruction algorithms, our feature evolution yields robust, dense 3D meshes and their corresponding motions. Consequently, the evolution-based, time-dependent 3D meshes, motions, and textures render good-quality images at a virtual viewpoint and at a desired time instance. We also extend the proposed video-based rendering algorithm from a single moving camera with one reconstructed depth map to multiple moving cameras with multiple reconstructed depth maps, to avoid occlusion and improve the rendering quality.
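Illustration :
The abstract's feature-evolution step, initializing the 3D position and motion of each newly detected feature point from nearby existing feature points, can be sketched as follows. This is not the authors' code; the k-nearest-neighbour, inverse-distance weighting scheme and all names (init_new_features, existing_uv, etc.) are assumptions made for illustration only.

import numpy as np

def init_new_features(existing_uv, existing_xyz, existing_motion,
                      new_uv, k=4, eps=1e-6):
    """Estimate a 3D position and motion for each newly detected 2D feature
    point by inverse-distance weighting of its k nearest existing neighbours
    in the image plane (an assumed stand-in for the paper's initialization).

    existing_uv     : (N, 2) image coordinates of tracked feature points
    existing_xyz    : (N, 3) their reconstructed 3D positions
    existing_motion : (N, 3) their estimated 3D motions
    new_uv          : (M, 2) image coordinates of newly detected points
    Returns (M, 3) initial positions and (M, 3) initial motions.
    """
    new_xyz = np.empty((len(new_uv), 3))
    new_motion = np.empty((len(new_uv), 3))
    for i, uv in enumerate(new_uv):
        d = np.linalg.norm(existing_uv - uv, axis=1)   # image-space distances
        nn = np.argsort(d)[:k]                         # k nearest existing points
        w = 1.0 / (d[nn] + eps)                        # inverse-distance weights
        w /= w.sum()
        new_xyz[i] = w @ existing_xyz[nn]              # weighted-average position
        new_motion[i] = w @ existing_motion[nn]        # weighted-average motion
    return new_xyz, new_motion

if __name__ == "__main__":
    # Toy usage with random data: 50 existing points, 5 newly detected ones.
    rng = np.random.default_rng(0)
    uv = rng.uniform(0, 640, (50, 2))
    xyz = rng.normal(size=(50, 3))
    vel = rng.normal(scale=0.1, size=(50, 3))
    p, m = init_new_features(uv, xyz, vel, rng.uniform(0, 640, (5, 2)))
    print(p.shape, m.shape)   # (5, 3) (5, 3)

The sketch assumes only that new points inherit depth and motion from spatially close existing points; the paper's actual tracking and 3D reconstruction machinery is not reproduced here.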
Keywords :
feature extraction; image motion analysis; image reconstruction; image texture; rendering (computer graphics); video cameras; video signal processing; dynamic 3D model reconstruction; feature point evolution; image texture; multiple moving cameras; object motion; standard tracking; time-dependent 3D mesh; video-based rendering algorithm; Cameras; Geometry; Image motion analysis; Image reconstruction; Image segmentation; Layout; Optical filters; Particle tracking; Rendering (computer graphics); Solid modeling; 3D reconstruction; feature extraction; motion analysis; rendering; stereo vision;
Conference_Title :
Image Processing, 2006 IEEE International Conference on
Conference_Location :
Atlanta, GA
Print_ISBN :
1-4244-0480-0
DOI :
10.1109/ICIP.2006.312977