• DocumentCode
    433209
  • Title
    Fusing video and sparse depth data in structure from motion

  • Author
    Zhang, Qilong; Pless, Robert

  • Author_Institution
    Department of Computer Science & Engineering, Washington University, St. Louis, MO, USA
  • Volume
    5
  • fYear
    2004
  • fDate
    24-27 Oct. 2004
  • Firstpage
    3403
  • Abstract
    This paper considers geometric constraints for combining structure from motion with a sparse set of depth measurements. The goals are to improve motion estimation for autonomous navigation and to increase the fidelity of reconstructed 3D scene models. The system is implemented on an iRobot B21r robot equipped with a video camera and a planar laser range finder, which provides relatively accurate depth measurements for a small set of scene points. Using a probabilistic model of scene smoothness, the depth information is used to modify the classical epipolar error function so that data from both sensors is incorporated simultaneously. We present results of real-world experiments and explore different prior assumptions about the scene structure.
  • Keywords
    image reconstruction; laser ranging; motion estimation; navigation; probability; robot vision; sensor fusion; spatial variables measurement; video cameras; video signal processing; autonomous navigation; depth measurements; epipolar error function; fusing video; geometric constraint; iRobot B21r robot; motion estimation; planar laser range finder; probabilistic model; reconstructed 3D scene model; sparse depth data; video camera; Cameras; Computer science; Image reconstruction; Laser modes; Layout; Motion estimation; Motion measurement; Pixel; Robot sensing systems; Sensor systems
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2004 International Conference on Image Processing (ICIP '04)
  • ISSN
    1522-4880
  • Print_ISBN
    0-7803-8554-3
  • Type
    conf

  • DOI
    10.1109/ICIP.2004.1421845
  • Filename
    1421845