Title :
Automatic registration of LiDAR and optical imagery using depth map stereo
Author :
Hyojin Kim; Carlos D. Correa; Nelson Max
Author_Institution :
Lawrence Livermore Nat. Lab., Livermore, CA, USA
Abstract :
Automatic fusion of aerial optical imagery with untextured LiDAR data has attracted significant interest in recent years as a means of generating photo-realistic 3D urban models. However, unsupervised, robust registration remains a challenge. This paper presents a new registration method that requires no a priori knowledge such as GPS/INS information. The proposed algorithm is based on feature correspondence between a LiDAR depth map and a depth map derived from an optical image. Each optical depth map is generated by edge-preserving dense correspondence between the image and a second optical image, followed by ground-plane estimation and alignment to enforce depth consistency. Our two-pass RANSAC with Maximum Likelihood estimation incorporates both 2D-2D and 2D-3D correspondences to yield robust camera pose estimation. Experiments on a LiDAR-optical imagery dataset show promising results without the use of initial pose information.
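To make the abstract's RANSAC step concrete, the sketch below illustrates the generic RANSAC pattern (hypothesize from a minimal sample, score by inlier count, refit on the consensus set) on a deliberately simplified problem: estimating a 2D translation between matched points contaminated by outliers. This is an illustrative assumption, not the authors' implementation; their two-pass variant instead estimates full camera pose from mixed 2D-2D and 2D-3D correspondences with Maximum Likelihood refinement.

```python
# Minimal RANSAC sketch: robust 2D translation from outlier-laden matches.
# Hypothetical simplification of the paper's two-pass pose RANSAC.
import random

def ransac_translation(matches, iters=200, thresh=1.0, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) point correspondences.
    Returns ((dx, dy), inliers) supported by the largest consensus set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        # Minimal sample: a single correspondence fixes a translation.
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # Score the hypothesis by counting matches it explains.
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) < thresh
                   and abs((m[1][1] - m[0][1]) - dy) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refinement pass: least-squares refit on the consensus set.
    n = len(best_inliers)
    dx = sum(b[0] - a[0] for a, b in best_inliers) / n
    dy = sum(b[1] - a[1] for a, b in best_inliers) / n
    return (dx, dy), best_inliers

# Synthetic data: true translation (5, -3) plus two gross outliers.
pts = [(float(i), float(i % 7)) for i in range(30)]
matches = [((x, y), (x + 5.0, y - 3.0)) for x, y in pts]
matches += [((0.0, 0.0), (50.0, 40.0)), ((1.0, 2.0), (-30.0, 9.0))]
(dx, dy), inliers = ransac_translation(matches)
```

The outlier matches never enter the refit, so the recovered translation is exact; in the paper's setting the same consensus principle rejects spurious depth-map feature matches before pose refinement.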
Keywords :
cameras; geophysical image processing; image fusion; image registration; maximum likelihood estimation; optical information processing; optical radar; pose estimation; random processes; remote sensing by radar; solid modelling; stereo image processing; LiDAR depth map; automatic LiDAR image registration; automatic aerial optical image fusion; automatic optical image registration; depth consistency; depth map stereo; ground plane estimation; maximum likelihood estimation; optical depth map; photorealistic 3D urban model generation; robust camera pose estimation; two-pass RANSAC algorithm; Adaptive optics; Cameras; Estimation; Feature extraction; Laser radar; Optical imaging; Three-dimensional displays;
Conference_Titel :
Computational Photography (ICCP), 2014 IEEE International Conference on
Conference_Location :
Santa Clara, CA
DOI :
10.1109/ICCPHOT.2014.6831821