DocumentCode :
3657062
Title :
RGBD data based pose estimation: Why sensor fusion?
Author :
O. Serdar Gedik;A. Aydin Alatan
Author_Institution :
Department of Computer Engineering, Yildirim Beyazit University, Ankara, Turkey
fYear :
2015
fDate :
7/1/2015 12:00:00 AM
Firstpage :
2129
Lastpage :
2136
Abstract :
Performing highly accurate pose estimation has been an attractive research area in computer vision; hence, plenty of algorithms have been proposed for this purpose. Starting with RGB or grayscale image data, methods utilizing data from 3D sensors, such as Time of Flight (TOF) cameras or laser range finders, and later those based on RGBD data, have emerged chronologically. Algorithms that exploit image data mainly rely on minimization of the image-plane error, i.e. the reprojection error. On the other hand, methods utilizing 3D measurements from depth sensors estimate the object pose by minimizing the Euclidean distance between these measurements. However, although errors in the associated domains can be minimized effectively by such methods, the resultant pose estimates may not be sufficiently accurate when the dynamics of the object motion are ignored. At this point, the proposed 3D rigid pose estimation algorithm fuses measurements from vision (RGB) and depth sensors in a probabilistic manner using an Extended Kalman Filter (EKF). It is shown that such a procedure increases pose estimation performance significantly compared to single-sensor approaches.
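The fusion idea in the abstract can be sketched with a simplified linear Kalman filter that sequentially fuses two position sensors under a constant-velocity motion model. This is an illustrative stand-in, not the paper's actual EKF: the 1-D state, the motion model, and all noise values are assumptions made for the sketch.

```python
import numpy as np

def predict(x, P, F, Q):
    # Propagate state and covariance with the motion model.
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    # Standard Kalman measurement update (EKF update with a linear H).
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [position, velocity]; both sensors observe position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
Q = 1e-4 * np.eye(2)                    # assumed process noise
H = np.array([[1.0, 0.0]])
R_rgb = np.array([[0.05]])              # assumed image-based noise
R_depth = np.array([[0.02]])            # assumed depth-sensor noise

rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])           # true state: pos 0, vel 1
xe = np.array([0.0, 0.0])               # filter estimate
P = np.eye(2)

for _ in range(100):
    x_true = F @ x_true                                 # true motion
    xe, P = predict(xe, P, F, Q)
    z_rgb = H @ x_true + rng.normal(0.0, 0.05, 1)       # "RGB" measurement
    z_depth = H @ x_true + rng.normal(0.0, 0.02, 1)     # "depth" measurement
    xe, P = update(xe, P, z_rgb, H, R_rgb)              # fuse vision
    xe, P = update(xe, P, z_depth, H, R_depth)          # fuse depth

print(abs(xe[0] - x_true[0]))  # final position error of the fused estimate
```

In the paper's setting the measurement functions are nonlinear (camera projection and 3D point distances), so the EKF linearizes them via Jacobians; here the linear `H` keeps the sketch minimal while preserving the predict/fuse/fuse cycle.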
Keywords :
"Estimation","Three-dimensional displays","Robot sensing systems","Cameras","Mathematical model","Iterative closest point algorithm"
Publisher :
ieee
Conference_Titel :
2015 18th International Conference on Information Fusion (Fusion)
Type :
conf
Filename :
7266817
Link To Document :