Abstract:
Nowadays, many cutting-edge technologies, such as 3D rendering, non-photorealistic rendering, and digital refocusing, require an accurate depth map of the scene for post-processing. Unfortunately, from conventional stereo-matching algorithms to the modern-day Kinect, most approaches to estimating scene depth are affected by various kinds of noise and distortion that make the depth estimate erroneous. Although the causes of these distortions differ from algorithm to algorithm, all of them ultimately corrupt the point-to-point correspondence between the RGB image and the depth map. Since the Human Visual System (HVS) is highly sensitive to edges, any distortion near edges makes the final rendering perceptually artificial. In this paper, we present a novel approach to removing edge distortion from raw depth maps by exploiting the contour information of the objects present in the scene, such that RGB edges and depth edges align exactly.
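The abstract does not describe the contour-based alignment algorithm itself. Purely as a hedged illustration of the underlying idea (pulling depth discontinuities onto the RGB edges), the sketch below implements a joint (cross) bilateral filter, a common baseline for RGB-guided depth refinement; it is not the method proposed in this paper, and all function names and parameter values are illustrative assumptions.

    import numpy as np

    def joint_bilateral_filter(depth, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
        """Smooth `depth` using range weights computed from the RGB `guide`,
        so that depth discontinuities snap to RGB edges (a standard baseline,
        not this paper's algorithm). `guide` is float in [0, 1] with shape
        (H, W, 3); `depth` is float with shape (H, W)."""
        H, W = depth.shape
        pad = radius
        d = np.pad(depth, pad, mode='edge')
        g = np.pad(guide, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
        num = np.zeros_like(depth)
        den = np.zeros_like(depth)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                # Spatial Gaussian weight for this neighbourhood offset.
                w_s = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
                # Shifted neighbours of the depth map and the guide image.
                d_n = d[pad + dy : pad + dy + H, pad + dx : pad + dx + W]
                g_n = g[pad + dy : pad + dy + H, pad + dx : pad + dx + W, :]
                # Range weight computed from RGB similarity rather than depth
                # similarity: this is what aligns depth edges to RGB edges.
                diff = np.sum((g_n - guide) ** 2, axis=2)
                w = w_s * np.exp(-diff / (2.0 * sigma_r ** 2))
                num += w * d_n
                den += w
        return num / np.maximum(den, 1e-8)

    # Illustrative usage (hypothetical inputs):
    # refined = joint_bilateral_filter(raw_depth, rgb.astype(np.float32) / 255.0)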