Abstract:
Images recorded in turbid waters suffer from various forms of signal degradation due to light absorption, scattering, and backscatter. Much of the earlier work to enhance color, contrast, and sharpness follows the single-image dehazing approach from the atmospheric imaging literature. These methods require knowledge of both the range to scene objects and the ambient lighting, and differ mainly in how they estimate these quantities from various image regions. Moreover, they rest on assumptions that hold for most images recorded in air and clear waters, but are often violated in turbid environments, leading to poor results. Alternatively, stereo imaging and polarization have been explored for simultaneous range estimation and image dehazing; however, these can become ineffective under low visibility and/or a weak polarization cue. This work explores a methodology that utilizes the visual cues in multi-modal optical and sonar images, namely, the occluding contours of scene objects, which can be detected and matched more robustly than point features. Computing the sparse 3-D positions of these contours from opti-acoustic stereo data, we infer a dense range map within an MRF-based statistical framework, where image intensities and range values serve as the observed and hidden variables, respectively. Additionally, the opti-acoustic epipolar geometry guides the MRF inference by refining the pixel neighborhoods. The improved performance over other state-of-the-art techniques is demonstrated using images recorded under different turbidity conditions.
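As a rough, illustrative sketch of the kind of MRF formulation alluded to above (the specific terms, weights, and neighborhood structure here are assumptions rather than the paper's actual model), the hidden range field D = {d_p} could be estimated by minimizing a standard pairwise energy of the form

E(D) \;=\; \sum_{p \in \Omega} \phi_p\!\left(d_p \,;\, I_p\right) \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} \psi\!\left(d_p, d_q\right),

where \phi_p is a data term tying the hidden range d_p to the observed intensity I_p and to the sparse 3-D range samples on the matched occluding contours, \psi is a smoothness term defined over the pixel neighborhoods \mathcal{N} (the neighborhoods that the opti-acoustic epipolar geometry would refine), and \lambda is a regularization weight balancing the two terms.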