DocumentCode
16765
Title
Video De-Fencing
Author
Yadong Mu ; Wei Liu ; Shuicheng Yan
Author_Institution
AT&T Labs - Research, Middletown, NJ, USA
Volume
24
Issue
7
fYear
2014
fDate
July 2014
Firstpage
1111
Lastpage
1121
Abstract
This paper describes and provides an initial solution to a novel video editing task, video de-fencing, which targets the automatic restoration of video clips corrupted by fence-like occlusions during capture. Our key observation lies in the visual parallax between fences and background scenes, caused by the fact that the former are typically much closer to the camera. Unlike in traditional image inpainting, pixels occluded by the fence in one frame tend to become visible in other frames over time and are therefore recoverable via optimized pixel selection from relevant frames. To produce fence-free videos, the major challenges are cross-frame subpixel image alignment under diverse scene depths and correct pixel selection that is robust to the dominating fence pixels. Several novel tools are developed in this paper, including soft fence detection, a weighted truncated optical flow method, and a robust temporal median filter. The proposed algorithm is validated on several real-world video clips containing fences.
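The abstract only outlines the pipeline, so the following is a minimal sketch (not the authors' implementation) of the fusion step it describes: given frames already aligned to a reference view and per-pixel fence probabilities from a soft fence detector, each output pixel is taken as a weighted temporal median over the aligned stack, with fence pixels down-weighted. The function name, weighting scheme, and toy data are illustrative assumptions.

import numpy as np

def weighted_temporal_median(frames, fence_probs):
    """Fuse aligned frames into one fence-free image.

    frames      : (T, H, W) stack of grayscale frames, already aligned
                  to the reference frame (e.g., via optical flow).
    fence_probs : (T, H, W) per-pixel probability of belonging to the fence;
                  samples with high probability contribute little.
    Returns an (H, W) image whose pixels are weighted temporal medians,
    which are robust to the down-weighted fence samples.
    """
    weights = 1.0 - fence_probs                 # background likelihood
    weights = np.clip(weights, 1e-6, None)      # avoid all-zero weight columns

    # Sort temporal samples per pixel, accumulate the sorted weights, and
    # take the first sample whose cumulative weight reaches half the total:
    # this is the (lower) weighted median along the temporal axis.
    order = np.argsort(frames, axis=0)
    sorted_vals = np.take_along_axis(frames, order, axis=0)
    sorted_wts = np.take_along_axis(weights, order, axis=0)
    cum = np.cumsum(sorted_wts, axis=0)
    half = 0.5 * cum[-1]
    idx = (cum >= half[None]).argmax(axis=0)
    return np.take_along_axis(sorted_vals, idx[None], axis=0)[0]

# Toy usage: 5 aligned frames; a random "fence" occludes different pixels
# in each frame, and the soft detector marks those pixels with probability 0.9.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.uniform(size=(32, 32))
    frames = np.repeat(clean[None], 5, axis=0)
    fence = rng.uniform(size=frames.shape) < 0.3
    frames[fence] = 1.0                          # bright fence pixels
    fence_probs = fence.astype(float) * 0.9
    restored = weighted_temporal_median(frames, fence_probs)
    print("mean abs error:", np.abs(restored - clean).mean())

In this sketch the cross-frame alignment and soft fence detection are assumed to have been done beforehand; the weighted median simply realizes the "correct pixel selection that is robust to the dominating fence pixels" mentioned in the abstract.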
Keywords
image resolution; image sequences; video signal processing; automatic restoration; cross-frame subpixel image alignment; fence-occluded pixels; novel video editing task; optimized pixel selection; real-world video clips; robust temporal median filter; soft fence detection; video clips; video de-fencing; weighted truncated optical flow method; Cameras; Estimation; Image restoration; Optical imaging; Rain; Robustness; Visualization; Subpixel alignment; Video de-fencing; weighted truncated optical flow
fLanguage
English
Journal_Title
IEEE Transactions on Circuits and Systems for Video Technology
Publisher
IEEE
ISSN
1051-8215
Type
jour
DOI
10.1109/TCSVT.2013.2241351
Filename
6415261