DocumentCode
522744
Title
Improved view synthesis prediction using decoder-side motion derivation for multiview video coding
Author
Shimizu, Shinya ; Kimata, Hideaki
fYear
2010
fDate
7-9 June 2010
Firstpage
1
Lastpage
4
Abstract
This paper proposes a novel method that uses temporal reference pictures to improve the quality of view synthesis prediction. Existing view synthesis prediction schemes generate image signals from inter-view reference pictures only. However, mismatches in illumination, color, and focus across views degrade the prediction performance. The proposed method synthesizes an initial view using conventional depth-based warping and then uses the initial synthesized view as a template to derive fine motion vectors. The initial synthesized view is then updated using the derived motion vectors and temporal reference pictures, which yields the prediction output. Experiments show that the proposed method improves the quality of the synthesized view by about 14 dB for Ballet and 4 dB for Breakdancers at high bitrates, and reduces the bitrate by about 2% relative to conventional view synthesis prediction.
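As an illustration only, the sketch below shows the general idea of decoder-side motion derivation with a depth-warped synthesized block used as the matching template: a block of the initial synthesized view is matched against a temporal reference picture with a simple SAD search, and the best-matching block serves as the refined prediction. This is a minimal sketch under assumed details; the function name, the SAD criterion, and the search_range parameter are hypothetical and are not taken from the paper.

import numpy as np

def derive_motion_and_refine(synth_block, ref_pic, block_pos, search_range=8):
    """Hypothetical sketch: use a block of the depth-warped synthesized view
    as a template, search a temporal reference picture for the best SAD match
    (decoder-side motion derivation), and return the matched block as the
    refined prediction."""
    by, bx = block_pos
    bh, bw = synth_block.shape
    best_sad, best_block, best_mv = None, synth_block, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            # Skip candidate positions that fall outside the reference picture.
            if y < 0 or x < 0 or y + bh > ref_pic.shape[0] or x + bw > ref_pic.shape[1]:
                continue
            cand = ref_pic[y:y + bh, x:x + bw]
            sad = np.abs(cand.astype(np.int32) - synth_block.astype(np.int32)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_block, best_mv = sad, cand, (dy, dx)
    # The derived motion vector and the matched block update the initial
    # synthesized view to form the prediction signal.
    return best_mv, best_block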
Keywords
data compression; video coding; decoder-side motion; depth-based warping; image signal generation; multiview video coding; signal mismatch; view synthesis prediction; Bit rate; Cameras; Decoding; Displays; Image coding; Layout; Motion pictures; Signal synthesis; Video coding; Video compression; Decoder Side Motion Derivation; Multiview Video Coding; Multiview Video plus Depth Map; View Synthesis Prediction;
fLanguage
English
Publisher
ieee
Conference_Title
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2010
Conference_Location
Tampere
Print_ISBN
978-1-4244-6377-0
Electronic_ISBN
978-1-4244-6378-7
Type
conf
DOI
10.1109/3DTV.2010.5506523
Filename
5506523
Link To Document