DocumentCode
3380408
Title
A new perceptual model for video sequence encoding
Author
Yegeshwar, J. ; Mammone, R.J.
Author_Institution
Dept. of Electr. Eng., Rutgers Univ., NJ, USA
Volume
ii
fYear
1990
fDate
16-21 Jun 1990
Firstpage
188
Abstract
A method of encoding monochrome video sequences is presented. This method is applicable to slow-moving scenes with a zoom factor close to unity. It incorporates a perceptual model for the detection and tracking of image blocks perceived to be in apparent motion. The perceptual model is a threshold model based on contrast. Weber fractions in the spatiotemporal domain are used to set the threshold for the detection of motion. The threshold is determined on a scene-adaptive basis. The information content of the scene is derived from the spectral distribution of the image data and the masking characteristics of the human visual system. The perceptual model is used in the block-classification process, which in turn determines the coding strategy used for each block type. The discrete cosine transform is used for encoding image blocks that have changed beyond a perceptual threshold. A comparison of the perceptually encoded image sequence with one encoded without a perceptual model indicates substantially improved quality and performance of the perceptual encoder for compression ratios up to 100:1.
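The abstract's pipeline (Weber-fraction change threshold, block classification, DCT coding of changed blocks) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the fixed 0.02 threshold (the paper's threshold is scene-adaptive), and the naive 2-D DCT are all assumptions introduced here.

```python
import math

def weber_fraction(prev_block, curr_block):
    """Mean absolute luminance change relative to mean background
    luminance (Weber's law: delta_I / I). Blocks are lists of rows."""
    n = len(prev_block) * len(prev_block[0])
    delta = sum(abs(c - p)
                for pr, cr in zip(prev_block, curr_block)
                for p, c in zip(pr, cr)) / n
    mean_i = sum(v for row in prev_block for v in row) / n
    return delta / mean_i if mean_i else float('inf')

def classify_block(prev_block, curr_block, threshold=0.02):
    """Flag a block as 'moving' when its interframe change exceeds the
    perceptual (Weber) threshold; 'static' blocks need no new coding.
    The fixed threshold here is illustrative; the paper adapts it to
    the scene's spectral content and visual masking."""
    if weber_fraction(prev_block, curr_block) > threshold:
        return 'moving'
    return 'static'

def dct2(block):
    """Naive 2-D DCT-II of a square block (illustrative, O(n^4))."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out
```

In use, only blocks classified as 'moving' would be passed through `dct2` and quantized; static blocks are simply repeated from the previous frame, which is where the high compression ratios come from.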
Keywords
encoding; pattern recognition; picture processing; Weber fractions; discrete cosine transform; monochrome video sequences; motion detection threshold; slow-moving scenes; spatiotemporal domain; video sequence encoding; Discrete cosine transforms; Encoding; Humans; Image coding; Layout; Motion detection; Spatiotemporal phenomena; Tracking; Video sequences; Visual system;
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the 10th International Conference on Pattern Recognition, 1990
Conference_Location
Atlantic City, NJ
Print_ISBN
0-8186-2062-5
Type
conf
DOI
10.1109/ICPR.1990.119352
Filename
119352
Link To Document