Title :
Image compression using HLVQ neural network
Author :
Solaiman, Bassel ; Maillard, Eric P.
Author_Institution :
ENST de Bretagne, Brest, France
Abstract :
We apply a new neural network, HLVQ (hybrid learning vector quantization), which combines supervised and unsupervised learning, to vector quantization. A supervised learning rule based on learning vector quantization 2 (LVQ2) performs attention focusing over a background self-organizing feature map algorithm. HLVQ exhibits the salient features of both algorithms: the topology-preserving mapping is acquired through unsupervised learning, while supervised learning keeps the overlap between classes to a minimum. Pattern labelling is carried out by a separate unsupervised network that takes as input the discrete cosine transform of a pattern. First, the labelling network is trained on the transforms of sub-images; each neuron of this network is regarded as the prototype of one class. Once convergence is achieved, HLVQ is trained: each sub-image is input to the network, and the class of the input pattern is determined by the most activated neuron of the labelling network on presentation of the sub-image transform.
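The sketch below illustrates the labelling stage described in the abstract: sub-images are mapped to their discrete cosine transform coefficients, an unsupervised network of prototype neurons is trained on those transforms, and each sub-image is then labelled by its most activated (closest) neuron. The block size, number of prototypes, learning schedule, and the use of a plain winner-take-all competitive update in place of a full self-organizing-map neighbourhood are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the DCT-based labelling network (assumed parameters throughout).
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D discrete cosine transform of a sub-image block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def train_labelling_network(blocks, n_prototypes=16, epochs=20, lr=0.1, seed=0):
    """Unsupervised competitive learning on DCT coefficients of sub-images.
    Each prototype (neuron) ends up standing for one class of blocks."""
    rng = np.random.default_rng(seed)
    features = np.array([dct2(b).ravel() for b in blocks])
    prototypes = features[rng.choice(len(features), n_prototypes, replace=False)].copy()
    for epoch in range(epochs):
        eta = lr * (1.0 - epoch / epochs)          # decaying learning rate (assumed schedule)
        for x in features[rng.permutation(len(features))]:
            winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
            prototypes[winner] += eta * (x - prototypes[winner])  # move winning neuron toward input
    return prototypes

def label(block, prototypes):
    """Class of a sub-image = index of the most activated (closest) neuron."""
    x = dct2(block).ravel()
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))

# Usage: split an image into 8x8 sub-images, train the labelling network, label each block.
image = np.random.rand(64, 64)                      # stand-in for a real image
blocks = [image[i:i+8, j:j+8] for i in range(0, 64, 8) for j in range(0, 64, 8)]
prototypes = train_labelling_network(blocks)
labels = [label(b, prototypes) for b in blocks]
```

In the paper these labels provide the class information that the supervised (LVQ2) part of HLVQ uses during its training; the hybrid codebook training itself is not shown here.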
Keywords :
discrete cosine transforms; image coding; learning (artificial intelligence); self-organising feature maps; transform coding; unsupervised learning; vector quantisation; HLVQ neural network; convergence; discrete cosine transform; hybrid learning vector quantization; image compression; input pattern; labelling network; pattern labeling; self-organizing feature map algorithm; subimage transform; supervised learning; topology-preserving mapping; unsupervised learning; unsupervised network; Discrete cosine transforms; Discrete transforms; Focusing; Image coding; Labeling; Neural networks; Neurons; Supervised learning; Unsupervised learning; Vector quantization;
Conference_Title :
1995 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95)
Conference_Location :
Detroit, MI
Print_ISBN :
0-7803-2431-5
DOI :
10.1109/ICASSP.1995.479727