Title :
Neural network vector quantizer design using sequential and parallel learning techniques
Author :
Wu, Frank H. ; Parhi, Keshab K. ; Ganesan, Kalyan
Author_Institution :
US West Adv. Technol. Inc., Englewood, CO, USA
Abstract :
Many techniques have been developed for quantizing large sets of input vectors into much smaller sets of output vectors. Various neural-network-based techniques for generating the output vectors via system training are studied. The variations are centered around a neural net vector quantization (NNVQ) method that combines the well-known conventional Linde, Buzo, and Gray (1980) (LBG) technique and the neural-net-based Kohonen (1984) technique. Sequential and parallel learning techniques for designing efficient NNVQs are given. The schemes presented require less computation time due to a new modified gain formula, partial/zero neighbor updating, and parallel learning of the code vectors. Using Gaussian-Markov source and speech signal benchmarks, it is shown that these new approaches lead to distortion as good as or better than that obtained using the LBG and Kohonen approaches.
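The abstract's "zero neighbor updating" corresponds to pure competitive learning, where only the winning code vector moves toward each training sample. A minimal Python sketch of such a Kohonen-style vector quantizer is shown below; the linear gain decay and the helper name `train_vq` are illustrative assumptions, not the paper's modified gain formula.

```python
import random

def train_vq(data, codebook_size, epochs=20, seed=0):
    """Kohonen-style competitive-learning VQ with zero-neighbor updating.

    Illustrative sketch only: the gain schedule is a generic linear
    decay, not the modified gain formula proposed in the paper.
    """
    rng = random.Random(seed)
    # Initialize code vectors from randomly chosen training samples.
    codebook = [list(v) for v in rng.sample(data, codebook_size)]
    step = 0
    total_steps = epochs * len(data)
    for _ in range(epochs):
        for x in data:
            # Find the nearest code vector (the "winner").
            winner = min(
                range(codebook_size),
                key=lambda i: sum((c - a) ** 2
                                  for c, a in zip(codebook[i], x)))
            # Decaying gain; only the winner is updated (zero neighbors).
            gain = 0.5 * (1.0 - step / total_steps)
            codebook[winner] = [c + gain * (a - c)
                                for c, a in zip(codebook[winner], x)]
            step += 1
    return codebook
```

Trained on data drawn from two well-separated clusters with a two-entry codebook, the code vectors settle near the two cluster centers, which is the behavior the sequential learning schemes in the paper refine.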
Keywords :
Markov processes; data compression; encoding; learning systems; neural nets; parallel algorithms; speech analysis and processing; Gaussian-Markov source; Kohonen technique; LBG technique; code vectors; distortion; gain formula; input vectors; neural net vector quantization; parallel learning; partial/zero neighbor updating; sequential learning; speech data; speech signal benchmarks; system training; Concurrent computing; Counting circuits; Decoding; Distortion; Neural networks; Process design; Propagation losses; Signal design; Speech coding; Vector quantization;
Conference_Titel :
1991 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-91)
Conference_Location :
Toronto, Ont.
Print_ISBN :
0-7803-0003-3
DOI :
10.1109/ICASSP.1991.150420