DocumentCode :
1907547
Title :
On decomposing MLPs
Author :
Lucas, S. ; Zhao, Z. ; Cawley, G. ; Noakes, P.
Author_Institution :
Dept. of Phys., Keele Univ., UK
fYear :
1993
fDate :
1993
Firstpage :
1414
Abstract :
The benefits of decomposing the multilayer perceptron (MLP) for pattern recognition tasks are investigated. For the case of N classes, instead of using one MLP with N outputs, N MLPs, each with a single output, are used. In practice, this allows the use of fewer hidden units than would be needed in the single MLP. It is found that decomposing the problem in this way allows convergence in fewer iterations. Not only does this save on both the number of iterations and the time per iteration, but it also becomes straightforward to distribute the training over as many workstations as there are pattern classes. The speedup is then linear in the number of pattern classes, assuming as many processors as classes; for the case of more classes than processors, the speedup is linear in the number of processors. It is shown that on a difficult hand-written optical character recognition (OCR) problem, the results obtained with the decomposed MLP are slightly superior to those for the conventional MLP, and are obtained in a fraction of the time.
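The decomposition amounts to one-vs-rest training: each of the N sub-networks learns to output 1 for its own class and 0 for all others, and a test pattern is assigned to the class whose sub-network gives the largest output. The following is a minimal sketch of this scheme, not the authors' code; the network sizes, learning rate, toy data, and sigmoid/squared-error training rule are illustrative assumptions.

    # Sketch of the decomposed-MLP idea: instead of one MLP with N outputs,
    # train N single-output MLPs (one per class, one-vs-rest). Each call to
    # train_binary_mlp is independent, so each could run on its own
    # workstation, which is the source of the roughly linear speedup.
    # All hyperparameters and the toy data below are illustrative assumptions.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_binary_mlp(X, t, n_hidden=4, lr=0.5, epochs=500, seed=0):
        # Train one single-output MLP on binary targets t (this class vs. rest)
        # by batch gradient descent on squared error.
        rng = np.random.default_rng(seed)
        W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
        W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
        for _ in range(epochs):
            h = sigmoid(X @ W1)            # hidden-layer activations
            y = sigmoid(h @ W2)            # single sigmoid output
            d2 = (y - t[:, None]) * y * (1 - y)      # output-layer delta
            d1 = (d2 @ W2.T) * h * (1 - h)           # hidden-layer delta
            W2 -= lr * h.T @ d2 / len(X)
            W1 -= lr * X.T @ d1 / len(X)
        return W1, W2

    def predict(nets, X):
        # Classify each pattern by the sub-network with the largest output.
        scores = [sigmoid(sigmoid(X @ W1) @ W2).ravel() for W1, W2 in nets]
        return np.argmax(np.stack(scores, axis=1), axis=1)

    # Toy 3-class problem: Gaussian clusters around three centres.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(150, 2)) + np.repeat(np.eye(3, 2) * 4, 50, axis=0)
    labels = np.repeat(np.arange(3), 50)
    nets = [train_binary_mlp(X, (labels == c).astype(float), seed=c)
            for c in range(3)]
    print("training accuracy:", np.mean(predict(nets, X) == labels))

Because each sub-network only has to separate one class from the rest, it can typically get by with fewer hidden units than a single N-output network, as the abstract notes.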
Keywords :
convergence; feedforward neural nets; iterative methods; pattern recognition; decomposing; hand-written optical character recognition; hidden units; iterations; multilayer perceptron; pattern classes; pattern recognition tasks; Convergence; Entropy; Optical character recognition software; Pattern recognition; Systems engineering and theory; Workstations;
fLanguage :
English
Publisher :
ieee
Conference_Title :
IEEE International Conference on Neural Networks, 1993
Conference_Location :
San Francisco, CA
Print_ISBN :
0-7803-0999-5
Type :
conf
DOI :
10.1109/ICNN.1993.298764
Filename :
298764