DocumentCode :
275954
Title :
Building new layers on multi-layer perceptrons
Author :
Smyth, S.G.
Author_Institution :
BT Labs., London, UK
fYear :
1991
fDate :
18-20 Nov 1991
Firstpage :
276
Lastpage :
279
Abstract :
It is widely recognised that the more layers a multi-layer perceptron (MLP) has, the longer it takes to train. This is due partly to the larger number of parameters to be calculated, but also to the fact that the back-propagated error is scaled by a number less than unity at each layer. The technique proposed in this paper allows a smaller network (with fewer layers) to be created and partially trained, avoiding the increase in training time arising from both of these factors. When this smaller network has converged satisfactorily, a new layer is 'programmed' and placed on the existing net. The enlarged network is then trained further, and the whole process may be repeated if desired. The procedure has been applied to a real-world problem of isolated word speech recognition.
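Illustrative sketch (not from the paper): the abstract does not specify how the new layer is "programmed", so the code below assumes one plausible reading, namely that the inserted layer is initialised to pass the existing activations through approximately unchanged before further training. A toy XOR task stands in for the speech data, and all names and parameters are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data standing in for the isolated-word task in the paper.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def init_layer(n_in, n_out):
    return rng.normal(scale=0.5, size=(n_in, n_out)), np.zeros(n_out)

def forward(layers, X):
    acts = [X]
    for W, b in layers:
        acts.append(sigmoid(acts[-1] @ W + b))
    return acts

def train(layers, X, y, epochs, lr=1.0):
    # Plain batch back-propagation with squared error and sigmoid units.
    for _ in range(epochs):
        acts = forward(layers, X)
        delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
        for i in range(len(layers) - 1, -1, -1):
            W, b = layers[i]
            grad_W = acts[i].T @ delta
            grad_b = delta.sum(axis=0)
            # Propagate the error before overwriting this layer's weights.
            delta = (delta @ W.T) * acts[i] * (1 - acts[i])
            layers[i] = (W - lr * grad_W, b - lr * grad_b)
    return layers

# 1. Create and partially train a small (single hidden layer) network.
layers = [init_layer(2, 4), init_layer(4, 1)]
layers = train(layers, X, y, epochs=2000)

# 2. "Programme" a new hidden layer: a strong diagonal weight matrix with a
#    compensating bias makes the sigmoid approximately reproduce its input
#    on (0, 1), so inserting it leaves the learned mapping roughly intact.
#    (This identity-style initialisation is an assumption, not the paper's method.)
n_hidden = layers[0][0].shape[1]
W_new = 8.0 * np.eye(n_hidden)
b_new = -4.0 * np.ones(n_hidden)
layers.insert(1, (W_new, b_new))

# 3. Train the enlarged network further; the whole cycle could be repeated.
layers = train(layers, X, y, epochs=2000)
print(forward(layers, X)[-1].round(2))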
Keywords :
learning systems; neural nets; speech recognition; back-propagated error; isolated word; multi-layer perceptrons; training time
fLanguage :
English
Publisher :
iet
Conference_Title :
Second International Conference on Artificial Neural Networks, 1991
Conference_Location :
Bournemouth
Print_ISBN :
0-85296-531-1
Type :
conf
Filename :
140331