DocumentCode :
3623106
Title :
Supervised learning of the steady-state outputs in generalized cellular networks
Author :
C. Guzelis
Author_Institution :
Faculty of Electrical-Electronics Engineering, Istanbul Technical University, Turkey
fYear :
1992
fDate :
1992
Firstpage :
74
Lastpage :
79
Abstract :
It is shown that the supervised learning of the steady-state outputs in a generalized cellular network (CNN) is, in general, equivalent to a constrained optimization problem. The objective function, also called the error function, is a measure of the distance between the desired steady-state outputs and the actual ones. The constraints arise from a set of design requirements that must be met to provide the desired qualitative and quantitative properties of the network. The approach presented uses the penalty function method of optimization theory, in which the constrained optimization problem is transformed into an unconstrained one by adding to the error function terms corresponding to the constraints. A gradient descent algorithm, which extends the backpropagation algorithm to the generalized CNN, is proposed for solving the resulting unconstrained optimization problem.
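The abstract describes a penalty-function scheme: constraint terms are added to the error function and the resulting unconstrained objective is minimized by gradient descent. The Python sketch below illustrates only that general idea; the cell dynamics, the weight-bound constraint standing in for the design requirements, the penalty weight mu, and the finite-difference gradient are all illustrative assumptions and not the paper's actual recurrent-backpropagation formulation.

# Minimal sketch (assumed formulation, not the paper's): penalty-method
# supervised learning of steady-state outputs via gradient descent.
import numpy as np

def settle(W, x0, u, steps=200):
    # Iterate a placeholder recurrent update until an approximate steady state.
    x = x0.copy()
    for _ in range(steps):
        x = np.tanh(W @ x + u)
    return x

def penalized_objective(W, x0, u, y_desired, mu=10.0):
    # Error between desired and actual steady-state outputs, plus a penalty
    # term for an illustrative constraint (bounded feedback weights, standing
    # in for the paper's qualitative/quantitative design requirements).
    y = settle(W, x0, u)
    error = 0.5 * np.sum((y - y_desired) ** 2)
    violation = np.maximum(np.abs(W) - 1.0, 0.0)
    penalty = 0.5 * mu * np.sum(violation ** 2)
    return error + penalty

def numerical_gradient(f, W, eps=1e-5):
    # Finite-difference gradient with respect to W (for illustration only;
    # the paper derives gradients through the network dynamics).
    g = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        g[idx] = (f(Wp) - f(Wm)) / (2 * eps)
    return g

# Gradient descent on the unconstrained (penalized) objective.
rng = np.random.default_rng(0)
n = 4
W = 0.1 * rng.standard_normal((n, n))
x0 = np.zeros(n)
u = rng.standard_normal(n)
y_desired = np.tanh(rng.standard_normal(n))

lr = 0.05
for step in range(100):
    f = lambda Wc: penalized_objective(Wc, x0, u, y_desired)
    W -= lr * numerical_gradient(f, W)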
Keywords :
"Supervised learning","Steady-state","Intelligent networks","Land mobile radio cellular systems","Cellular neural networks","Neural networks","Constraint optimization","Stability","Backpropagation algorithms","Circuits"
Publisher :
ieee
Conference_Titel :
Proceedings of the Second International Workshop on Cellular Neural Networks and their Applications (CNNA-92), 1992
Print_ISBN :
0-7803-0875-1
Type :
conf
DOI :
10.1109/CNNA.1992.274352
Filename :
274352