• DocumentCode
    1482017
  • Title
    Adding learning to cellular genetic algorithms for training recurrent neural networks

  • Author
    Ku, Kim Wing C.; Mak, Man Wai; Siu, Wan Chi

  • Author_Institution
    Dept. of Electron. & Inf. Eng., Hong Kong Polytech., Kowloon, Hong Kong
  • Volume
    10
  • Issue
    2
  • fYear
    1999
  • fDate
    3/1/1999 12:00:00 AM
  • Firstpage
    239
  • Lastpage
    252
  • Abstract
    This paper proposes a hybrid optimization algorithm that combines local search (individual learning) with cellular genetic algorithms (GAs) for training recurrent neural networks (RNNs). Each RNN weight is encoded as a floating-point number, and a concatenation of these numbers forms a chromosome. Reproduction takes place locally in a square grid, with each grid point representing a chromosome. Lamarckian and Baldwinian (1896) mechanisms for combining the cellular GA with learning are compared. Different hill-climbing algorithms are incorporated into the cellular GA: the real-time recurrent learning (RTRL) algorithm, simplified versions of RTRL obtained by freezing some of the weights, and the delta rule. The delta rule, the simplest form of learning, is implemented by treating the RNN as a feedforward network. The hybrid algorithms are used to train RNNs to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism reduce the number of generations needed to find an optimum network; however, only a few reduce the actual time taken. Embedding the delta rule in the cellular GA is the fastest method. Learning should not be too extensive.
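    A minimal sketch of the Lamarckian vs. Baldwinian distinction described in the abstract, using a toy quadratic fitness function and a simple hill-climbing "learner" rather than the paper's RNN task and RTRL/delta-rule learners; all names, the target vector, and the learning parameters are illustrative assumptions.

    ```python
    # Toy stand-in for the paper's setup: a chromosome is a weight vector,
    # fitness is the negative squared distance from a fixed target, and
    # "learning" is a few hill-climbing steps toward that target.
    TARGET = [0.5, -0.3, 0.8]  # hypothetical optimum, not from the paper

    def fitness(w):
        return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

    def learn(w, steps=3, lr=0.2):
        """Local search (individual learning): improve the phenotype."""
        w = list(w)
        for _ in range(steps):
            w = [wi + lr * (ti - wi) for wi, ti in zip(w, TARGET)]
        return w

    def evaluate(w, mechanism):
        """Return (fitness used for selection, genotype kept afterwards).

        Lamarckian: the learned weights are written back into the chromosome.
        Baldwinian: learning only shapes the fitness used for selection;
        the genotype itself is left unchanged.
        """
        learned = learn(w)
        if mechanism == "lamarckian":
            return fitness(learned), learned
        return fitness(learned), list(w)
    ```

    In a full cellular GA these evaluations would run at every point of the square grid, with mating restricted to each point's neighbors; the sketch isolates only the mechanism the paper's conjecture is about: under Baldwinian learning the genotype never receives the phenotypic improvement, so genetic operators must rediscover it.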
  • Keywords
    floating point arithmetic; genetic algorithms; learning (artificial intelligence); recurrent neural nets; search problems; Baldwinian learning convergence; Lamarckian mechanism; RNN weight; RTRL; cellular GA; cellular genetic algorithms; delta rule; feedforward networks; floating point number; hill-climbing algorithms; hybrid optimization algorithm; individual learning; learning; local search; long-term dependency problem; real-time recurrent learning; recurrent neural network training; Biological cells; Cellular networks; Convergence; Feedback loop; Genetic algorithms; Joining processes; Learning systems; Neural networks; Recurrent neural networks; Stochastic processes
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Neural Networks
  • Publisher
    IEEE
  • ISSN
    1045-9227
  • Type
    jour

  • DOI
    10.1109/72.750546
  • Filename
    750546