• DocumentCode
    324546
  • Title
    Generalization and comparison of Alopex learning algorithm and random optimization method for neural networks
  • Author
    Peng, Pei-Yuan; Sirag, David
  • Author_Institution
    United Technol. Res. Center, East Hartford, CT, USA
  • Volume
    2
  • fYear
    1998
  • fDate
    4-9 May 1998
  • Firstpage
    1147
  • Abstract
    Following the minimum description length principle, a penalty term is added to the cost function to reduce the network's complexity. Neural networks trained with the Alopex learning algorithm and with the random optimization method are investigated. The Alopex algorithm is a stochastic learning algorithm for training neural networks of any topology, including those with feedback loops. The neurons are not restricted to any particular transfer function, and learning can use any error norm as the cost measure. The random optimization method of Matyas (1965) and a modified version of it are studied and compared with the Alopex algorithm on several adaptive control problems. Simulation results show the pros and cons of the two methods.
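    The Alopex update rule summarized in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: every weight takes a fixed-size step whose direction is kept or flipped with a probability driven by the correlation between the previous step and the resulting change in error. The function name, step size, and temperature below are assumptions for the demo.

    ```python
    import math
    import random


    def alopex_minimize(f, w0, step=0.05, T=0.01, iters=500, seed=0):
        """Minimize f(w) with a basic Alopex-style stochastic search (sketch).

        Each iteration every weight moves by +/-step.  A weight repeats its
        previous direction with probability 1 / (1 + exp(c / T)), where
        c = (previous move) * (previous change in error): moves that were
        correlated with an error increase tend to be reversed.
        """
        rng = random.Random(seed)
        # Random initial step directions.
        moves = [step if rng.random() < 0.5 else -step for _ in w0]
        w = [wi + mi for wi, mi in zip(w0, moves)]
        prev_err, err = f(w0), f(w)
        for _ in range(iters):
            d_err = err - prev_err
            new_moves = []
            for mi in moves:
                c = max(-50.0, min(50.0, mi * d_err / T))  # clip for exp()
                keep = 1.0 / (1.0 + math.exp(c))           # P(repeat direction)
                new_moves.append(mi if rng.random() < keep else -mi)
            w = [wi + mi for wi, mi in zip(w, new_moves)]
            prev_err, err = err, f(w)
            moves = new_moves
        return w, err


    # Demo on a toy quadratic cost; any error measure works the same way.
    quadratic = lambda w: sum(wi * wi for wi in w)
    w_final, err_final = alopex_minimize(quadratic, [2.0, -3.0])
    ```

    Because the rule needs only the scalar change in error, it applies unchanged to recurrent topologies and non-differentiable error norms, which is the flexibility the abstract highlights. Matyas-style random optimization differs in that it perturbs all weights with random noise and accepts a perturbation only when it lowers the error.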
  • Keywords
    adaptive control; backpropagation; generalisation (artificial intelligence); neural nets; optimisation; parallel algorithms; Alopex learning algorithm; neural networks; random optimization; stochastic learning algorithm; underwater vehicles; Cathode ray tubes; Cost function; Feedback loop; Network topology; Neurons; Optimization methods; Silver; Stochastic processes
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence
  • Conference_Location
    Anchorage, AK
  • ISSN
    1098-7576
  • Print_ISBN
    0-7803-4859-1
  • Type
    conf
  • DOI
    10.1109/IJCNN.1998.685934
  • Filename
    685934