• DocumentCode
    3428487
  • Title
    A proposal of neural network architecture for non-linear function approximation
  • Author
    Mizukami, Yoshiki; Wakasa, Yuji; Tanaka, Kanya
  • Author_Institution
    Fac. of Eng., Yamaguchi Univ., Ube, Japan
  • Volume
    4
  • fYear
    2004
  • fDate
    23-26 Aug. 2004
  • Firstpage
    605
  • Abstract
    In this paper, a neural network architecture for non-linear function approximation is proposed. We point out three problems in non-linear function approximation with traditional neural networks: difficulty in analyzing the internal representation, lack of reproducibility in function approximation due to the random scheme for weight initialization, and insufficient generalization ability when learning without enough samples. Based on these considerations, we suggest three main improvements. The first is the design of a sigmoidal function with a localized derivative. The second is a deterministic scheme for weight initialization. The third is an updating rule for the weight parameters. Simulation results show beneficial characteristics of the proposed method: low approximation error at the beginning of the iterative calculation, smooth convergence of the error, and easier analysis of the internal representation.
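    The first two improvements named in the abstract can be illustrated with a minimal sketch. This is not the authors' actual design, which the record does not specify: it merely shows a sigmoidal function whose derivative is a localized bump, and a hypothetical deterministic initialization that spreads the hidden units' activation centers evenly over the input interval, so every run starts from the same reproducible configuration instead of random weights. All function names and parameters below are illustrative assumptions.

    ```python
    import numpy as np

    def sigmoid(x):
        # Logistic sigmoid; its derivative is a localized "bump" centered at x = 0.
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_derivative(x):
        # Peaks at x = 0 (value 0.25) and decays toward 0 on both sides,
        # i.e. the derivative is localized.
        s = sigmoid(x)
        return s * (1.0 - s)

    def deterministic_init(n_hidden, x_min=0.0, x_max=1.0, slope=10.0):
        # Hypothetical deterministic scheme: place the n_hidden derivative
        # bumps at evenly spaced centers over [x_min, x_max] rather than
        # drawing random weights, making initialization reproducible.
        centers = np.linspace(x_min, x_max, n_hidden)
        w = np.full(n_hidden, slope)   # input-to-hidden weights (fixed slope)
        b = -slope * centers           # biases position each bump at its center
        return w, b

    w, b = deterministic_init(5)
    ```

    Because the scheme is deterministic, calling `deterministic_init` twice with the same arguments yields identical weights, which is the reproducibility property the abstract attributes to replacing random initialization.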
  • Keywords
    approximation theory; neural net architecture; nonlinear functions; localized derivative; neural network architecture; nonlinear function approximation; sigmoidal function; updating rule; weight initialization; weight parameters; Analytical models; Approximation error; Convergence; Function approximation; Iterative methods; Linear approximation; Linearity; Neural networks; Proposals; Reproducibility of results;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004)
  • ISSN
    1051-4651
  • Print_ISBN
    0-7695-2128-2
  • Type
    conf
  • DOI
    10.1109/ICPR.2004.1333845
  • Filename
    1333845