  • DocumentCode
    3208395
  • Title
    Scale equalization higher-order neural networks
  • Author
    Wang, Jung-Hua; Wu, Keng-Hsuan; Chang, Fu-Chiang

  • Author_Institution
    Dept. of Electr. Eng., Nat. Taiwan Ocean Univ., Keelung, Taiwan
  • fYear
    2004
  • fDate
    8-10 Nov. 2004
  • Firstpage
    612
  • Lastpage
    617
  • Abstract
    This paper presents a novel approach, called scale equalization (SE), to implementing higher-order neural networks. SE is particularly useful in eliminating the scale divergence problem commonly encountered in higher-order networks: in general, the larger the scale divergence, the more training steps are required to complete training. The effectiveness of SE is illustrated with an exemplar higher-order network built on the Sigma-Pi network (SESPN) and applied to function approximation. SESPN requires the same computation time per epoch as SPN, but far fewer epochs to complete training. Empirical results verify that SESPN outperforms other higher-order neural networks in terms of computational efficiency.
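    The record supplies only the abstract, so the paper's exact SE formulation is not available here. As an illustration of the underlying idea — equalizing input scales so that the product (Pi) terms of a Sigma-Pi expansion do not diverge in magnitude — a minimal sketch might look like the following; the function names, the rescaling to [-1, 1], and the toy data are all assumptions, not the authors' method:

    ```python
    import numpy as np
    from itertools import combinations

    def equalize_scale(X):
        """Rescale each input feature to [-1, 1] so that products of
        features (the higher-order terms) stay in a comparable numeric
        range. Illustrative stand-in for the paper's SE step."""
        lo, hi = X.min(axis=0), X.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)
        return 2.0 * (X - lo) / span - 1.0

    def sigma_pi_features(X, order=2):
        """Expand inputs into products of distinct features up to `order`
        (the Pi part); a linear readout supplies the Sigma part."""
        n = X.shape[1]
        cols = [np.ones(len(X))]  # bias term
        for k in range(1, order + 1):
            for idx in combinations(range(n), k):
                cols.append(np.prod(X[:, idx], axis=1))
        return np.stack(cols, axis=1)

    # Toy function-approximation task with badly mismatched input scales.
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.uniform(0, 1, 200),       # small-scale input
                         rng.uniform(0, 1000, 200)])   # large-scale input
    y = X[:, 0] * X[:, 1] / 1000.0                     # target to approximate

    Phi = sigma_pi_features(equalize_scale(X), order=2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    pred = Phi @ w
    print(float(np.mean((pred - y) ** 2)))             # near-zero fit error
    ```

    Without the equalization step, the second-order term spans roughly 0 to 1000 while the first input spans 0 to 1, which is the kind of scale divergence the abstract says slows gradient-based training.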
  • Keywords
    function approximation; higher order statistics; learning (artificial intelligence); neural nets; Sigma-Pi network; higher-order neural networks; scale divergence problem; scale equalization; training process; Computer networks; Equations; Error correction; Function approximation; Image processing; Neural networks; Oceans; Polynomials
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 2004 IEEE International Conference on Information Reuse and Integration (IRI 2004)
  • Print_ISBN
    0-7803-8819-4
  • Type
    conf
  • DOI
    10.1109/IRI.2004.1431529
  • Filename
    1431529