• DocumentCode
    409958
  • Title
    Simplification of a specific two-hidden-layer feedforward networks
  • Author
    Chen, Lei ; Huang, Guang-Bin ; Siew, Chee-Kheong

  • Author_Institution
    Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore
  • Volume
    2
  • fYear
    2003
  • fDate
    15-18 Dec. 2003
  • Firstpage
    1000
  • Abstract
    A specific two-hidden-layer feedforward network (TLFN) proposed by G.-B. Huang (2003) is considered in this paper. A method is introduced to simplify the structure of the TLFN by means of a new type of quantizer that unites the two neurons A(p) and B(p) of the original construction into a single neuron. The new quantizers use a special type of function as the network's activation function, so that the resulting TLFN can learn N distinct samples (x_i, t_i) with negligibly small error using 2√((m+1)N) hidden neurons, where m is the number of output neurons, whereas Huang's TLFNs require 2√((m+2)N) hidden neurons. Moreover, it is not necessary to estimate the quantizer value U defined in Huang's TLFNs, since it is fixed in the new model. This markedly reduces the complexity and computational cost of the network.
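    As a back-of-envelope check of the neuron counts quoted above, the following sketch compares the two formulas; the function names and the ceiling rounding are illustrative assumptions, not from the paper:

```python
import math

# Hidden-neuron counts from the abstract: the simplified TLFN needs
# 2*sqrt((m+1)*N) hidden neurons versus 2*sqrt((m+2)*N) for Huang's
# original construction, for N distinct samples and m output neurons.

def hidden_neurons_simplified(N: int, m: int) -> int:
    """Hidden neurons for the simplified TLFN (rounded up to an integer)."""
    return math.ceil(2 * math.sqrt((m + 1) * N))

def hidden_neurons_huang(N: int, m: int) -> int:
    """Hidden neurons for Huang's original TLFN (rounded up to an integer)."""
    return math.ceil(2 * math.sqrt((m + 2) * N))

if __name__ == "__main__":
    N, m = 1000, 3  # e.g. 1000 training samples, 3 output neurons
    print(hidden_neurons_simplified(N, m))  # 127
    print(hidden_neurons_huang(N, m))       # 142
```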
  • Keywords
    feedforward neural nets; quantisation (signal); TLFN; neural network activation function; neuron A(p); neuron B(p); quantizer; two-hidden-layer feedforward network; Computer networks; Electronic mail; Feedforward neural networks; Multi-layer neural network; Neural networks; Neurons; Upper bound
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 2003 Joint Conference of the Fourth International Conference on Information, Communications and Signal Processing and the Fourth Pacific Rim Conference on Multimedia
  • Print_ISBN
    0-7803-8185-8
  • Type
    conf
  • DOI
    10.1109/ICICS.2003.1292609
  • Filename
    1292609