• DocumentCode
    2605745
  • Title
    Scaling properties in neural network learning
  • Author
    Schiminsky, M.C.; Onaral, B.
  • Author_Institution
    Dept. of Biomed. Eng. & Sci., Drexel Univ., Philadelphia, PA, USA
  • fYear
    1991
  • fDate
    4-5 Apr 1991
  • Firstpage
    49
  • Lastpage
    50
  • Abstract
    Working definitions of learning and learners are examined from the scaling point of view. A back-error propagation neural network was trained to output sin(x) given an input x (-π ⩽ x ⩽ π). The simulation parameters were the following: input-output (pattern) pairs = 200; input units = 1; hidden units = 20; output units = 1; learning rate = 0.1; momentum constant = 0.2; weights and thresholds initialized to random values in [-1, 1]; number of trials with randomly selected input samples = 8585. The performance curve consists of the cumulative number of errors less than 0.1 in absolute value vs. trial number. The scaling exponent ranges from 0.34 in the lower decade (10-100) to 0.66 in the upper decade (1000-10000), reflecting heterogeneities in scaling along the training process.
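
    The experiment described in the abstract can be sketched as follows. This is a hedged reconstruction, not the authors' code: the hyperparameters (200 pattern pairs, a 1-20-1 network, learning rate 0.1, momentum 0.2, weights and thresholds in [-1, 1], 8585 trials, error criterion |error| < 0.1) come from the abstract, while the tanh hidden activation, linear output unit, and per-sample online update schedule are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden = 20
    lr, mom = 0.1, 0.2  # learning rate and momentum constant (from abstract)

    # Weights and thresholds initialized uniformly in [-1, 1] (from abstract).
    W1 = rng.uniform(-1, 1, (n_hidden, 1)); b1 = rng.uniform(-1, 1, n_hidden)
    W2 = rng.uniform(-1, 1, (1, n_hidden)); b2 = rng.uniform(-1, 1, 1)
    dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
    dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)

    def forward(x):
        """1-20-1 forward pass; tanh hidden layer and linear output assumed."""
        h = np.tanh(W1[:, 0] * x + b1)
        y = float(W2[0] @ h + b2[0])
        return h, y

    # 200 input-output pairs on [-pi, pi]; inputs drawn at random each trial.
    xs = np.linspace(-np.pi, np.pi, 200)
    cumulative_hits = 0  # cumulative count of errors below 0.1 in absolute value
    for trial in range(8585):  # number of trials (from abstract)
        x = rng.choice(xs)
        target = np.sin(x)
        h, y = forward(x)
        err = target - y
        if abs(err) < 0.1:  # performance criterion (from abstract)
            cumulative_hits += 1
        # Back-propagation with momentum (standard rule; details assumed).
        delta_out = err
        delta_hid = (W2[0] * delta_out) * (1.0 - h**2)
        dW2 = lr * delta_out * h[None, :] + mom * dW2
        db2 = lr * delta_out + mom * db2
        dW1 = lr * (delta_hid * x)[:, None] + mom * dW1
        db1 = lr * delta_hid + mom * db1
        W2 += dW2; b2 += db2; W1 += dW1; b1 += db1

    print(cumulative_hits)
    ```

    Plotting cumulative_hits against trial number on log-log axes would give the performance curve whose local slope is the scaling exponent discussed in the abstract; the exact values (0.34 and 0.66) depend on initialization and sampling and are not reproduced by this sketch.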
  • Keywords
    learning systems; neural nets; absolute value; back-error propagation neural network; hidden units; input units; neural network learning; output units; performance curve; scaling heterogeneities; scaling properties; trial number; Biomedical engineering; Biomedical measurements; Design engineering; Geometry; Intelligent networks; Microscopy; Neural networks; Power engineering and energy; Psychology; Systems engineering and theory;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Proceedings of the 1991 IEEE Seventeenth Annual Northeast Bioengineering Conference
  • Conference_Location
    Hartford, CT
  • Print_ISBN
    0-7803-0030-0
  • Type
    conf
  • DOI
    10.1109/NEBC.1991.154575
  • Filename
    154575