• DocumentCode
    1906546
  • Title
    Learning in the hypercube
  • Author
    Pedroni, Volnei A.; Yariv, Amnon

  • Author_Institution
    Dept. of Eng. & Appl. Phys., California Inst. of Technol., Pasadena, CA, USA
  • fYear
    1993
  • fDate
    1993
  • Firstpage
    1168
  • Abstract
    A discussion of learning from a geometric point of view is presented. Its main purpose is to obtain ways of pre-estimating the weights and thresholds of a neural network and of analyzing the corresponding effects on the speed of the learning procedure. It is conceivable that even a rough preliminary estimate of the positions of the hyperplanes will improve the learning process, and that, in spaces with well-defined geometric properties, the information inherently contained in the training data set can help in making such an estimate. This subject is discussed, and basic alternatives for doing so are shown. It is concluded that, depending on the configuration of the space, geometric tools can indeed lead to improvements.
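  • Illustration
    The abstract does not specify the authors' pre-estimation procedure. As a hypothetical sketch of the general idea, one simple geometric heuristic places a neuron's hyperplane as the perpendicular bisector of the segment joining the two class centroids of the training set, yielding initial weights and a threshold before any learning step; the toy data below (majority function on the 3-cube's corners) is an assumption for illustration only.

    ```python
    import numpy as np

    def geometric_init(X, y):
        """Pre-estimate a separating hyperplane w.x = t from class geometry.

        Places the hyperplane as the perpendicular bisector of the segment
        joining the two class centroids -- one way to exploit the geometric
        information in the training data set before learning starts.
        """
        c0 = X[y == 0].mean(axis=0)   # centroid of class 0
        c1 = X[y == 1].mean(axis=0)   # centroid of class 1
        w = c1 - c0                   # normal points from class 0 toward class 1
        t = w @ (c0 + c1) / 2.0       # threshold: midpoint projected onto w
        return w, t

    # Corners of the 3-cube labelled by the majority of their bits
    X = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0],
                  [0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    w, t = geometric_init(X, y)
    pred = (X @ w > t).astype(int)    # classify with the pre-estimated hyperplane
    # pred matches y exactly: the majority function is linearly separable,
    # so this geometric initialization alone already solves the problem
    ```

    For harder configurations the pre-estimated hyperplane would only be a starting point, with the usual learning rule refining w and t from there.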
  • Keywords
    hypercube networks; learning (artificial intelligence); neural nets; geometric properties; geometric tools; hypercube; hyperplanes; learning procedure speed; neural network; pre-estimating; thresholds; training data set; weights; Algorithm design and analysis; Computer networks; Distributed computing; Feedforward neural networks; Hypercubes; Neural networks; Neurons; Partitioning algorithms; Physics; Vectors
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    IEEE International Conference on Neural Networks, 1993
  • Conference_Location
    San Francisco, CA
  • Print_ISBN
    0-7803-0999-5
  • Type
    conf
  • DOI
    10.1109/ICNN.1993.298722
  • Filename
    298722