  • DocumentCode
    3324650
  • Title
    Learning in the cerebellum with sparse conjunctions and linear separator algorithms
  • Author
    Harris, Harlan D.; Reichler, Jesse A.
  • Author_Institution
    Dept. of Comput. Sci., Illinois Univ., Urbana, IL, USA
  • Volume
    2
  • fYear
    2001
  • fDate
    2001
  • Firstpage
    1071
  • Abstract
    We investigate potential learning rules in the cerebellum. We review evidence that input to the cerebellum is sparsely expanded by granule cells into a very wide basis vector, and that Purkinje cells learn to compute a linear separation using that basis. We review the learning rules employed by existing cerebellar models and show that results from computational learning theory suggest that the standard delta rule would not be efficient. We suggest that alternative, attribute-efficient learning rules, such as Winnow or incremental delta-bar-delta, are more appropriate for cerebellar modeling, and we support this position with results from a computational model. (A minimal illustrative sketch of the delta and Winnow update rules follows this record.)
  • Keywords
    brain models; learning (artificial intelligence); Purkinje cells; Winnow learning rule; attribute-efficient learning rules; cerebellum; computational learning theory; incremental delta-bar-delta learning rule; learning rules; linear separation; linear separator algorithms; sparse conjunctions; Brain modeling; Circuits; Computational modeling; Computer science; Motor drives; Nerve fibers; Particle separators; Personal communication networks; Power system modeling; Vectors;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the International Joint Conference on Neural Networks (IJCNN '01), 2001
  • Conference_Location
    Washington, DC
  • ISSN
    1098-7576
  • Print_ISBN
    0-7803-7044-9
  • Type
    conf
  • DOI
    10.1109/IJCNN.2001.939509
  • Filename
    939509
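
The abstract contrasts the additive delta rule with attribute-efficient, multiplicative rules such as Winnow for learning a linear separator over a wide, sparsely active input. Below is a minimal Python sketch of that contrast; the function names, parameter values, and toy target concept are illustrative assumptions, not the authors' model or data.

```python
# Minimal sketch: additive delta/perceptron-style update vs. multiplicative
# Winnow update on a wide, sparsely active binary input. Illustrative only;
# all names and constants are assumptions, not taken from the paper.
import numpy as np

def delta_rule_update(w, x, y, lr=0.1):
    """Additive delta-style update: adjust weights by lr * error * x."""
    y_hat = 1.0 if w @ x >= 0.0 else 0.0
    return w + lr * (y - y_hat) * x

def winnow_update(w, x, y, alpha=2.0, theta=None):
    """Multiplicative Winnow update: promote/demote weights of active inputs."""
    if theta is None:
        theta = len(w) / 2.0                  # a common Winnow threshold choice
    y_hat = 1.0 if w @ x >= theta else 0.0
    if y_hat != y:
        # Promote on false negatives, demote on false positives,
        # touching only the weights of active (x_i = 1) inputs.
        factor = alpha if y == 1.0 else 1.0 / alpha
        w = np.where(x > 0, w * factor, w)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000                                  # wide input layer (granule-like expansion)
    relevant = [3, 17, 42]                    # target depends on a few relevant inputs
    w_delta = np.zeros(n)
    w_winnow = np.ones(n)                     # Winnow starts from positive weights
    for _ in range(2000):
        x = (rng.random(n) < 0.02).astype(float)   # ~2% of inputs active per example
        y = float(x[relevant].sum() >= 2)          # sparse toy target concept
        w_delta = delta_rule_update(w_delta, x, y)
        w_winnow = winnow_update(w_winnow, x, y)
```

In sketches of this kind, Winnow's mistake bound grows only logarithmically with the total number of inputs (and polynomially with the number of relevant ones), which is why attribute-efficient rules are attractive when the granule-cell expansion makes the input dimension very large.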