• DocumentCode
    291983
  • Title
    Learning linear threshold functions
  • Author
    Bylander, Tom
  • Author_Institution
    Division of Mathematics, Computer Science and Statistics, University of Texas at San Antonio, San Antonio, TX, USA
  • Volume
    2
  • fYear
    1994
  • fDate
    2-5 Oct 1994
  • Firstpage
    1166
  • Abstract
    This paper surveys recent theoretical results on algorithms for learning linear threshold functions (also known as perceptrons or linear discriminants). When the examples are linearly separable, Littlestone has introduced algorithms that are much faster than the perceptron algorithm when many attributes are irrelevant. When the examples are not linearly separable, Gallant's Ratchet algorithm converges to the optimal linear threshold function, albeit slowly, and Bylander's perceptron-noise algorithm converges quickly to the optimum in the special case of classification noise. Empirical comparisons on several nonlinearly separable real-world datasets are presented.
  • Keywords
    computational complexity; learning (artificial intelligence); perceptrons; Bylander perceptron-noise algorithm; Gallant's Ratchet algorithm; Littlestone algorithm; convergence; linear discriminants; linear threshold functions; linear threshold learning; machine learning; Algorithm design and analysis; Computer science; Convergence; Linear programming; Machine learning; Machine learning algorithms; Mathematics; Neural networks; Nonhomogeneous media; Statistics
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    1994 IEEE International Conference on Systems, Man, and Cybernetics: Humans, Information and Technology
  • Conference_Location
    San Antonio, TX
  • Print_ISBN
    0-7803-2129-4
  • Type
    conf
  • DOI
    10.1109/ICSMC.1994.400002
  • Filename
    400002