• DocumentCode
    916334
  • Title
    Convergence of the nearest neighbor rule
  • Author
    Wagner, Terry J.
  • Volume
    17
  • Issue
    5
  • fYear
    1971
  • fDate
    September 1, 1971
  • Firstpage
    566
  • Lastpage
    571
  • Abstract
    If the nearest neighbor rule (NNR) is used to classify unknown samples, then Cover and Hart [1] have shown that the average probability of error using n known samples (denoted by R_n) converges to a number R as n tends to infinity, where R^{\ast} \leq R \leq 2R^{\ast}(1 - R^{\ast}) and R^{\ast} is the Bayes probability of error. Here it is shown that when the samples lie in n-dimensional Euclidean space, the probability of error for the NNR conditioned on the n known samples (denoted by L_n, so that E L_n = R_n) converges to R with probability 1 under mild continuity and moment assumptions on the class densities. Two estimates of R from the n known samples are shown to be consistent. Rates of convergence of L_n to R are also given.
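    Note: the abstract describes the single-nearest-neighbor rule and consistent estimates of its asymptotic error R. The sketch below is not from the paper; it is a minimal illustration of the rule with a Euclidean metric, using a leave-one-out error frequency as one plausible finite-sample estimate of R. The function names, Gaussian test data, and choice of estimator are assumptions for illustration only.

    import numpy as np

    def nn_classify(train_x, train_y, query):
        # Label the query point with the label of its nearest training
        # sample under the Euclidean metric (single-nearest-neighbor rule).
        dists = np.linalg.norm(train_x - query, axis=1)
        return train_y[np.argmin(dists)]

    def deleted_error_estimate(train_x, train_y):
        # Leave-one-out ("deleted") error frequency: classify each sample
        # by its nearest neighbor among the remaining samples and return
        # the fraction misclassified. This is an illustrative estimator,
        # not necessarily either of the two estimators studied in the paper.
        n = len(train_y)
        errors = 0
        for i in range(n):
            dists = np.linalg.norm(train_x - train_x[i], axis=1)
            dists[i] = np.inf  # exclude the sample itself
            if train_y[np.argmin(dists)] != train_y[i]:
                errors += 1
        return errors / n

    # Illustrative use with two synthetic Gaussian classes (parameters arbitrary).
    rng = np.random.default_rng(0)
    x0 = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
    x1 = rng.normal(loc=2.0, scale=1.0, size=(200, 2))
    train_x = np.vstack([x0, x1])
    train_y = np.array([0] * 200 + [1] * 200)

    print(nn_classify(train_x, train_y, np.array([1.0, 1.0])))
    print(deleted_error_estimate(train_x, train_y))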
  • Keywords
    Pattern classification; Convergence; Frequency; Nearest neighbor searches; Random variables;
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Information Theory
  • Publisher
    IEEE
  • ISSN
    0018-9448
  • Type
    jour
  • DOI
    10.1109/TIT.1971.1054698
  • Filename
    1054698