DocumentCode
916334
Title
Convergence of the nearest neighbor rule
Author
Wagner, Terry J.
Volume
17
Issue
5
fYear
1971
fDate
9/1/1971
Firstpage
566
Lastpage
571
Abstract
If the nearest neighbor rule (NNR) is used to classify unknown samples, then Cover and Hart [1] have shown that the average probability of error using n known samples (denoted by R_n) converges to a number R as n tends to infinity, where R* <= R <= 2R*(1 - R*), and R* is the Bayes probability of error. Here it is shown that when the samples lie in d-dimensional Euclidean space, the probability of error for the NNR conditioned on the n known samples (denoted by L_n, so that R_n = E[L_n]) converges to R with probability 1 under mild continuity and moment assumptions on the class densities. Two estimates of R from the n known samples are shown to be consistent. Rates of convergence of R_n to R are also given.
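The quantities in the abstract can be illustrated with a minimal sketch (not from the paper; the problem setup, names, and parameters below are illustrative assumptions): a 1-NN classifier over n labeled samples, with a Monte Carlo estimate of the error rate conditioned on those samples — the quantity L_n shown to converge with probability 1.

```python
# Minimal sketch of the nearest neighbor rule (NNR), using only the
# standard library. The two-class Gaussian setup is a hypothetical
# example, not the paper's; it just makes L_n easy to estimate.
import math
import random

def nn_classify(samples, labels, x):
    """Label x with the label of its nearest known sample (Euclidean)."""
    best = min(range(len(samples)), key=lambda i: math.dist(samples[i], x))
    return labels[best]

rng = random.Random(0)

def draw(c):
    """Draw a 2-D point from class c: class 0 near (0,0), class 1 near (3,3)."""
    cx = 0.0 if c == 0 else 3.0
    return (rng.gauss(cx, 1.0), rng.gauss(cx, 1.0))

# Fix n known samples, drawn with equal class priors.
n = 200
labels = [rng.randint(0, 1) for _ in range(n)]
samples = [draw(c) for c in labels]

# Monte Carlo estimate of the conditional error rate L_n given these
# n samples (averaging L_n over sample sets would estimate R_n).
trials = 1000
errors = 0
for _ in range(trials):
    c = rng.randint(0, 1)
    if nn_classify(samples, labels, draw(c)) != c:
        errors += 1
rate = errors / trials
print(rate)
```

For well-separated classes like these, the estimated rate is small, consistent with the bound R <= 2R*(1 - R*) relative to the Bayes error R*.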
Keywords
Pattern classification; Convergence; Nearest neighbor searches; Random variables;
fLanguage
English
Journal_Title
IEEE Transactions on Information Theory
Publisher
IEEE
ISSN
0018-9448
Type
jour
DOI
10.1109/TIT.1971.1054698
Filename
1054698
Link To Document