Title :
Cone algorithm: an extension of the perceptron algorithm
Author_Institution :
Dept. of Comput. Sci., Regina Univ., Sask., Canada
Date :
10/1/1994 12:00:00 AM
Abstract :
The perceptron convergence theorem played an important role in the early development of machine learning. Mathematically, the perceptron learning algorithm is an iterative procedure for finding a separating hyperplane for a finite set of linearly separable vectors, or equivalently, for finding a separating hyperplane for a finite set of linearly contained vectors. In this paper, the author shows that the perceptron algorithm can be extended to a more general algorithm, called the cone algorithm, for finding a covering cone for a finite set of linearly contained vectors. A proof of the convergence of the cone algorithm is given, and the relationship between the cone algorithm and other related algorithms is discussed. The equivalence of the problem of finding a covering cone for a set of linearly contained vectors and the problem of finding a solution cone for a system of homogeneous linear inequalities is established.
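The abstract refers to the classic perceptron learning algorithm, which the cone algorithm generalizes. As a point of reference, here is a minimal sketch of that iterative procedure in the formulation the abstract uses: given a finite set of linearly contained vectors, find a weight vector w with w·x > 0 for every x, i.e. a hyperplane with all vectors strictly on one side. The data and function name are illustrative, not drawn from the paper.

```python
def perceptron(vectors, max_iters=1000):
    """Seek w with w.x > 0 for every x in `vectors`.

    If the vectors are linearly contained, the perceptron convergence
    theorem guarantees this loop terminates in finitely many updates.
    """
    dim = len(vectors[0])
    w = [0.0] * dim
    for _ in range(max_iters):
        updated = False
        for x in vectors:
            # If x is not strictly on the positive side, apply the
            # perceptron correction step: add x to the weight vector.
            if sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + xi for wi, xi in zip(w, x)]
                updated = True
        if not updated:
            return w  # every vector satisfies w.x > 0
    return None  # did not converge within max_iters

# Example: three vectors, all lying in the open half-space x1 > 0,
# so they are linearly contained and the algorithm must converge.
w = perceptron([(1.0, 0.5), (2.0, -0.3), (1.5, 1.0)])
```

The cone algorithm described in the paper extends this update scheme from finding a single separating hyperplane to constructing a covering cone for the vector set.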
Keywords :
convergence; iterative methods; learning (artificial intelligence); neural nets; set theory; cone algorithm; convergence; covering cone; homogeneous linear inequalities; iterative procedure; linearly contained vectors; linearly separable vectors; machine learning; perceptron algorithm; separating hyperplane; Brain modeling; Computational geometry; Convergence; Humans; Iterative algorithms; Machine learning; Machine learning algorithms; Neurons; Stability analysis; Vectors;
Journal_Title :
Systems, Man and Cybernetics, IEEE Transactions on