Title : 
Sparse training procedure for kernel neuron
         
        
            Author : 
Xu, Jianhua; Zhang, Xuegong; Li, Yanda
         
        
            Author_Institution : 
Sch. of Math. & Comput. Sci., Nanjing Normal Univ., China


            Abstract : 
The kernel neuron is a generalization of the classical McCulloch-Pitts neuron obtained by using Mercer kernels. To control the generalization ability and prune the structure of the kernel neuron, in this paper we construct a regularized risk functional that combines an empirical risk functional with a Laplace regularization term. Based on the gradient descent method, a novel training algorithm is designed, referred to as the sparse training procedure for the kernel neuron. This procedure realizes the three main ideas behind kernel machines (e.g. support vector machines and kernel Fisher discriminant analysis): kernels, regularization (or large margin), and sparseness, and it can deal effectively with nonlinear classification and regression problems.
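As a rough, illustrative sketch of the ideas in the abstract (not the authors' exact algorithm), the Python snippet below trains a kernel neuron of the form f(x) = sum_i a_i k(x_i, x) + b by (sub)gradient descent on a squared-loss empirical risk plus a Laplace (L1) penalty on the coefficients a. The Gaussian RBF kernel, squared loss, learning rate, and pruning threshold are all assumptions made for this example.

# Illustrative sketch, not the paper's exact procedure: a kernel neuron
# f(x) = sum_i a_i * k(x_i, x) + b trained by (sub)gradient descent on
# squared loss plus a Laplace (L1) penalty; the kernel, loss, and step
# sizes are assumptions made for this example.
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF (a Mercer kernel): K[i, j] = exp(-gamma * ||X[i] - Z[j]||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def train_sparse_kernel_neuron(X, y, lam=0.01, lr=0.1, epochs=500, gamma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)               # kernel matrix over the training set
    a = np.zeros(n)                           # one expansion coefficient per sample
    b = 0.0
    for _ in range(epochs):
        f = K @ a + b                         # kernel neuron outputs on the training set
        err = f - y                           # residuals of the squared loss
        grad_a = K @ err / n + lam * np.sign(a)   # empirical-risk gradient + L1 subgradient
        grad_b = err.mean()
        a -= lr * grad_a
        b -= lr * grad_b
    a[np.abs(a) < 1e-3] = 0.0                 # prune near-zero coefficients -> sparse structure
    return a, b

# Toy regression usage: near-zero coefficients are pruned, leaving a sparse expansion.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
a, b = train_sparse_kernel_neuron(X, y)
print("nonzero coefficients:", int(np.count_nonzero(a)), "of", len(a))

Under the L1 (Laplace) penalty the coefficients shrink toward zero, so the trained neuron depends on only a subset of the training points; that is the sparseness the procedure aims to realize alongside the kernel and regularization ideas.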
         
        
            Keywords : 
generalisation (artificial intelligence); learning (artificial intelligence); neural nets; regression analysis; Laplace regularization; generalization ability; kernel neuron; nonlinear classification; regression problems; training algorithm; Algorithm design and analysis; Automatic control; Automation; Intelligent systems; Kernel; Laboratories; Neural networks; Neurons; Support vector machine classification; Support vector machines


            Conference_Title : 
Proceedings of the 2003 International Conference on Neural Networks and Signal Processing
         
        
            Conference_Location : 
Nanjing
         
        
            Print_ISBN : 
0-7803-7702-8
         
        
        
            DOI : 
10.1109/ICNNSP.2003.1279210