Title : 
Function approximation using generalized adalines
         
        
            Author : 
Jiann-Ming Wu; Zheng-Han Lin; P.-H. Hsu
         
        
            Author_Institution : 
Department of Applied Mathematics, National Dong Hwa University, Hualien
         
        
        
        
        
            Date : 
1 May 2006
         
        
        
        
            Abstract : 
This paper proposes a neural organization of generalized adalines (gadalines) for data-driven function approximation. By generalizing the threshold function of adalines, we obtain the K-state transfer function of gadalines, which responds with a unitary vector of K binary values to the projection of a predictor on a receptive field. A generative component that uses the K-state activation of a gadaline to trigger K posterior independent normal variables is employed to emulate stochastic predictor-oriented target generation. Fitting a generative component to a set of paired data translates mathematically to a mixed integer and linear programming problem. Since this framework contains both continuous and discrete variables, it is solved by a hybrid of the mean field annealing and gradient descent methods. Following the leave-one-out learning strategy, the resulting learning method is extended to optimize multiple generative components. The learning result yields the parameters of a deterministic gadaline network for function approximation. Numerical simulations further test the proposed learning method with paired data originating from a variety of target functions. The results show that the proposed learning method outperforms the MLP and RBF learning methods for data-driven function approximation.
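As a concrete reading of the abstract, the minimal Python sketch below shows one plausible form of the K-state transfer function: the scalar projection of a predictor on a receptive field is encoded as a unitary vector of K binary values, and a deterministic gadaline emits the response mean of the selected state. The interval-threshold encoding, the toy parameters, and the names kstate_transfer and gadaline_output are illustrative assumptions, not the authors' implementation.

import numpy as np

def kstate_transfer(u, thresholds):
    # Gadaline activation (assumed form): map the scalar projection u to a
    # unitary vector of K binary values, where the active state is the
    # interval containing u. The K-1 sorted cut points in `thresholds`
    # generalize the single threshold of the classical adaline.
    K = len(thresholds) + 1
    e = np.zeros(K)
    e[int(np.searchsorted(thresholds, u))] = 1.0  # exactly one state fires
    return e

def gadaline_output(x, w, thresholds, a):
    # Deterministic gadaline response: project the predictor x on the
    # receptive field w, encode the projection into a K-state activation,
    # and emit the mean a[k] associated with the selected state.
    e = kstate_transfer(float(np.dot(w, x)), thresholds)
    return float(np.dot(a, e))

# Toy usage: one gadaline with K = 4 states on a one-dimensional predictor.
w = np.array([1.0])                    # receptive field
cuts = np.array([-0.5, 0.0, 0.5])      # K-1 = 3 cut points (assumed values)
a = np.array([-1.0, -0.2, 0.3, 1.1])   # per-state response means (assumed)
print(gadaline_output(np.array([0.3]), w, cuts, a))  # state 2 is active -> 0.3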
         
        
            Keywords : 
function approximation; gradient methods; integer programming; linear programming; transfer functions; K binary values; K posterior independent normal variables; K-state transfer function; adaptive linear elements; data-driven function approximation; generalized adalines; gradient descent methods; leave-one-out learning strategy; mean field annealing; mixed integer-linear programming; multiple generative components optimization; neural organization; stochastic predictor-oriented target generation; Annealing; Computer science; Cost function; Design optimization; Encoding; Learning systems; Neural networks; Stochastic processes; Adalines; generative models; perceptron; postnonlinear projection; Potts encoding; supervised learning; Algorithms; Artificial Intelligence; Information Storage and Retrieval; Neural Networks (Computer); Pattern Recognition, Automated; Systems Theory
         
        
        
            Journal_Title : 
IEEE Transactions on Neural Networks
         
        
        
        
        
            DOI : 
10.1109/TNN.2006.873284