Title :
A new cascaded projection pursuit network for nonlinear regression
Author :
You, Shih-Shien ; Hwang, Jenq-Neng ; Jou, I-Chang ; Lay, Shyh-Rong
Author_Institution :
Telecommun. Labs., Minist. of Transp. & Commun., Chung-Li, Taiwan
Abstract :
Cascaded correlation is a popular supervised learning architecture that dynamically grows layers of hidden neurons with fixed nonlinear activations (e.g., sigmoids), so that the network topology (size, depth) can be determined efficiently. Similar to a cascaded correlation learning network (CCLN), a projection pursuit learning network (PPLN) also dynamically grows its hidden neurons. Unlike a CCLN, where cascaded connections from the existing hidden units to the new candidate hidden unit are required to establish high-order nonlinearity in approximating the residual error, a PPLN approximates the high-order nonlinearity by using trainable nonlinear nodal activation functions (e.g., Hermite polynomials). To relax the need for predefined smoothness of the nonlinearity (e.g., the order of the Hermite polynomials) in a PPLN, the authors propose a new learning network, called a cascaded projection pursuit network (CPPN), which combines the advantages of both a PPLN and a CCLN. The training strategy of a CPPN is similar to that of a PPLN, and the added cascaded connections allow the CPPN to better capture high-order features without requiring a proper selection of polynomial orders. Simulation results show that a CPPN is well suited for nonlinear regression and outperforms a PPLN when the polynomial order used in the activation functions is lower than the order of the target functions.
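The abstract's core idea of a projection pursuit unit with a trainable Hermite-polynomial activation can be illustrated with a minimal sketch. This is not the authors' implementation: the helper names, the fixed random projection direction, and the least-squares fit of the polynomial coefficients against the residual are all illustrative assumptions (a full PPLN would also optimize the projection direction itself).

```python
import numpy as np

def hermite_features(z, order):
    """Probabilists' Hermite polynomials He_0..He_order of the projected
    inputs z, built with the recurrence He_{n+1}(z) = z*He_n(z) - n*He_{n-1}(z).
    Returns an (n_samples, order+1) design matrix."""
    H = [np.ones_like(z), z]
    for n in range(1, order):
        H.append(z * H[n] - n * H[n - 1])
    return np.stack(H[: order + 1], axis=1)

def fit_pp_unit(X, residual, order=3, seed=0):
    """Fit one projection-pursuit-style hidden unit (illustrative sketch):
    project X onto a unit direction w (random here, trainable in a PPLN),
    then solve for the Hermite coefficients of the nodal activation by
    least squares against the current residual error."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)          # unit-norm projection direction
    z = X @ w                       # one-dimensional projection
    Phi = hermite_features(z, order)
    coef, *_ = np.linalg.lstsq(Phi, residual, rcond=None)
    return w, coef, Phi @ coef      # direction, activation coefficients, fit
```

With one input dimension the projection is exact up to sign, so a unit of order >= 2 reproduces a quadratic target; the point of the CPPN is that cascaded connections compensate when, as in the paper's comparison, the chosen order is lower than the target function's order.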
Keywords :
Hermitian matrices; cascade networks; higher order statistics; learning (artificial intelligence); neural net architecture; polynomials; statistical analysis; transfer functions; cascaded connections; cascaded correlation; cascaded projection pursuit network; Hermite polynomials; hidden neurons; high-order nonlinearity; network topology; nonlinear regression; polynomial order; predefined smoothness; residual error; supervised learning architecture; trainable nonlinear nodal activation functions; training strategy
Conference_Titel :
1994 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-94)
Conference_Location :
Adelaide, SA, Australia
Print_ISBN :
0-7803-1775-0
DOI :
10.1109/ICASSP.1994.389588