Title :
On the optimality of neural-network approximation using incremental algorithms
Author :
Meir, Ron ; Maiorov, Vitaly E.
Author_Institution :
Dept. of Electr. Eng., Technion-Israel Inst. of Technol., Haifa, Israel
Date :
3/1/2000
Abstract :
The problem of approximating functions by neural networks using incremental algorithms is studied. For functions belonging to a rather general class, characterized by certain smoothness properties with respect to the L2 norm, we compute upper bounds on the approximation error, where the error is measured in the Lq norm, 1⩽q⩽∞. These results extend previous work, which applied only to the case q=2, and provide an explicit algorithm that achieves the derived approximation error rate. In the range q⩽2, near-optimal rates of convergence are demonstrated. A gap remains, however, with respect to a recently established lower bound in the case q>2, although the rates achieved are provably better than those obtained by optimal linear approximation. Extensions of the results from the L2 norm to Lp are also discussed. A further interesting conclusion from our results is that no loss of generality is incurred by using networks with positive hidden-to-output weights. Moreover, explicit bounds on the size of the hidden-to-output weights are established, which suffice to guarantee the stated convergence rates.
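To make the incremental idea concrete, the following is a minimal sketch of one common incremental scheme: greedy residual fitting, in which a single ridge unit is added at each step to best reduce the current residual while earlier units stay fixed. This is an illustration in the spirit of the abstract, not the paper's actual algorithm; the unit-selection strategy (random candidate search), the tanh activation, the scale 5.0, and all function names here are assumptions for the example. Note that with an odd activation such as tanh, restricting the hidden-to-output weights c to be nonnegative loses nothing, since flipping the sign of (w, b) flips the sign of the unit, consistent with the abstract's remark on positive hidden-to-output weights.

```python
import numpy as np

def incremental_fit(x, y, n_units=20, n_candidates=200, rng_seed=0):
    """Greedy incremental approximation of samples (x, y).

    At each step, draw random candidate units c * tanh(w*x + b), pick the
    one that most reduces the mean squared residual, and add it to the
    current approximation.  (A simplification: the paper's algorithm
    selects units and combination weights more carefully.)
    """
    rng = np.random.default_rng(rng_seed)
    f = np.zeros_like(y)          # current approximation f_k
    units = []                    # list of (w, b, c) triples
    for _ in range(n_units):
        r = y - f                 # residual the next unit should fit
        best, best_err = None, np.inf
        for _ in range(n_candidates):
            w = rng.normal(scale=5.0)
            b = rng.normal(scale=5.0)
            g = np.tanh(w * x + b)
            # Optimal nonnegative output weight for this candidate
            # (one-dimensional least squares, clipped at zero).
            c = max(0.0, float(g @ r / (g @ g)))
            err = np.mean((r - c * g) ** 2)
            if err < best_err:
                best, best_err = (w, b, c), err
        w, b, c = best
        f = f + c * np.tanh(w * x + b)
        units.append((w, b, c))
    return f, units
```

Since the output weight c of each candidate is chosen by least squares (with c = 0 always admissible), the residual error is non-increasing in the number of units, which is the basic mechanism behind incremental convergence-rate arguments.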
Keywords :
convergence; function approximation; neural nets; optimisation; L2 norm; approximation error rate; incremental algorithms; near-optimal convergence rates; neural networks; optimal function approximation; positive hidden-to-output weights; smoothness properties; upper bounds; Absorption; Adaptive estimation; Approximation algorithms; Approximation error; Convergence; Function approximation; Linear approximation; Neural networks; Surges; Upper bound;
Journal_Title :
IEEE Transactions on Neural Networks