DocumentCode :
3863202
Title :
New DTZNN model for future minimization with cube steady-state error pattern using Taylor finite-difference formula
Author :
Yunong Zhang;Ying Fang;Bolin Liao;Tianjian Qiao;Hongzhou Tan
Author_Institution :
School of Information Science and Technology, Sun Yat-sen University (SYSU), Guangzhou 510006, China
fYear :
2015
Firstpage :
128
Lastpage :
133
Abstract :
In this paper, a discrete-time Zhang neural network (DTZNN) model, discretized from the continuous-time Zhang neural network, is proposed and investigated for performing online future minimization (OFM). To approximate the first-order derivative more accurately and discretize the continuous-time Zhang neural network more effectively, a new Taylor-type numerical differentiation formula, together with the optimal sampling-gap rule, is presented and utilized to obtain the Taylor-type DTZNN model. For comparison, the Euler-type DTZNN model and Newton iteration are also presented, with an interesting link found between them. Moreover, theoretical results on stability and convergence are presented, which show that the steady-state residual errors of the presented Taylor-type DTZNN model, Euler-type DTZNN model and Newton iteration have patterns of O(τ³), O(τ²) and O(τ), respectively, with τ denoting the sampling gap. Numerical experimental results further substantiate the effectiveness and advantages of the Taylor-type DTZNN model for solving the OFM problem.
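The abstract contrasts the error orders of an Euler-type discretization and a higher-order Taylor-type finite-difference formula. A minimal numerical sketch of this idea (not the paper's DTZNN model itself) is below: it compares the forward Euler difference, whose truncation error is O(τ), against a four-point Taylor-type difference of a form that appears in related ZNN literature, whose truncation error is O(τ²). The specific formula used here is an assumption for illustration, not verified against this paper.

```python
import math

def forward_diff(f, t, tau):
    # Euler forward difference: truncation error O(tau)
    return (f(t + tau) - f(t)) / tau

def taylor_diff(f, t, tau):
    # Four-point Taylor-type difference (assumed form, as used in
    # related ZNN literature): truncation error O(tau**2)
    return (2 * f(t + tau) - 3 * f(t) + 2 * f(t - tau) - f(t - 2 * tau)) / (2 * tau)

# Approximate the derivative of sin at t = 1 (true value: cos(1))
df_true = math.cos(1.0)

for tau in (0.1, 0.01, 0.001):
    e_euler = abs(forward_diff(math.sin, 1.0, tau) - df_true)
    e_taylor = abs(taylor_diff(math.sin, 1.0, tau) - df_true)
    print(f"tau={tau:g}  Euler error={e_euler:.2e}  Taylor error={e_taylor:.2e}")
```

Shrinking τ by a factor of 10 reduces the Euler error by roughly 10x but the Taylor-type error by roughly 100x, mirroring the O(τ) versus O(τ²) truncation-error patterns that motivate the higher-order discretization.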
Keywords :
"Numerical models","Minimization","Computational modeling","Mathematical model","Neural networks","Convergence","Steady-state"
Publisher :
ieee
Conference_Titel :
2015 Sixth International Conference on Intelligent Control and Information Processing (ICICIP)
Print_ISBN :
978-1-4799-1715-0
Type :
conf
DOI :
10.1109/ICICIP.2015.7388156
Filename :
7388156