Abstract:
Almost-sure (a.s.) bounds for linear, constant-gain, adaptive filtering algorithms are investigated. Under general pseudo-stationarity and dependence conditions on the driving data $\{\psi_k, k = 1, 2, 3, \ldots\}$, $\{Y_k, k = 0, 1, 2, \ldots\}$, a.s. convergence and rates of a.s. convergence (as the algorithm gain $\varepsilon \to 0$) are established for the LMS algorithm $h_{k+1}^\varepsilon = h_k^\varepsilon + \varepsilon Y_k(\psi_{k+1} - Y_k^T h_k^\varepsilon)$ subject to a nonrandom initial condition $h_0^\varepsilon = h_0$. In particular, defining $\{g_k^\varepsilon\}_{k=0}^\infty$ by $g_0^\varepsilon = h_0$ and $g_{k+1}^\varepsilon = g_k^\varepsilon + \varepsilon(E[Y_k \psi_{k+1}] - E[Y_k Y_k^T] g_k^\varepsilon)$ for $k = 0, 1, 2, \ldots$, the author shows that for any $\gamma > 0$, $\max_{0 \le k \le \gamma\varepsilon^{-1}} |h_k^\varepsilon - g_k^\varepsilon| \to 0$ as $\varepsilon \to 0$ a.s., and, under a stronger dependence condition, that for any $0 < \zeta \le 1$ and $\gamma > 0$, $\max_{0 \le k \le \gamma\varepsilon^{-\zeta}} |h_k^\varepsilon - g_k^\varepsilon|$ converges to zero (as $\varepsilon \to 0$) a.s. at a rate marginally slower than $O((\varepsilon^{2-\zeta} \log\log(\varepsilon^{-\zeta}))^{1/2})$. Then, under a stronger pseudo-stationarity assumption, it is shown that similar results hold when the sequences $\{g_k^\varepsilon\}_{k=0}^\infty$, $\varepsilon > 0$, in the above results are replaced with the solution $g^0(\cdot)$ of a nonrandom linear ordinary differential equation; that is, one has $\max_{0 \le k \le \gamma\varepsilon^{-\zeta}} |h_k^\varepsilon - g^0(\varepsilon k)| \to 0$ as $\varepsilon \to 0$ a.s., where a rate can be attached to this convergence under the stronger dependence condition. The almost-sure bounds contained in the paper complement previously developed weak convergence results of Kushner and Shwartz [IEEE Trans. Information Theory, IT-30(2), 177-182, 1984] and are “near optimal”. Moreover, the proofs used to establish these bounds are quite elementary.
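The comparison sequence $\{g_k^\varepsilon\}$ is simply the LMS recursion with the data products replaced by their expectations. The following is a minimal numerical sketch of this comparison, not taken from the paper: it assumes a synthetic data model (i.i.d. standard Gaussian regressors, a hypothetical fixed weight vector h_star, and additive noise, none of which are specified in the abstract), runs both recursions with the same gain, and reports the maximal deviation over a horizon of order $\varepsilon^{-1}$, the quantity the paper shows tends to zero a.s. as $\varepsilon \to 0$.

```python
import numpy as np

# Sketch (illustrative assumptions, not the paper's setup): compare LMS iterates
#   h_{k+1} = h_k + eps * Y_k * (psi_{k+1} - Y_k^T h_k)
# with the averaged (deterministic) recursion
#   g_{k+1} = g_k + eps * (E[Y_k psi_{k+1}] - E[Y_k Y_k^T] g_k).

rng = np.random.default_rng(0)
d = 4                        # filter dimension (assumed)
h_star = rng.normal(size=d)  # hypothetical "true" weight vector
eps = 1e-3                   # algorithm gain
n_steps = int(1.0 / eps)     # horizon gamma * eps^{-1} with gamma = 1

R = np.eye(d)                # E[Y_k Y_k^T] for i.i.d. standard normal Y_k
b = R @ h_star               # E[Y_k psi_{k+1}] under this data model

h = np.zeros(d)              # h_0^eps = h_0 (nonrandom initial condition)
g = np.zeros(d)              # g_0^eps = h_0 (same initial condition)
max_dev = 0.0
for k in range(n_steps):
    Y = rng.normal(size=d)
    psi = Y @ h_star + 0.1 * rng.normal()  # observation psi_{k+1}
    h = h + eps * Y * (psi - Y @ h)        # LMS update
    g = g + eps * (b - R @ g)              # averaged update
    max_dev = max(max_dev, np.linalg.norm(h - g))

print(f"max_k |h_k - g_k| over 0 <= k <= 1/eps: {max_dev:.4f}")
```

Rerunning this sketch with smaller gains (e.g. eps = 1e-4, 1e-5) shrinks the reported maximal deviation, which is the behavior the paper's a.s. convergence results quantify.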
Keywords:
adaptive filters; convergence of numerical methods; filtering and prediction theory; least squares approximations; LMS algorithm; almost-sure bounds; convergence; dependence conditions; pseudo-stationarity; linear constant-gain adaptive filtering algorithms; nonrandom initial condition; adaptive algorithms; approximation algorithms; differential equations; information theory; linear approximation; stochastic processes