A floating point error analysis of the Recursive Least Squares (RLS) and Least Mean Squares (LMS) algorithms is presented. Both the prewindowed growing-memory RLS algorithm (λ = 1) for stationary systems and the exponential sliding-window RLS algorithm (λ < 1) for time-varying systems are studied. For both algorithms, expressions for the mean square prediction error and for the expected value of the weight error vector norm are derived in terms of the variances of the floating point noise sources. The results point to a trade-off in the choice of the forgetting factor λ. To reduce the effects of additive noise and of the floating point noise due to the inner product calculation of the desired signal, λ must be chosen close to one. On the other hand, the floating point noise due to floating point addition in the weight vector update recursion increases as λ → 1.
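For context, the exponential sliding-window RLS algorithm can be summarized by the standard recursions below; the notation here is an assumption of this summary and may differ from the paper's own:

    e(n) = d(n) − w^T(n−1) x(n)                      (prediction error)
    k(n) = P(n−1) x(n) / (λ + x^T(n) P(n−1) x(n))    (gain vector)
    w(n) = w(n−1) + k(n) e(n)                        (weight vector update)
    P(n) = (1/λ) [P(n−1) − k(n) x^T(n) P(n−1)]       (inverse correlation matrix update)

Under this assumed notation, the inner product w^T(n−1) x(n) forms the prediction of the desired signal, the addition in w(n−1) + k(n) e(n) is the floating point addition in the weight vector update, and k(n) e(n) is the weight vector correction term.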
Floating point errors in the calculation of the weight vector correction term, however, do not affect the steady-state error; their effect is only transient. Similar results are obtained for the LMS algorithm, where a trade-off exists in the choice of the loop gain.
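For comparison, the LMS algorithm replaces the RLS gain vector with a fixed loop gain μ; again the notation is assumed, not taken from the paper:

    e(n) = d(n) − w^T(n) x(n)
    w(n+1) = w(n) + μ e(n) x(n)

The same floating point noise sources appear: the inner product w^T(n) x(n), the product μ e(n) x(n), and the floating point addition in the weight update. By analogy with the λ trade-off above, a smaller μ suppresses the effect of additive noise while amplifying the accumulated floating point addition noise, which is presumably the trade-off in the loop gain referred to.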