A method is developed for finding the ordinates of a digital filter which will produce a general linear operator of the signal such that the mean square error of prediction will be a minimum. The input to the filter is sampled at intervals $\Delta t$. The samples contain stationary noise, a stationary signal component, and a nonrandom signal component
\begin{equation}
P(j\Delta t) = \sum_{k=0}^{n} a_k P_k(j\Delta t),
\end{equation}
where the nonrandom functions $P_k(j\Delta t)$ are known a priori, but the parameter vector $(a_0, a_1, \ldots, a_n)$ need not be. The solution is obtained as a matrix equation which relates the ordinates of the digital filter to the autocorrelation properties of the noise and of the stationary signal component, and to the nature of the prediction operation.
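For orientation, a minimal sketch of the unconstrained form of this problem may help, assuming the standard discrete least-squares (Wiener) prediction setup and ignoring the nonrandom component $P(j\Delta t)$ and its unknown parameter vector. Writing $x(j\Delta t)$ for the sampled input, $d(j\Delta t)$ for the desired (predicted) output, and $f_0, f_1, \ldots, f_m$ for the filter ordinates, minimizing the mean square error $E\big[\big(d(j\Delta t) - \sum_{i=0}^{m} f_i\, x\big((j-i)\Delta t\big)\big)^2\big]$ leads to the normal equations
\begin{equation}
\sum_{i=0}^{m} f_i\, \phi_{xx}\big((k-i)\Delta t\big) = \phi_{xd}(k\Delta t), \qquad k = 0, 1, \ldots, m,
\end{equation}
where $\phi_{xx}$ is the autocorrelation of the input and $\phi_{xd}$ the cross-correlation between the input and the desired output; in matrix form, $\boldsymbol{\Phi}\,\mathbf{f} = \boldsymbol{\phi}_{xd}$. The symbols $x$, $d$, $f_i$, $\phi_{xx}$, and $\phi_{xd}$ are illustrative notation rather than the paper's; the method summarized above yields the corresponding matrix equation when the nonrandom component with unknown parameters is also present.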