The optimum filtering problem for a general class of linear distributed-parameter systems with colored observation noise is studied. The input stochastic disturbance is assumed to be white in time, but it may be correlated in space in an arbitrary fashion. The optimum filter is derived through a learning theorem that gives the mean value and covariance matrix of a conditional distributed-parameter random variable x given a measurement z, where x and z are Gaussian variables with known mean values and covariance matrices. The fixed-interval smoothing problem for the same class of systems is then considered and solved with the aid of a distributed-parameter theorem concerning the combination of two independent estimates of the state based on different data records. A numerical filtering example is included to illustrate the theory. The results of the paper may find applications in all fields where the information to be processed is distributed in space and depends on either continuous or discrete time.
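
For orientation, in the finite-dimensional jointly Gaussian case such a learning theorem reduces to the standard conditioning formulas below; the symbols x, z, and the covariance blocks are generic placeholders rather than the paper's notation:

\[
E[x \mid z] = \bar{x} + P_{xz} P_{zz}^{-1} (z - \bar{z}),
\qquad
\operatorname{Cov}[x \mid z] = P_{xx} - P_{xz} P_{zz}^{-1} P_{zx},
\]

where \(\bar{x}\), \(\bar{z}\) are the prior means and \(P_{xx}\), \(P_{zz}\), \(P_{xz} = P_{zx}^{T}\) the prior covariance blocks.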
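
The combination-of-estimates theorem invoked for fixed-interval smoothing likewise has a familiar finite-dimensional analogue, the inverse-covariance (information) form; again the notation is illustrative, assuming two independent estimates \(\hat{x}_1\), \(\hat{x}_2\) with covariances \(P_1\), \(P_2\) based on disjoint data records:

\[
P^{-1} = P_1^{-1} + P_2^{-1},
\qquad
\hat{x} = P\left(P_1^{-1}\hat{x}_1 + P_2^{-1}\hat{x}_2\right).
\]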
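
As a runnable illustration of the conditioning step on a spatially distributed state, the following is a minimal sketch in Python/NumPy. It discretizes a one-dimensional random field, observes it at a few points through white (not colored) observation noise, and applies the formulas above; the grid, covariance kernel, and all variable names are assumptions made for illustration, not the paper's numerical example.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50                                     # grid points of the discretized field
xs = np.linspace(0.0, 1.0, n)

# Prior: zero-mean Gaussian field with squared-exponential spatial covariance
Pxx = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * 0.1 ** 2))
Pxx += 1e-9 * np.eye(n)                    # jitter for numerical positive-definiteness

# Pointwise observations at a few grid locations through additive white noise
obs_idx = np.array([5, 20, 35, 45])
H = np.eye(n)[obs_idx]                     # observation operator
R = 0.05 * np.eye(len(obs_idx))            # observation-noise covariance

x_true = rng.multivariate_normal(np.zeros(n), Pxx)
z = H @ x_true + rng.multivariate_normal(np.zeros(len(obs_idx)), R)

# Gaussian conditioning: E[x|z] and Cov[x|z] (prior mean is zero here)
Pzz = H @ Pxx @ H.T + R
Pxz = Pxx @ H.T
K = np.linalg.solve(Pzz, Pxz.T).T          # gain K = Pxz Pzz^{-1}
x_hat = K @ z
P_post = Pxx - K @ Pxz.T

print("posterior RMS error:", float(np.sqrt(np.mean((x_hat - x_true) ** 2))))
```

Because the sketch uses a single snapshot with white observation noise, it does not capture the colored-noise treatment that is the subject of the paper; it only demonstrates the Gaussian conditioning machinery on which the filter derivation rests.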