Title :
Selecting concise training sets from clean data
Author :
Plutowski, Mark ; White, Halbert
Author_Institution :
California Univ., San Diego, CA, USA
Date :
3/1/1993 12:00:00 AM
Abstract :
The authors derive a method for selecting exemplars for training a multilayer feedforward network to estimate an unknown (deterministic) mapping from clean data, i.e., data measured either without error or with negligible error. The objective is to minimize the data requirements of learning. The authors choose a criterion for selecting training examples that works well in conjunction with the criterion used for learning, here least squares. Selection proceeds sequentially: at each step, the example chosen is the one that, when added to the previous set of training examples and learned, maximizes the decrement of network squared error over the input space. When dealing with clean data and deterministic relationships, concise training sets that minimize the integrated squared bias (ISB) are desired. The ISB is used to derive a selection criterion for evaluating individual training examples, the ΔISB, which is maximized to select new exemplars. The authors conclude with graphical illustrations of the method and demonstrate its use during network training. Experimental results indicate that training on exemplars selected in this fashion can save computation in general-purpose use as well.
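The sequential selection loop described in the abstract can be sketched in a few lines. The following is an illustrative sketch only, not the authors' exact ΔISB criterion: it stands in a polynomial least-squares fit for network training and averages squared error over a dense reference grid in place of integration over the input space. Function names (`select_concise_training_set`, `fit_least_squares`) and the polynomial-degree parameter are assumptions introduced here for the example.

```python
import numpy as np

# Illustrative sketch (not the authors' exact Delta-ISB criterion): greedy
# selection of training exemplars for least-squares estimation of a clean,
# deterministic mapping. At each step, add the candidate that, once included
# and the model refit, most reduces the squared error averaged over a dense
# grid of the input space (a stand-in for the integrated squared bias).

def fit_least_squares(x, y, degree=3):
    """Least-squares polynomial fit; stands in for training the network."""
    return np.polyfit(x, y, degree)

def grid_squared_error(coeffs, grid_x, grid_y):
    """Mean squared error of the fitted model over the reference grid."""
    return np.mean((np.polyval(coeffs, grid_x) - grid_y) ** 2)

def select_concise_training_set(target, candidates, n_select, grid, degree=3):
    """Sequentially pick exemplars maximizing the decrement of grid error."""
    grid_y = target(grid)
    # Seed with just enough points for the least-squares fit to be determined.
    chosen = list(candidates[: degree + 1])
    pool = list(candidates[degree + 1 :])
    while len(chosen) < n_select and pool:
        best_err, best_i = np.inf, None
        for i, c in enumerate(pool):
            xs = np.array(chosen + [c])
            coeffs = fit_least_squares(xs, target(xs), degree)
            err = grid_squared_error(coeffs, grid, grid_y)
            # Maximizing the error decrement == minimizing the resulting error.
            if err < best_err:
                best_err, best_i = err, i
        chosen.append(pool.pop(best_i))
    return np.array(chosen)
```

For instance, selecting 10 exemplars for `target = np.sin` from 25 evenly spaced candidates on [0, π] retains only the points whose inclusion most improves the fit over the whole interval, which is the sense in which the training set is "concise."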
Keywords :
feedforward neural nets; learning (artificial intelligence); clean data; concise training sets selection; integrated squared bias; least squares; mapping; multilayer feedforward network architecture; network training; neural nets; Computer science; Costs; Data compression; Data engineering; Helium; Interpolation; Least squares methods; Neural networks; Nonhomogeneous media; State estimation;
Journal_Title :
IEEE Transactions on Neural Networks