DocumentCode
314385
Title
MDL regularizer: a new regularizer based on the MDL principle
Author
Saito, Kazumi ; Nakano, Ryohei
Author_Institution
NTT Commun. Sci. Lab., Kyoto, Japan
Volume
3
fYear
1997
fDate
9-12 Jun 1997
Firstpage
1833
Abstract
This paper proposes a new regularization method based on the MDL (minimum description length) principle. A weight vector of adequate precision is trained by approximately truncating the maximum likelihood weight vector. The main advantage of the proposed regularizer over existing ones is that it automatically determines a regularization factor without assuming any specific prior distribution over the weight values. Our experiments on a regression problem showed that the MDL regularizer significantly reduces the generalization error of a second-order learning algorithm and achieves generalization performance comparable to that of the best-tuned weight-decay regularizer.
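The abstract describes choosing a weight precision by truncating the maximum likelihood weights so that a two-part description length (cost of the weights plus cost of the data given the weights) is minimized. The following minimal Python sketch illustrates that principle on a linear regression problem; it is an illustration under stated assumptions (a linear model, a Gaussian noise model, a fixed grid of candidate precisions), not the authors' exact procedure.

import numpy as np

def description_length(w_q, X, y, precision):
    """Two-part code length: L(model) + L(data | model), in nats."""
    n = len(y)
    residuals = y - X @ w_q
    sigma2 = max(residuals @ residuals / n, 1e-12)
    # Data cost: negative Gaussian log-likelihood at the fitted noise variance.
    data_cost = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    # Model cost: encoding each quantized weight uniformly over its range
    # at the given precision (an assumption made for this example).
    weight_range = max(np.max(np.abs(w_q)), precision)
    model_cost = len(w_q) * np.log(2 * weight_range / precision)
    return data_cost + model_cost

def mdl_truncate(w_ml, X, y, precisions=(1.0, 0.5, 0.1, 0.05, 0.01, 0.001)):
    """Quantize the ML weights at several precisions; keep the MDL-best one."""
    best = None
    for p in precisions:
        w_q = np.round(w_ml / p) * p          # truncate weights to precision p
        dl = description_length(w_q, X, y, p)
        if best is None or dl < best[0]:
            best = (dl, p, w_q)
    return best  # (description length, chosen precision, truncated weights)

# Usage on synthetic regression data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, 0.0, -2.0, 0.0, 0.7])
y = X @ w_true + 0.3 * rng.normal(size=200)
w_ml, *_ = np.linalg.lstsq(X, y, rcond=None)
dl, prec, w_mdl = mdl_truncate(w_ml, X, y)
print(f"chosen precision: {prec}, description length: {dl:.1f}")
print("truncated weights:", w_mdl)

The regularization strength here is implicit: coarser precisions shrink the model cost while raising the data cost, and the minimum of their sum picks the trade-off automatically, with no prior over weight values and no hand-tuned regularization factor.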
Keywords
generalisation (artificial intelligence); learning (artificial intelligence); minimisation; multilayer perceptrons; statistical analysis; generalization error; maximum likelihood weight vector; minimum description length principle; regression problem; regularization method; second-order learning algorithm; Arithmetic; Bayesian methods; Context modeling; Gaussian noise; Laboratories; Maximum likelihood estimation; Neural networks; Slabs;
fLanguage
English
Publisher
ieee
Conference_Titel
Neural Networks, 1997. International Conference on
Conference_Location
Houston, TX
Print_ISBN
0-7803-4122-8
Type
conf
DOI
10.1109/ICNN.1997.614177
Filename
614177