Title :
Gradient Hyper-parameter Optimization for Manifold Regularization
Author :
Becker, Cassiano O. ; Ferreira, Paulo A. V.
Author_Institution :
Sch. of Electr. & Comput. Eng., State Univ. of Campinas (Unicamp), Campinas, Brazil
Abstract :
Semi-supervised learning can be defined as the ability to improve the predictive performance of an algorithm by providing it with data that has not been previously labeled. Manifold Regularization is a semi-supervised learning approach that extends the regularization framework to include additional regularization penalties based on the graph Laplacian as an empirical estimator of the underlying manifold. The incorporation of such terms relies on additional hyper-parameters which, together with the original kernel and regularization parameters, are known to influence algorithm behavior. This paper proposes a gradient approach to the optimization of these hyper-parameters, based on the closed form of the generalized cross-validation estimate; the approach is valid whenever the learning optimality conditions can be represented as a linear system, as is the case for Laplacian Regularized Least Squares. For the subset of hyper-parameters that are integer quantities, such as the Laplacian matrix hyper-parameters, we propose optimizing the weight components of a sum of base terms. Results of computational experiments are presented to illustrate the proposed technique.
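The abstract notes that the Laplacian Regularized Least Squares optimality conditions reduce to a linear system, which is what makes closed-form cross-validation estimates (and their gradients) tractable. The following is a minimal sketch of that linear system, assuming an RBF kernel, a symmetrized k-nearest-neighbor graph Laplacian, and the standard LapRLS closed form; all function names, hyper-parameter values, and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def knn_laplacian(X, k=3):
    # Unnormalized graph Laplacian L = D - W from a symmetrized k-NN graph,
    # used as the empirical estimator of the underlying manifold.
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]  # k nearest neighbors, skipping self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)  # symmetrize
    return np.diag(W.sum(axis=1)) - W

def laprls_coefficients(X, y_labeled, gamma_A=1e-2, gamma_I=1e-2,
                        sigma=1.0, k=3):
    # LapRLS optimality conditions as a linear system in the kernel
    # expansion coefficients alpha:
    #   (J K + gamma_A * l * I + gamma_I * l / n^2 * L K) alpha = Y
    # where J selects the l labeled rows and Y is zero-padded.
    n, l = X.shape[0], y_labeled.shape[0]
    K = rbf_kernel(X, sigma)
    L = knn_laplacian(X, k)
    J = np.diag(np.concatenate([np.ones(l), np.zeros(n - l)]))
    Y = np.concatenate([y_labeled, np.zeros(n - l)])
    A = J @ K + gamma_A * l * np.eye(n) + (gamma_I * l / n**2) * (L @ K)
    return np.linalg.solve(A, Y), K

# Toy example: two clusters, one labeled point each, the rest unlabeled.
rng = np.random.default_rng(0)
Xl = np.array([[0.0, 0.0], [5.0, 5.0]])
yl = np.array([-1.0, 1.0])
Xu = np.vstack([rng.normal(0.0, 0.3, (10, 2)),
                rng.normal(5.0, 0.3, (10, 2))])
X = np.vstack([Xl, Xu])
alpha, K = laprls_coefficients(X, yl)
scores = K @ alpha  # f(x_i) = sum_j alpha_j k(x_j, x_i)
```

Because the coefficients come from a single linear solve, quantities such as the generalized cross-validation estimate can be written in closed form in the hyper-parameters (gamma_A, gamma_I, sigma), which is what the paper's gradient-based optimization exploits.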
Keywords :
gradient methods; graph theory; learning (artificial intelligence); least squares approximations; linear systems; matrix algebra; optimisation; Laplacian matrix hyper-parameters; Laplacian regularized least squares; algorithm behavior; algorithm predictive performance; empirical estimator; generalized cross validation estimate; gradient approach; gradient hyper-parameter optimization; graph Laplacian; integer quantities; kernel parameters; learning optimality conditions; linear system; manifold regularization; regularization framework; regularization parameters; regularization penalties; semisupervised learning; subset hyper-parameters; weight components; Error analysis; Kernel; Laplace equations; Manifolds; Optimization; Training; Vectors; Hyper-parameter Optimization; Machine Learning; Manifold Regularization; Model Selection; Semi-supervised Learning;
Conference_Title :
Machine Learning and Applications (ICMLA), 2013 12th International Conference on
Conference_Location :
Miami, FL, USA
DOI :
10.1109/ICMLA.2013.145