DocumentCode :
1735630
Title :
Stochastic optimization for learning Non-Convex Linear Support Vector Machines
Author :
Chen, Jifei ; Wang, Jiabao ; Zhang, Yafei ; Lu, Jianjiang ; Li, Yang
Author_Institution :
Inst. of Command Autom., PLA Univ. of Sci. & Technol., Nanjing, China
fYear :
2012
Firstpage :
35
Lastpage :
39
Abstract :
This paper proposes a fast optimization algorithm, based on stochastic optimization, for learning Non-Convex Linear Support Vector Machines (LSVM-NC), in which a non-convex loss function, the Ramp Loss, is used to suppress the influence of noisy data in large-scale learning problems. Traditional methods for solving non-convex linear SVMs apply the ConCave-Convex Procedure (CCCP) with the Sequential Minimal Optimization (SMO) algorithm in the dual, which is time-consuming and impractical for large-scale problems. To address this, we instead apply CCCP with a Stochastic Gradient Descent (SGD) algorithm in the primal; experimental results show that our method greatly reduces training time while improving generalization performance.
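The abstract only sketches the method, so here is a minimal illustrative Python sketch of the general technique it names: a CCCP outer loop that linearizes the concave part of the ramp loss (R_s(z) = H_1(z) - H_s(z), with H_u(z) = max(0, u - z)), and a Pegasos-style primal SGD inner solver for each convex subproblem. The function name, the lambda-regularized objective, the 1/(lambda*t) step size, s = -1, and the iteration counts are assumptions chosen for illustration, not the authors' exact formulation.

```python
import numpy as np

def ramp_loss_svm_cccp_sgd(X, y, lam=1e-4, s=-1.0, cccp_iters=5, epochs=5, seed=0):
    """Hypothetical sketch: linear SVM with ramp loss trained by
    CCCP (outer loop) + primal SGD (inner loop). X: (n, d), y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(cccp_iters):
        # CCCP step: linearize the concave part -H_s at the current solution.
        # beta_i = 1 marks points with margin y_i f(x_i) < s (likely noise/outliers).
        beta = ((y * (X @ w + b)) < s).astype(float)
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(n):
                t += 1
                eta = 1.0 / (lam * t)  # Pegasos-style decaying step size (assumed)
                margin = y[i] * (X[i] @ w + b)
                # Subgradient of lam/2 ||w||^2 + H_1(margin) + beta_i * margin
                g_w = lam * w
                g_b = 0.0
                if margin < 1.0:   # hinge part of the ramp loss is active
                    g_w -= y[i] * X[i]
                    g_b -= y[i]
                if beta[i]:        # linearized concave term cancels the hinge pull,
                    g_w += y[i] * X[i]  # so flagged outliers stop driving the update
                    g_b += y[i]
                w -= eta * g_w
                b -= eta * g_b
    return w, b

# Toy usage (assumed data): separable points with a few flipped labels,
# the kind of noise the ramp loss is meant to down-weight.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + X[:, 1])
y[:10] *= -1
w, b = ramp_loss_svm_cccp_sgd(X, y)
print("train accuracy:", np.mean(np.sign(X @ w + b) == y))
```

The key behavior: for a point flagged by beta (margin below s), the linearized concave term cancels the hinge subgradient, so that point no longer influences the update, which is how the ramp loss caps the effect of noisy examples. Pegasos' optional projection step is omitted here for brevity.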
Keywords :
gradient methods; learning (artificial intelligence); optimisation; support vector machines; CCCP; LSVM-NC; SGD; SMO; concave-convex procedure; fast optimization algorithm; nonconvex linear support vector machine learning; ramp loss; sequential minimal optimization algorithm; stochastic gradient descent algorithm; stochastic optimization; Machine learning; Machine learning algorithms; Optimization; Stochastic processes; Support vector machines; Testing; Training; Large-Scale Machine Learning; Non-Convex Linear Support Vector Machines; Stochastic Gradient Descent
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Intelligent Control, Automatic Detection and High-End Equipment (ICADE), 2012 IEEE International Conference on
Conference_Location :
Beijing
Print_ISBN :
978-1-4673-1331-5
Type :
conf
DOI :
10.1109/ICADE.2012.6330094
Filename :
6330094