  • DocumentCode
    44422
  • Title
    Stochastic Learning via Optimizing the Variational Inequalities
  • Author
    Qing Tao; Qian-Kun Gao; De-Jun Chu; Gao-Wei Wu
  • Author_Institution
    New Star Institute of Applied Technology, Hefei, China
  • Volume
    25
  • Issue
    10
  • fYear
    2014
  • fDate
    Oct. 2014
  • Firstpage
    1769
  • Lastpage
    1778
  • Abstract
    A wide variety of learning problems can be posed in the framework of convex optimization, and many efficient algorithms have been developed for solving the induced optimization problems. However, a gap remains between the theoretically unbeatable convergence rate and the practically observed learning speed. In this paper, we use variational inequality (VI) convergence to characterize the learning speed. To this end, we avoid the difficult notion of regret from online learning and analyze stochastic learning algorithms directly. We first cast the regularized learning problem as a VI. Then, we present a stochastic version of the alternating direction method of multipliers (ADMM) to solve the induced VI, and we define a new VI criterion to measure the convergence of stochastic algorithms. Although the convergence rate of any iterative algorithm for solving nonsmooth convex optimization problems cannot be better than O(1/√t), the proposed stochastic ADMM (SADMM) is proved to have an O(1/t) VI convergence rate for l1-regularized hinge loss problems without strong convexity or smoothness. The derived VI convergence results also support the viewpoint that standard online analysis is too loose to properly analyze the stochastic setting. Experiments demonstrate that SADMM performs almost identically to state-of-the-art stochastic learning algorithms, while its O(1/t) VI convergence rate tightly characterizes the real learning speed.
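    As a rough illustration of the approach the abstract describes, the sketch below implements one common form of stochastic ADMM for the l1-regularized hinge loss: the weights are split as w - v = 0, the hinge term is handled by a linearized stochastic step on a sampled example, the l1 term by a closed-form soft-threshold, and the constraint by a scaled dual update. The splitting, the step-size schedule eta_t = 1/sqrt(t), and the names rho, lam, and u are illustrative assumptions, not the paper's exact formulation or analysis. (In the VI view, the goal is a point u* in the feasible set Omega satisfying (u - u*)^T F(u*) >= 0 for all u in Omega.)

        import numpy as np

        def soft_threshold(z, tau):
            # Closed-form proximal step for tau * ||.||_1.
            return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

        def sadmm_sketch(X, y, lam=0.01, rho=1.0, n_epochs=5, seed=0):
            # Minimal stochastic ADMM sketch (an assumed form, not the
            # paper's exact method) for
            #     min_w E[hinge(w)] + lam * ||v||_1   s.t.  w - v = 0.
            rng = np.random.default_rng(seed)
            n, d = X.shape
            w, v, u = np.zeros(d), np.zeros(d), np.zeros(d)
            t = 0
            for _ in range(n_epochs):
                for i in rng.permutation(n):
                    t += 1
                    eta = 1.0 / np.sqrt(t)  # illustrative step size
                    # Subgradient of the hinge loss on the sampled example.
                    margin = y[i] * (X[i] @ w)
                    g = -y[i] * X[i] if margin < 1.0 else np.zeros(d)
                    # Linearized stochastic w-step; the quadratic model has
                    # a closed-form minimizer.
                    w = (rho * (v - u) + w / eta - g) / (rho + 1.0 / eta)
                    # v-step: soft-threshold handles the l1 regularizer.
                    v = soft_threshold(w + u, lam / rho)
                    # Scaled dual ascent on the constraint w - v = 0.
                    u = u + w - v
            return v  # the l1 block's iterate is exactly sparse

    Calling sadmm_sketch(X, y) with labels y in {-1, +1} returns the v iterate; it is the natural output because the soft-threshold step zeroes small coordinates, yielding an exactly sparse weight vector.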
  • Keywords
    convergence; learning (artificial intelligence); optimisation; stochastic processes; variational techniques; SADMM; VI convergence; alternating direction method of multipliers; convex optimization; l1-regularized hinge loss problems; online learning; regularized learning problem; stochastic ADMM; stochastic learning algorithms; variational inequality optimization; Accuracy; Algorithm design and analysis; Closed-form solutions; Convergence; Convex functions; Optimization; Standards; alternating direction method of multipliers (ADMM); machine learning; online learning; optimization; regret; stochastic learning; variational inequality (VI)
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Neural Networks and Learning Systems
  • Publisher
    IEEE
  • ISSN
    2162-237X
  • Type
    jour
  • DOI
    10.1109/TNNLS.2013.2294741
  • Filename
    6698348