• DocumentCode
    1268506
  • Title
    ℓp-ℓq Penalty for Sparse Linear and Sparse Multiple Kernel Multitask Learning
  • Author
    Rakotomamonjy, Alain; Flamary, Rémi; Gasso, Gilles; Canu, Stéphane

  • Author_Institution
    Univ. of Rouen, St. Etienne du Rouvray, France
  • Volume
    22
  • Issue
    8
  • fYear
    2011
  • Firstpage
    1307
  • Lastpage
    1320
  • Abstract
    Recently, there has been much interest in the multitask learning (MTL) problem under the constraint that tasks share a common sparsity profile. Such a problem can be addressed through a regularization framework in which the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on ℓp-ℓq (with 0 ≤ p ≤ 1 and 1 ≤ q ≤ 2) mixed norms as sparsity-inducing penalties. Our motivation for addressing such a large class of penalties is to adapt the penalty to the problem at hand, thus leading to better performance and a better sparsity pattern. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the ℓ1-ℓq penalty, which helps us propose an alternate optimization algorithm. Although very simple, this algorithm provably converges to the global minimum of the ℓ1-ℓq penalized problem. For the linear case, we extend existing work on accelerated proximal gradient methods to this penalty. Our contribution in this context is an efficient scheme for computing the ℓ1-ℓq proximal operator. Then, for the more general case where 0 < p < 1, we solve the resulting nonconvex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted ℓ1-ℓq sparse MTL problem. Empirical evidence from toy and real-world datasets dealing with brain-computer interface single-trial electroencephalogram classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms.
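    For the special case q = 2, the ℓ1-ℓq proximal operator mentioned in the abstract reduces to the well-known row-wise block soft-thresholding rule; a minimal sketch of that standard closed form (not the paper's own, more general scheme) is:

    ```python
    import numpy as np

    def prox_l1_l2(W, lam):
        """Proximal operator of lam * sum_j ||W[j, :]||_2 (block soft-thresholding).

        Each row of W groups one feature's coefficients across tasks, so a row
        whose l2 norm falls below lam is zeroed jointly for all tasks, which is
        what produces the shared sparsity profile.
        """
        norms = np.linalg.norm(W, axis=1, keepdims=True)     # per-row l2 norms
        scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
        return scale * W                                      # shrink or zero each row
    ```

    Applied inside an accelerated proximal gradient loop, this step either shrinks a feature's coefficient row toward zero or removes it entirely for all tasks at once.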
  • Keywords
    optimisation; support vector machines; brain computer interface; electroencephalogram classification; iterative scheme; joint sparsity pattern; majorization minimization approach; multitask learning; nonconvex problem; optimization algorithm; sparse linear; sparse multiple kernel; variational formulation; Context; Fasteners; Joints; Kernel; Optimization; Signal processing algorithms; Support vector machines; Mixed norm; multiple kernel learning; multitask learning; sparsity; support vector machines; Artificial Intelligence; Databases, Factual; Linear Models; Pattern Recognition, Automated; Psychomotor Performance;
  • fLanguage
    English
  • Journal_Title
    Neural Networks, IEEE Transactions on
  • Publisher
    IEEE
  • ISSN
    1045-9227
  • Type
    jour
  • DOI
    10.1109/TNN.2011.2157521
  • Filename
    5948411