  • DocumentCode
    15850
  • Title
    Double Shrinking Sparse Dimension Reduction
  • Author
    Tianyi Zhou; Dacheng Tao
  • Author_Institution
    Faculty of Engineering & Information Technology, University of Technology, Sydney, NSW, Australia
  • Volume
    22
  • Issue
    1
  • fYear
    2013
  • fDate
    Jan. 2013
  • Firstpage
    244
  • Lastpage
    257
  • Abstract
    Learning tasks such as classification and clustering usually perform better and cost less (in time and space) on compressed representations than on the original data. Previous work mainly compresses data via dimension reduction. In this paper, we propose “double shrinking” to compress image data in both dimensionality and cardinality by building either sparse low-dimensional representations or a sparse projection matrix for dimension reduction. We formulate the double shrinking model (DSM) as an ℓ1-regularized variance maximization with the constraint ||x||_2 = 1, and develop a double shrinking algorithm (DSA) to optimize DSM. DSA is a path-following algorithm that builds the whole solution path of locally optimal solutions at different sparsity levels; each solution on the path serves as a “warm start” for the search for the next, sparser one. In each iteration of DSA, the direction, the step size, and the Lagrangian multiplier are deduced from the Karush-Kuhn-Tucker conditions. Along the selected direction and with the determined step size, the magnitudes of trivial variables are shrunk while the importance of critical variables is simultaneously augmented. Double shrinking can be applied to manifold learning and feature selection for better interpretation of features, and can be combined with classification and clustering to boost their performance. The experimental results suggest that double shrinking produces efficient and effective data compression.
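    The abstract names the DSM objective only in words. As a plausible reading, not quoted from the record, with Σ an assumed data covariance matrix and λ an assumed sparsity-controlling weight, an ℓ1-regularized variance maximization under the stated constraint would read:

        \max_{x} \; x^{\top} \Sigma x \;-\; \lambda \|x\|_{1} \quad \text{subject to} \quad \|x\|_{2} = 1

    Below is a minimal Python sketch of the warm-started solution path the abstract describes, using a thresholded power iteration (a standard sparse-PCA heuristic standing in for DSA, whose KKT-based direction and step-size rules are not reproduced here); all names and parameters are ours, not the paper's:

        import numpy as np

        def soft_threshold(v, lam):
            # l1 proximal step: shrink every entry toward zero by lam.
            return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

        def sparse_direction_path(cov, lambdas, n_iter=200):
            # Trace unit-norm directions at increasing sparsity levels;
            # each solution warm-starts the search for the next, sparser one.
            d = cov.shape[0]
            x = np.ones(d) / np.sqrt(d)        # initial unit-norm direction
            path = []
            for lam in lambdas:                # increasing sparsity levels
                for _ in range(n_iter):
                    v = soft_threshold(cov @ x, lam)
                    norm = np.linalg.norm(v)
                    if norm == 0.0:            # everything shrunk away: stop at this level
                        break
                    x = v / norm               # re-project onto ||x||_2 = 1
                path.append(x.copy())
            return path

        # Usage on a toy covariance matrix.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 10))
        solutions = sparse_direction_path(A.T @ A / 100, lambdas=[0.0, 0.1, 0.3, 0.6])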
  • Keywords
    data compression; image coding; learning (artificial intelligence); pattern classification; pattern clustering; Karush-Kuhn-Tucker conditions; Lagrangian multiplier; classification; clustering; double shrinking model; double shrinking sparse dimension reduction; feature selection; image data compression; learning tasks; manifold learning; path-following algorithm; sparse low-dimensional representations; sparse projection matrix; Compressed sensing; Equations; Image coding; Optimization; Principal component analysis; Sparse matrices; Vectors; ℓ1 regularization; compressed sensing; dimension reduction; image compression; manifold embedding; path-following; sparse learning
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Image Processing
  • Publisher
    IEEE
  • ISSN
    1057-7149
  • Type
    jour
  • DOI
    10.1109/TIP.2012.2202678
  • Filename
    6212355