Title :
Efficient l1-Norm-Based Low-Rank Matrix Approximations for Large-Scale Problems Using Alternating Rectified Gradient Method
Author :
Eunwoo Kim ; Minsik Lee ; Chong-Ho Choi ; Nojun Kwak ; Songhwai Oh
Author_Institution :
Dept. of Electr. & Comput. Eng., Seoul Nat. Univ., Seoul, South Korea
Abstract :
Low-rank matrix approximation plays an important role in computer vision and image processing. Most conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, these methods can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite their robustness, these methods require heavy computational effort and substantial memory for high-dimensional data, which makes them impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that the proposed methods outperform other state-of-the-art methods in both execution time and reconstruction performance.
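The abstract does not spell out the alternating rectified gradient updates themselves. As a minimal sketch of the general idea it describes, the Python snippet below alternately updates a projection matrix U and a coefficient matrix V to reduce the l1 reconstruction cost ||X - UV||_1, using plain subgradient steps. The function name, step size, and iteration count are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def l1_low_rank(X, rank, n_iters=2000, lr=1e-3, seed=0):
    """Illustrative sketch of l1-norm low-rank factorization X ~ U @ V.

    Alternates subgradient steps on U (projection matrix) and
    V (coefficient matrix). This is NOT the paper's alternating
    rectified gradient method; the fixed step size and iteration
    count are assumptions for demonstration only.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    U = rng.standard_normal((d, rank))
    V = rng.standard_normal((rank, n))
    for _ in range(n_iters):
        # sign(UV - X) is a subgradient of ||X - UV||_1 w.r.t. UV
        R = np.sign(U @ V - X)
        U -= lr * (R @ V.T)   # update U with V fixed
        R = np.sign(U @ V - X)
        V -= lr * (U.T @ R)   # update V with U fixed
    return U, V

if __name__ == "__main__":
    # Low-rank ground truth corrupted by sparse gross outliers,
    # mirroring the outlier-contamination setting in the abstract.
    rng = np.random.default_rng(1)
    X_true = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 80))
    outliers = (rng.random(X_true.shape) < 0.05) * rng.standard_normal(X_true.shape) * 20.0
    U, V = l1_low_rank(X_true + outliers, rank=5)
    print("median abs reconstruction error:", np.median(np.abs(X_true - U @ V)))
```

Because the subgradient sign(UV - X) is bounded regardless of the outliers' magnitude, gross corruptions pull the updates far less than they would under the l2 cost, which is the robustness argument the abstract makes.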
Keywords :
approximation theory; computer vision; gradient methods; matrix decomposition; principal component analysis; Frobenius norm; PCA; alternating rectified gradient method; coefficient matrices; image processing; l1-norm-based low-rank matrix approximation; large-scale problems; low-rank factorization methods; projection matrices; Approximation algorithms; Approximation methods; Convergence; Cost function; Robustness; l1-norm; low-rank matrix approximation; matrix completion (MC); principal component analysis (PCA); proximal gradient method
Journal_Title :
IEEE Transactions on Neural Networks and Learning Systems
DOI :
10.1109/TNNLS.2014.2312535