DocumentCode :
3540733
Title :
Fast global convergence of gradient methods for solving regularized M-estimation
Author :
Agarwal, Alekh ; Negahban, Sahand ; Wainwright, Martin J.
Author_Institution :
Dept. of EECS, UC Berkeley, Berkeley, CA, USA
fYear :
2012
fDate :
5-8 Aug. 2012
Firstpage :
409
Lastpage :
412
Abstract :
We analyze the convergence rates of composite gradient methods for solving problems based on regularized M-estimators, working within a high-dimensional framework that allows the data dimension d to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions, namely the strong convexity and smoothness conditions that underlie much of classical optimization analysis. We define appropriately restricted versions of these conditions and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that composite gradient descent has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical distance between the true unknown parameter θ* and an optimal solution θ̂. This result is substantially sharper than previous results, which yielded sublinear convergence or linear convergence only up to the noise level, and it builds on our earlier work on constrained estimation problems. Our analysis applies to a wide range of M-estimators and statistical models, including sparse linear regression with the Lasso (ℓ1-regularized regression); the group Lasso for block sparsity; log-linear models with regularization; low-rank matrix recovery via nuclear norm regularization; and matrix decomposition. Overall, our analysis reveals interesting connections between statistical precision and computational efficiency in high-dimensional estimation.
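To make the setting concrete, the following is a minimal sketch of composite gradient descent for the Lasso objective (1/2n)·||y − Xθ||² + λ·||θ||₁, the first example the abstract names. It applies the standard proximal-gradient (ISTA) update with a soft-thresholding step; the function names, step-size choice, and synthetic data below are illustrative assumptions, not the authors' implementation or their step-size schedule.

import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: the prox operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def composite_gradient_lasso(X, y, lam, n_iters=500):
    # Sketch of composite gradient descent (ISTA) for
    #   minimize (1/2n) ||y - X theta||_2^2 + lam * ||theta||_1.
    n, d = X.shape
    # Step size 1/L, with L the smoothness constant of the quadratic loss
    # (largest eigenvalue of X^T X / n).
    step = n / (np.linalg.norm(X, 2) ** 2)
    theta = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y) / n      # gradient of the smooth part
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

# Illustrative high-dimensional instance with d > n (hypothetical data).
rng = np.random.default_rng(0)
n, d, k = 100, 200, 5
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:k] = 1.0                           # k-sparse true parameter
y = X @ theta_star + 0.1 * rng.standard_normal(n)
theta_hat = composite_gradient_lasso(X, y, lam=0.1)

Under the paper's restricted strong convexity and smoothness conditions, iterates of this kind contract geometrically toward θ̂ until they reach the statistical precision ||θ̂ − θ*||, after which further optimization accuracy yields no statistical benefit.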
Keywords :
convergence of numerical methods; estimation theory; gradient methods; matrix decomposition; regression analysis; ℓ1-regularized regression; Lasso regularized regression; block sparsity; composite gradient descent; fast global convergence; globally geometric rate; gradient methods; group Lasso; log-linear models; low-rank matrix recovery; matrix decomposition; nuclear norm regularization; optimization analysis; regularized M-estimation; smoothness condition; sparse linear regression; statistical model; strong convexity; sublinear convergence; Convergence; Gradient methods; Linear regression; Matrix decomposition; Sparse matrices; Vectors; High-dimensional statistics; convex optimization; gradient methods
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Statistical Signal Processing Workshop (SSP), 2012 IEEE
Conference_Location :
Ann Arbor, MI
ISSN :
pending
Print_ISBN :
978-1-4673-0182-4
Electronic_ISBN :
pending
Type :
conf
DOI :
10.1109/SSP.2012.6319717
Filename :
6319717