DocumentCode :
115400
Title :
Data-driven first-order methods for misspecified convex optimization problems: Global convergence and Rate estimates
Author :
Ahmadi, Hesam ; Shanbhag, Uday V.
Author_Institution :
Dept. of Ind. & Manuf. Eng., Pennsylvania State Univ., University Park, PA, USA
fYear :
2014
fDate :
15-17 Dec. 2014
Firstpage :
4228
Lastpage :
4233
Abstract :
We consider a misspecified optimization problem that requires minimizing a convex function f(x; θ*) in x over a closed and convex set X, where θ* is an unknown vector of parameters. Suppose θ* may be learnt by a parallel learning process that generates a sequence of estimators θk, each of which is an increasingly accurate approximation of θ*. In this context, we examine the development of coupled schemes that generate iterates (xk, θk) such that as the iteration index k → ∞, xk → x*, a minimizer of f(x; θ*) over X, and θk → θ*. We make two sets of contributions in this direction. First, we consider the use of gradient methods and show that such techniques are globally convergent. In addition, such schemes show a quantifiable degradation in the linear rate of convergence observed for strongly convex optimization problems. When strong convexity assumptions are weakened, the O(1/K) convergence rate in function values is modified by an additive factor ‖θ0 − θ*‖O(q_g^K + 1/K), where ‖θ0 − θ*‖ represents the initial misspecification in θ* and q_g denotes the contractive factor associated with the learning process. Second, we present an averaging-based subgradient scheme and show that the optimal constant steplength leads to a modification in the rate by ‖θ0 − θ*‖O(q_g^K + 1/K), implying no effect on the standard rate of O(1/√K).
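The abstract's coupled scheme can be sketched on a toy problem: a projected-gradient step in x using the current estimate θk, interleaved with a contractive learning update of θk toward θ*. The objective f(x; θ) = (x − θ)², the set X = [0, 10], the contraction factor q_g, and the steplength γ below are all illustrative choices, not values from the paper.

```python
# Minimal sketch of a coupled gradient/learning scheme (toy 1-D instance).
theta_star = 3.0   # true parameter (unknown to the optimizer in practice)
q_g = 0.5          # contractive factor of the parallel learning process
gamma = 0.4        # constant gradient steplength

def project(x, lo=0.0, hi=10.0):
    """Euclidean projection onto the closed convex set X = [lo, hi]."""
    return min(max(x, lo), hi)

def grad_f(x, theta):
    """Gradient in x of f(x; theta) = (x - theta)^2."""
    return 2.0 * (x - theta)

x, theta = 8.0, 0.0   # initial iterate and misspecified parameter estimate
for k in range(100):
    # Gradient step uses the *current* estimate theta_k, not theta_star.
    x = project(x - gamma * grad_f(x, theta))
    # Learning step: theta_k contracts toward theta_star with factor q_g.
    theta = theta_star + q_g * (theta - theta_star)

print(round(x, 4), round(theta, 4))  # both iterates approach theta_star
```

Here both sequences converge, consistent with the global convergence claim: θk contracts geometrically at rate q_g, and xk tracks the minimizer of f(·; θ*) despite never seeing θ* directly.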
Keywords :
convex programming; gradient methods; averaging-based subgradient scheme; contractive factor; convex function values; convex set; data-driven first-order methods; global convergence; iteration index; linear rate; misspecified convex optimization problems; optimal constant steplength; parallel learning process; quantifiable degradation; rate estimates; strong convexity assumptions; Algorithm design and analysis; Convergence; Convex functions; Degradation; Gradient methods; Vectors;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2014 IEEE 53rd Annual Conference on Decision and Control (CDC)
Conference_Location :
Los Angeles, CA
Print_ISBN :
978-1-4799-7746-8
Type :
conf
DOI :
10.1109/CDC.2014.7040048
Filename :
7040048