DocumentCode :
3161527
Title :
Distributed Nesterov-like gradient algorithms
Author :
Jakovetic, Dusan ; Moura, Jose M. F. ; Xavier, Joao
Author_Institution :
Inst. for Syst. & Robot. (ISR), Tech. Univ. of Lisbon, Lisbon, Portugal
fYear :
2012
fDate :
10-13 Dec. 2012
Firstpage :
5459
Lastpage :
5464
Abstract :
In classical, centralized optimization, the Nesterov gradient algorithm reduces the number of iterations needed to produce an ε-accurate solution (in terms of the cost function) from O(1/ε) for the ordinary gradient method to O(1/√ε). This improvement is achieved on a class of convex functions with Lipschitz continuous first derivative, and it comes at a very small additional computational cost per iteration. In this paper, we consider distributed optimization, where nodes in a network cooperatively minimize the sum of their private costs subject to a global constraint. To solve this problem, recent literature proposes distributed (sub)gradient algorithms, which are attractive due to their computationally inexpensive iterations but converge slowly: the ε error is achieved in O(1/ε²) iterations. Here, building on the Nesterov gradient algorithm, we present a distributed, constant step size, Nesterov-like gradient algorithm that converges much faster than existing distributed (sub)gradient methods, with zero additional communications and very small additional computations per iteration k. We show that our algorithm converges to a solution neighborhood, such that, for a convex compact constraint set and optimized step size, the convergence time is O(1/ε). We achieve this on a class of convex, coercive, continuously differentiable private costs with Lipschitz first derivative. We derive our algorithm through a useful penalty reformulation of the original problem based on the network's Laplacian matrix (referred to as the clone problem); the proposed method is precisely the Nesterov gradient algorithm applied to the clone problem. Finally, we illustrate the performance of our algorithm on distributed learning of a classifier via the logistic loss.
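To make the clone-problem construction described above concrete: Nesterov's accelerated gradient is applied to a penalized objective of the form sum_i f_i(x_i) + (c/2) x^T (L ⊗ I) x, where L is the network Laplacian, so the penalty gradient at node i depends only on neighbors' iterates and the update can be carried out locally. The Python sketch below illustrates one synchronous iteration under these assumptions; the function name, the k/(k+3) momentum schedule, and the penalty weight c are illustrative choices, not taken from the paper.

import numpy as np

# Minimal sketch (an assumption, not the authors' exact method) of one
# Nesterov-like step on the "clone" problem
#   min over x_1,...,x_n of  sum_i f_i(x_i) + (c/2) * sum_{i,j} L_ij x_i^T x_j,
# where L is the graph Laplacian. The penalty gradient at node i,
# c * sum_j L_ij y_j, only involves neighbors, so the step is distributed.

def nesterov_clone_step(x, x_prev, grad_fs, L, alpha, c, k):
    """One synchronous iteration across all n nodes.

    x, x_prev : (n, d) arrays; current and previous iterates, one row per node
    grad_fs   : list of n callables; grad_fs[i](v) = gradient of private cost f_i at v
    L         : (n, n) graph Laplacian of the (connected) communication network
    alpha     : constant step size; c : penalty weight; k : iteration counter
    """
    beta = k / (k + 3.0)                      # illustrative momentum schedule (assumption)
    y = x + beta * (x - x_prev)               # Nesterov extrapolation point
    # Gradient of the penalized (clone) objective, evaluated node by node.
    grad = np.vstack([grad_fs[i](y[i]) for i in range(len(grad_fs))]) + c * (L @ y)
    return y - alpha * grad, x                # new x, new x_prev

In a simulation one would iterate this step from x = x_prev = 0, passing the logistic-loss gradients of each node's local data as grad_fs; a projection onto the constraint set would be added where the problem is constrained.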
Keywords :
Laplace equations; convex programming; gradient methods; iterative methods; matrix algebra; Laplacian matrix based reformulation; Lipschitz continuous first derivative; centralized optimization; computational cost per iteration; convex compact constraint set; convex functions; cost function; distributed Nesterov like gradient algorithms; distributed learning; distributed optimization; global constraint; gradient algorithms; iteration number; Approximation algorithms; Cloning; Convergence; Cost function; Logistics; Vectors;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Decision and Control (CDC), 2012 IEEE 51st Annual Conference on
Conference_Location :
Maui, HI
ISSN :
0743-1546
Print_ISBN :
978-1-4673-2065-8
Electronic_ISBN :
0743-1546
Type :
conf
DOI :
10.1109/CDC.2012.6425938
Filename :
6425938