Title :
Directed-Distributed Gradient Descent
Author :
Chenguang Xi; Usman A. Khan
Author_Institution :
Department of Electrical and Computer Engineering, Tufts University, 161 College Ave, Medford, MA 02155, United States
Abstract :
Distributed Gradient Descent (DGD) is a well-established algorithm for minimizing a sum of agents' objective functions over a network, under the assumption that the network is undirected, i.e., that the weight matrices are doubly-stochastic. In this paper, we present a distributed algorithm, called Directed-Distributed Gradient Descent (D-DGD), that solves the same problem over directed graphs. In each iteration of D-DGD, every agent maintains an additional variable that records the change in its state evolution. The algorithm simultaneously constructs a row-stochastic matrix and a column-stochastic matrix instead of a single doubly-stochastic matrix. The analysis shows that D-DGD converges at a rate of O(ln k/√k).
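To make the abstract's mechanism concrete, the following Python sketch illustrates a surplus-style iteration consistent with the description above: each agent keeps a primal state x_i mixed through a row-stochastic matrix R and an auxiliary variable y_i (recording the change in the state evolution) mixed through a column-stochastic matrix C, alongside a diminishing gradient step. This is a minimal, hypothetical rendering, not the paper's exact iteration; the network topology, local objectives, mixing parameter eps, and step-size schedule are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup (assumed, not from the paper): n agents on a directed
# cycle, each holding a private quadratic f_i(x) = 0.5 * (x - b_i)^2,
# so the minimizer of the sum f(x) = sum_i f_i(x) is the mean of b.
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])

# R mixes states row-stochastically (each agent averages what it receives);
# C mixes surpluses column-stochastically (each agent splits what it sends).
R = np.zeros((n, n))
C = np.zeros((n, n))
for i in range(n):
    R[i, i] = R[i, (i - 1) % n] = 0.5   # receive from in-neighbor i-1
    C[i, i] = C[(i + 1) % n, i] = 0.5   # send to out-neighbor i+1

eps = 0.1            # small surplus-mixing parameter (assumed value)
x = np.zeros(n)      # primal states, one per agent
y = np.zeros(n)      # auxiliary variables recording state change

for k in range(1, 2001):
    alpha = 1.0 / np.sqrt(k)    # diminishing step size, as in DGD
    grad = x - b                # gradient of each local quadratic
    x_next = R @ x + eps * y - alpha * grad
    # Deposit the mass removed by the row-stochastic averaging into the
    # auxiliary variable, then redistribute it column-stochastically.
    y = (x - R @ x) + (C @ y - eps * y)
    x = x_next

print("agent states:", np.round(x, 3), "| global minimizer:", b.mean())
```

The design point the sketch highlights: the column sums of the combined mixing step equal one, so the total mass of x and y is preserved without requiring any single doubly-stochastic matrix, which is what makes the scheme implementable over directed graphs.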
Keywords :
"Convergence","Optimization","Linear programming","Algorithm design and analysis","Eigenvalues and eigenfunctions","Distributed algorithms","Indexes"
Conference_Title :
2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI :
10.1109/ALLERTON.2015.7447120