Author :
Mokhtari, Aryan ; Ling, Qing ; Ribeiro, Alejandro
Author_Institution :
Dept. of Electr. & Syst. Eng., Univ. of Pennsylvania, Philadelphia, PA, USA
Abstract :
We consider the minimization of a sum of convex objective functions whose components are available at different nodes of a network, where each node may communicate only with its neighbors. Distributed subgradient and gradient methods are widely used for this problem, but because they rely on first-order information they often converge slowly, requiring a large number of local communications between nodes. In this paper we propose the Network Newton (NN) method, a distributed algorithm that incorporates second-order information through distributed evaluation of approximations to Newton steps. We also introduce an adaptive variant (ANN) to establish exact convergence. Numerical analyses show significant improvement in both convergence time and number of communications for NN relative to existing (first-order) alternatives.
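The abstract's key idea, approximating Newton steps by operations that respect the network's sparsity, can be illustrated with a toy simulation. The sketch below is not the authors' code: it applies an NN-K-style update to a quadratic consensus problem on a ring, using a penalized objective, a Hessian splitting H = D - B, and a K-term truncated series for the Hessian inverse (each extra term corresponding to one more round of neighbor communication). The constants `alpha`, `eps`, and `K`, the ring topology, and the specific splitting are illustrative assumptions; the paper itself should be consulted for the exact formulation.

```python
import numpy as np

# Toy NN-K-style sketch (illustrative, not the authors' implementation).
rng = np.random.default_rng(0)
n = 5                                 # nodes arranged on a ring
b = rng.normal(size=n)                # node i holds f_i(y) = 0.5 * (y - b[i])**2

# Doubly stochastic mixing weights for the ring (lazy averaging).
Z = np.eye(n) * 0.5
for i in range(n):
    Z[i, (i - 1) % n] += 0.25
    Z[i, (i + 1) % n] += 0.25

alpha, eps, K = 0.1, 1.0, 2           # penalty weight, step size, series terms
I = np.eye(n)
y = np.zeros(n)
for _ in range(200):
    # Gradient of the penalized objective alpha*sum f_i(y_i) + 0.5*y'(I-Z)y.
    g = alpha * (y - b) + (I - Z) @ y
    # Hessian splitting H = D - B; B has the sparsity pattern of the network.
    Zd = np.diag(np.diag(Z))
    D = alpha * I + 2 * (I - Zd)      # f_i'' = 1 for these quadratics
    B = I - 2 * Zd + Z
    Dis = np.diag(1.0 / np.sqrt(np.diag(D)))          # D^{-1/2}
    M = Dis @ B @ Dis
    # Truncated series: H^{-1} ~ D^{-1/2} (I + M + ... + M^K) D^{-1/2};
    # computing the k-th term costs one round of neighbor communication.
    Hinv = Dis @ sum(np.linalg.matrix_power(M, k) for k in range(K + 1)) @ Dis
    y = y - eps * Hinv @ g            # approximate Newton step

# y converges to the penalized minimizer; its gap to exact consensus
# (the mean of b) shrinks as alpha -> 0, motivating the adaptive variant.
print(np.max(np.abs(y - b.mean())))
```

Because the quadratic penalty biases the solution away from exact consensus, the printed gap is of order `alpha`; shrinking the penalty over time is, roughly, the role the abstract assigns to the adaptive (ANN) variant.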
Keywords :
Newton method; approximation theory; convergence of numerical methods; distributed algorithms; minimisation; network theory (graphs); adaptive network Newton (ANN) method; convex objective function sum minimization; distributed algorithms; first-order information; local communication; network Newton (NN) method; second-order information; approximation methods; convergence; optimization;
Conference_Title :
2014 48th Asilomar Conference on Signals, Systems and Computers
Print_ISBN :
978-1-4799-8295-0
DOI :
10.1109/ACSSC.2014.7094740