DocumentCode :
1403545
Title :
Higher Dimensional Consensus: Learning in Large-Scale Networks
Author :
Khan, Usman A. ; Kar, Soummya ; Moura, José M. F.
Author_Institution :
Dept. of Electr. & Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, USA
Volume :
58
Issue :
5
fYear :
2010
fDate :
5/1/2010
Firstpage :
2836
Lastpage :
2849
Abstract :
The paper considers higher dimensional consensus (HDC). HDC is a general class of linear distributed algorithms for large-scale networks that generalizes average-consensus and includes other interesting distributed algorithms, such as sensor localization, leader-follower algorithms in multiagent systems, and the distributed Jacobi algorithm. In HDC, the network nodes are partitioned into "anchors", nodes whose states are fixed over the HDC iterations, and "sensors", nodes whose states are updated by the algorithm. The paper starts by briefly considering what we call the forward problem: the conditions under which HDC converges, the limiting state to which it converges, and its convergence rate. The main focus of the paper is the inverse or design problem, i.e., learning the weights or parameters of the HDC so that the algorithm converges to a desired prespecified state. This generalizes the well-known problem of designing the weights in average-consensus. We pose learning as a constrained nonconvex optimization problem that we cast in the framework of multiobjective optimization (MOP) and to which we apply Pareto optimality. We derive the solution to the learning problem by proving relevant properties satisfied by the MOP solutions and by the Pareto front. Finally, the paper shows how the MOP approach leads to interesting tradeoffs (speed of convergence versus performance) arising in resource-constrained networks. Simulation studies illustrate our approach for a leader-follower architecture in multiagent systems.
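A minimal sketch of the forward iteration described in the abstract, assuming a toy network with scalar node states: sensor states are repeatedly mixed with neighbor sensor and anchor states through weight matrices `P` and `B` (hypothetical example values, not taken from the paper), and with spectral radius of `P` below one the iteration converges to the solution of x = Px + Bu.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_anchors = 4, 2
u = np.array([1.0, -1.0])            # anchor states, fixed over the iterations

# Hypothetical nonnegative weights with row sums < 1, so rho(P) < 1 and
# the linear iteration below converges.
W = rng.random((n_sensors, n_sensors + n_anchors))
W /= W.sum(axis=1, keepdims=True) * 1.1
P, B = W[:, :n_sensors], W[:, n_sensors:]

x = np.zeros(n_sensors)              # sensor states, updated by the algorithm
for _ in range(500):
    x = P @ x + B @ u                # each sensor mixes neighbor sensor/anchor states

# Limiting state: the unique solution of x = P x + B u.
x_limit = np.linalg.solve(np.eye(n_sensors) - P, B @ u)
print(np.allclose(x, x_limit))       # True
```

The design (inverse) problem discussed in the paper amounts to choosing `P` and `B` subject to the network's sparsity so that `x_limit` equals a prespecified target state.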
Keywords :
Pareto optimisation; distributed algorithms; inverse problems; large-scale systems; Pareto optimality; design problem; forward problem; higher dimensional consensus; inverse problem; large-scale networks; learning; linear distributed algorithms; multiobjective optimization; leader-follower; spectral graph theory
fLanguage :
English
Journal_Title :
IEEE Transactions on Signal Processing
Publisher :
IEEE
ISSN :
1053-587X
Type :
jour
DOI :
10.1109/TSP.2010.2042482
Filename :
5406090