DocumentCode :
760599
Title :
Quantized incremental algorithms for distributed optimization
Author :
Rabbat, Michael G. ; Nowak, Robert D.
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Wisconsin-Madison, Madison, WI, USA
Volume :
23
Issue :
4
fYear :
2005
fDate :
4/1/2005
Firstpage :
798
Lastpage :
808
Abstract :
Wireless sensor networks are capable of collecting an enormous amount of data. Often, the ultimate objective is to estimate a parameter or function from these data, and such estimators are typically the solution of an optimization problem (e.g., maximum likelihood, minimum mean-squared error, or maximum a posteriori). This paper investigates a general class of distributed optimization algorithms for "in-network" data processing, aimed at reducing the amount of energy and bandwidth used for communication. Our intuition tells us that processing the data in-network should, in general, require less energy than transmitting all of the data to a fusion center. In this paper, we address the questions: When, in fact, does in-network processing use less energy, and how much energy is saved? The proposed distributed algorithms are based on incremental optimization methods. A parameter estimate is circulated through the network, and along the way each node makes a small gradient descent-like adjustment to the estimate based only on its local data. Applying results from the theory of incremental subgradient optimization, we find that the distributed algorithms converge to an approximate solution for a broad class of problems. We extend these results to the case where the optimization variable is quantized before being transmitted to the next node and find that quantization does not affect the rate of convergence. Bounds on the number of incremental steps required for a certain level of accuracy provide insight into the tradeoff between estimation performance and communication overhead. Our main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.
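The abstract describes an incremental scheme in which a parameter estimate is passed from node to node, each node taking a small subgradient step on its local cost and quantizing the estimate before forwarding it. The following is a minimal sketch of that idea for a toy scalar least-squares problem; the cost functions, step size, and uniform quantizer here are illustrative assumptions, not the paper's exact construction.

```python
import random

def quantize(theta, delta=0.001):
    # Uniform quantizer with step delta (an assumed scheme; the paper's
    # quantizer may differ in detail).
    return round(theta / delta) * delta

# Hypothetical local costs: f_i(theta) = (theta - y_i)^2, one measurement
# y_i per sensor, so grad f_i(theta) = 2 * (theta - y_i). The centralized
# least-squares solution is the sample mean of the y_i.
random.seed(0)
data = [random.gauss(5.0, 1.0) for _ in range(50)]

def incremental_pass(theta, data, step, delta):
    # One cycle: the estimate visits every node once; each node takes a
    # small gradient step on its local cost, then quantizes the estimate
    # before transmitting it to the next node.
    for y in data:
        grad = 2.0 * (theta - y)
        theta = quantize(theta - step * grad, delta)
    return theta

theta = 0.0
for _ in range(100):
    theta = incremental_pass(theta, data, step=0.01, delta=0.001)
# theta settles near the sample mean, within an error floor determined
# by the constant step size and the quantization resolution.
```

With a constant step size, the iterates converge only to a neighborhood of the optimum, consistent with the "approximate solution" convergence the abstract cites from incremental subgradient theory; quantization enlarges that neighborhood but, per the abstract, does not change the convergence rate.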
Keywords :
distributed algorithms; gradient methods; optimisation; wireless sensor networks; convergence rate; distributed optimization; energy-accuracy tradeoff; gradient methods; in-network data processing; quantized incremental algorithms; wireless sensor networks; Bandwidth; Data processing; Distributed algorithms; Energy resources; Gradient methods; Maximum likelihood estimation; Optimization methods; Parameter estimation; Quantization; Wireless sensor networks; Distributed algorithms; energy-accuracy tradeoff; gradient methods; wireless sensor networks;
fLanguage :
English
Journal_Title :
IEEE Journal on Selected Areas in Communications
Publisher :
IEEE
ISSN :
0733-8716
Type :
jour
DOI :
10.1109/JSAC.2005.843546
Filename :
1413472