DocumentCode :
1968814
Title :
Online learning and adaptation over networks: More information is not necessarily better
Author :
Sayed, Ali H. ; Sheng-Yuan Tu ; Jianshu Chen
Author_Institution :
Dept. of Electr. Eng., Univ. of California, Los Angeles, Los Angeles, CA, USA
fYear :
2013
fDate :
10-15 Feb. 2013
Firstpage :
1
Lastpage :
8
Abstract :
We examine the performance of stochastic-gradient learners over connected networks for global optimization problems involving risk functions that are not necessarily quadratic. We consider two well-studied classes of distributed schemes: consensus strategies and diffusion strategies. We quantify how the mean-square error and the convergence rate of the network vary with the combination policy and with the fraction of informed agents. Several combination policies are considered, including doubly-stochastic rules, the averaging rule, the Metropolis rule, and the Hastings rule. It will be seen that the performance of the network does not necessarily improve with a larger proportion of informed agents. A strategy to counter the degradation in performance is presented.
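To make the setup concrete, the following is a minimal sketch (not the paper's exact formulation) of one combination policy and one distributed update mentioned in the abstract: Metropolis combination weights built from a network adjacency matrix, and an adapt-then-combine diffusion step for stochastic-gradient learners. The function names, the step-size mu, and the grad callback are illustrative assumptions, not from the source.

```python
import numpy as np

def metropolis_weights(adjacency):
    """Doubly-stochastic Metropolis combination matrix from a symmetric
    0/1 adjacency matrix (assumed illustration, no self-loops in input)."""
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)                       # neighbor counts
    A = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            if l != k and adjacency[k, l]:
                A[k, l] = 1.0 / (1.0 + max(deg[k], deg[l]))
        A[k, k] = 1.0 - A[k].sum()                    # each row sums to one
    return A

def atc_diffusion_step(W, A, mu, grad):
    """One adapt-then-combine (ATC) diffusion iteration (sketch).
    W    : (n_agents, dim) current estimates, one row per agent
    A    : (n_agents, n_agents) combination matrix (rows sum to one)
    mu   : step size
    grad : callable grad(k, w) returning a stochastic gradient at agent k"""
    n = W.shape[0]
    # adapt: each agent takes a local stochastic-gradient step
    psi = np.array([W[k] - mu * grad(k, W[k]) for k in range(n)])
    # combine: each agent averages its neighbors' intermediate estimates
    return A @ psi
```

A consensus strategy differs mainly in how the combination and gradient steps are interleaved; the fraction of informed agents can be modeled by having grad(k, w) return zero for agents that receive no data.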
Keywords :
knowledge based systems; learning (artificial intelligence); mean square error methods; multi-agent systems; agent learning; averaging rule; combination policy; consensus strategy; diffusion strategy; distributed scheme; doubly-stochastic rule; global optimization; Hastings rule; mean square error; Metropolis rule; network convergence rate; online learning; risk function; stochastic-gradient learner; Approximation methods; Convergence; Cost function; Noise; Vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Information Theory and Applications Workshop (ITA), 2013
Conference_Location :
San Diego, CA
Print_ISBN :
978-1-4673-4648-1
Type :
conf
DOI :
10.1109/ITA.2013.6502975
Filename :
6502975