DocumentCode :
2887636
Title :
Opinion dynamics and distributed learning of distributions
Author :
Sarwate, Anand D. ; Javidi, Tara
Author_Institution :
Information Theory and Applications Center, UC San Diego, La Jolla, CA, USA
fYear :
2011
fDate :
28-30 Sept. 2011
Firstpage :
1151
Lastpage :
1158
Abstract :
A protocol for distributed estimation of discrete distributions is proposed. Each agent begins with a single sample from the distribution, and the goal is to learn the empirical distribution of the samples. The protocol is based on a simple message-passing model motivated by communication in social networks. Agents sample a message randomly from their current estimates of the distribution, resulting in a protocol with quantized messages. Using tools from stochastic approximation, the algorithm is shown to converge almost surely. Simulations demonstrate this convergence and give some insight into the effect of network topology.
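The abstract only sketches the protocol at a high level, so the following is a hypothetical simulation sketch, not the paper's exact algorithm: each agent starts with a point mass at its own sample, repeatedly receives a single quantized message (a symbol drawn from a neighbor's current estimate), and folds it in with a decaying stochastic-approximation step size. The graph, step-size schedule, and activation model are all assumptions for illustration.

```python
import random

def simulate(samples, neighbors, rounds=20000, seed=0):
    """Hedged sketch of quantized-message distributed estimation.

    samples:   one discrete observation (0..k-1) per agent
    neighbors: adjacency list of the communication graph
    Returns each agent's estimated distribution after `rounds` updates.
    """
    rng = random.Random(seed)
    k = max(samples) + 1
    n = len(samples)
    # Each agent starts with a point-mass estimate at its own sample.
    est = [[1.0 if s == x else 0.0 for x in range(k)] for s in samples]
    for t in range(rounds):
        gamma = 1.0 / (t + 2)          # decaying step size (assumed schedule)
        i = rng.randrange(n)           # wake one agent uniformly at random
        j = rng.choice(neighbors[i])   # it contacts a random neighbor
        # The neighbor sends one quantized message: a single symbol drawn
        # from its current estimate, not the estimate itself.
        symbol = rng.choices(range(k), weights=est[j])[0]
        for x in range(k):
            est[i][x] = (1 - gamma) * est[i][x] + gamma * (1.0 if x == symbol else 0.0)
    return est

# Ring of 8 agents holding samples from a 3-symbol alphabet.
samples = [0, 0, 1, 2, 1, 0, 2, 0]
nbrs = [[(i - 1) % 8, (i + 1) % 8] for i in range(8)]
est = simulate(samples, nbrs)
empirical = [samples.count(x) / len(samples) for x in range(3)]
```

Because each update is a convex combination, every agent's estimate remains a probability vector throughout; the paper's stochastic-approximation analysis concerns whether these estimates converge almost surely toward the empirical distribution `empirical`.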
Keywords :
approximation theory; learning (artificial intelligence); message passing; multi-agent systems; protocols; social networking (online); stochastic processes; distributed estimation of discrete distributions; distributed learning of distributions; message-passing model; network topology; opinion dynamics; social networks; stochastic approximation; Approximation methods; Convergence; Histograms; Protocols; Stochastic processes;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
Conference_Location :
Monticello, IL
Print_ISBN :
978-1-4577-1817-5
Type :
conf
DOI :
10.1109/Allerton.2011.6120297
Filename :
6120297