  • DocumentCode
    43237
  • Title
    Distributed Learning of Distributions via Social Sampling
  • Author
    Sarwate, Anand D.; Javidi, Tara
  • Author_Institution
    Dept. of Electr. & Comput. Eng., Rutgers Univ., Piscataway, NJ, USA
  • Volume
    60
  • Issue
    1
  • fYear
    2015
  • fDate
    Jan. 2015
  • Firstpage
    34
  • Lastpage
    45
  • Abstract
    A protocol for distributed estimation of discrete distributions is proposed. Each agent begins with a single sample from the distribution, and the goal is to learn the empirical distribution of the samples. The protocol is based on a simple message-passing model motivated by communication in social networks. Agents sample a message randomly from their current estimates of the distribution, resulting in a protocol with quantized messages. Using tools from stochastic approximation, the algorithm is shown to converge almost surely. Examples illustrate three regimes with different consensus phenomena. Simulations demonstrate this convergence and give some insight into the effect of network topology.
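  • Code_Sketch
    A minimal Python sketch of a social-sampling-style protocol as summarized in the abstract: each agent starts from the indicator vector of its own sample, broadcasts one symbol drawn at random from its current estimate (the quantized message), and mixes the indicators it receives into its estimate with a decaying step size. The graph, step-size schedule, and mixing rule below are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def social_sampling_sim(samples, adjacency, n_rounds=2000, seed=0):
        """Toy simulation of a social-sampling-style protocol (illustrative only)."""
        rng = np.random.default_rng(seed)
        n = len(samples)
        K = int(max(samples)) + 1

        # Each agent starts from the indicator vector of its own sample.
        estimates = np.zeros((n, K))
        estimates[np.arange(n), samples] = 1.0

        for t in range(1, n_rounds + 1):
            step = 1.0 / (t + 1)  # assumed decaying step size
            # Quantized messages: each agent broadcasts one symbol drawn
            # at random from its current estimate of the distribution.
            messages = np.array([rng.choice(K, p=est) for est in estimates])
            new_estimates = estimates.copy()
            for i in range(n):
                neighbors = np.flatnonzero(adjacency[i])
                if neighbors.size == 0:
                    continue
                # Empirical distribution of the messages received from neighbors.
                received = np.zeros(K)
                for j in neighbors:
                    received[messages[j]] += 1.0 / neighbors.size
                # Stochastic-approximation-style mixing toward the received messages.
                new_estimates[i] = (1 - step) * estimates[i] + step * received
            estimates = new_estimates
        return estimates

    # Example: six agents on a ring graph, samples from a three-symbol alphabet.
    ring = (np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)).astype(int)
    final = social_sampling_sim(np.array([0, 1, 2, 0, 1, 0]), ring)
    # Rows typically end up close to the empirical distribution [1/2, 1/3, 1/6];
    # the exact limit depends on the (assumed) update rule and step sizes.
    print(np.round(final, 3))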
  • Keywords
    learning (artificial intelligence); multi-agent systems; sampling methods; discrete distribution; distributed learning; message passing model; network topology; social networks; social sampling; stochastic approximation; Approximation methods; Convergence; Histograms; Noise; Protocols; Stochastic processes; Vectors; Distributions; independent and identically distributed (i.i.d.);
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Automatic Control
  • Publisher
    IEEE
  • ISSN
    0018-9286
  • Type
    jour
  • DOI
    10.1109/TAC.2014.2329611
  • Filename
    6827923