DocumentCode :
2173093
Title :
On the generalization ability of distributed online learners
Author :
Towfic, Zaid J. ; Chen, Jianshu ; Sayed, Ali H.
Author_Institution :
Electr. Eng. Dept., Univ. of California, Los Angeles, Los Angeles, CA, USA
fYear :
2012
fDate :
23-26 Sept. 2012
Firstpage :
1
Lastpage :
6
Abstract :
We propose a fully-distributed stochastic-gradient strategy based on diffusion adaptation techniques. We show that, for strongly convex risk functions, the excess-risk at every node decays at the rate of O(1/(Ni)), where N is the number of learners and i is the iteration index. In this way, the distributed diffusion strategy, which relies only on local interactions, achieves the same convergence rate as centralized strategies that have access to all data from the nodes at every iteration. We also show that every learner improves its excess-risk relative to the non-cooperative mode of operation, in which each learner operates independently of the other learners.
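The abstract's diffusion strategy can be illustrated with a minimal adapt-then-combine (ATC) sketch: each node takes a local stochastic-gradient step, then averages intermediate estimates with its neighbors. The ring topology, step-size, noise level, and uniform combination weights below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, iters = 4, 3, 2000          # nodes, model dimension, iterations
w_true = rng.standard_normal(M)   # common model all nodes estimate
mu = 0.01                         # step-size (assumed, not from the paper)

# Ring topology with uniform combination weights (each row sums to 1).
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[k, l % N] = 1 / 3

w = np.zeros((N, M))              # one estimate per node
for _ in range(iters):
    # Adaptation: each node runs a local stochastic-gradient (LMS) step
    # on its own streaming data.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                     # regressor
        d = u @ w_true + 0.1 * rng.standard_normal()   # noisy measurement
        psi[k] = w[k] + mu * u * (d - u @ w[k])
    # Combination: each node averages neighbors' intermediate estimates.
    w = A @ psi

# Every node's estimate approaches the common model w_true.
err = np.max(np.abs(w - w_true))
```

Cooperation through the combination step is what lets each node's excess-risk improve over the non-cooperative case, where the `w = A @ psi` line would be replaced by `w = psi`.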
Keywords :
computational complexity; gradient methods; iterative methods; learning (artificial intelligence); optimisation; O(1/Ni); convex risk functions; diffusion adaptation techniques; distributed diffusion strategy; distributed online learners; fully-distributed stochastic-gradient strategy; generalization ability; iteration index; Approximation algorithms; Approximation methods; Convergence; Nickel; Noise; Optimization; Vectors; convergence rate; diffusion adaptation; distributed optimization; mean-square-error; risk function;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2012 IEEE International Workshop on Machine Learning for Signal Processing (MLSP)
Conference_Location :
Santander
ISSN :
1551-2541
Print_ISBN :
978-1-4673-1024-6
Electronic_ISBN :
1551-2541
Type :
conf
DOI :
10.1109/MLSP.2012.6349778
Filename :
6349778