DocumentCode :
2116984
Title :
Distributed Q-learning for energy harvesting Heterogeneous Networks
Author :
Miozzo, Marco ; Giupponi, Lorenza ; Rossi, Michele ; Dini, Paolo
Author_Institution :
CTTC, Av. Carl Friedrich Gauss, 7, 08860, Castelldefels, Barcelona, Spain
fYear :
2015
fDate :
8-12 June 2015
Firstpage :
2006
Lastpage :
2011
Abstract :
We consider a two-tier urban Heterogeneous Network in which small cells powered by renewable energy are deployed to extend capacity and to offload macro base stations. We use reinforcement learning techniques to design an algorithm that autonomously learns energy inflow and traffic demand patterns. The algorithm is based on a decentralized multi-agent Q-learning technique that, by interacting with the environment, obtains optimal policies aimed at improving system performance in terms of drop rate, throughput and energy efficiency. Simulation results show that our solution effectively adapts to changing environmental conditions and meets most of our performance objectives. At the end of the paper we identify areas for improvement.
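The abstract describes a decentralized multi-agent Q-learning scheme in which each small cell learns a policy from its local observations. As an illustration only, the core tabular Q-learning update such an agent might use can be sketched as follows; all names, the state/action encoding, and the hyperparameter values are assumptions for the sketch, not details taken from the paper.

```python
import random

class QLearningAgent:
    """Illustrative tabular Q-learning agent for one small cell.

    State and action spaces are abstract integer indices here; in the
    paper's setting a state could encode, e.g., battery level and traffic
    load, and actions could be switch-on/off decisions (assumption).
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        # Q-table initialized to zero: one row per state, one column per action
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def choose_action(self, state):
        # epsilon-greedy: explore with probability epsilon, else exploit
        if random.random() < self.epsilon:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[next_state])
        td_error = reward + self.gamma * best_next - self.q[state][action]
        self.q[state][action] += self.alpha * td_error
```

In a multi-agent deployment, each small cell would run its own independent instance of such an agent, with the reward shaped by local drop rate, throughput, and harvested-energy metrics (again, an assumption about the reward design, not the paper's exact formulation).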
Keywords :
Algorithm design and analysis; Batteries; Energy harvesting; Renewable energy sources; Switches; Throughput; Energy Efficiency; HetNet; Mobile Networks; Q-Learning; Renewable Energy; Sustainability;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2015 IEEE International Conference on Communication Workshop (ICCW)
Conference_Location :
London, United Kingdom
Type :
conf
DOI :
10.1109/ICCW.2015.7247475
Filename :
7247475