DocumentCode
1835
Title
TACT: A Transfer Actor-Critic Learning Framework for Energy Saving in Cellular Radio Access Networks
Author
Rongpeng Li; Zhifeng Zhao; Xianfu Chen; Jacques Palicot; Honggang Zhang
Author_Institution
Dept. of Inf. Sci. & Electron. Eng., Zhejiang Univ., Hangzhou, China
Volume
13
Issue
4
fYear
2014
fDate
April 2014
Firstpage
2000
Lastpage
2011
Abstract
Recent work has validated the possibility of improving energy efficiency in radio access networks (RANs) by dynamically turning some base stations (BSs) on or off. In this paper, we extend this research on BS switching operations, which should match traffic load variations. Instead of relying on dynamic traffic loads, which remain challenging to forecast precisely, we first formulate the traffic variations as a Markov decision process. Then, to minimize the long-term energy consumption of RANs, we design a BS switching scheme based on a reinforcement learning framework. Furthermore, to speed up the ongoing learning process, we propose a transfer actor-critic algorithm (TACT), which exploits learning expertise transferred from historical periods or neighboring regions, and prove its convergence. Finally, we evaluate the proposed scheme through extensive simulations under various practical configurations and show that the TACT algorithm yields a jump start in performance and achieves significant energy savings at the expense of a tolerable degradation in delay performance.
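To illustrate the kind of learning loop the abstract describes, the following is a minimal sketch of a tabular transfer actor-critic update, not the paper's exact TACT formulation: a critic tracks state values via temporal-difference errors, an actor maintains softmax action preferences, and a transferred policy (e.g., from a historical period or neighboring region) is blended into action selection with a weight that decays as native experience accumulates. The state/action sizes, the decay schedule, the toy environment in env_step, and names such as transferred_policy and omega are illustrative assumptions, not taken from the paper.

import numpy as np

# Illustrative tabular transfer actor-critic sketch (assumptions noted above):
# critic = state-value estimates, actor = softmax preferences, plus a
# transferred policy blended in with a decaying weight.

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2              # e.g., discretized traffic levels; BS off/on
alpha_c, alpha_a, gamma = 0.1, 0.05, 0.9

V = np.zeros(n_states)                  # critic: state-value estimates
pref = np.zeros((n_states, n_actions))  # actor: native action preferences

# Hypothetical transferred policy from an earlier period or neighboring region.
transferred_policy = np.full((n_states, n_actions), 1.0 / n_actions)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def behavior_policy(s, t):
    # Blend transferred and native policies; transfer weight decays over time.
    omega = 1.0 / (1.0 + 0.01 * t)      # illustrative decay schedule
    return omega * transferred_policy[s] + (1.0 - omega) * softmax(pref[s])

def env_step(s, a):
    # Toy stand-in for the RAN environment: random next state and a cost that
    # penalizes keeping the BS on (a == 1) at low-traffic states and incurring
    # delay (a == 0) at high-traffic states.
    cost = a * (1.0 + 0.5 * (n_states - 1 - s)) + (1 - a) * 2.0 * s
    return rng.integers(n_states), -cost   # reward = negative energy/delay cost

s = rng.integers(n_states)
for t in range(10_000):
    probs = behavior_policy(s, t)
    a = rng.choice(n_actions, p=probs)
    s_next, r = env_step(s, a)

    td_error = r + gamma * V[s_next] - V[s]   # critic evaluation
    V[s] += alpha_c * td_error                # critic update
    pref[s, a] += alpha_a * td_error          # actor update
    s = s_next

print("Learned greedy actions per state:", pref.argmax(axis=1))

Because the transferred policy dominates early action selection, the learner starts from reasonable switching decisions (the "jump start" the abstract mentions) and gradually hands control over to the natively learned policy as the decay weight shrinks.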
Keywords
Markov processes; cellular radio; decision theory; learning (artificial intelligence); radio access networks; telecommunication computing; telecommunication power management; telecommunication traffic; BS switching operation scheme; Markov decision process; RANs; TACT algorithm; base stations; cellular radio access networks; energy consumption; energy efficiency; energy saving; neighboring regions; reinforcement learning framework; tolerable delay performance; traffic load variations; transfer actor-critic learning framework; algorithm design and analysis; heuristic algorithms; switches; actor-critic algorithm; green communications; reinforcement learning; sleeping mode; transfer learning
fLanguage
English
Journal_Title
IEEE Transactions on Wireless Communications
Publisher
IEEE
ISSN
1536-1276
Type
jour
DOI
10.1109/TWC.2014.022014.130840
Filename
6747280
Link To Document