DocumentCode
112176
Title
Energy Sharing for Multiple Sensor Nodes With Finite Buffers
Author
Padakandla, Sindhu; Prabuchandran, K.J.; Bhatnagar, Shalabh
Author_Institution
Dept. of Comput. Sci. & Autom., Indian Inst. of Sci., Bangalore, India
Volume
63
Issue
5
fYear
2015
fDate
May 2015
Firstpage
1811
Lastpage
1823
Abstract
We consider the problem of finding optimal energy sharing policies that maximize the network performance of a system comprising multiple sensor nodes and a single energy harvesting (EH) source. Sensor nodes periodically sense the random field and generate data, which are stored in the corresponding data queues. The EH source harnesses energy from ambient energy sources and stores the generated energy in an energy buffer. Sensor nodes receive energy for data transmission from the EH source, which must share the stored energy among the nodes efficiently so as to minimize the long-run average delay in data transmission. We formulate the energy sharing problem in the framework of average cost infinite-horizon Markov decision processes (MDPs). We develop efficient energy sharing algorithms, namely a Q-learning algorithm with exploration mechanisms based on the ε-greedy method as well as the upper confidence bound (UCB). We extend these algorithms by incorporating state and action space aggregation to tackle state-action space explosion in the MDP. We also develop a cross-entropy based method that incorporates policy parameterization to find near-optimal energy sharing policies. Through simulations, we show that our algorithms yield energy sharing policies that outperform the heuristic greedy method.
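For a concrete starting point, below is a minimal sketch of tabular Q-learning with ε-greedy exploration for a toy energy-sharing environment. The `env` interface, the state/action encoding (e.g., energy buffer level and per-node queue lengths), the cost signal, and the discounted-cost update (used here in place of the paper's average-cost formulation) are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Sketch only: tabular Q-learning with epsilon-greedy exploration for a toy
# energy-sharing problem. The environment interface, state/action encoding,
# and the discounted-cost update are assumptions for illustration; the paper
# works with an average-cost MDP and also proposes UCB-based exploration,
# state-action aggregation, and a cross-entropy method.

def epsilon_greedy(Q, state, actions, epsilon):
    """With probability epsilon pick a random action, else the min-cost action."""
    if random.random() < epsilon:
        return random.choice(actions)
    return min(actions, key=lambda a: Q[(state, a)])

def q_learning(env, actions, episodes=500, steps=200,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Assumed env API: env.reset() -> state; env.step(state, a) -> (next_state, cost)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state = env.reset()
        for _ in range(steps):
            action = epsilon_greedy(Q, state, actions, epsilon)
            next_state, cost = env.step(state, action)
            # Cost-minimizing update: delay is a cost to be reduced, not a reward.
            best_next = min(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (cost + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```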
Keywords
Markov processes; buffer storage; data communication; decision theory; energy harvesting; greedy algorithms; learning (artificial intelligence); sensor fusion; telecommunication power management; telecommunication power supplies; wireless sensor networks; ε-greedy method; EH source; MDPs; Q-learning algorithm; UCB; action space aggregation; average cost infinite-horizon Markov decision processes; cross entropy based method; data transmission; energy buffer; energy sharing algorithms; finite buffers; heuristic greedy method; long-run average delay; multiple sensor nodes; network performance maximization; optimal energy sharing policy; policy parameterization; single energy harvesting source; state space aggregation; state-action space explosion; upper confidence bound; Approximation algorithms; Batteries; Data communication; Delays; Energy harvesting; Heuristic algorithms; Transmitters; Energy harvesting sensor nodes; Markov decision process; Q-learning; energy sharing; state aggregation;
fLanguage
English
Journal_Title
IEEE Transactions on Communications
Publisher
IEEE
ISSN
0090-6778
Type
jour
DOI
10.1109/TCOMM.2015.2415777
Filename
7065316
Link To Document