DocumentCode :
3605734
Title :
Adaptive Duty Cycling in Sensor Networks With Energy Harvesting Using Continuous-Time Markov Chain and Fluid Models
Author :
Chan, Wai Hong Ronald; Zhang, Pengfei; Nevat, Ido; Nagarajan, Sai Ganesh; Valera, Alvin C.; Tan, Hwee-Xian; Gautam, Natarajan
Author_Institution :
Mech. Eng. Dept., Stanford Univ., Stanford, CA, USA
Volume :
33
Issue :
12
Year :
2015
Firstpage :
2687
Lastpage :
2700
Abstract :
The dynamic and unpredictable nature of the energy harvesting sources available to wireless sensor networks, together with time variation in network statistics such as packet transmission rates and link qualities, necessitates adaptive duty cycling techniques. Such adaptive control allows sensor nodes to achieve long-run energy neutrality, where energy supply and demand are balanced in a dynamic environment so that the nodes function continuously. In this paper, we develop a new framework enabling an adaptive duty cycling scheme for sensor networks that accounts for the node battery level, the ambient energy that can be harvested, and application-level QoS requirements. We model the system as a Markov decision process (MDP) that modifies its state transition policy using reinforcement learning. The MDP uses continuous-time Markov chains (CTMCs) to model the network state of a node, yielding key QoS metrics such as latency, loss probability, and power consumption, and to model the node battery level while respecting physically feasible rates of change. We show that with an appropriate choice of reward function for the MDP, together with a suitable learning rate, exploitation probability, and discount factor, the need to maintain minimum QoS levels for optimal network performance can be balanced against the need to maintain a finite battery level that ensures node operability. Extensive simulation results demonstrate the benefit of our algorithm for different reward functions and parameters.
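For readers unfamiliar with the learning machinery the abstract refers to, the following is a minimal tabular Q-learning sketch of an adaptive duty-cycle controller. It assumes a coarsely discretized battery state, an invented action set, and an invented reward function; all identifiers, discretizations, and weights here are illustrative assumptions, not the paper's actual formulation, which derives QoS metrics and battery dynamics from CTMC and fluid models.

import random

# Illustrative discretization; the paper's actual state space (CTMC network
# states plus a fluid battery level) is richer than this sketch.
NUM_BATTERY_STATES = 10              # hypothetical battery levels 0..9
DUTY_CYCLES = [0.1, 0.25, 0.5, 1.0]  # hypothetical duty-cycle actions

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.8  # exploitation probability: exploit with prob. EPSILON, else explore

# Q-table over (battery state, duty-cycle action) pairs.
Q = [[0.0] * len(DUTY_CYCLES) for _ in range(NUM_BATTERY_STATES)]

def choose_action(state):
    # Epsilon-exploitation policy: pick the best known action with
    # probability EPSILON, otherwise explore uniformly at random.
    if random.random() < EPSILON:
        return max(range(len(DUTY_CYCLES)), key=lambda a: Q[state][a])
    return random.randrange(len(DUTY_CYCLES))

def reward(latency, loss_prob, battery_state):
    # Hypothetical reward: penalize poor QoS and an empty battery.
    # The paper studies several reward functions; the weights here are invented.
    qos_penalty = latency + 10.0 * loss_prob
    depletion_penalty = 100.0 if battery_state == 0 else 0.0
    return -(qos_penalty + depletion_penalty)

def update(state, action, r, next_state):
    # Standard tabular Q-learning update with learning rate ALPHA and
    # discount factor GAMMA.
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])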
Keywords :
Markov processes; adaptive control; energy harvesting; learning (artificial intelligence); power consumption; probability; quality of service; telecommunication computing; wireless sensor networks; CTMC; MDP; Markov decision process; adaptive duty cycling; application-level QoS requirements; continuous-time Markov chain; fluid model; long-run energy neutrality; loss probability; reinforcement learning; state transition; adaptive systems; batteries; green communications; measurement; power demand
Language :
English
Journal_Title :
IEEE Journal on Selected Areas in Communications
Publisher :
IEEE
ISSN :
0733-8716
Type :
Journal Article
DOI :
10.1109/JSAC.2015.2478717
Filename :
7264968
Link To Document :
https://doi.org/10.1109/JSAC.2015.2478717