DocumentCode
64442
Title
Dynamic Activation Policies for Event Capture in Rechargeable Sensor Networks
Author
Zhu Ren ; Peng Cheng ; Jiming Chen ; Yau, David K. Y. ; Youxian Sun
Author_Institution
State Key Lab. of Ind. Control Technol., Zhejiang Univ., Hangzhou, China
Volume
25
Issue
12
fYear
2014
fDate
Dec. 2014
Firstpage
3124
Lastpage
3134
Abstract
We consider the problem of event capture by a rechargeable sensor network. We assume that the events of interest follow a renewal process whose event inter-arrival times are drawn from a general probability distribution, and that a stochastic recharge process provides energy for the sensors' operation. The dynamics of the event and recharge processes make the optimal sensor activation problem highly challenging. In this paper we first consider the single-sensor problem. Using dynamic control theory, we consider a full-information model in which, independent of its activation schedule, the sensor knows whether or not an event occurred in the last time slot. In this case, we develop a simple and optimal greedy policy. We then consider a partial-information model in which the sensor learns of an event's occurrence only when it is active. This problem falls into the class of partially observable Markov decision processes (POMDP). Since the POMDP's optimal policy has exponential computational complexity and is intrinsically hard to solve, we propose an efficient heuristic clustering policy and evaluate its performance. Finally, our solutions are extended to handle a network setting in which multiple sensors collaborate to capture the events. We also provide extensive simulation results to evaluate the performance of our solutions.
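The following is a minimal simulation sketch of the single-sensor, full-information setting described above, under assumed parameters. The greedy rule used here (activate whenever the battery is non-empty), the Bernoulli recharge process, and the discrete inter-arrival distribution are all illustrative placeholders; the abstract does not specify the paper's actual policy or model parameters.

```python
import random

# Hypothetical parameters (not from the paper).
SLOTS = 10_000                  # number of discrete time slots
BATTERY_CAP = 5                 # maximum stored energy units (assumed)
RECHARGE_PROB = 0.3             # Bernoulli recharge: P(one unit arrives per slot)
INTER_ARRIVAL = [2, 3, 5, 8]    # support of the renewal inter-arrival distribution (assumed)
INTER_WEIGHTS = [0.4, 0.3, 0.2, 0.1]

def run_greedy() -> float:
    """Simulate a greedy activation rule and return the fraction of events captured."""
    battery = BATTERY_CAP
    next_event = random.choices(INTER_ARRIVAL, INTER_WEIGHTS)[0]
    captured = total_events = 0

    for _ in range(SLOTS):
        # Stochastic recharge: one energy unit may arrive each slot.
        if random.random() < RECHARGE_PROB:
            battery = min(BATTERY_CAP, battery + 1)

        # Greedy activation (hypothetical rule): be active whenever energy remains.
        active = battery > 0
        if active:
            battery -= 1

        # Renewal event process: an event occurs when the countdown expires.
        next_event -= 1
        if next_event == 0:
            total_events += 1
            if active:
                captured += 1
            next_event = random.choices(INTER_ARRIVAL, INTER_WEIGHTS)[0]

    return captured / max(total_events, 1)

if __name__ == "__main__":
    print(f"fraction of events captured: {run_greedy():.3f}")
```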
Keywords
Markov processes; computational complexity; decision theory; government policies; greedy algorithms; pattern clustering; wireless sensor networks; POMDP; activation schedule; dynamic activation policies; dynamic control theory; event capture; event interarrival times; exponential computational complexity; full-information model; general probability distribution; heuristic clustering policy; network setting; optimal greedy policy; optimal sensor activation problem; partial-information model; partially observable Markov decision processes; rechargeable sensor network; renewal process; single-sensor problem; stochastic recharge process; Educational institutions; Markov processes; Monitoring; Optimization; Schedules; Wireless communication; Markov decision process; Rechargeable sensors; dynamic activation; event capture
fLanguage
English
Journal_Title
IEEE Transactions on Parallel and Distributed Systems
Publisher
IEEE
ISSN
1045-9219
Type
jour
DOI
10.1109/TPDS.2013.2297096
Filename
6714601
Link To Document