DocumentCode :
760590
Title :
Near-optimal reinforcement learning framework for energy-aware sensor communications
Author :
Pandana, Charles ; Liu, K. J Ray
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Maryland, College Park, MD, USA
Volume :
23
Issue :
4
fYear :
2005
fDate :
4/1/2005
Firstpage :
788
Lastpage :
797
Abstract :
We consider the problem of maximizing average throughput per total consumed energy in packetized sensor communications. Our study results in a near-optimal transmission strategy that chooses the optimal modulation level and transmit power while adapting to the incoming traffic rate, buffer condition, and channel condition. We investigate both point-to-point and multinode communication scenarios. Many solutions in previous works require the state transition probabilities, which may be hard to obtain in practice. We are therefore motivated to propose and utilize a class of learning algorithms, called reinforcement learning (RL), to obtain a near-optimal policy in point-to-point communication and a good transmission strategy in the multinode scenario. For comparison purposes, we develop stochastic models to obtain the optimal strategy in point-to-point communication and show that the learned policy is close to the optimal policy. We further extend the algorithm to solve the optimization problem in the multinode scenario via independent learning. We compare the learned policy to a simple policy, in which the agent chooses the highest possible modulation and selects the transmit power that achieves a predefined signal-to-interference ratio (SIR) for that modulation. The proposed learning algorithm achieves more than twice the throughput per energy of the simple policy, particularly in the high packet-arrival regime. Besides its good performance, the RL algorithm provides a simple, systematic, self-organized, and distributed way to decide the transmission strategy.
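The abstract describes an RL-based transmission policy that selects a modulation level and transmit power based on the buffer and channel state, with reward tied to throughput per consumed energy. Below is a minimal Q-learning sketch of such a learner; the state encoding, action sets, reward definition, and parameter values are illustrative assumptions, not the paper's exact MDP formulation.

```python
import random
from collections import defaultdict

# Minimal Q-learning sketch for an energy-aware transmission policy.
# States, actions, and reward are illustrative placeholders, not the
# paper's exact formulation.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1           # learning rate, discount, exploration
MOD_LEVELS = [1, 2, 4, 6]                        # bits/symbol (assumed modulation set)
POWER_LEVELS = [1.0, 5.0, 10.0]                  # transmit power levels in mW (assumed)
ACTIONS = [(m, p) for m in MOD_LEVELS for p in POWER_LEVELS]

Q = defaultdict(float)                           # Q[(state, action)] -> value estimate

def choose_action(state):
    """Epsilon-greedy selection of a (modulation, power) pair for the current state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Typical interaction loop (environment calls are placeholders):
# state = (buffer_occupancy, channel_state)
# action = choose_action(state)
# reward = packets_delivered / energy_consumed   # throughput per energy, as in the paper
# update(state, action, reward, next_state)
```

In the multinode case described in the abstract, each node would run an independent learner of this kind and adapt to the interference it observes, rather than coordinating through a centralized controller.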
Keywords :
Markov processes; learning (artificial intelligence); optical modulation; optimisation; telecommunication traffic; wireless sensor networks; MDP; Markov decision process; SIR; channel condition; energy-aware sensor communication; incoming traffic rate; near-optimal reinforcement learning algorithm; optimal modulation level; optimization problem; point-to-point communication; signal-to-interference ratio; throughput maximization; Energy efficiency; Intelligent sensors; Learning; Monitoring; Power control; Resource management; Sensor systems; Stochastic processes; Throughput; Wireless sensor networks; Energy-aware sensor communications; Markov decision process (MDP); reinforcement learning (RL);
fLanguage :
English
Journal_Title :
Selected Areas in Communications, IEEE Journal on
Publisher :
IEEE
ISSN :
0733-8716
Type :
jour
DOI :
10.1109/JSAC.2005.843547
Filename :
1413471