DocumentCode :
2674203
Title :
A dynamic checkpointing scheme based on reinforcement learning
Author :
Okamura, Hiroyuki ; Nishimura, Yuki ; Dohi, Tadashi
Author_Institution :
Dept. of Inf. Eng., Hiroshima Univ., Japan
fYear :
2004
fDate :
3-5 March 2004
Firstpage :
151
Lastpage :
158
Abstract :
We develop a new checkpointing scheme for a uniprocess application. First, we model the checkpointing scheme as a semi-Markov decision process and apply reinforcement learning to statistically estimate the optimal checkpointing policy. More specifically, a representative reinforcement learning algorithm, Q-learning, is used to develop an adaptive checkpointing scheme. In simulation experiments, we examine the asymptotic behavior of the system overhead under adaptive checkpointing and show quantitatively that the proposed dynamic checkpointing algorithm is useful and robust under incomplete knowledge of the failure time distribution.
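As a rough illustration of the approach the abstract describes, the sketch below applies tabular Q-learning to choose checkpoint intervals across simulated failure/recovery cycles. The state/action discretization, the cost constants, and the exponential failure sampler are all illustrative assumptions for this sketch, not the authors' actual semi-Markov formulation.

```python
import random

# Illustrative assumptions (not from the paper): states count the
# checkpoints taken since the last failure; actions are candidate
# checkpoint intervals in abstract time units.
ACTIONS = [5.0, 10.0, 20.0, 40.0]   # candidate checkpoint intervals
N_STATES = 10                       # truncated state space

ALPHA = 0.1          # learning rate
GAMMA = 0.95         # discount factor
EPSILON = 0.1        # exploration probability
C_CHECKPOINT = 1.0   # overhead of taking one checkpoint
C_RECOVERY = 5.0     # fixed recovery cost after a failure

Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy selection over the candidate intervals."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    row = Q[state]
    return row.index(max(row))

def simulate_episode(failure_time_sampler):
    """Run one failure/recovery cycle, updating Q online."""
    failure_time = failure_time_sampler()
    t, state = 0.0, 0
    while True:
        a = choose_action(state)
        interval = ACTIONS[a]
        if t + interval >= failure_time:
            # Failure strikes before the next checkpoint: pay the
            # recovery cost plus the work lost since the last
            # checkpoint, then end the episode.
            lost_work = failure_time - t
            reward = -(C_RECOVERY + lost_work)
            Q[state][a] += ALPHA * (reward - Q[state][a])
            return
        # Checkpoint completed: pay only the checkpoint overhead.
        t += interval
        next_state = min(state + 1, N_STATES - 1)
        reward = -C_CHECKPOINT
        best_next = max(Q[next_state])
        Q[state][a] += ALPHA * (reward + GAMMA * best_next - Q[state][a])
        state = next_state

# Example run: failure times drawn from an exponential distribution
# whose rate is unknown to the learner; the policy adapts from
# observed costs alone, without a failure-time model.
for _ in range(10_000):
    simulate_episode(lambda: random.expovariate(1 / 50.0))
```

In this toy setup the learned values tend toward longer intervals when failures are rare and shorter ones when they are frequent, which mirrors the kind of adaptivity under an unknown failure time distribution that the abstract claims.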
Keywords :
Markov processes; learning (artificial intelligence); software fault tolerance; system recovery; Q-learning algorithm; adaptive checkpointing scheme; asymptotic system behavior; dynamic checkpointing scheme; failure time distribution; optimal checkpointing policy; reinforcement learning algorithm; semi-Markov decision process; uniprocess application; Adaptive systems; Availability; Checkpointing; Databases; Delay; Dynamic programming; Fault tolerant systems; Heuristic algorithms; Learning; Robustness;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
10th IEEE Pacific Rim International Symposium on Dependable Computing, 2004. Proceedings.
Print_ISBN :
0-7695-2076-6
Type :
conf
DOI :
10.1109/PRDC.2004.1276566
Filename :
1276566