Title of article :
Performance evaluation with temporal rewards
Author/Authors :
Voeten, Jeroen P.M.
Issue Information :
Journal with serial number, year 2002
Pages :
30
From page :
189
To page :
218
Abstract :
Today many formalisms exist for specifying complex Markov chains. In contrast, formalisms for specifying rewards, enabling the analysis of long-run average performance properties, have remained quite primitive. Basically, they only support the analysis of relatively simple performance metrics that can be expressed as long-run averages of atomic rewards, i.e. rewards that are deducible directly from the individual states of the initial Markov chain specification. To deal with complex performance metrics that depend on the accumulation of atomic rewards over sequences of states, the initial specification has to be extended explicitly to provide the required state information. To solve this problem, we introduce in this paper a new formalism of temporal rewards that allows complex quantitative properties to be expressed in terms of temporal reward formulas. Together, an initial (discrete-time) Markov chain and the temporal reward formulas implicitly define an extended Markov chain that allows the quantitative property to be determined by traditional techniques for computing long-run averages. A method to construct the extended chain is given, and it is proved that this method leaves long-run averages invariant for atomic rewards. We further establish conditions that guarantee the preservation of ergodicity. The construction method can build the extended chain in an on-the-fly manner, allowing for efficient simulation.
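As a rough illustration of the long-run average computation the abstract refers to, the sketch below (hypothetical; the transition matrix, reward vector, and NumPy-based solution are illustrative assumptions, not taken from the paper) computes the long-run average of an atomic reward for a small discrete-time Markov chain by solving for its stationary distribution.

import numpy as np

# Hypothetical 3-state discrete-time Markov chain (not from the paper).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])   # row-stochastic transition matrix
r = np.array([1.0, 0.0, 2.0])     # atomic reward attached to each state

# Long-run average of an atomic reward: solve pi P = pi with sum(pi) = 1,
# then take the expectation of r under the stationary distribution pi.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.concatenate([np.zeros(3), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary distribution:", pi)
print("long-run average reward:", pi @ r)

Path-dependent metrics (e.g. rewards accumulated over consecutive visits) would require extending the state space as described in the paper; this sketch only covers the atomic-reward case.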
Keywords :
Path-based reward variables , Temporal logic , Markov chains , Performance Evaluation , Reward functions
Journal title :
Performance Evaluation
Serial Year :
2002
Record number :
1569646