Title of article :
The factored policy-gradient planner
Author/Authors :
Olivier Buffet, Douglas Aberdeen
Issue Information :
Journal issue, serial year 2009
Pages :
26
From page :
722
To page :
747
Abstract :
We present an anytime concurrent probabilistic temporal planner (CPTP) that handles continuous and discrete uncertainties and metric functions. Rather than relying on dynamic programming, our approach builds on methods from stochastic local policy search. That is, we optimise a parameterised policy using gradient ascent. The flexibility of this policy-gradient approach, combined with its low memory use, the use of function approximation methods, and factorisation of the policy, allows us to tackle complex domains. This factored policy-gradient (FPG) planner can optimise the number of steps to the goal, the probability of success, or a combination of both. We compare the FPG planner to other planners on CPTP domains, and on simpler but better-studied non-concurrent, non-temporal probabilistic planning (PP) domains. We present FPG-ipc, the PP version of the planner, which was successful in the probabilistic track of the fifth International Planning Competition.
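For context, the core idea described in the abstract, gradient ascent on the parameters of a factored, per-task policy, can be sketched in a few lines. This is a minimal illustrative REINFORCE-style example, not the authors' FPG implementation; the class name FactoredPolicy, the per-task logistic form, and the learning-rate choice are assumptions introduced here.

    # Illustrative sketch only: a factored policy with one small logistic
    # policy per task, trained by REINFORCE-style gradient ascent.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class FactoredPolicy:
        """Each task has its own weight vector and decides independently,
        from the current observation, whether to start."""

        def __init__(self, n_tasks, obs_dim, lr=0.01):
            self.theta = np.zeros((n_tasks, obs_dim))  # one weight vector per task
            self.lr = lr

        def act(self, obs):
            # Probability of starting each task, and a sampled joint decision.
            probs = sigmoid(self.theta @ obs)
            actions = (rng.random(len(probs)) < probs).astype(float)
            # Gradient of the log-probability of the sampled joint action
            # (a product of independent Bernoullis).
            grad_log_prob = (actions - probs)[:, None] * obs[None, :]
            return actions, grad_log_prob

        def update(self, episode_grads, episode_returns):
            # Gradient ascent on expected return, with a mean-return
            # baseline to reduce variance.
            baseline = np.mean(episode_returns)
            for g, ret in zip(episode_grads, episode_returns):
                self.theta += self.lr * (ret - baseline) * g

Each episode's gradient here would be the sum of grad_log_prob over its steps; the low memory footprint comes from storing only the policy parameters rather than a value function over states.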
Keywords :
Policy-gradient , AI planning , Concurrent probabilistic temporal planning , Reinforcement learning
Journal title :
Artificial Intelligence
Serial Year :
2009
Record number :
1207684