  • DocumentCode
    250552
  • Title
    Bayesian exploration and interactive demonstration in continuous state MAXQ-learning
  • Author
    Gräve, Kathrin; Behnke, Sven
  • Author_Institution
    Dept. of Comput. Sci., Autonomous Intell. Syst. Group, Univ. of Bonn, Bonn, Germany
  • fYear
    2014
  • fDate
    May 31, 2014 - June 7, 2014
  • Firstpage
    3323
  • Lastpage
    3330
  • Abstract
    Deploying robots for service tasks requires learning algorithms that scale to the combinatorial complexity of our daily environment. Inspired by the way humans decompose complex tasks, hierarchical methods for robot learning have attracted significant interest. In this paper, we apply the MAXQ method for hierarchical reinforcement learning to continuous state spaces. By using Gaussian Process Regression for MAXQ value function decomposition, we obtain probabilistic estimates of primitive and completion values for every subtask within the MAXQ hierarchy. From these, we recursively compute probabilistic estimates of state-action values. Based on the expected deviation of these estimates, we devise a Bayesian exploration strategy that balances optimization of expected values and risk from exploring unknown actions. To further reduce risk and to accelerate learning, we complement MAXQ with learning from demonstrations in an interactive way. In every situation and subtask, the system may ask for a demonstration if there is not enough knowledge available to determine a safe action for exploration. We demonstrate the ability of the proposed system to efficiently learn solutions to complex tasks on a box stacking scenario.
  • Keywords
    Bayes methods; Gaussian processes; learning (artificial intelligence); regression analysis; robots; Bayesian exploration; Gaussian process regression; MAXQ method; MAXQ value function decomposition; combinatorial complexity; continuous state MAXQ-learning; continuous state spaces; hierarchical reinforcement learning; humans decompose complex tasks; interactive demonstration; learning algorithms; robot learning; Approximation methods; Bayes methods; Learning (artificial intelligence); Learning systems; Optimization; Robots; Uncertainty
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2014 IEEE International Conference on Robotics and Automation (ICRA)
  • Conference_Location
    Hong Kong
  • Type
    conf
  • DOI
    10.1109/ICRA.2014.6907337
  • Filename
    6907337
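  • Illustrative_Sketch
    The abstract describes combining Gaussian Process estimates of primitive values V(a, s) and completion values C(i, s, a) into probabilistic state-action values Q(i, s, a) = V(a, s) + C(i, s, a) for every subtask in the MAXQ hierarchy, then selecting actions by trading expected value against uncertainty and asking for a demonstration when no sufficiently safe action is known. The Python sketch below only illustrates that recursion under simplifying assumptions and is not the authors' implementation: scikit-learn's GaussianProcessRegressor stands in for the paper's GP models, the two estimates are combined as independent Gaussians, a mean-minus-weighted-standard-deviation score stands in for the paper's expected-deviation criterion, and all names (Subtask, risk_weight, demo_threshold) are hypothetical.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    class Subtask:
        """One node of a MAXQ hierarchy: primitive subtasks keep a GP over
        their immediate value V(a, s); composite subtasks keep one
        completion-value GP C(i, s, a) per child."""

        def __init__(self, name, children=()):
            self.name = name
            self.children = list(children)  # empty list => primitive action
            kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
            self.value_gp = GaussianProcessRegressor(kernel=kernel)
            self.completion_gps = {c.name: GaussianProcessRegressor(kernel=kernel)
                                   for c in self.children}

        def value(self, state):
            """Mean and variance of V(self, state), recursing over children."""
            if not self.children:  # primitive action: GP over immediate value
                m, s = self.value_gp.predict(state.reshape(1, -1), return_std=True)
                return m[0], s[0] ** 2
            # composite subtask: value of the child with the best expected Q
            stats = [self.q_value(state, c) for c in self.children]
            return max(stats, key=lambda ms: ms[0])

        def q_value(self, state, child):
            """Q(self, state, child) = V(child, state) + C(self, state, child);
            means and variances add because the two GP estimates are treated
            as independent Gaussians (a simplifying assumption)."""
            v_mean, v_var = child.value(state)
            c_mean, c_std = self.completion_gps[child.name].predict(
                state.reshape(1, -1), return_std=True)
            return v_mean + c_mean[0], v_var + c_std[0] ** 2

    def choose_action(task, state, risk_weight=1.0, demo_threshold=1.0):
        """Pick the child maximizing mean Q minus a risk penalty on its
        standard deviation; if even the best choice is too uncertain,
        fall back to requesting a demonstration from the teacher."""
        best_child, best_score, best_std = None, -np.inf, np.inf
        for child in task.children:
            q_mean, q_var = task.q_value(state, child)
            score, std = q_mean - risk_weight * np.sqrt(q_var), np.sqrt(q_var)
            if score > best_score:
                best_child, best_score, best_std = child, score, std
        if best_std > demo_threshold:
            return "request_demonstration"
        return best_child.name

    # Hypothetical two-level hierarchy: before any data, the GP priors are
    # maximally uncertain, so the policy asks the teacher for a demonstration.
    grasp, place = Subtask("grasp"), Subtask("place")
    stack = Subtask("stack_box", children=[grasp, place])
    print(choose_action(stack, state=np.zeros(3)))  # -> request_demonstration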