• DocumentCode
    1576909
  • Title
    Artificial curiosity with planning for autonomous perceptual and cognitive development
  • Author
    Luciw, Matthew; Graziano, Vincent; Ring, Mark; Schmidhuber, Jürgen
  • Author_Institution
    IDSIA, Univ. of Lugano, Manno-Lugano, Switzerland
  • Volume
    2
  • fYear
    2011
  • Firstpage
    1
  • Lastpage
    8
  • Abstract
    Autonomous agents that learn from reward on high-dimensional visual observations must learn to simplify the raw observations in both space (i.e., dimensionality reduction) and time (i.e., prediction), so that reinforcement learning becomes tractable and effective. Training the spatial and temporal models requires an appropriate sampling scheme, which cannot be hard-coded if the algorithm is to be general. Intrinsic rewards are associated with samples that best improve the agent's model of the world. Yet the dynamic nature of an intrinsic reward signal presents a major obstacle to realizing an efficient curiosity drive: TD-based incremental reinforcement learning approaches fail to adapt quickly enough to exploit the curiosity signal effectively. In this paper, a novel artificial curiosity system with planning is implemented, based on developmental or continual learning principles. Least-squares policy iteration is used with the agent's internal forward model to efficiently assign values that maximize combined external and intrinsic reward. The properties of this system are illustrated in a high-dimensional, noisy, visual environment that requires the agent to explore. With no useful external value information early on, the self-generated intrinsic values lead to actions that improve both the agent's spatial (perceptual) and temporal (cognitive) models. Curiosity also leads the agent to learn how it could act to maximize external reward.
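    The abstract's core idea can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the chain environment, the count-based forward model, and the learning-progress bonus (error reduction caused by each model update) are all illustrative assumptions; only the overall scheme, combining external and intrinsic reward and solving for values with least-squares policy iteration over one-hot features, follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nS, nA, gamma = 4, 2, 0.9

    # Toy chain dynamics (unknown to the agent): action 1 moves right, 0 stays.
    # External reward is given for being in the rightmost state.
    def step(s, a):
        s2 = min(s + 1, nS - 1) if a == 1 else s
        return s2, (1.0 if s2 == nS - 1 else 0.0)

    # Count-based forward model; intrinsic reward = reduction in prediction
    # error produced by updating the model on the new sample ("learning progress").
    counts = np.ones((nS, nA, nS))          # Laplace-smoothed transition counts

    def pred_error(s, a, s2):
        p = counts[s, a] / counts[s, a].sum()
        return 1.0 - p[s2]

    samples, s = [], 0
    for t in range(500):
        a = int(rng.integers(nA))
        s2, r_ext = step(s, a)
        err_before = pred_error(s, a, s2)
        counts[s, a, s2] += 1               # update forward model
        r_int = max(err_before - pred_error(s, a, s2), 0.0)
        samples.append((s, a, r_ext + r_int, s2))   # combined reward
        s = s2 if t % 20 else 0             # occasional reset to the start

    # LSPI: repeated LSTD-Q solves with greedy policy improvement,
    # using one-hot (state, action) features.
    def phi(s, a):
        f = np.zeros(nS * nA)
        f[s * nA + a] = 1.0
        return f

    w = np.zeros(nS * nA)
    for _ in range(10):                     # policy-iteration sweeps
        A = np.eye(nS * nA) * 1e-6          # small ridge for invertibility
        b = np.zeros(nS * nA)
        for (s_, a_, r, s2) in samples:
            a2 = int(np.argmax([w @ phi(s2, a) for a in range(nA)]))  # greedy
            f = phi(s_, a_)
            A += np.outer(f, f - gamma * phi(s2, a2))
            b += r * f
        w = np.linalg.solve(A, b)

    Q = w.reshape(nS, nA)
    policy = Q.argmax(axis=1)               # greedy policy from learned Q
    ```

    In this sketch the intrinsic bonus shrinks as the forward model converges, so early behavior is exploration-driven while the learned values eventually reflect the external reward, echoing the developmental progression the abstract describes.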
  • Keywords
    learning (artificial intelligence); planning (artificial intelligence); artificial curiosity signal; artificial curiosity system; autonomous agents; autonomous perceptual development; cognitive development; continual learning principles; high-dimensional visual observation; incremental reinforcement learning; intrinsic reward signal; least-squares policy iteration; planning; raw observation; self-generated intrinsic values; temporal model
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    2011 IEEE International Conference on Development and Learning (ICDL)
  • Conference_Location
    Frankfurt am Main
  • ISSN
    2161-9476
  • Print_ISBN
    978-1-61284-989-8
  • Type
    conf
  • DOI
    10.1109/DEVLRN.2011.6037356
  • Filename
    6037356