  • DocumentCode
    416669
  • Title
    Acquisition of box pushing by direct-vision-based reinforcement learning
  • Author
    Shibata, Katsunari ; Iida, Masaru
  • Author_Institution
    Dept. of Electr. & Electron. Eng., Oita Univ., Japan
  • Volume
    3
  • fYear
    2003
  • fDate
    4-6 Aug. 2003
  • Firstpage
    2322
  • Abstract
    In this paper, it was confirmed that a real mobile robot with a CCD camera can learn appropriate actions to reach and push a lying box by direct-vision-based reinforcement learning (RL) alone. In direct-vision-based RL, raw visual sensor signals are the inputs of a layered neural network, and the network is trained by backpropagation using a training signal generated from reinforcement learning. In other words, no image processing, no control method, and no task-specific knowledge is given in advance, even though as many as 1536 monochrome visual signals and 4 infrared signals form the input. The box-pushing task is more difficult than the reaching task because not only the center of gravity of the box but also its direction, weight, and sliding characteristics must be considered. Nevertheless, the robot learned appropriate actions even though the reward was given only while the robot was pushing the box. It was also observed that, through learning, the neural network acquired a global representation of the box location. (A minimal sketch of the training-signal idea follows this record.)
  • Keywords
    learning (artificial intelligence); mobile robots; neural nets; robot vision; CCD camera; backpropagation; box pushing acquisition; direct-vision-based reinforcement learning; layered neural network; real mobile robot;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    SICE 2003 Annual Conference
  • Conference_Location
    Fukui, Japan
  • Print_ISBN
    0-7803-8352-4
  • Type
    conf
  • Filename
    1323606
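
The abstract describes direct-vision-based RL as feeding raw sensor values (1536 monochrome pixels plus 4 infrared readings) into a layered neural network trained by backpropagation on a training signal generated by reinforcement learning. The sketch below illustrates that idea only in outline: the one-hidden-layer value network, the TD(0)-style target, and all sizes and learning rates are assumptions made for illustration, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

N_INPUT = 1536 + 4   # monochrome pixels plus infrared readings, per the abstract
N_HIDDEN = 30        # hypothetical hidden-layer size
GAMMA = 0.9          # hypothetical discount factor
LR = 0.01            # hypothetical learning rate

# Layered network: raw sensor vector -> hidden layer (tanh) -> scalar state value.
W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_INPUT))
W2 = rng.normal(0.0, 0.1, (1, N_HIDDEN))

def forward(x):
    h = np.tanh(W1 @ x)
    v = (W2 @ h).item()
    return h, v

def td_backprop_update(x, reward, x_next, done):
    """One backpropagation step; the TD(0) target plays the role of the
    RL-generated training signal mentioned in the abstract."""
    global W1, W2
    h, v = forward(x)
    _, v_next = forward(x_next)
    target = reward if done else reward + GAMMA * v_next
    err = target - v                             # TD error
    grad_W2 = err * h[None, :]                   # gradient for the output layer
    grad_h = err * W2.ravel() * (1.0 - h ** 2)   # backpropagated error at the hidden layer
    W2 += LR * grad_W2
    W1 += LR * np.outer(grad_h, x)
    return err

# Toy usage: random vectors stand in for consecutive raw camera/infrared readings.
x, x_next = rng.random(N_INPUT), rng.random(N_INPUT)
print("TD error:", td_backprop_update(x, reward=1.0, x_next=x_next, done=False))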