• DocumentCode
    1662472
  • Title
    Stochastic field model for autonomous robot learning
  • Author
    Enokida, Shuichi; Ohashi, Takeshi; Yoshida, Takaichi; Ejima, Toshiaki
  • Author_Institution
    Dept. of Artificial Intelligence, Kyushu Inst. of Technol., Iizuka, Japan
  • Volume
    2
  • fYear
    1999
  • Firstpage
    752
  • Abstract
    Through reinforcement learning, an autonomous robot creates an optimal policy that maps state space to action space. The mapping is obtained by trial and error through interaction with a given environment and is represented as an action-value function. The environment provides information in the form of scalar feedback known as a reinforcement signal. As a result of reinforcement learning, one action in each state acquires a high action-value, and the optimal policy is equivalent to choosing the action with the highest action-value in each state. Typically, even if an autonomous robot has continuous sensor values, a summation over discrete values is used as the action-value function to reduce learning time. However, reinforcement learning algorithms, including Q-learning, suffer from errors due to state-space sampling. To overcome this, we propose EQ-learning (extended Q-learning) based on a stochastic field model (SFM). EQ-learning is designed to accommodate continuous state space directly and to improve generalization capability. In EQ-learning, the action-value function is represented as a summation of weighted basis functions, and the autonomous robot adjusts the weights of the basis functions during the learning stage. The other parameters (center coordinates, variances, and so on) are adjusted during the unification stage, in which two similar functions are unified into a simpler one.
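    The core idea in the abstract — an action-value function represented as a weighted sum of basis functions over a continuous state space, with only the weights updated by a Q-learning-style rule — can be sketched as follows. This is a minimal illustration, not the authors' EQ-learning: the basis centers, shared width, and update rule here are assumptions, and the paper's unification stage (merging similar basis functions and adapting centers/variances) is omitted.

    ```python
    import numpy as np

    # Q(s, a) over a continuous 1-D state, approximated per action as a
    # weighted sum of fixed Gaussian basis functions (centers/width assumed).
    N_ACTIONS = 2
    CENTERS = np.linspace(0.0, 1.0, 5)   # basis-function centers (assumed)
    SIGMA = 0.2                          # shared basis width (assumed)
    weights = np.zeros((N_ACTIONS, len(CENTERS)))

    def features(s):
        """Gaussian basis activations for a scalar state s."""
        return np.exp(-((s - CENTERS) ** 2) / (2.0 * SIGMA ** 2))

    def q_value(s, a):
        """Action-value as a weighted summation of basis functions."""
        return float(weights[a] @ features(s))

    def update(s, a, reward, s_next, alpha=0.1, gamma=0.9):
        """Gradient-style Q-learning update on the weights only."""
        target = reward + gamma * max(q_value(s_next, b) for b in range(N_ACTIONS))
        td_error = target - q_value(s, a)
        weights[a] += alpha * td_error * features(s)

    # One toy update: reward 1.0 for taking action 0 near state 0.5.
    update(0.5, 0, 1.0, 0.6)
    ```

    Because each basis function has local support, a reward received in one state generalizes smoothly to nearby continuous states, which is the generalization benefit the abstract attributes to the SFM representation.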
  • Keywords
    function approximation; learning (artificial intelligence); mobile robots; stochastic processes; action space; action-value function; autonomous robot learning; extended Q-learning; learning time; optimal policy; reinforcement learning; state space sampling; stochastic field model; trial and error; Algorithm design and analysis; Intelligent robots; Learning; Orbital robotics; Quantization; Robot sensing systems; Robotics and automation; Space technology; State-space methods; Stochastic processes;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    1999 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC '99 Conference Proceedings)
  • Conference_Location
    Tokyo
  • ISSN
    1062-922X
  • Print_ISBN
    0-7803-5731-0
  • Type
    conf
  • DOI
    10.1109/ICSMC.1999.825356
  • Filename
    825356