  • DocumentCode
    173590
  • Title
    Multiple-model Q-learning for stochastic reinforcement delays
  • Author
    Campbell, Jeffrey S.; Givigi, Sidney N.; Schwartz, Howard M.
  • Author_Institution
    Systems and Computer Engineering, Carleton University, Ottawa, ON, Canada
  • fYear
    2014
  • fDate
    5-8 Oct. 2014
  • Firstpage
    1611
  • Lastpage
    1617
  • Abstract
    The main contribution of this work is a novel reinforcement learning algorithm for problems where a Poissonian stochastic time delay is present in the agent's reinforcement signal. Despite this noise in the reinforcement signal, the algorithm can craft a suitable control policy for the agent's environment. The approach handles reinforcements that may be received out of order in time, or that may even overlap, a case not previously considered in the literature. The proposed algorithm is simulated and its performance is compared against standard Q-learning; in these simulations it improves the performance of a learning agent in an environment with Poissonian-type stochastically delayed rewards.
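    As an illustrative aside, the sketch below shows the kind of setting the abstract describes: plain tabular Q-learning on a toy chain MDP in which every reward arrives after a Poisson-distributed number of steps. It is not the authors' multiple-model method; in particular it assumes each arriving reward is tagged with the transition that produced it, which sidesteps the credit-assignment ambiguity that out-of-order and overlapping reinforcements create. All environment details, parameter values, and names below are assumptions made for illustration.

    import numpy as np

    N_STATES, N_ACTIONS = 5, 2        # toy chain: action 1 moves right, action 0 moves left
    GAMMA, ALPHA = 0.9, 0.1           # discount factor and learning rate (assumed)
    MEAN_DELAY = 3.0                  # mean of the Poisson reward delay (assumed)

    rng = np.random.default_rng(0)
    Q = np.zeros((N_STATES, N_ACTIONS))
    pending = []                      # (arrival_step, s, a, r, s_next) rewards still in flight

    def step(s, a):
        # Toy dynamics: reward 1 for reaching the rightmost state, else 0.
        s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        return s_next, (1.0 if s_next == N_STATES - 1 else 0.0)

    s = 0
    for t in range(20000):
        a = int(rng.integers(N_ACTIONS))   # random behaviour policy; Q-learning is off-policy
        s_next, r = step(s, a)
        # The reinforcement is not observed now: it arrives after a Poisson-distributed
        # delay, tagged here with the transition that produced it.
        pending.append((t + rng.poisson(MEAN_DELAY), s, a, r, s_next))
        # Apply Q-updates only for rewards whose delay has elapsed by step t;
        # they may arrive out of order, and several may arrive on the same step.
        arrived = [p for p in pending if p[0] <= t]
        pending = [p for p in pending if p[0] > t]
        for _, ps, pa, pr, ps_next in arrived:
            Q[ps, pa] += ALPHA * (pr + GAMMA * Q[ps_next].max() - Q[ps, pa])
        s = 0 if s_next == N_STATES - 1 else s_next   # restart the episode at the goal

    print(np.argmax(Q, axis=1))  # greedy policy: action 1 (move right) in non-goal states

    Without the tagging assumption, an arriving reward would have to be attributed to some recent state-action pair under uncertainty about which one actually produced it; that mismatch is the situation the paper's multiple-model machinery is designed to handle.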
  • Keywords
    delays; learning (artificial intelligence); stochastic processes; Poissonian stochastic time delay; Poissonian-type stochastically delayed rewards; agent reinforcement signal; machine reinforcement learning algorithm; multiple-model Q-learning; reinforcement noise; stochastic reinforcement delays; Computers; Delay effects; Delays; Learning (artificial intelligence); Markov processes; Robots; Markov Decision Process; Reinforcement learning; cost; jitter; multiple models; reward; stochastic time delay
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2014 IEEE International Conference on Systems, Man and Cybernetics (SMC)
  • Conference_Location
    San Diego, CA
  • Type
    conf
  • DOI
    10.1109/SMC.2014.6974146
  • Filename
    6974146