DocumentCode :
3861871
Title :
Simulation with learning agents
Author :
E. Gelenbe;E. Seref;Z. Xu
Author_Institution :
Sch. of Electr. Eng. & Comput. Sci., Central Florida Univ., Orlando, FL, USA
Volume :
89
Issue :
2
fYear :
2001
Firstpage :
148
Lastpage :
157
Abstract :
We propose that learning agents (LAs) be incorporated into simulation environments in order to model the adaptive behavior of humans. These LAs adapt to specific circumstances and events during the simulation run. They select tasks to accomplish from a given set as the simulation progresses, or synthesize tasks for themselves based on their observations of the environment and on information they may receive from other agents. We investigate an approach in which agents are assigned goals when the simulation starts and then pursue these goals autonomously and adaptively. During the simulation, agents progressively improve their ability to accomplish their goals effectively and safely. Agents learn from their own observations and from the experience of other agents with whom they exchange information. Each LA starts from a given representation of the simulation environment, from which it progressively constructs its own internal representation that it uses to make decisions. The paper describes how learning neural networks can support this approach and shows that goal-based learning may be used effectively in this context. An example simulation is presented in which agents represent manned vehicles assigned the goal of traversing a dangerous metropolitan grid safely and rapidly using goal-based reinforcement learning with neural networks; their performance is compared with that of three other algorithms.
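The following is a minimal illustrative sketch, not the authors' algorithm: the paper applies goal-based reinforcement learning with (random) neural networks, whereas this example uses plain tabular Q-learning on an invented grid with a hypothetical per-cell danger map and reward weights, simply to show how "safe and rapid traversal" can be encoded as a goal-based reward.

# Hedged sketch only: tabular Q-learning agent crossing a grid quickly and safely.
# Grid size, danger map, and reward weights below are assumptions for illustration,
# not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

N = 8                                    # N x N grid: start at (0, 0), goal at (N-1, N-1)
danger = rng.uniform(0.0, 0.3, (N, N))   # hypothetical per-cell probability of a harmful event
danger[0, 0] = danger[N - 1, N - 1] = 0.0

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((N, N, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1             # learning rate, discount, exploration rate

def step(state, a):
    """Apply action a; reward trades off speed (step cost) against safety (danger)."""
    x, y = state
    dx, dy = ACTIONS[a]
    nx = min(max(x + dx, 0), N - 1)
    ny = min(max(y + dy, 0), N - 1)
    reward = -1.0 - 10.0 * danger[nx, ny]      # time penalty plus weighted risk
    done = (nx, ny) == (N - 1, N - 1)
    if done:
        reward += 50.0                         # bonus for reaching the goal
    return (nx, ny), reward, done

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        x, y = state
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[x, y]))
        (nx, ny), r, done = step(state, a)
        target = r + (0.0 if done else gamma * np.max(Q[nx, ny]))
        Q[x, y, a] += alpha * (target - Q[x, y, a])   # standard Q-learning update
        state = (nx, ny)

# Greedy rollout after learning: follow argmax Q from start toward the goal.
state, path = (0, 0), [(0, 0)]
while state != (N - 1, N - 1) and len(path) < 4 * N * N:
    state, _, _ = step(state, int(np.argmax(Q[state[0], state[1]])))
    path.append(state)
print("learned path:", path)

In the paper's setting, the tabular Q-table would be replaced by a learning neural network and agents would additionally share experience with one another; this sketch only illustrates the safety-versus-speed goal structure.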
Keywords :
"Discrete event simulation","Humans","Computational modeling","Random number generation","Vehicle safety","Vehicle dynamics","Computer science","Stress","Neural networks"
Journal_Title :
Proceedings of the IEEE
Publisher :
IEEE
ISSN :
0018-9219
Type :
jour
DOI :
10.1109/5.910851
Filename :
910851