DocumentCode :
2007915
Title :
Satisficing vs exploring when learning a constrained environment
Author :
Shervais, S. ; Shannon, T.T.
Author_Institution :
Coll. of Bus. & Public Adm., Eastern Washington Univ., Cheney, WA, USA
fYear :
2012
fDate :
20-24 Nov. 2012
Firstpage :
2088
Lastpage :
2091
Abstract :
Satisficing is an efficient strategy for applying existing knowledge in a complex, constrained environment. We present a set of agent-based simulations demonstrating a higher payoff for satisficing strategies than for exploring strategies when approximate dynamic programming methods are used to learn complex environments. In our constrained learning environment, satisficing agents outperformed exploring agents by approximately six percent in terms of the number of tasks completed.
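The satisficing-versus-exploring contrast in the abstract can be illustrated with a minimal tabular Q-learning sketch. This is not the authors' simulation: the corridor task, all parameters, and the greedy-versus-epsilon-greedy framing of "satisficing vs exploring" are illustrative assumptions.

```python
import random

def run_agent(exploring, episodes=300, seed=0):
    """Tabular Q-learning on a short 1-D corridor with the goal at the right end.

    exploring=True  -> epsilon-greedy action selection throughout (exploring strategy)
    exploring=False -> always act greedily on current knowledge (a crude satisficer)

    Returns the number of episodes in which the task (reaching the goal
    within the step budget) was completed.
    """
    rng = random.Random(seed)
    n_states, goal = 6, 5
    q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    alpha, gamma, eps = 0.5, 0.9, 0.3           # illustrative hyperparameters
    completed = 0
    for _ in range(episodes):
        s = 0
        for _ in range(20):                     # per-episode step budget
            if exploring and rng.random() < eps:
                a = rng.randrange(2)            # explore: random action
            else:
                # greedy on current Q; ties break toward "right" arbitrarily
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, min(goal, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else 0.0
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == goal:
                completed += 1
                break
    return completed
```

Under these assumptions the greedy "satisficer" sticks with a policy that is already adequate, while the exploring agent spends part of its fixed step budget on random moves, so it completes fewer tasks, which is directionally consistent with the abstract's result.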
Keywords :
approximation theory; dynamic programming; learning (artificial intelligence); multi-agent systems; agent-based simulation; approximate dynamic programming; constrained learning environment; exploring agent; exploring strategy; satisficing agent; satisficing strategy; Q learning
fLanguage :
English
Publisher :
ieee
Conference_Title :
Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), 2012 Joint 6th International Conference on
Conference_Location :
Kobe
Print_ISBN :
978-1-4673-2742-8
Type :
conf
DOI :
10.1109/SCIS-ISIS.2012.6505338
Filename :
6505338