Title :
Satisficing vs exploring when learning a constrained environment
Author :
Shervais, S. ; Shannon, T.T.
Author_Institution :
Coll. of Bus. & Public Adm., Eastern Washington Univ., Cheney, WA, USA
Abstract :
Satisficing is an efficient strategy for applying existing knowledge in a complex, constrained environment. We present a set of agent-based simulations that demonstrate a higher payoff for satisficing strategies than for exploring strategies when approximate dynamic programming methods are used to learn complex environments. In our constrained learning environment, satisficing agents outperformed exploring agents by approximately six percent in terms of the number of tasks completed.
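The satisficing-versus-exploring comparison described above can be sketched with tabular Q-learning on a toy task environment. This is not the authors' simulation: the chain environment, reward values, aspiration level, and all parameters below are illustrative assumptions. The only difference between the two agents is the action rule: the satisficing agent stops exploring in a state once its greedy action's estimated value meets an aspiration level, while the exploring agent keeps a fixed epsilon.

```python
import random

N_STATES = 10          # chain of states; reaching the last one completes a task
ACTIONS = (-1, +1)     # step left / step right

def step(state, action):
    """Move along the chain; small step cost, reward 1.0 on task completion."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else -0.01), done

def run_agent(satisficing, episodes=300, alpha=0.2, gamma=0.95,
              epsilon=0.2, aspiration=0.5, seed=0):
    """Return the number of tasks (episodes) completed within the step budget."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    completed = 0
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 100:
            greedy = 0 if Q[s][0] >= Q[s][1] else 1
            if satisficing and Q[s][greedy] >= aspiration:
                a = greedy                    # good enough: stop exploring here
            elif rng.random() < epsilon:
                a = rng.randrange(2)          # explore
            else:
                a = greedy
            s2, r, done = step(s, ACTIONS[a])
            # Standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, steps = s2, steps + 1
        completed += done
    return completed

sat = run_agent(satisficing=True)
exp = run_agent(satisficing=False)
print("satisficing:", sat, "exploring:", exp)
```

In this sketch the satisficing rule suppresses exploration only where learned values already look acceptable, which is one simple way to operationalize "applying existing knowledge" once it is good enough; the magnitude of any advantage will depend entirely on the environment and parameters chosen.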
Keywords :
approximation theory; dynamic programming; learning (artificial intelligence); multi-agent systems; agent-based simulation; approximate dynamic programming; constrained learning environment; exploring agent; exploring strategy; satisficing agent; satisficing strategy; Q learning
Conference_Titel :
2012 Joint 6th International Conference on Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS)
Conference_Location :
Kobe, Japan
Print_ISBN :
978-1-4673-2742-8
DOI :
10.1109/SCIS-ISIS.2012.6505338