DocumentCode
1892489
Title
Too much, too little, or just right? Ways explanations impact end users' mental models
Author
Kulesza, Todd ; Stumpf, Simone ; Burnett, Margaret ; Yang, Songping ; Kwan, Irwin ; Wong, Weng-Keen
Author_Institution
Sch. of EECS, Oregon State Univ., Corvallis, OR, USA
fYear
2013
fDate
15-19 Sept. 2013
Firstpage
3
Lastpage
10
Abstract
Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly “debug” an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, focusing especially on how the soundness and completeness of the explanations impact the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as practiced by many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.
Keywords
program debugging; software agents; information types; intelligent agent debugging; intelligent agents; mental models; cognition; cognitive science; computers; decision trees; prototypes; recommender systems; end-user debugging; explanations
fLanguage
English
Publisher
ieee
Conference_Titel
2013 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
Conference_Location
San Jose, CA, USA
ISSN
1943-6092
Type
conf
DOI
10.1109/VLHCC.2013.6645235
Filename
6645235