DocumentCode :
2973987
Title :
Representing the Reinforcement Learning state in a negotiation dialogue
Author :
Heeman, Peter A.
Author_Institution :
Center for Spoken Language Understanding, Oregon Health & Science University, Beaverton, OR, USA
fYear :
2009
fDate :
Nov. 13 2009-Dec. 17 2009
Firstpage :
450
Lastpage :
455
Abstract :
Most applications of reinforcement learning (RL) for dialogue have focused on slot-filling tasks. In this paper, we explore a task that requires negotiation, in which conversants need to exchange information in order to decide on a good solution. We investigate what information should be included in the system's RL state so that an optimal policy can be learned and so that the state space stays reasonable in size. We propose keeping track of the decisions that the system has made, and using them to constrain the system's future behavior in the dialogue. In this way, we can compositionally represent the strategy that the system is employing. We show that this approach is able to learn a good policy for the task. This work is a first step toward a more general exploration of applying RL to negotiation dialogues.
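The idea described in the abstract, augmenting the RL state with the decisions the system has already made and using them to constrain its future actions, can be sketched in a few lines. The Python below is an illustrative sketch only, not the paper's implementation; the state fields, action names, and the constraint inside allowed_actions are hypothetical stand-ins.

from collections import defaultdict
from dataclasses import dataclass
from typing import FrozenSet, Tuple
import random

# Hypothetical dialogue actions for a negotiation task (not taken from the paper).
ACTIONS = ("ask-info", "offer-option", "accept-offer", "reject-offer", "release-turn")

@dataclass(frozen=True)  # frozen, hence hashable, so states can key a tabular Q-function
class NegotiationState:
    info_exchanged: FrozenSet[str]   # information the conversants have shared so far
    decisions: Tuple[str, ...]       # dialogue decisions the system has already committed to

def allowed_actions(state: NegotiationState) -> list:
    """Use past decisions to constrain future behavior, keeping the
    effective state-action space small (illustrative constraint only)."""
    acts = list(ACTIONS)
    if "committed-to-offer" in state.decisions:
        acts.remove("ask-info")  # e.g., stop gathering information once an offer is committed to
    return acts

Q = defaultdict(float)  # tabular action values keyed on (state, action)

def choose_action(state: NegotiationState, epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection restricted to the actions the past decisions allow."""
    acts = allowed_actions(state)
    if random.random() < epsilon:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(state, a)])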
Keywords :
interactive systems; learning (artificial intelligence); negotiation dialogue; reinforcement learning state representation; slot-filling tasks; Cost function; Learning; Natural languages; Speech recognition; State estimation; State-space methods;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2009 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU 2009)
Conference_Location :
Merano, Italy
Print_ISBN :
978-1-4244-5478-5
Electronic_ISBN :
978-1-4244-5479-2
Type :
conf
DOI :
10.1109/ASRU.2009.5373413
Filename :
5373413