DocumentCode
565497
Title
Crossmodal content binding in information-processing architectures
Author
Jacobsson, Henrik ; Hawes, Nick ; Kruijff, Geert-Jan ; Wyatt, Jeremy
Author_Institution
Language Technol. Lab., DFKI GmbH, Kaiserslautern, Germany
fYear
2008
fDate
12-15 March 2008
Firstpage
81
Lastpage
88
Abstract
Operating in a physical context, an intelligent robot faces two fundamental problems. First, it needs to combine information from its different sensors to form a representation of the environment that is more complete than any representation a single sensor could provide. Second, it needs to combine high-level representations (such as those for planning and dialogue) with sensory information, to ensure that the interpretations of these symbolic representations are grounded in the situated context. Previous approaches to these problems have used techniques such as (low-level) information fusion, ontological reasoning, and (high-level) concept learning. This paper presents a framework in which these, and related approaches, can be used to form a shared representation of the current state of the robot in relation to its environment and other agents. Preliminary results from an implemented system are presented to illustrate how the framework supports behaviours commonly required of an intelligent robot.
Keywords
inference mechanisms; intelligent robots; learning (artificial intelligence); ontologies (artificial intelligence); sensor fusion; concept learning; crossmodal content binding; high-level representation; information fusion; information-processing architectures; intelligent robot; ontological reasoning; sensory information; symbolic representation; Cognition; Grounding; Humans; Monitoring; Planning; Robots; Visualization
fLanguage
English
Publisher
ieee
Conference_Titel
2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI)
Conference_Location
Amsterdam, Netherlands
ISSN
2167-2121
Print_ISBN
978-1-60558-017-3
Type
conf
Filename
6249470
Link To Document