  • DocumentCode
    565469
  • Title
    Integrating vision and audition within a cognitive architecture to track conversations
  • Author
    Trafton, J. Gregory; Bugajska, Magdalena D.; Fransen, Benjamin R.; Ratwani, Raj M.
  • Author_Institution
    Naval Res. Lab., Washington, DC, USA
  • fYear
    2008
  • fDate
    12-15 March 2008
  • Firstpage
    201
  • Lastpage
    208
  • Abstract
    We describe ACT-R/E (ACT-R/Embodied), a computational cognitive architecture for robots. ACT-R/E is based on ACT-R [1, 2] but uses different visual, auditory, and movement modules. We describe a model, built in ACT-R/E, that integrates visual and auditory information to track conversations in a dynamic environment. We also performed an empirical evaluation study, which shows that people see our conversation tracking system as extremely natural.
  • Keywords
    audio signal processing; cognitive systems; human-robot interaction; robot vision; tracking; ACT-R/Embodied; auditory information; auditory modules; computational cognitive architecture; conversation tracking; conversational tracking system; dynamic environment; empirical evaluation study; integrating audition; integrating vision; movement modules; robots; visual information; visual modules; Cameras; Facial animation; Humans; Robot vision systems; Visualization; ACT-R; Cognitive modeling; Conversation following; human-robot interaction
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • Conference_Location
    Amsterdam
  • ISSN
    2167-2121
  • Print_ISBN
    978-1-60558-017-3
  • Type
    conf
  • Filename
    6249436
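
The abstract describes integrating visual and auditory information to decide whom a robot should attend to during a conversation. The paper itself provides no code; the toy Python sketch below is purely illustrative and is not the authors' method or the ACT-R/E implementation. It shows one hypothetical way visual cues (face detection, head orientation) and an auditory cue (sound-source bearing error) could be fused into a single attention score; all names, weights, and the scoring rule are assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass
class PersonPercept:
    """Hypothetical per-person percept combining visual and auditory module output."""
    name: str
    face_visible: bool        # visual cue: a face is currently detected
    facing_robot: bool        # visual cue: head is oriented toward the robot
    sound_bearing_err: float  # auditory cue: |sound bearing - person bearing| in degrees


def attention_score(p: PersonPercept,
                    w_face: float = 0.3,
                    w_gaze: float = 0.2,
                    w_sound: float = 0.5) -> float:
    """Fuse visual and auditory evidence into one salience score (weights are placeholders)."""
    # Auditory match decays linearly with bearing error and bottoms out at zero.
    sound_match = max(0.0, 1.0 - p.sound_bearing_err / 45.0)
    return w_face * p.face_visible + w_gaze * p.facing_robot + w_sound * sound_match


def current_focus(percepts: list[PersonPercept]) -> str:
    """Attend to whoever has the highest combined audiovisual score."""
    return max(percepts, key=attention_score).name


if __name__ == "__main__":
    people = [
        PersonPercept("person A", face_visible=True, facing_robot=False, sound_bearing_err=5.0),
        PersonPercept("person B", face_visible=True, facing_robot=True, sound_bearing_err=60.0),
    ]
    # Sound localization outweighs gaze here, so the robot attends to person A.
    print(current_focus(people))
```

In this sketch the auditory term dominates when someone is clearly the sound source, which is one plausible (but assumed) way a tracker could shift attention to the active speaker in a multi-party conversation.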