Author/Authors :
R. Cole, S. Van Vuuren, B. Pellom, K. Hacioglu, Jiyong Ma, J. Movellan, S. Schwartz, D. Wade-Stein, W. Ward, Jie Yan
Abstract :
This paper presents a vision of the near future in which computer interaction is characterized by natural face-to-face conversations with lifelike characters that speak, emote, and gesture. These animated agents will converse with people much as people converse with capable human assistants in a variety of focused applications. Despite the research advances required to realize this vision, and the lack of strong experimental evidence that animated agents improve human-computer interaction, we argue that initial prototypes of perceptive animated interfaces can be developed today, and that the resulting systems will provide more effective and engaging communication experiences than existing systems. In support of this hypothesis, we first describe initial experiments using an animated character to teach speech and language skills to children with hearing problems, and classroom subjects and social skills to children with autistic spectrum disorder. We then show how existing dialogue system architectures can be transformed into perceptive animated interfaces by integrating computer vision and animation capabilities. We conclude by describing the Colorado Literacy Tutor, a computer-based literacy program that provides an ideal testbed for research and development of perceptive animated interfaces, and consider the next steps required to realize the vision.