DocumentCode :
3196066
Title :
Automatic content generation for video self modeling
Author :
Shen, Ju ; Raghunathan, Anusha ; Cheung, Sen-ching S. ; Patel, Rita
Author_Institution :
University of Kentucky, USA
fYear :
2011
fDate :
11-15 July 2011
Firstpage :
1
Lastpage :
6
Abstract :
Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of himself or herself. Its effectiveness in rehabilitation and education has been repeatedly demonstrated, but technical challenges remain in creating video content that depicts previously unseen behaviors. In this paper, we propose a novel system that re-renders new talking-head sequences suitable for VSM treatment of patients with voice disorders. After the raw footage is captured, a new speech track is either synthesized using text-to-speech or selected, based on voice similarity, from a database of clean speech recordings. Voice conversion is then applied to match the new speech to the original voice. Time markers extracted from the original and new speech tracks are used to re-sample the video track for lip synchronization. We use an adaptive re-sampling strategy to minimize motion jitter, and apply bilinear and optical-flow-based interpolation to ensure image quality. Both objective measurements and subjective evaluations demonstrate the effectiveness of the proposed techniques.
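The abstract outlines a lip-synchronization step in which time markers aligned between the original and new speech tracks drive a re-sampling of the video frames, with optical-flow-based interpolation synthesizing in-between frames. The sketch below illustrates that general idea only; it is not the authors' implementation. The piecewise-linear time warp, the helper names (warp_time, flow_interpolate, resample_video), and the use of OpenCV's Farnebäck optical flow are all assumptions, and the paper's adaptive re-sampling and bilinear interpolation details are not reproduced.

```python
import numpy as np
import cv2

def warp_time(t_new, markers_orig, markers_new):
    """Map a timestamp on the new speech track to a (fractional)
    timestamp on the original track by piecewise-linear interpolation
    between corresponding time markers (assumed sorted)."""
    return np.interp(t_new, markers_new, markers_orig)

def flow_interpolate(frame_a, frame_b, alpha):
    """Synthesize an in-between frame at fractional position alpha
    (0 = frame_a, 1 = frame_b). Simple approximation: warp frame_a
    along a fraction of the dense flow toward frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + alpha * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + alpha * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

def resample_video(frames, fps, markers_orig, markers_new, duration_new):
    """Re-sample the original frames so the lip motion follows the
    timing of the new speech track."""
    out = []
    for t_new in np.arange(0.0, duration_new, 1.0 / fps):
        t_orig = warp_time(t_new, markers_orig, markers_new)
        idx = t_orig * fps                      # fractional frame index
        i = int(np.clip(np.floor(idx), 0, len(frames) - 2))
        alpha = float(np.clip(idx - i, 0.0, 1.0))
        out.append(flow_interpolate(frames[i], frames[i + 1], alpha))
    return out
```

Note that this naive warp samples frames at a fixed output rate; the adaptive strategy described in the paper additionally adjusts the sampling to suppress motion jitter, which this sketch omits.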
Keywords :
computational multimedia; frame interpolation; positive feedforward; video self modeling; voice disorder; voice imitation;
fLanguage :
English
Publisher :
ieee
Conference_Title :
2011 IEEE International Conference on Multimedia and Expo (ICME)
Conference_Location :
Barcelona, Spain
ISSN :
1945-7871
Print_ISBN :
978-1-61284-348-3
Electronic_ISBN :
1945-7871
Type :
conf
DOI :
10.1109/ICME.2011.6011997
Filename :
6011997