DocumentCode :
1133520
Title :
Distributed speech processing in MiPad's multimodal user interface
Author :
Deng, Li ; Wang, Kuansan ; Acero, Alex ; Hon, Hsiao-Wuen ; Droppo, Jasha ; Boulis, Constantinos ; Wang, Ye-Yi ; Jacoby, Derek ; Mahajan, Milind ; Chelba, Ciprian ; Huang, Xuedong D.
Author_Institution :
Microsoft Res., Redmond, WA, USA
Volume :
10
Issue :
8
fYear :
2002
fDate :
11/1/2002
Firstpage :
605
Lastpage :
619
Abstract :
This paper describes the main components of MiPad (multimodal interactive PAD), with emphasis on its distributed speech processing aspects. MiPad is a wireless mobile PDA prototype that enables users to accomplish many common tasks through a multimodal spoken language interface and wireless-data technologies. It fully integrates continuous speech recognition and spoken language understanding, providing a novel solution to data entry on PDAs and smart phones, which is otherwise often done by pecking with tiny styluses or typing on minuscule keyboards. Our user study indicates that the throughput of MiPad is significantly superior to that of the existing pen-based PDA interface. Acoustic modeling and noise robustness in distributed speech recognition are key components in MiPad's design and implementation. In a typical scenario, the user speaks to the device at a distance so that he or she can see the screen. The built-in microphone thus picks up considerable background noise, which requires that MiPad be noise robust. For complex tasks, such as dictating e-mails, resource limitations demand the use of a client-server (peer-to-peer) architecture: the PDA performs primitive feature extraction, feature quantization, and error protection, while the features transmitted to the server undergo further speech feature enhancement, speech decoding, and understanding before a dialog is carried out and actions rendered. Noise robustness can be achieved at the client, at the server, or both. Various speech processing aspects of this type of distributed computation, as related to MiPad's potential deployment, are presented. Previous user interface study results are also described. Finally, we point out future research directions related to several key MiPad functionalities.
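The client-server split sketched in the abstract (client: feature extraction, quantization, error protection; server: dequantization before enhancement and decoding) can be illustrated with a toy pipeline. This is a hypothetical sketch only: the function names, the log-energy "feature" (a crude stand-in for the paper's MFCC-style front end), the scalar quantization step, and the checksum-based "error protection" are all illustrative assumptions, not the paper's actual algorithms.

```python
import math

def client_extract(frame):
    # Client side: one log-energy feature per frame
    # (a crude stand-in for real MFCC feature extraction).
    energy = sum(s * s for s in frame) / len(frame)
    return math.log(energy + 1e-10)

def client_quantize(feature, step=0.25):
    # Scalar quantization to cut uplink bandwidth; step size is illustrative.
    return round(feature / step)

def client_protect(indices):
    # Toy "error protection": attach a simple 16-bit checksum to the packet.
    return indices, sum(indices) & 0xFFFF

def server_receive(indices, checksum):
    # Server side: verify the packet before using the features.
    if (sum(indices) & 0xFFFF) != checksum:
        raise ValueError("corrupted feature packet")
    return indices

def server_dequantize(indices, step=0.25):
    # Reconstruct approximate features for enhancement and decoding.
    return [i * step for i in indices]

# End-to-end usage with two short dummy audio frames.
frames = [[0.1, -0.2, 0.3], [0.05, 0.0, -0.05]]
feats = [client_extract(f) for f in frames]
packet = client_protect([client_quantize(x) for x in feats])
recovered = server_dequantize(server_receive(*packet))
```

With a quantization step of 0.25, each recovered feature differs from the original by at most half a step, which is the usual bandwidth-versus-accuracy trade-off such a front end makes.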
Keywords :
client-server systems; feature extraction; mobile handsets; notebook computers; speech enhancement; speech recognition; speech-based user interfaces; MiPad multimodal user interface; acoustic modeling; background noise; client-server architecture; continuous speech recognition; data entry; distributed speech processing; e-mails; error protection; feature extraction; feature quantization; microphone; multimodal interactive PAD; multimodal spoken language interface; noise robustness; screen; smart phones; speech decoding; speech feature enhancement; spoken language understanding; wireless mobile PDA prototype; wireless-data technology; Background noise; Feature extraction; Natural languages; Noise robustness; Personal digital assistants; Speech enhancement; Speech processing; Speech recognition; User interfaces;
fLanguage :
English
Journal_Title :
Speech and Audio Processing, IEEE Transactions on
Publisher :
IEEE
ISSN :
1063-6676
Type :
jour
DOI :
10.1109/TSA.2002.804538
Filename :
1175532