DocumentCode :
381272
Title :
Latent maximum entropy principle for statistical language modeling
Author :
Wang, Shaojun ; Rosenfeld, Ronald ; Zhao, Yunxin
Author_Institution :
Sch. of Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA, USA
fYear :
2001
fDate :
2001
Firstpage :
182
Lastpage :
185
Abstract :
We describe a unified probabilistic framework for statistical language modeling: the latent maximum entropy principle. The salient feature of this approach is that a hidden, causal, hierarchical dependency structure can be encoded into the statistical model in a principled way, via mixtures of exponential families with rich expressive power. We first present the problem formulation, its solution, and certain convergence properties. We then describe how to use this machine learning technique to model various aspects of natural language, such as the syntactic structure of sentences and the semantic information in a document. Finally, we draw conclusions and point out directions for future research.
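For reference, a minimal sketch of the latent maximum entropy formulation as it is usually stated in this line of work; the notation below (observed data x, latent variables y, empirical distribution \tilde{p}, features f_i) is assumed here and is not taken from the record itself. The principle selects the joint model

\max_{p}\; H(p) \;=\; -\sum_{x,y} p(x,y)\,\log p(x,y)

subject to the self-consistent constraints

\sum_{x,y} p(x,y)\, f_i(x,y) \;=\; \sum_{x} \tilde{p}(x) \sum_{y} p(y \mid x)\, f_i(x,y), \qquad i = 1,\dots,N.

Because the right-hand side depends on p through p(y \mid x), the constraints are nonlinear in p. Restricting p to the exponential form p(x,y) \propto \exp\!\big(\sum_i \lambda_i f_i(x,y)\big) and alternating an expectation step (computing feature expectations under p(y \mid x)) with a maximum entropy step (fitting the \lambda_i, e.g. by iterative scaling) gives an EM-style procedure whose fixed points are locally maximum-likelihood solutions; the observed marginal \sum_y p(x,y) is then a mixture of exponential families, which is the source of the expressive power claimed in the abstract.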
Keywords :
learning (artificial intelligence); linguistics; maximum entropy methods; maximum likelihood estimation; natural languages; statistical analysis; text analysis; causal hierarchical dependency structure; exponential families; latent maximum entropy principle; machine learning; maximum likelihood estimation; natural language; parameter estimation; real text training data; semantic information; sentence syntactic structure; statistical language modeling; Computer science; Content management; Ear; Entropy; Error analysis; Interpolation; Machine learning; Natural languages; Speech recognition; Tree data structures;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Automatic Speech Recognition and Understanding, 2001. ASRU '01. IEEE Workshop on
Print_ISBN :
0-7803-7343-X
Type :
conf
DOI :
10.1109/ASRU.2001.1034617
Filename :
1034617