• DocumentCode
    2979366
  • Title
    A whole sentence maximum entropy language model
  • Author
    Rosenfeld, R.
  • Author_Institution
    Sch. of Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA, USA
  • fYear
    1997
  • fDate
    14-17 Dec 1997
  • Firstpage
    230
  • Lastpage
    237
  • Abstract
    Introduces a new kind of language model, which models whole sentences or utterances directly using the maximum entropy (ME) paradigm. The new model is conceptually simpler, and more naturally suited to modeling whole-sentence phenomena, than the conditional ME models proposed to date. By avoiding the chain rule, the model treats each sentence or utterance as a “bag of features”, where features are arbitrary computable properties of the sentence. The model is unnormalizable, but this does not interfere with training (done via sampling) or with use. Using the model is computationally straightforward. The main computational cost of training the model is in generating sample sentences from a Gibbs distribution. Interestingly, this cost has different dependencies than, and is potentially lower than, the corresponding cost in the comparable conditional ME model.
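    The “bag of features” formulation in the abstract corresponds to the unnormalized whole-sentence exponential model P(s) ∝ P0(s)·exp(Σᵢ λᵢ·fᵢ(s)), where P0 is a baseline model (e.g. an n-gram) and each fᵢ is an arbitrary computable property of the whole sentence s. The following is a minimal Python sketch of scoring under such a model; the feature functions, weights, and baseline log-probabilities are hypothetical placeholders for illustration, not the paper's actual features:

        # Hypothetical whole-sentence features: any computable property of the
        # full sentence is admissible under the whole-sentence ME model.
        def f_long_sentence(sentence):
            return 1.0 if len(sentence.split()) > 10 else 0.0

        def f_ends_with_period(sentence):
            return 1.0 if sentence.rstrip().endswith(".") else 0.0

        FEATURES = [f_long_sentence, f_ends_with_period]
        LAMBDAS = [0.3, 0.7]  # illustrative weights; in practice learned from data

        def unnormalized_log_prob(sentence, baseline_log_prob):
            # log P(s) up to the unknown constant log Z:
            #   log P0(s) + sum_i lambda_i * f_i(s)
            # Z cancels when ranking candidate sentences, so it is never computed.
            return baseline_log_prob + sum(
                lam * f(sentence) for lam, f in zip(LAMBDAS, FEATURES)
            )

        # Example: rescore two hypothetical recognizer hypotheses whose baseline
        # (e.g. trigram) log-probabilities are assumed to be -42.0 and -41.5.
        print(unnormalized_log_prob("this is a short sentence .", -42.0))
        print(unnormalized_log_prob("a sentence that is quite a bit longer than ten words .", -41.5))

    Because the normalizer Z cancels when comparing candidates, rescoring an N-best list needs only these unnormalized scores; as the abstract notes, Z is required neither for use nor for training, which instead draws sample sentences from the Gibbs distribution (e.g. via Markov chain Monte Carlo).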
  • Keywords
    maximum entropy methods; natural languages; probability; Gibbs distribution; arbitrary computable properties; bag of features; chain rule; computational cost; sample sentence generation; sampling; training; unnormalizable model; utterances; whole-sentence maximum entropy language model; Computational Intelligence Society; Computational efficiency; Computational modeling; Computer science; Costs; Entropy; Exponential distribution; Probability; Sampling methods; Solid modeling
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    Proceedings of the 1997 IEEE Workshop on Automatic Speech Recognition and Understanding
  • Conference_Location
    Santa Barbara, CA, USA
  • Print_ISBN
    0-7803-3698-4
  • Type
    conf
  • DOI
    10.1109/ASRU.1997.659010
  • Filename
    659010