• DocumentCode
    960446
  • Title

On Growing and Pruning Kneser–Ney Smoothed N-Gram Models

  • Author

Siivola, Vesa; Hirsimäki, Teemu; Virpioja, Sami

  • Author_Institution
    Helsinki Univ. of Technol., Helsinki
  • Volume
    15
  • Issue
    5
  • fYear
    2007
  • fDate
7/1/2007
  • Firstpage
    1617
  • Lastpage
    1624
  • Abstract
N-gram models are the most widely used language models in large vocabulary continuous speech recognition. Since the size of the model grows rapidly with respect to the model order and available training data, many methods have been proposed for pruning the least relevant N-grams from the model. However, correct smoothing of the N-gram probability distributions is important and performance may degrade significantly if pruning conflicts with smoothing. In this paper, we show that some of the commonly used pruning methods do not take into account how removing an N-gram should modify the backoff distributions in the state-of-the-art Kneser-Ney smoothing. To solve this problem, we present two new algorithms: one for pruning Kneser-Ney smoothed models, and one for growing them incrementally. Experiments on Finnish and English text corpora show that the proposed pruning algorithm provides considerable improvements over previous pruning algorithms on Kneser-Ney smoothed models and is also better than the baseline entropy-pruned Good-Turing smoothed models. The models created by the growing algorithm provide a good starting point for our pruning algorithm, leading to further improvements. The improvements in Finnish speech recognition over the other Kneser-Ney smoothed models are also statistically significant.
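The abstract's central point is that Kneser-Ney backoff distributions are built from *continuation* counts (the number of distinct contexts a word follows), so removing an N-gram changes the backoff distribution, not just the explicit probability. The toy sketch below is not the paper's pruning or growing algorithm; it is only an illustrative interpolated Kneser-Ney bigram model (function name, corpus, and discount value are my own) that makes the continuation counts visible.

```python
from collections import Counter, defaultdict

def kneser_ney_bigram(tokens, discount=0.75):
    """Interpolated Kneser-Ney bigram probabilities over a token list.

    The lower-order distribution uses continuation counts (distinct left
    contexts per word), which is why naive pruning conflicts with this
    smoothing: removing a bigram alters these counts too.
    """
    bigrams = Counter(zip(tokens, tokens[1:]))
    context_count = Counter(tokens[:-1])   # c(w1): times w1 occurs as a context
    followers = defaultdict(set)           # distinct w2 observed after w1
    histories = defaultdict(set)           # distinct w1 observed before w2
    for (w1, w2) in bigrams:
        followers[w1].add(w2)
        histories[w2].add(w1)
    total_bigram_types = len(bigrams)

    def p_continuation(w2):
        # P_cont(w2): fraction of bigram types that end in w2
        return len(histories[w2]) / total_bigram_types

    def p(w2, w1):
        c = bigrams[(w1, w2)]
        n = context_count[w1]
        if n == 0:                         # unseen context: back off fully
            return p_continuation(w2)
        lam = discount * len(followers[w1]) / n   # backoff (interpolation) weight
        return max(c - discount, 0.0) / n + lam * p_continuation(w2)

    return p

p = kneser_ney_bigram("the cat sat on the mat the cat ran".split())
```

Because the discounted mass is redistributed through `p_continuation`, the probabilities for any observed context sum to one over the vocabulary; a pruning method that drops a bigram without updating `histories` would break that normalization, which is the inconsistency the paper addresses.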
  • Keywords
computational linguistics; natural language processing; smoothing methods; speech recognition; statistical distributions; English text corpora; Finnish text corpora; Kneser-Ney smoothed N-gram model pruning; baseline entropy; Good-Turing smoothed models; N-gram probability distributions; language models; large vocabulary continuous speech recognition; Context modeling; Degradation; Entropy; Informatics; Natural languages; Probability distribution; Smoothing methods; Speech recognition; Training data; Vocabulary; Modeling; natural languages; smoothing methods; speech recognition
  • fLanguage
    English
  • Journal_Title
IEEE Transactions on Audio, Speech, and Language Processing
  • Publisher
IEEE
  • ISSN
    1558-7916
  • Type

    jour

  • DOI
    10.1109/TASL.2007.896666
  • Filename
    4244538