• DocumentCode
    1394232
  • Title
    Comments on "Efficient training algorithms for HMMs using incremental estimation"
  • Author
    Byrne, William; Gunawardana, Asela

  • Author_Institution
    Dept. of Electr. & Comput. Eng., Johns Hopkins Univ., Baltimore, MD, USA
  • Volume
    8
  • Issue
    6
  • fYear
    2000
  • Firstpage
    751
  • Lastpage
    754
  • Abstract
    The paper entitled "Efficient training algorithms for HMMs using incremental estimation" by Gotoh et al. (IEEE Trans. Speech Audio Processing, vol. 6, pp. 539-548, Nov. 1998) investigated expectation maximization (EM) procedures that increase training speed. The present paper shows that the claim of Gotoh et al. that these procedures are generalized EM (GEM; Dempster et al., 1977) procedures is incorrect. We discuss why this is so, provide an example of nonmonotonic convergence to a local maximum in likelihood, and outline conditions that guarantee such convergence.
  • Keywords
    convergence of numerical methods; hidden Markov models; iterative methods; maximum likelihood estimation; speech processing; EM procedures; GEM; HMM; efficient training algorithms; expectation maximization algorithm; generalized EM methods; incremental estimation; local maximum likelihood; nonmonotonic convergence; training speed; Convergence; Hidden Markov models; Iterative algorithms; Iterative methods; Maximum likelihood estimation; Natural languages; Solids; Speech processing; Tomography; Training data
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Speech and Audio Processing
  • Publisher
    IEEE
  • ISSN
    1063-6676
  • Type
    jour
  • DOI
    10.1109/89.876315
  • Filename
    876315