• DocumentCode
    584587
  • Title
    An Efficient Method for Incremental Learning of GMM Using CUDA
  • Author
    Chen, Chunlei ; Zhang, Ning ; Shi, Shuang ; Mu, Dejun
  • Author_Institution
    Sch. of Autom., Northwestern Polytech. Univ., Xi'an, China
  • fYear
    2012
  • fDate
    11-13 Aug. 2012
  • Firstpage
    2141
  • Lastpage
    2144
  • Abstract
    Incremental learning algorithms for the Gaussian mixture model (GMM) find applications in various scenarios. This paper proposes a CUDA-based method to accelerate incremental learning of GMMs. Unlike existing GPU methods for GMMs, ours aims to hide data transfer latency rather than accelerate the algorithm itself. Because incremental learning applications are inherently memory-critical, loading data from external memory and copying it from host to device inevitably contribute to the overall running time. The CUDA capabilities called "concurrent execution" and "overlap data transfer" are leveraged to implement incremental GMM learning in a pipelined pattern. The efficiency of our method is validated through preliminary experiments, which demonstrate improved performance over the non-pipelined method.
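    The pipelined pattern the abstract describes, in which asynchronous host-to-device copies in one CUDA stream overlap with kernel execution in another, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the gmm_update kernel is a hypothetical stand-in for the actual incremental-learning kernel, and the batch size and two-stream pipeline depth are assumptions.

        #include <cuda_runtime.h>
        #include <cstdio>

        #define N_STREAMS 2
        #define CHUNK (1 << 20)  /* samples per incremental batch (assumed size) */

        /* Hypothetical stand-in for the paper's GMM update kernel: it only
           reads the batch so the copy/compute overlap can be observed. */
        __global__ void gmm_update(const float *batch, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                /* ... accumulate this sample's contribution to the
                   mixture's sufficient statistics here ... */
                (void)batch[i];
            }
        }

        int main(void)
        {
            const int n_batches = 8;
            float *h_buf[N_STREAMS];
            float *d_buf[N_STREAMS];
            cudaStream_t stream[N_STREAMS];

            for (int s = 0; s < N_STREAMS; ++s) {
                /* Pinned host memory is required for cudaMemcpyAsync to
                   actually overlap with kernel execution. */
                cudaMallocHost((void **)&h_buf[s], CHUNK * sizeof(float));
                cudaMalloc((void **)&d_buf[s], CHUNK * sizeof(float));
                cudaStreamCreate(&stream[s]);
            }

            for (int b = 0; b < n_batches; ++b) {
                int s = b % N_STREAMS;

                /* Reuse this stream's host buffer only after its previous
                   batch is done, then refill it (standing in for a read
                   from external memory). */
                cudaStreamSynchronize(stream[s]);
                for (int i = 0; i < CHUNK; ++i)
                    h_buf[s][i] = (float)(b + i);

                /* The copy of batch b issued in stream s overlaps with the
                   gmm_update of batch b-1 still running in the other stream. */
                cudaMemcpyAsync(d_buf[s], h_buf[s], CHUNK * sizeof(float),
                                cudaMemcpyHostToDevice, stream[s]);
                gmm_update<<<(CHUNK + 255) / 256, 256, 0, stream[s]>>>(d_buf[s], CHUNK);
            }

            cudaDeviceSynchronize();
            for (int s = 0; s < N_STREAMS; ++s) {
                cudaFreeHost(h_buf[s]);
                cudaFree(d_buf[s]);
                cudaStreamDestroy(stream[s]);
            }
            printf("processed %d batches on %d streams\n", n_batches, N_STREAMS);
            return 0;
        }

    Compiled with nvcc, a profiler timeline (e.g., Nsight Systems) would show the copy engine and the compute engine active at the same time, which is the latency-hiding effect the abstract claims over a non-pipelined, single-stream version.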
  • Keywords
    Gaussian processes; learning (artificial intelligence); parallel architectures; CUDA; GPU; Gaussian mixture model; concurrent execution; hide data transfer latency; incremental GMM learning algorithm; memory critical incremental learning application; nonpipelined method; overlap data transfer; pipelined pattern; Approximation algorithms; Data models; Graphics processing units; Instruction sets; Kernel; Standards; CUDA; GMM; concurrent execution; incremental learning; overlap data transfer
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2012 International Conference on Computer Science & Service System (CSSS)
  • Conference_Location
    Nanjing
  • Print_ISBN
    978-1-4673-0721-5
  • Type
    conf
  • DOI
    10.1109/CSSS.2012.532
  • Filename
    6394850