• DocumentCode
    244785
  • Title
    Massive parallelization technique for random linear network coding
  • Author
    Seong-Min Choi; Joon-Sang Park
  • Author_Institution
    Dept. of Comput. Eng., Hongik Univ., Seoul, South Korea
  • fYear
    2014
  • fDate
    15-17 Jan. 2014
  • Firstpage
    296
  • Lastpage
    299
  • Abstract
    Random linear network coding (RLNC) has gained popularity as a useful performance-enhancing tool for communications networks. In this paper, we propose an RLNC parallel implementation technique for General-Purpose Graphics Processing Units (GPGPUs). Recently, GPGPU technology has paved the way for parallelizing RLNC; however, current state-of-the-art parallelization techniques for RLNC are unable to fully utilize GPGPU technology on many occasions. Addressing this problem, we propose a new RLNC parallelization technique that can fully exploit GPGPU architectures. Our parallel method achieves over 4 times the throughput of existing state-of-the-art parallel RLNC decoding schemes for GPGPUs and 20 times the throughput of state-of-the-art serial RLNC decoders.
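    The RLNC decoding the abstract refers to amounts to Gaussian elimination over a finite field: each coded packet carries a random coefficient vector, and the receiver solves the resulting linear system. The following is a minimal serial sketch for intuition only, not the paper's GPGPU method; it works over GF(2) (the paper's field is unspecified here, and practical RLNC often uses GF(2^8)), and all names are hypothetical. Packets are Python integers used as bit vectors.

    ```python
    import random

    def rlnc_encode(packets, num_coded, seed=0):
        """Produce coded packets over GF(2): each is a random XOR
        (i.e., random 0/1 linear combination) of the source packets."""
        rng = random.Random(seed)
        k = len(packets)
        coded = []
        for _ in range(num_coded):
            coeffs = 0
            while coeffs == 0:          # skip the useless all-zero combination
                coeffs = rng.getrandbits(k)
            payload = 0
            for i in range(k):
                if (coeffs >> i) & 1:
                    payload ^= packets[i]
            coded.append((coeffs, payload))
        return coded

    def rlnc_decode(coded, k):
        """Gaussian elimination over GF(2). Returns the k source packets
        once k linearly independent coded packets are seen, else None."""
        pivots = {}  # pivot bit index -> (coeffs, payload)
        for coeffs, payload in coded:
            while coeffs:
                i = coeffs.bit_length() - 1      # highest set coefficient bit
                if i in pivots:                  # reduce against existing pivot row
                    c, p = pivots[i]
                    coeffs ^= c
                    payload ^= p
                else:                            # new independent row
                    pivots[i] = (coeffs, payload)
                    break
        if len(pivots) < k:
            return None                          # rank deficient: keep collecting
        # Back-substitution, lowest pivot first: after step i, row i holds
        # exactly coefficient bit i, so its payload is source packet i.
        for i in sorted(pivots):
            c, p = pivots[i]
            for j in pivots:
                if j != i and (pivots[j][0] >> i) & 1:
                    cj, pj = pivots[j]
                    pivots[j] = (cj ^ c, pj ^ p)
        return [pivots[i][1] for i in range(k)]
    ```

    The per-row XOR reductions in the decoder are independent across packet bytes, which is the kind of data parallelism a GPGPU implementation like the paper's can exploit.
    
    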
  • Keywords
    graphics processing units; network coding; parallel algorithms; GPGPU technology; RLNC parallelization technique; communications networks; general purpose graphical processing units; massive parallelization technique; parallel RLNC decoding schemes; performance-enhancing tool; random linear network coding; Decoding; Graphics processing units; Instruction sets; Network coding; Parallel processing; Throughput; Vectors; GPGPU; Network Coding; Parallel algorithm;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Big Data and Smart Computing (BIGCOMP), 2014 International Conference on
  • Conference_Location
    Bangkok
  • Type
    conf
  • DOI
    10.1109/BIGCOMP.2014.6741456
  • Filename
    6741456