  • DocumentCode
    468014
  • Title
    Fine-Grain Parallelization of Recurrent Neural Networks Training
  • Author
    Turchenko, Volodymyr
  • Author_Institution
    Dept. of Inf. Comput. Syst. & Control, Ternopil State Econ. Univ., Ternopil
  • fYear
    2006
  • fDate
    Feb. 28, 2006 – March 4, 2006
  • Firstpage
    208
  • Lastpage
    211
  • Abstract
    This paper presents an approach to developing a fine-grain parallel algorithm for artificial neural network training, based on parallelizing the computational operations of each elementary neuron. A back-error-propagation training algorithm is described and its parallel section is developed. Experimental results for the parallel algorithm are given, analyzing parallelization speedup and efficiency on the Origin 300 parallel computer.
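    The fine-grain approach described in the abstract parallelizes the work inside a single neuron rather than distributing whole neurons or training examples. A minimal sketch of that idea, under the assumption that the per-neuron work is a weighted sum whose products are partitioned across workers (the function name and chunking scheme are illustrative, not from the paper):

    ```python
    # Hypothetical sketch of fine-grain parallelism: the weighted sum of a
    # single neuron, sum(w_i * x_i), is split into chunks and the partial
    # dot products are computed by a pool of workers.
    from concurrent.futures import ThreadPoolExecutor

    def neuron_weighted_sum(inputs, weights, workers=4):
        """Compute sum(w_i * x_i) with the products partitioned across workers."""
        n = len(inputs)
        chunk = (n + workers - 1) // workers  # ceil(n / workers) elements per worker

        def partial(start):
            # Each worker reduces its own slice of the weight/input vectors.
            return sum(w * x for w, x in zip(weights[start:start + chunk],
                                             inputs[start:start + chunk]))

        with ThreadPoolExecutor(max_workers=workers) as pool:
            # Combine the partial sums into the neuron's full weighted sum.
            return sum(pool.map(partial, range(0, n, chunk)))
    ```

    Because the parallel grain is a fraction of one dot product, speedup on a real machine such as the Origin 300 depends heavily on synchronization overhead, which is what the paper's speedup and efficiency analysis measures.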
  • Keywords
    backpropagation; parallel algorithms; parallel machines; parallel processing; recurrent neural nets; Origin 300 parallel computer; artificial neural network training; back error propagation; computational parallelization; fine-grain parallel algorithm; fine-grain parallelization; parallelization efficiency; parallelization speedup; recurrent neural networks training; training algorithm; Artificial neural networks; Computer errors; Computer networks; Concurrent computing; Hardware; Neural networks; Neurons; Parallel algorithms; Parallel processing; Recurrent neural networks; fine-grain parallelization; parallel computer; recurrent neural network;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Modern Problems of Radio Engineering, Telecommunications, and Computer Science, 2006. TCSET 2006. International Conference
  • Conference_Location
    Lviv-Slavsko
  • Print_ISBN
    966-553-507-2
  • Type
    conf
  • DOI
    10.1109/TCSET.2006.4404497
  • Filename
    4404497