DocumentCode :
3782962
Title :
A full-parallel digital implementation for pre-trained NNs
Author :
T. Szabo;L. Antoni;G. Horvath;B. Feher
Author_Institution :
Dept. of Meas. & Instrum. Syst., Tech. Univ. Budapest, Hungary
Volume :
2
fYear :
2000
Firstpage :
49
Abstract :
In many applications the most significant advantages of neural networks come mainly from their parallel architectures, which ensure rather high operation speed. The difficulties of parallel digital hardware implementation arise mostly from the high complexity of the parallel many-multiplier structure. This paper suggests a new bit-serial/parallel neural network implementation method for pre-trained networks. The method enables significant hardware cost savings. The proposed approach, which builds on a previously suggested method for efficient implementation of digital filters, uses bit-serial distributed arithmetic. The efficient implementation of a matrix-vector multiplier is based on an optimization algorithm that exploits the advantages of CSD (canonic signed digit) encoding and bit-level pattern coincidences. The resulting architecture performs full-precision computation and allows high-speed bit-level pipeline operation. The proposed approach is promising for FPGA and ASIC realization of pre-trained neural networks and can be integrated into automatic neural network design environments. Moreover, these implementation methods can be useful in many other fields of digital signal processing.
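The abstract's optimization relies on CSD (canonic signed digit) encoding, which represents a number with digits in {-1, 0, +1} so that no two adjacent digits are nonzero, minimizing the number of add/subtract terms a multiplier needs. As a hedged illustration of the encoding itself (not code from the paper), a minimal CSD encoder might look like:

```python
def csd_encode(n):
    """Return the CSD digits of a non-negative integer n,
    least-significant digit first. Each digit is -1, 0, or +1,
    and no two adjacent digits are nonzero."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            d = 0
        else:
            # Pick +1 if n = 1 (mod 4), -1 if n = 3 (mod 4),
            # which forces the next digit to be zero.
            d = 2 - (n % 4)
        digits.append(d)
        n = (n - d) // 2
    return digits

# Example: 7 = 8 - 1 encodes as [-1, 0, 0, 1] (LSB first),
# i.e. one subtraction instead of three additions for binary 111.
```

Fewer nonzero digits means fewer partial products in a constant-coefficient multiplier, which is the hardware-cost lever the paper's optimization algorithm exploits alongside bit-level pattern coincidences.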
Keywords :
"Neural networks","Hardware","Parallel architectures","Costs","Digital filters","Digital arithmetic","Signal processing algorithms","Encoding","Computer architecture","High performance computing"
Publisher :
ieee
Conference_Titel :
Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000)
ISSN :
1098-7576
Print_ISBN :
0-7695-0619-4
Type :
conf
DOI :
10.1109/IJCNN.2000.857873
Filename :
857873