Title :
On-chip learning of FPGA-inspired neural nets
Author_Institution :
LORIA, Inst. Nat. de Recherche en Inf. et Autom., Vandoeuvre-les-Nancy, France
Abstract :
Neural networks are usually considered naturally parallel computing models, but the number of operators and the complex connection graphs of standard neural models cannot be handled by digital hardware devices. A new theoretical and practical framework reconciles simple hardware topologies with complex neural architectures: field programmable neural arrays (FPNAs) lead to powerful neural architectures that are easy to map onto digital hardware, thanks to a simplified topology and an original data exchange scheme. The paper focuses on a class of synchronous FPNAs, for which an efficient implementation with on-chip learning is described. Application and implementation results are discussed.
Keywords :
field programmable gate arrays; learning (artificial intelligence); neural chips; neural net architecture; FPGA-inspired neural nets; complex connection graphs; complex neural architectures; data exchange scheme; digital hardware; naturally parallel computing models; neural architectures; on-chip learning; simple hardware topologies; standard neural models; synchronous field programmable neural arrays; Computer architecture; Computer networks; Field programmable gate arrays; Multicast protocols; Network topology; Neural network hardware; Neural networks; Neurons; Parallel processing; Programmable logic arrays;
Conference_Titel :
IJCNN '01: Proceedings of the International Joint Conference on Neural Networks, 2001
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-7044-9
DOI :
10.1109/IJCNN.2001.939021