Abstract:
Some learning algorithms for neural networks adapt the size of the learning step using some kind of second-order information. They can be difficult to implement on SIMD neurocomputers, which are optimized for multiply-accumulate operations, because no division routine is provided or because the standard division algorithms are very inefficient on this kind of architecture. In this paper, the author introduces QuickDiv, a fast method for approximating integer division of signed 16-bit integers. The method combines table lookup with interpolation. Although QuickDiv does not yield exact results, the error between the approximation and exact integer division is small enough for neural network learning. The steps of the algorithm are almost independent of the arguments, so it can be implemented efficiently on SIMD-parallel computers.
Keywords:
QuickDiv; fast division algorithm; integer division; table lookup; interpolation; learning algorithms; learning step; signed 16-bit integers; SIMD-neurocomputers; learning (artificial intelligence); neural net architecture; parallel algorithms; parallel architectures; Clocks; Computer errors; Computer networks; Concurrent computing; Feedforward neural networks; Iterative algorithms; Neural network hardware; Neural networks
Conference Title:
Proceedings of the 1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)