Abstract:
This paper introduces Lie algebra-valued feedforward neural networks, in which the inputs, outputs, weights, and biases all belong to a Lie algebra. This type of network represents an alternative generalization of real-valued neural networks, alongside the complex-, hyperbolic-, quaternion-, and Clifford-valued neural networks that have been intensively studied over the last few years. The full derivation of the gradient descent algorithm for training such networks is presented. The proposed networks are tested on two synthetic function approximation problems and on geometric transformations, with promising results for the future of Lie algebra-valued neural networks.
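To make the idea concrete, the following is a minimal sketch of what a single Lie algebra-valued neuron might look like, using so(3) (identified with R^3, where the Lie bracket is the cross product) as the underlying algebra. The neuron form `f([w, x] + b)` with a componentwise activation is an illustrative assumption, not necessarily the paper's exact construction:

```python
import numpy as np

def bracket(a, b):
    # Lie bracket on so(3): under the usual isomorphism so(3) ~ R^3,
    # the bracket [a, b] is the cross product a x b.
    return np.cross(a, b)

def lie_neuron(weights, inputs, bias):
    # Hypothetical Lie algebra-valued neuron (illustrative assumption):
    # Lie-bracket "products" of weights and inputs are summed with a
    # bias, then passed through a componentwise tanh activation.
    s = sum(bracket(w, x) for w, x in zip(weights, inputs)) + bias
    return np.tanh(s)

# Example with two so(3)-valued inputs
w = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
x = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
b = np.array([0.1, 0.0, 0.0])
y = lie_neuron(w, x, b)  # an so(3)-valued output, represented in R^3
```

Since the bracket is bilinear, the gradient descent updates mentioned in the abstract can be derived by the chain rule much as in the real-valued case, with the bracket taking the place of ordinary multiplication.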