Abstract:
An attempt is made to generate natural language with a recurrent neural network trained by the temporal supervised learning algorithm (TSLA) developed by R. J. Williams and D. Zipser (1989). Because TSLA represents consecutive events explicitly, it can deal with time-varying phenomena without increasing the number of units in the network. However, its performance has been evaluated only on short sequences or on sequences with explicit regularity, not on natural-language sequences, which exhibit complex, long-distance correlations. It was found that TSLA is extremely unstable during learning and takes a long time to converge. The author therefore proposes two methods to improve the performance of TSLA. The first is a variable learning rate, which removes the instability of the learning process. The second is the Minkowski-r power metric, which reduces the learning time.
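The abstract does not give the details of the two modifications, so the following is a minimal sketch of how they are commonly realized: the Minkowski-r power metric replaces the usual sum-of-squares error in the TSLA weight update, and the learning rate is adapted according to whether the error went up or down. The exponent r and the schedule constants below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def minkowski_r_error(y, d, r=1.5):
    """Minkowski-r power metric between output y and target d.

    r = 2 recovers the usual sum-of-squares error; other exponents
    change how large and small deviations are weighted (the value
    r = 1.5 here is an assumption, not taken from the paper).
    """
    return np.sum(np.abs(d - y) ** r) / r

def minkowski_r_grad(y, d, r=1.5):
    """Gradient of the Minkowski-r error with respect to the output y,
    i.e. the error signal fed into the TSLA/RTRL weight update."""
    return -np.sign(d - y) * np.abs(d - y) ** (r - 1)

def adapt_learning_rate(eta, prev_error, curr_error, up=1.05, down=0.7):
    """One common variable-learning-rate rule: grow eta while the error
    is decreasing, shrink it when the error rises (the constants
    up and down are assumptions)."""
    return eta * (up if curr_error < prev_error else down)
```

With r = 2 and a fixed eta this reduces to the standard setup; the abstract's claim is that choosing a different metric and letting eta vary makes learning on natural-language sequences both faster and more stable.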