Basis of AI Backprop Hypertext Documentation

Copyright (c) 1990-97 by Donald R. Tveter

References

* "Faster Learning Variations on Back-Propagation: An Empirical Study" by Scott Fahlman from the Ohio State neuroprose archive or from Carnegie-Mellon. This paper shows a series of experiments to try to improve backprop and finishes with quickprop which may be one of the best ways to speed up the training of a network.

* J. R. Chen and P. Mars, "Stepsize Variation Methods for Accelerating the Back-Propagation Algorithm", IJCNN-90-WASH-DC, Volume 1, pp. 601-604, Lawrence Erlbaum, 1990.

* "Speeding up Backpropagation Algorithms by using Cross-Entropy combined with Pattern Normalization" by Merten Joost and Wolfram Schiffmann", from University of Koblenz-Landau, Germany The authors show that by using the cross-entropy error measure on classification problems rather than the traditional sum squared error the net effect is to simply skip the derivative term in the traditional formulation. Apparently this skip the derivative term originated with the Differential Step Size method of Chen and Mars.

* "Increased Rates of Convergence" by Robert A. Jacobs, in Neural Networks, Volume 1, Number 4, 1988.