Abstract:
Traditional supervised neural network trainers have deviated little from the fundamental backpropagation algorithm popularized in 1986 by Rumelhart, Hinton, and Williams. Typically, the training process begins with the collection of a fixed database of input and output vectors. The operator then adjusts additional parameters such as network architecture, learning rate, momentum, and annealing noise, based upon his or her past experience in network training. Optimizing the network’s generalization capacity usually involves either experiments with various hidden-layer architectures or similar automated searches using genetic algorithms. Beyond these often complex procedural issues, usable networks generally lack flexibility, beginning at the level of the individual processing unit. The user is normally confined to a limited range of unit activation functions, usually the linear, linear-threshold, and sigmoidal analytical forms whose partial derivatives with respect to net input can be expressed in a similarly continuous, analytical form. What is needed is a more flexible and user-friendly system that not only lessens the technical burden on non-connectionist end users, but also offers expanded utility to those demanding greater architectural freedom and adaptability in their artificial neural networks (ANNs).
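As a minimal sketch of the restriction described above, the snippet below shows the conventional activation functions a typical backpropagation trainer offers, each paired with the closed-form derivative with respect to net input that the weight-update rule requires. The function names and the use of NumPy are illustrative assumptions, not part of the system proposed here.

import numpy as np

def linear(net):
    # Identity activation; derivative is constant.
    return net

def linear_deriv(net):
    return np.ones_like(net)

def sigmoid(net):
    # Logistic sigmoid, the standard differentiable nonlinearity.
    return 1.0 / (1.0 + np.exp(-net))

def sigmoid_deriv(net):
    s = sigmoid(net)
    return s * (1.0 - s)  # derivative expressible in closed analytical form

def linear_threshold(net):
    # Hard threshold; its derivative is zero almost everywhere, so plain
    # backpropagation trainers either exclude it or substitute the sigmoid.
    return np.where(net > 0.0, 1.0, 0.0)

if __name__ == "__main__":
    net = np.array([-2.0, 0.0, 2.0])
    print(sigmoid(net), sigmoid_deriv(net))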