Parallel Neural Network Training

Ed P. Andert Jr. and Thomas Bartolac

Connectionist approaches in artificial intelligence can benefit from implementations that exploit massive parallelism, and the network topologies encountered in connectionist work often involve such parallelism. This paper discusses additional parallelism applied before the actual network implementation: parallelism is used to support aggressive neural network training algorithms that enhance the accuracy of the resulting network, and the computational expense of this training approach is minimized through massive parallelism. The training algorithms are being implemented in a prototype for a challenging sensor processing application. Our training approach produces networks whose accuracy exceeds that typical of connectionist applications; it addresses function complexity, network capacity, and training-algorithm aggressiveness. Symbolic computing tools are used to develop algebraic representations of the gradient and Hessian of the least-squares cost function for feed-forward network topologies with fully connected and locally connected layers. These representations are then transformed into block-structured form, integrated with a full-Newton network training algorithm, and executed on vector/parallel computers.
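The full-Newton training algorithm mentioned above solves H Δw = -g at each step, where g and H are the gradient and Hessian of the least-squares cost. As a minimal sketch of that idea (not the paper's block-structured symbolic formulation), the example below forms g and H by finite differences for a hypothetical one-unit tanh network; the data, starting point, and step counts are illustrative assumptions.

```python
import numpy as np

# Sketch of full-Newton steps on the least-squares cost
# E(w) = 0.5 * sum_i (f(x_i; w) - y_i)^2
# for a toy network f(x; w) = tanh(w0*x + w1).
# (The paper derives g and H symbolically; here we use
# finite differences purely for illustration.)

def forward(w, X):
    return np.tanh(w[0] * X + w[1])

def cost(w, X, y):
    r = forward(w, X) - y
    return 0.5 * np.dot(r, r)

def grad_hess(w, X, y, eps=1e-4):
    # central finite differences for the gradient and full Hessian
    n = len(w)
    g = np.zeros(n)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        g[i] = (cost(w + e, X, y) - cost(w - e, X, y)) / (2 * eps)
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (cost(w + ei + ej, X, y) - cost(w + ei - ej, X, y)
                       - cost(w - ei + ej, X, y) + cost(w - ei - ej, X, y)) / (4 * eps**2)
    return g, H

# recover the weights of a known target network from samples
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, 50)
y = np.tanh(1.5 * X - 0.5)

w = np.array([1.3, -0.3])          # start near the solution
for _ in range(15):
    g, H = grad_hess(w, X, y)
    w = w - np.linalg.solve(H, g)  # full-Newton update: solve H dw = -g

print(w)  # w should approach [1.5, -0.5]
```

The full Hessian keeps the residual-curvature terms that Gauss-Newton drops, which is what makes the training "aggressive"; forming and factoring it is the expense that the paper's block-structured representation maps onto vector/parallel hardware.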


This paper is copyrighted by AAAI. All rights reserved.