Abstract:
The performance of Automatic Speech Recognition (ASR) systems can be greatly enhanced by incorporating phonologically relevant features contained in video sequences of speakers. Humans, particularly the hearing impaired, can utilize visual information - through lipreading - for improved accuracy. We present a system for doing ASR that relies on both classical and learning algorithms to identify and process relevant visual information. Classical image processing operations such as convolution and thresholding are used to reduce each frame to a small intermediate (vector) representation of the image content. Sequences of these vectors are used as inputs to a modified time delay neural network, which is trained to output the corresponding phoneme. The network learns to extract visual-temporal features which are useful in classification. Eventually, the learning procedure may be extended to utilize the "raw" pixels directly; in present practice, however, the classical operations act as a means of compressing the visual information down to manageable amounts. Our current visual speech recognizer, in combination with a similar acoustic recognizer, reduces the error rate by 75% when compared with the acoustic subsystem alone.
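To make the described pipeline concrete, here is a minimal sketch of the two stages the abstract names: a classical convolution-and-thresholding front end that compresses each frame to a small vector, followed by a time delay network that scores phonemes over sliding windows of those vectors. All specifics here (the edge kernel, threshold, 4x4 pooling grid, window length, and the 5-phoneme output) are illustrative assumptions, not the paper's actual filters or TDNN topology.

```python
# A minimal sketch of the abstract's pipeline; parameters are hypothetical.
import numpy as np
from scipy.signal import convolve2d

def frame_to_vector(frame, threshold=0.5, grid=(4, 4)):
    """Reduce one grayscale mouth-region frame to a small feature vector
    via convolution, thresholding, and block averaging (values assumed)."""
    # Edge-enhancing kernel: a stand-in for the paper's classical filters.
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)
    edges = convolve2d(frame, kernel, mode="same", boundary="symm")
    binary = (np.abs(edges) > threshold).astype(float)  # thresholding step
    # Average within a coarse grid to get a compact vector representation.
    h, w = binary.shape
    gh, gw = grid
    blocks = binary[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return blocks.mean(axis=(1, 3)).ravel()  # length gh*gw feature vector

def tdnn_forward(vectors, w1, w2, delay=3):
    """Forward pass of a toy time delay network: each hidden layer
    application sees a sliding window of `delay` consecutive frame
    vectors, with the same weights shared across all time positions."""
    T, d = vectors.shape
    scores = []
    for t in range(T - delay + 1):
        window = vectors[t : t + delay].ravel()  # time-delayed input
        hidden = np.tanh(w1 @ window)            # shared hidden weights
        scores.append(w2 @ hidden)               # per-window phoneme logits
    return np.stack(scores)

rng = np.random.default_rng(0)
frames = rng.random((10, 32, 32))                 # fake 10-frame sequence
vecs = np.stack([frame_to_vector(f) for f in frames])
w1 = rng.normal(size=(8, 3 * vecs.shape[1]))      # untrained demo weights
w2 = rng.normal(size=(5, 8))                      # 5 hypothetical phonemes
print(tdnn_forward(vecs, w1, w2).shape)           # (8, 5): windows x phonemes
```

The weight sharing across time positions in `tdnn_forward` is the defining property of a time delay network; training those weights (and the sensory integration with the acoustic recognizer) is what the paper itself addresses and is omitted here.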