In this paper, we present an automatic system for analyzing and annotating video sequences of technical talks. Our method uses a robust motion estimation technique to detect key frames and segment the video sequence into subsequences containing a single overhead slide. The subsequences are stabilized to remove motion that occurs when the speaker adjusts their slides. Any changes remaining between frames in the stabilized sequences may be due to speaker gestures such as pointing or writing, and we use active contours to automatically track these potential gestures. Given the constrained domain, we define a simple "vocabulary" of actions that can easily be recognized based on the active contour shape and motion. The recognized actions provide a rich annotation of the sequence that can be used to access a condensed version of the talk from a web page.
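The key-frame detection and segmentation step described above can be illustrated with a toy sketch. This is not the paper's robust motion estimator: the pixel-difference metric, the `threshold` value, and the function names are illustrative assumptions, shown only to convey how large inter-frame changes would split a talk into per-slide subsequences.

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two equal-sized frames
    (frames here are plain 2D lists of grayscale values)."""
    total = sum(abs(pa - pb)
                for row_a, row_b in zip(a, b)
                for pa, pb in zip(row_a, row_b))
    return total / (len(a) * len(a[0]))

def segment_by_keyframes(frames, threshold=30.0):
    """Split a frame sequence wherever the inter-frame difference jumps,
    treating each jump as a slide change (a key frame).
    Returns a list of segments, each a list of frame indices."""
    segments = [[0]]
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            segments.append([i])   # large change: assume a new slide starts
        else:
            segments[-1].append(i)  # small change: same slide continues
    return segments
```

For example, a sequence of two frames of one slide followed by two frames of a different slide would yield two segments, one per slide. The real system would apply this idea on top of robust motion estimation, so that global slide adjustments are compensated before differencing.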