Abstract:
We present a system for extracting useful information from multi-party meetings and presenting the results to users via a browser. Users can view automatically extracted discussion topics and action items, initially seeing high-level descriptions but with the ability to click through to the underlying meeting audio and video. Users can also add value by defining and searching for new topics, and by editing, correcting, deleting, or confirming action items. These feedback actions are used as implicit supervision by the understanding agents, retraining the classifier models for improved or user-tailored performance.
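As a rough illustration of the feedback loop described above, the sketch below treats user confirmations and edits of detected action items as positive labels and deletions as negative labels, then retrains a simple text classifier. The class and function names, the feature representation, and the scikit-learn model choice are all illustrative assumptions, not the system's actual agents or features.

```python
# Hypothetical sketch: turning browser feedback (confirm / edit / delete)
# on action-item hypotheses into labels for retraining a classifier.
# Names, features, and model choice are assumptions for illustration only.
from dataclasses import dataclass
from typing import List

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline


@dataclass
class Feedback:
    utterance: str   # meeting utterance the action-item hypothesis came from
    action: str      # "confirm", "edit", or "delete"


def labels_from_feedback(events: List[Feedback]):
    """Map user actions to implicit supervision labels:
    confirm/edit -> positive example, delete -> negative example."""
    texts, labels = [], []
    for ev in events:
        texts.append(ev.utterance)
        labels.append(1 if ev.action in ("confirm", "edit") else 0)
    return texts, labels


def retrain_action_item_classifier(events: List[Feedback]) -> Pipeline:
    """Retrain a simple text classifier from accumulated user feedback."""
    texts, labels = labels_from_feedback(events)
    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(texts, labels)
    return model


if __name__ == "__main__":
    events = [
        Feedback("John will send the report by Friday", "confirm"),
        Feedback("let's move on to the next slide", "delete"),
        Feedback("Mary to book the meeting room", "edit"),
        Feedback("I think that's a good point", "delete"),
    ]
    clf = retrain_action_item_classifier(events)
    print(clf.predict(["Bob will circulate the minutes tomorrow"]))
```

In a deployed system the retrained model would replace or be interpolated with the original classifier, optionally per user, to yield the user-tailored performance mentioned in the abstract.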