Evaluation of Automatic Text Summarization across Multiple Documents

Mary McKenna and Elizabeth Liddy

This paper describes an ongoing research effort to produce multiple-document summaries in response to information requests. Given the absence of tools to evaluate multiple-document summaries, this research will test an evaluation method and metric that compare human assessments with machine-produced multiple-document summaries of news text. Using the DR-LINK information retrieval and analysis system, components of documents and metadata generated during document processing become candidates for use in multiple-document summaries. This research is sponsored by the U.S. Government through the Tipster Phase III Text Summarization project.
