DocumentCode :
3133029
Title :
Ecological validity and the evaluation of speech summarization quality
Author :
McCallum, Anthony; Penn, Gerald; Munteanu, Cosmin; Zhu, Xiaodan
Author_Institution :
Dept. of Comput. Sci., Univ. of Toronto, Toronto, ON, Canada
fYear :
2012
fDate :
2-5 Dec. 2012
Firstpage :
467
Lastpage :
472
Abstract :
There is little evidence of widespread adoption of speech summarization systems. This may be due in part to the fact that the natural language heuristics used to generate summaries are often optimized with respect to a class of evaluation measures that, while computationally and experimentally inexpensive, rely on subjectively selected gold standards against which automatically generated summaries are scored. This evaluation protocol does not take into account the usefulness of a summary in assisting the listener in achieving his or her goal. In this paper we study how current measures and methods for evaluating summarization systems compare to human-centric evaluation criteria. For this, we have designed and conducted an ecologically valid evaluation that determines the value of a summary when embedded in a task, rather than how closely a summary resembles a gold standard. The results of our evaluation demonstrate that in the domain of lecture summarization, the well-known baseline of maximal marginal relevance [1] is statistically significantly worse than human-generated extractive summaries, and even worse than having no summary at all in a simple quiz-taking task. Priming seems to have no statistically significant effect on the usefulness of the human summaries. This is interesting because priming had been proposed as a technique for increasing kappa scores and/or maintaining goal orientation among summary authors. In addition, our results suggest that ROUGE scores, regardless of whether they are derived from numerically-ranked reference data or ecologically valid human-extracted summaries, may not always be reliable as inexpensive proxies for task-embedded evaluations. In fact, under some conditions, relying exclusively on ROUGE may lead to scoring human-generated summaries very favourably even when a task-embedded score calls their usefulness into question relative to using no summaries at all.
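For context on the baseline named in the abstract: maximal marginal relevance (MMR) greedily builds an extractive summary by trading off each candidate sentence's relevance to a query against its redundancy with sentences already selected. The sketch below is a minimal illustration, not the implementation evaluated in the paper; the term-frequency cosine similarity, the lam trade-off value, and all function names are assumptions for exposition.

from collections import Counter
from math import sqrt

def cosine_sim(a, b):
    # Cosine similarity between two term-frequency vectors (Counters).
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def mmr_select(sentences, query, k, lam=0.7):
    # Greedy MMR: each step picks the sentence maximizing
    #   lam * sim(sentence, query) - (1 - lam) * max sim(sentence, selected)
    vecs = [Counter(s.lower().split()) for s in sentences]
    qvec = Counter(query.lower().split())
    selected = []
    while len(selected) < min(k, len(sentences)):
        def score(i):
            rel = cosine_sim(vecs[i], qvec)
            red = max((cosine_sim(vecs[i], vecs[j]) for j in selected), default=0.0)
            return lam * rel - (1 - lam) * red
        best = max((i for i in range(len(sentences)) if i not in selected), key=score)
        selected.append(best)
    # Return selected sentences in original document order.
    return [sentences[i] for i in sorted(selected)]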
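The abstract also questions ROUGE as an inexpensive proxy for task-embedded evaluation. ROUGE-N is, at its core, recall over reference n-grams; the sketch below computes it against a single reference summary, assuming plain whitespace tokenization (the published metric additionally supports multiple references, stemming, and stopword removal).

from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    # ROUGE-N recall: clipped n-gram overlap divided by the number of
    # n-grams in the reference summary.
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(cand[g], ref[g]) for g in ref)
    return overlap / sum(ref.values()) if ref else 0.0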
Keywords :
human computer interaction; natural language processing; speech processing; ROUGE scores; automatic summary generation; ecologically valid human-extracted summaries; evaluation protocol; kappa scores; lecture summarization; maximal marginal relevance; natural language heuristics; numerically-ranked reference data; speech analysis; speech summarization quality evaluation; speech summarization systems; summary authors; task-embedded evaluations; task-embedded score; Correlation; Educational institutions; Gold; Humans; Manuals; Speech; Standards
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2012 IEEE Spoken Language Technology Workshop (SLT)
Conference_Location :
Miami, FL
Print_ISBN :
978-1-4673-5125-6
Electronic_ISBN :
978-1-4673-5124-9
Type :
conf
DOI :
10.1109/SLT.2012.6424269
Filename :
6424269