Abstract
We investigate one technique to produce a summary of an original text
without requiring its full semantic interpretation, but instead relying on
a model of the topic progression in the text derived from lexical chains.
We present a new algorithm to compute lexical chains in a text, merging
several robust knowledge sources: the WordNet thesaurus, a part-of-speech
tagger, a shallow parser for the identification of nominal groups, and a
segmentation algorithm. Summarization proceeds in four steps: the original
text is segmented, lexical chains are constructed, strong chains are
identified, and significant sentences are extracted.
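As a rough illustration, the following is a minimal sketch of this four-step pipeline in Python, assuming NLTK: its part-of-speech tagger, RegexpParser chunker, and WordNet interface stand in for the knowledge sources listed above. The segmentation step is elided (the input is assumed to be already split into sentences), and the relatedness test, chain-strength criterion, and extraction heuristic are simplified placeholders, not the paper's exact algorithm.

```python
from statistics import mean, stdev

import nltk
from nltk.corpus import wordnet as wn


def candidate_nouns(sentence):
    """POS-tag the sentence and chunk nominal groups with a simple
    regexp grammar (a stand-in for the shallow parser named above)."""
    tokens = nltk.pos_tag(nltk.word_tokenize(sentence))
    tree = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}").parse(tokens)
    nouns = []
    for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
        nouns.extend(w for w, tag in subtree.leaves() if tag.startswith("NN"))
    return nouns


def related(a, b):
    """Treat two nouns as related if they share a WordNet synset or one
    lies in the other's immediate hypernym/hyponym neighbourhood (a
    crude placeholder for the paper's WordNet relations)."""
    syns_a = set(wn.synsets(a, pos=wn.NOUN))
    syns_b = set(wn.synsets(b, pos=wn.NOUN))
    if syns_a & syns_b:
        return True
    neighbours = {n for s in syns_a for n in s.hypernyms() + s.hyponyms()}
    return bool(neighbours & syns_b)


def build_chains(sentences):
    """Step 2: greedily append each noun to the first chain holding a
    related word; otherwise open a new chain. Each chain records the
    sentence indices where its members occur."""
    chains = []  # each chain: (member nouns, sentence indices)
    for i, sentence in enumerate(sentences):
        for noun in candidate_nouns(sentence):
            for members, positions in chains:
                if any(related(noun, m) for m in members):
                    members.append(noun)
                    positions.append(i)
                    break
            else:
                chains.append(([noun], [i]))
    return chains


def strong_chains(chains):
    """Step 3: keep chains whose length exceeds the mean by two standard
    deviations (one plausible strength criterion, assumed here)."""
    lengths = [len(members) for members, _ in chains]
    if len(lengths) < 2:
        return chains
    cutoff = mean(lengths) + 2 * stdev(lengths)
    return [c for c in chains if len(c[0]) > cutoff]


def summarize(sentences):
    """Step 4: for each strong chain, extract the sentence containing
    the first occurrence of a chain member."""
    picked = sorted({pos[0] for _, pos in strong_chains(build_chains(sentences))})
    return [sentences[i] for i in picked]
```

Under these assumptions, summarize(sentences) returns, in document order, one sentence per strong chain: the sentence where that chain first appears.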
In this paper, we present empirical results on the identification of strong
chains and of significant sentences. Preliminary results indicate that
good-quality indicative summaries are produced. Remaining problems are
identified, and plans to address these shortcomings are briefly presented.
Code
The source code for this work is available for download.