In this talk, we propose a general framework for topic-specific summarization of large text corpora and illustrate how it can be used for the analysis of document collections. Our framework, concise comparative summarization
(CCS), is built on sparse classification methods. It is a lightweight and flexible tool that offers a compromise between the simple word-frequency-based methods currently in wide use and more heavyweight, model-intensive methods such as latent Dirichlet allocation (LDA). We argue that sparse methods have much to offer for text analysis and hope that CCS opens the door to a new branch of research in this important field.
Using news articles from the New York Times, we validate our tool by designing and conducting a human survey that compares the different summarizers against human understanding. We demonstrate our approach with two case studies: a media analysis of the framing of “Egypt” in the New York Times throughout the Arab Spring, and an informal comparison of the New York Times’ and Wall Street Journal’s coverage of “energy.”
Overall, we find that the Lasso with L2 normalization can be used effectively to summarize large corpora, regardless of document size. Finally, I will present preliminary results from an ongoing project that studies the opinions of the U.S. courts of appeals by using CCS in combination with LDA.
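To make the idea concrete, the sketch below shows one way a sparse classifier could produce a topic-specific summary in the spirit described above: documents mentioning a query word are labeled positive, the document-term count vectors are L2-normalized, and the nonzero positive weights of an L1-penalized model pick out the summary words. This is an illustrative assumption, not the authors' implementation; in particular, the labeling scheme, the use of L1-penalized logistic regression in place of the squared-error Lasso, and all parameter values (e.g. C=0.5, n_terms=15) are hypothetical choices.

```python
# Illustrative sketch of comparative summarization with a sparse linear model.
# Assumptions (not from the abstract): documents containing `query` are the
# positive class, unigram counts are the features, and an L1-penalized
# logistic regression stands in for the Lasso.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import normalize

def ccs_summary(docs, query, n_terms=15):
    """Return words whose sparse-classifier weights best separate
    documents mentioning `query` from the rest of the corpus."""
    vec = CountVectorizer(lowercase=True, stop_words="english")
    X = vec.fit_transform(docs)              # document-term count matrix
    vocab = vec.get_feature_names_out()

    # Label documents by whether they mention the query term, then drop
    # that column so it cannot trivially predict its own label.
    q = vec.vocabulary_[query]
    y = (X[:, [q]].toarray().ravel() > 0).astype(int)
    keep = [i for i in range(X.shape[1]) if i != q]
    X, vocab = X[:, keep], vocab[keep]

    # L2-normalize each document's count vector, then fit an L1-penalized
    # (lasso-style) classifier; the penalty drives most weights to zero.
    Xn = normalize(X, norm="l2")
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(Xn, y)

    # The few words with large positive weights form the summary.
    w = clf.coef_.ravel()
    top = np.argsort(w)[::-1][:n_terms]
    return [vocab[i] for i in top if w[i] > 0]
```

Under these assumptions, a call such as ccs_summary(nyt_articles, "egypt") would return the handful of words most distinctive of the articles that mention Egypt, relative to the rest of the corpus.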