Author(s):
Dong, Yue
Abstract:
Automatic text summarization, the automated process of shortening a text while preserving the main ideas of the document(s), is a critical research area in natural language processing. This literature review surveys recent work on neural-based models for automatic text summarization. We examine in detail ten state-of-the-art neural-based summarizers: five abstractive models and five extractive models. In addition, we discuss related techniques that can be applied to summarization tasks and present promising directions for future research in neural-based summarization.
Document:
https://arxiv.org/abs/1804.04589