Hosted by AI Frontiers.
Information overload could be managed better if we could generate summaries of large documents. Historically, AI-based summarization copied relevant sentences verbatim from the original text to form the summary, a technique called extractive summarization. More recently, progress has been made on a newer technique called abstractive summarization, which creates summaries using words and phrasings that were not in the original text.
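To make the contrast concrete, here is a minimal sketch of the extractive approach (this is an illustrative toy, not the method from the focus paper): each sentence is scored by the average frequency of its words, and the top-scoring sentences are copied unchanged into the summary. An abstractive system, by contrast, would generate new sentences rather than select existing ones.

```python
# Toy extractive summarizer: copy the highest-scoring sentences verbatim.
# This is a frequency-based heuristic for illustration only.
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    # Naive sentence split on ., !, ? followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Count word frequencies over the whole document.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        # Average frequency of the sentence's words.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Pick the top sentences, then emit them in their original order.
    chosen = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    return " ".join(s for s in sentences if s in chosen)
```

Note that every sentence in the output appears word-for-word in the input, which is exactly the limitation abstractive models aim to overcome.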
I would like to discuss the history of summarization, the techniques used to create summaries, and the improvements brought about in this area of research. I would also like to talk about the implementation approaches used by the authors of the focus paper and by others.
Many papers are published in this area of research. The latest paper creating waves, by Richard Socher et al., is titled "A Deep Reinforced Model for Abstractive Summarization".
Note to attendees: If you are new to this field, or if you are already baptized into ML/NLP but want to know more, this is the forum for you. I am eagerly looking for feedback and comments.
Romain Paulus, Caiming Xiong, and Richard Socher. 2017. "A Deep Reinforced Model for Abstractive Summarization". arXiv preprint arXiv:1705.04304.
Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks". Proceedings of NAACL-HLT 2016.
Karunakar Suingamreddy (SK) is a tech entrepreneur and an ML and NLP enthusiast. His research on Question Answering (QA) in NLP using LSTMs resulted in reasonable success. SK blogs on various topics including ML, NLP, recommendation engines, high-availability systems, and IoT frameworks. His posts can be found on LinkedIn (https://www.linkedin.com/in/s-k-reddy-3473763/recent-activity/posts/).
6:30–7:00 pm Meet and greet
7:00–8:00 pm Presentation and discussion
8:00–8:30 pm Social and catch up