
Summary of Rethinking Transformer-based Multi-document Summarization: An Empirical Investigation, by Congbo Ma et al.


Rethinking Transformer-based Multi-document Summarization: An Empirical Investigation

by Congbo Ma, Wei Emma Zhang, Dileepa Pitawela, Haojie Zhuang, Yanfeng Shu

First submitted to arXiv on: 16 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This study investigates the performance and behaviors of Transformer-based models in multi-document summarization (MDS) and how their design and training choices affect summary quality. The researchers conducted five empirical studies examining the effects of document boundary separators, different Transformer structures, encoder-decoder sensitivity to noise, training strategies, and repetition in generated summaries. Experimental results on widely used MDS datasets, assessed with 11 evaluation metrics, show that document boundaries, feature granularity, and training strategies all influence summary quality. Notably, the decoder is more sensitive to noise than the encoder, highlighting its crucial role in generating accurate summaries. The study also finds a correlation between high uncertainty scores and repetition problems in generated summaries.
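To make the document-boundary study concrete, here is a minimal sketch of how a cluster of source documents is typically flattened into a single model input, with a separator token marking each boundary. The token name `<doc-sep>` and the `flatten_cluster` helper are illustrative assumptions, not the paper's exact setup:

```python
from typing import List

DOC_SEP = "<doc-sep>"  # illustrative separator token; the actual token varies by model

def flatten_cluster(documents: List[str], use_separators: bool = True) -> str:
    """Concatenate a cluster of source documents into one input string.

    When use_separators is True, a boundary token is inserted between
    documents so the model can tell where one source ends and the next
    begins -- the variable the boundary-separator study manipulates.
    """
    joiner = f" {DOC_SEP} " if use_separators else " "
    return joiner.join(doc.strip() for doc in documents)

cluster = [
    "Article one reports the event from a local angle.",
    "Article two adds national reaction and background.",
]
print(flatten_cluster(cluster))
# Article one ... <doc-sep> Article two ...
```

Comparing model outputs with and without the separator token is one way to isolate how much document boundaries alone contribute to summary quality.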
Low Difficulty Summary (original content by GrooveSquid.com)
This study looks at how a type of AI model called a Transformer behaves when it is used to summarize many documents at once. The researchers ran five experiments to test different things about these models, such as how they handle the boundaries between documents and how much their output changes with different training methods. The results show that the models are affected by where documents start and stop, how detailed the features are, and how they are trained. They also found that one part of the model (the decoder) is more easily thrown off by noisy input than the other part (the encoder), which makes it especially important for producing accurate summaries. Overall, this study helps us understand how these AI models work and where they need to improve.
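As a rough illustration of the uncertainty-repetition link mentioned above, the sketch below computes token-level predictive entropy from a decoder's output distribution; the toy distributions and the idea of inspecting high-entropy steps are assumptions for illustration, not the paper's exact procedure:

```python
import math
from typing import List

def token_entropy(probs: List[float]) -> float:
    """Shannon entropy of one decoding step's output distribution.

    Higher entropy means the decoder is less certain about the next
    token; the paper reports that high uncertainty scores correlate
    with repetition in generated summaries.
    """
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Toy distributions over a 4-token vocabulary at three decoding steps.
steps = [
    [0.90, 0.05, 0.03, 0.02],  # confident step: low entropy
    [0.40, 0.30, 0.20, 0.10],  # uncertain step: higher entropy
    [0.25, 0.25, 0.25, 0.25],  # maximally uncertain: highest entropy
]
for i, dist in enumerate(steps):
    print(f"step {i}: entropy = {token_entropy(dist):.3f}")
```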

Keywords

» Artificial intelligence  » Decoder  » Encoder  » Encoder decoder  » Summarization  » Transformer