
LOCOST: State-Space Models for Long Document Abstractive Summarization

by Florian Le Bronnec, Song Duong, Mathieu Ravaut, Alexandre Allauzen, Nancy F. Chen, Vincent Guigue, Alberto Lumbreras, Laure Soulier, Patrick Gallinari

First submitted to arXiv on: 31 Jan 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)

The high-difficulty version is the paper's original abstract.

Medium Difficulty Summary (GrooveSquid.com, original content)

The proposed LOCOST architecture is a low-complexity alternative to transformers for encoding long sequences and capturing long-term dependencies. This encoder-decoder model utilizes state-space models and exhibits a computational complexity of O(L log L), enabling it to handle longer sequences than state-of-the-art sparse attention-based models. The authors evaluate LOCOST on long document abstractive summarization tasks, achieving performance comparable to top-performing transformers of the same size while reducing memory usage during training (up to 50%) and inference (up to 87%). Furthermore, LOCOST handles input texts exceeding 600K tokens at inference time, setting new state-of-the-art results on full-book summarization.

Low Difficulty Summary (GrooveSquid.com, original content)

LOCOST is a new way to make computers understand long pieces of text. It's like a shortcut that lets machines quickly look at really long sentences and paragraphs. This helps them summarize big documents into shorter versions. LOCOST works by using special math formulas, which makes it faster than other methods. The people who made LOCOST tested it on many different texts and found that it can handle huge amounts of text – even over 600,000 tokens! This is important because it means computers can now understand and summarize really long documents, like entire books.

Keywords

* Artificial intelligence  * Attention  * Encoder decoder  * Inference  * Summarization