Summary of Information-Theoretic Distillation for Reference-less Summarization, by Jaehun Jung et al.
Information-Theoretic Distillation for Reference-less Summarization
by Jaehun Jung, Ximing Lu, Liwei Jiang, Faeze Brahman, Peter West, Pang Wei Koh, Yejin Choi
First submitted to arXiv on: 20 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract (available on arXiv). |
Medium | GrooveSquid.com (original content) | The proposed InfoSumm framework distills a powerful summarizer without relying on the capabilities of large language models (LLMs) or on human-written references. It does so by formulating the desiderata of summarization as mutual information between the original document and its summary (a rough sketch of such a criterion appears below the table), using Pythia-2.8B as the teacher model. The resulting compact summarizer, with only 568M parameters, performs competitively against ChatGPT without ever relying on its capabilities, outperforming in-domain supervised models in human evaluation as well as unsupervised methods. |
Low | GrooveSquid.com (original content) | The paper presents a new way to build summarization models that don't need big language models or human-written example summaries. It works by defining what makes a good summary using information theory, then training a smaller model to meet those criteria. The result is a powerful summarizer with only 568 million parameters that performs well against the best current models. |
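To make the mutual-information idea in the medium summary more concrete, here is a minimal sketch of what such a criterion can look like. This is a generic illustration under assumed notation (the score function, the language model p, and the length-penalty weight λ are placeholders of ours), not the paper's exact objective:

```latex
% Minimal sketch (assumed notation, not the paper's exact formulation):
% score a candidate summary s of a document d by pointwise mutual
% information, with probabilities estimated by a language model p(.)
% such as the Pythia-2.8B teacher.
\mathrm{score}(d, s) \;=\; \log \frac{p(s \mid d)}{p(s)} \;-\; \lambda \, |s|
% The first term rewards summaries that are far more probable given the
% document than on their own; the (hypothetical) length penalty \lambda |s|
% is one simple way to encode a preference for brevity.
```

Under a score of this kind, a distillation pipeline could rank or filter candidate summaries sampled from the teacher before training the compact student model.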
Keywords
» Artificial intelligence » Summarization » Supervised » Teacher model » Unsupervised