
Summary of "Repetition Improves Language Model Embeddings", by Jacob Mitchell Springer et al.


Repetition Improves Language Model Embeddings

by Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, Aditi Raghunathan

First submitted to arXiv on: 23 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, which you can read on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
Recent advancements in extracting text embeddings from autoregressive large language models (LLMs) have primarily focused on improving data, backbone pretrained language models, or task differentiation via instructions. This work addresses an architectural limitation of autoregressive models: a token's embedding cannot contain information from tokens that appear later in the input. The proposed fix, called "echo embeddings," is simple: repeat the input twice and extract embeddings from the second occurrence, whose tokens have already attended to the full input. We demonstrate that echo embeddings therefore encode information about later tokens, allowing us to more fully leverage high-quality LLMs for embeddings. On the MTEB leaderboard, echo embeddings outperform classical embeddings by over 9% zero-shot and by around 0.7% when fine-tuned. Fine-tuned with a Mistral-7B model, our method achieves state-of-the-art performance compared to prior open-source models that do not leverage synthetic fine-tuning data. A minimal code sketch of the idea follows the summaries below.

Low Difficulty Summary (GrooveSquid.com, original content)
This paper finds a new way to make language models better at understanding text. Right now, these models have a problem: when they extract the important features called embeddings, they cannot use information from later parts of the text. The solution is simple: repeat the text twice and extract the embeddings from the second copy, which has already "seen" the whole input. We show that this approach improves embedding quality by over 9% without fine-tuning and by about 0.7% with fine-tuning. This is a big deal because it makes these powerful models even more useful for tasks like translation, summarization, and question answering.
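To make the mechanism concrete, here is a minimal sketch of the echo-embedding idea using the Hugging Face transformers library. The prompt wording, the mean pooling over the second occurrence, and the small GPT-2 backbone are illustrative assumptions for this sketch; the paper's actual recipe (its prompts, pooling choices, and a Mistral-7B backbone, optionally fine-tuned) differs.

```python
# Minimal sketch of echo embeddings: feed the sentence twice so that tokens of
# the second occurrence have already attended to the full sentence, then pool
# hidden states over that second occurrence only. The prompt wording, mean
# pooling, and the small GPT-2 backbone are illustrative assumptions, not the
# paper's exact setup.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # any autoregressive (decoder-only) LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()


def echo_embed(sentence: str) -> torch.Tensor:
    # Repeat the input: the prefix presents the sentence once, then asks for it again.
    prefix = f"Rewrite the sentence: {sentence}\nRewritten sentence:"
    full = prefix + " " + sentence
    # The prefix token count locates where the second occurrence starts
    # (approximate at the boundary because of BPE merges; fine for a sketch).
    n_prefix = tokenizer(prefix, return_tensors="pt")["input_ids"].shape[1]
    enc = tokenizer(full, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state  # shape: (1, seq_len, hidden_dim)
    # Mean-pool hidden states of the second occurrence only.
    return hidden[0, n_prefix:].mean(dim=0)


a = echo_embed("Repetition improves language model embeddings.")
b = echo_embed("Repeating the input helps autoregressive embeddings.")
print(torch.nn.functional.cosine_similarity(a, b, dim=0).item())
```

Because the model is causal, hidden states from the first occurrence never see later tokens; the states of the second occurrence do, which is why pooling over the repetition captures whole-sentence information.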

Keywords

  • Artificial intelligence
  • Autoregressive
  • Embedding
  • Fine tuning
  • Question answering
  • Summarization
  • Translation
  • Zero shot