Summary of Token Prepending: A Training-Free Approach for Eliciting Better Sentence Embeddings from LLMs, by Yuchen Fu et al.
Token Prepending: A Training-Free Approach for Eliciting Better Sentence Embeddings from LLMs
by Yuchen Fu, Zifeng Cheng, Zhiwei Jiang, Zhonghui Wang, Yafeng Yin, Zhengliang Li, Qing Gu
First submitted to arXiv on: 16 Dec 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes a novel Token Prepending (TP) technique to improve sentence embeddings from large language models (LLMs). By prepending each layer’s decoded sentence embedding to the beginning of the next layer’s input, earlier tokens can attend to complete sentence information under causal attention (a rough illustrative sketch follows the table). The TP technique is plug-and-play and training-free, allowing seamless integration with various prompt-based methods. Experimental results show significant performance improvements on STS tasks and downstream classification tasks across different LLMs, with negligible additional inference cost. |
Low | GrooveSquid.com (original content) | The paper finds a way to make language models better at understanding whole sentences. Right now, each word in these models can only “see” the words that come before it, so earlier words never get a view of the full sentence. The authors show that by placing a short summary of the sentence at the very beginning as a “hint”, every word can use the whole sentence’s meaning, and the model does much better. This new technique works with existing methods and doesn’t require any extra training or special setup. |
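To make the mechanism described in the medium summary concrete, here is a minimal sketch of layer-wise token prepending in PyTorch. It is an illustration based only on the summary above, not the authors’ released implementation: the helper name `token_prepending_forward`, the reserved placeholder slot at position 0, and last-token pooling are assumptions made for the example.

```python
import torch

def token_prepending_forward(layers, embeddings):
    """Illustrative sketch of layer-wise token prepending (hypothetical helper,
    not the authors' code).

    layers: list of callables, each a causal decoder block mapping a tensor of
        shape (batch, seq_len, dim) to a tensor of the same shape.
    embeddings: input embeddings of shape (batch, seq_len, dim), assumed to
        begin with a placeholder position (index 0) reserved for the
        prepended sentence embedding.
    """
    hidden = embeddings
    for layer in layers:
        hidden = layer(hidden)
        # Decode a provisional sentence embedding from this layer's output
        # (last-token pooling, as in common prompt-based methods).
        sent_emb = hidden[:, -1, :]
        # Prepend it by overwriting the placeholder slot, so that in the next
        # layer every token can attend to whole-sentence information even
        # though causal attention blocks looking ahead.
        hidden = torch.cat([sent_emb.unsqueeze(1), hidden[:, 1:, :]], dim=1)
    # Read the final sentence embedding from the last token of the last layer.
    return hidden[:, -1, :]
```

For instance, with `layers = [torch.nn.Identity()] * 4` and random embeddings of shape `(2, 16, 768)`, the function returns a `(2, 768)` sentence embedding; in a real setup the layers would be the LLM’s own decoder blocks, and no parameters are updated, which is why the approach is training-free and adds only negligible inference cost.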
Keywords
» Artificial intelligence » Attention » Classification » Embedding » Inference » Prompt » Token