Summary of A2SF: Accumulative Attention Scoring with Forgetting Factor for Token Pruning in Transformer Decoder, by Hyun-rae Jo et al.
A2SF: Accumulative Attention Scoring with Forgetting Factor for Token Pruning in Transformer Decoder
by Hyun-rae Jo, Dongkun Shin
First submitted to arXiv on: 30 Jul 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed Accumulative Attention Score with Forgetting Factor (A2SF) technique addresses the memory bottleneck of transformer-based large language models when handling long sequences. The existing Accumulative Attention Score is ill-suited to the transformer decoder because the causal masking effect makes comparisons between tokens of different ages uneven. A2SF introduces a Forgetting Factor into the Attention Score accumulation process, applying a penalty to past Attention Scores generated by older tokens. This restores fairness between tokens of different ages and enables more effective selection of important tokens. The technique improves accuracy in OPT and LLaMA models, with gains of up to 7.8% on 1-shot tasks and 5.1% on 0-shot tasks (a minimal code sketch of the scoring rule appears after this table). |
Low | GrooveSquid.com (original content) | A new way to help language models keep track of important information is being tested. These models struggle with long inputs because storing every past token takes too much memory. Researchers found that not all parts of the input are equally important, so they developed a method called Accumulative Attention Score with Forgetting Factor (A2SF). It gradually lets the model forget old information and focus on what matters most, giving better accuracy on tasks like language translation or text summarization. |
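
To make the scoring idea concrete, here is a minimal NumPy sketch of accumulative attention scoring with a forgetting factor, written from the description above rather than from the paper's code; the function names, the 0.9 decay value, and the cache-budget pruning rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def update_scores(scores, attention_row, forgetting_factor=0.9):
    # Decay previously accumulated scores, then add the new token's
    # attention weights over the currently cached tokens.
    return forgetting_factor * scores + attention_row

def prune_cache(scores, cache_budget):
    # Keep the indices of the `cache_budget` tokens with the highest
    # accumulated scores; everything else would be evicted from the KV cache.
    keep = np.argsort(scores)[-cache_budget:]
    return np.sort(keep)

# Toy example: 6 cached tokens, one decoding step, keep the 4 highest-scoring.
scores = np.zeros(6)
attention_row = np.array([0.40, 0.10, 0.05, 0.05, 0.10, 0.30])
scores = update_scores(scores, attention_row)
print(prune_cache(scores, cache_budget=4))  # e.g. [0 1 4 5]
```

With a forgetting factor of 1.0 this reduces to the plain accumulative attention score; values below 1.0 down-weight attention that old tokens received many steps ago, which is the fairness correction the summary describes for the masked decoder.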
Keywords
» Artificial intelligence » 1 shot » Attention » Decoder » Llama » Summarization » Transformer » Translation