Summary of StableMask: Refining Causal Masking in Decoder-only Transformer, by Qingyu Yin et al.
StableMask: Refining Causal Masking in Decoder-only Transformer
by Qingyu Yin, Xuzheng He, Xiang Zhuang, Yu Zhao, Jianhua Yao, Xiaoyu Shen, Qiang Zhang
First submitted to arXiv on: 7 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract on the paper's arXiv page. |
| Medium | GrooveSquid.com (original content) | The paper proposes StableMask, a new method that addresses two limitations of the decoder-only Transformer architecture in language modeling. First, softmax attention requires all attention scores to be non-zero and to sum to 1, which can lead to disproportionate attention being assigned to certain tokens. Second, Transformers that rely on relative positional encoding (RPE) are not universal approximators because of their limited capacity to encode absolute positional information. StableMask refines the causal mask: it introduces pseudo-attention values to balance the attention distribution and encodes absolute positional information via a progressively decreasing mask ratio (a rough sketch of this idea appears after the table). The method shows significant improvements in language models with parameter sizes ranging from 71M to 1.4B across diverse datasets and encoding methods. |
| Low | GrooveSquid.com (original content) | This paper is about making language models better. Today's best models have two problems. First, they can pay too much attention to just a few tokens. Second, they are not very good at keeping track of where words sit in a sentence. To fix these problems, the authors created a new way to do attention, called StableMask. It helps the model spread its attention more fairly and keep better track of word positions, which makes language models work better on many tasks. |
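
The full masking scheme lives in the original paper, but the medium-difficulty summary above gives enough to sketch the shape of the idea. The snippet below is a minimal, hypothetical illustration, not the authors' exact formulation: masked future positions receive finite pseudo-attention values instead of negative infinity, so real tokens no longer have to absorb all of the softmax mass, and those pseudo values shrink with the query position so that absolute position leaks into the attention matrix. The function name, the `decay` parameter, and the linear schedule are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def stablemask_like_attention(q, k, v, decay=0.01):
    """Toy causal attention with pseudo-attention values in the masked region.

    q, k, v: tensors of shape (batch, seq_len, dim). `decay` controls how fast
    the pseudo values shrink with the query position; both the name and the
    linear schedule are illustrative guesses, not the paper's formulation.
    """
    b, n, d = q.shape
    scores = q @ k.transpose(-2, -1) / d ** 0.5                 # (b, n, n)

    # Boolean mask of "future" positions (column j > row i).
    future = torch.ones(n, n, device=q.device).triu(1).bool()

    # Instead of -inf, future positions get a finite pseudo score that becomes
    # more negative for later query positions, so early rows keep more pseudo
    # mass and later rows keep less -- a progressively decreasing mask ratio.
    pos = torch.arange(n, device=q.device, dtype=scores.dtype).unsqueeze(1)
    pseudo = (-decay * pos).expand(n, n)
    scores = torch.where(future, pseudo, scores)

    # Softmax over real + pseudo slots, then drop the pseudo mass: the real
    # attention weights now sum to less than 1, easing the "must sum to 1"
    # constraint mentioned in the summary.
    weights = F.softmax(scores, dim=-1).masked_fill(future, 0.0)
    return weights @ v                                          # (b, n, dim)

# Example usage with random toy tensors:
# q = k = v = torch.randn(2, 8, 16)
# out = stablemask_like_attention(q, k, v)
```
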
Keywords
» Artificial intelligence » Attention » Decoder » Mask » Transformer