Curse of Attention: A Kernel-Based Perspective for Why Transformers Fail to Generalize on Time Series Forecasting and Beyond
by Yekun Ke, Yingyu Liang, Zhenmei Shi, Zhao Song, Chiwun Yang
First submitted to arXiv on: 8 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | The paper examines why transformer-based models underperform on time series forecasting (TSF) tasks. Despite their popularity, many transformer-based models fail to outperform simple linear residual models. The authors propose a theoretical explanation for this gap, attributing it to Asymmetric Learning in the training of attention networks. They show that when the sign of previous steps is inconsistent with the sign of the current step, attention fails to learn the residual features, making it difficult to generalize to out-of-distribution data. This limitation challenges the expected representational advantage of attention and motivates the design of more effective transformer-based architectures for TSF.
Low | GrooveSquid.com (original content) | The paper explains why some transformer models don’t work well at predicting future values in a sequence of numbers. It says these models have trouble learning patterns when the direction of change (going up or down) differs from one time step to the next. This makes it hard for them to make good predictions on new, unseen data. The authors hope their ideas will help people design better transformer models.
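To make the baseline in the summaries concrete, here is a minimal sketch (not the paper's code, and simpler than its "linear residual model") of the kind of plain linear one-step-ahead forecaster that the paper says many transformer models fail to beat. All function names are hypothetical.

```python
import numpy as np

# Hypothetical illustration: fit a linear map from a window of past values
# to the next value via least squares, then forecast one step ahead.

def fit_linear_forecaster(series: np.ndarray, window: int) -> np.ndarray:
    """Fit weights w so that series[t-window:t] @ w approximates series[t]."""
    X = np.stack([series[t - window:t] for t in range(window, len(series))])
    y = series[window:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_next(series: np.ndarray, w: np.ndarray) -> float:
    """Forecast the step after the end of the series."""
    return float(series[-len(w):] @ w)

# A noiseless linear trend is captured exactly by this baseline.
trend = np.arange(40, dtype=float)
w = fit_linear_forecaster(trend, window=4)
print(round(predict_next(trend, w), 6))  # next value of the trend: 40.0
```

The paper's argument concerns why attention-based models struggle to learn such residual/linear structure when the signs of successive steps are inconsistent, whereas this direct linear fit has no such asymmetry.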
Keywords
» Artificial intelligence » Attention » Time series » Transformer