Fredformer: Frequency Debiased Transformer for Time Series Forecasting

by Xihao Piao, Zheng Chen, Taichi Murayama, Yasuko Matsubara, Yasushi Sakurai

First submitted to arXiv on: 13 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on the paper’s arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates the Transformer model’s limitations in time series forecasting, particularly its tendency to focus on low-frequency features while neglecting high-frequency ones. The authors attribute this “frequency bias” to the model’s preference for frequency features with higher energy. To mitigate this bias, they propose Fredformer, a Transformer-based framework that learns features equally across different frequency bands. Experiments show that Fredformer outperforms other baselines on a range of real-world time series datasets. The authors also introduce a lightweight variant of Fredformer with an attention matrix approximation, which achieves comparable performance at a lower computational cost.
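To make the ideas above concrete, here is a minimal Python (NumPy) sketch of a frequency-debiasing pipeline: map the series to the frequency domain, split the spectrum into sub-bands, and normalize each band so that high-energy (typically low-frequency) bands cannot dominate attention. The Nystrom-style landmark approximation at the end stands in for the attention matrix approximation the summary mentions. All function names, the band layout, the normalization choice, and the factorization are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def softmax(scores):
    """Numerically stable row-wise softmax."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def frequency_band_tokens(x, num_bands=8, eps=1e-8):
    """Turn a univariate series into one token per frequency band,
    normalizing each band so that high-energy (typically low-frequency)
    bands cannot dominate downstream attention."""
    spec = np.fft.rfft(x)                         # complex half-spectrum, shape (F,)
    feats = np.stack([spec.real, spec.imag])      # (2, F) real-valued features
    F = feats.shape[1]
    width = -(-F // num_bands)                    # ceil(F / num_bands)
    feats = np.pad(feats, ((0, 0), (0, num_bands * width - F)))
    bands = feats.reshape(2, num_bands, width)
    tokens = np.concatenate([bands[0], bands[1]], axis=-1)  # (num_bands, 2*width)
    mu = tokens.mean(axis=-1, keepdims=True)      # per-band statistics
    sd = tokens.std(axis=-1, keepdims=True)
    return (tokens - mu) / (sd + eps)             # every band on an equal footing

def band_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention across frequency-band tokens. With the
    per-band normalization above, attention weights are driven by band
    content rather than raw spectral energy."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def approx_band_attention(tokens, Wq, Wk, Wv, m=4):
    """Nystrom-style low-rank approximation of the attention matrix, one
    plausible reading of the lightweight variant in the summary (the
    authors' exact factorization may differ). m landmark tokens replace
    the full num_bands x num_bands score matrix with thin factors."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scale = np.sqrt(q.shape[-1])
    groups = np.array_split(np.arange(len(tokens)), m)
    ql = np.stack([q[g].mean(axis=0) for g in groups])  # (m, d) landmarks
    kl = np.stack([k[g].mean(axis=0) for g in groups])
    f = softmax(q @ kl.T / scale)                 # (n, m)
    a = softmax(ql @ kl.T / scale)                # (m, m)
    b = softmax(ql @ k.T / scale)                 # (m, n)
    return f @ np.linalg.pinv(a) @ (b @ v)        # approx. softmax(QK^T)V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.sin(np.linspace(0, 20 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
    tokens = frequency_band_tokens(x)             # (8, 2 * band_width)
    d = tokens.shape[-1]
    Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
    print(band_attention(tokens, Wq, Wk, Wv).shape)         # (8, d)
    print(approx_band_attention(tokens, Wq, Wk, Wv).shape)  # (8, d)
```

Normalizing per band is what lets a low-energy high-frequency band compete with a dominant low-frequency one; the landmark version replaces the full band-by-band score matrix with two thin factors, trading a little accuracy for lower cost.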

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how well the Transformer model does when predicting future values from past data. The authors found that the model has trouble picking up on small, rapid changes in the data and instead focuses on bigger, slower patterns. They came up with a new way to make the model pay equal attention to features in different parts of the data. When they tested this new approach, it did much better than other methods. The code is available online so others can try it too.

Keywords

» Artificial intelligence  » Attention  » Time series  » Transformer