
Summary of TwinS: Revisiting Non-Stationarity in Multivariate Time Series Forecasting, by Jiaxi Hu et al.


TwinS: Revisiting Non-Stationarity in Multivariate Time Series Forecasting

by Jiaxi Hu, Qingsong Wen, Sijie Ruan, Li Liu, Yuxuan Liang

First submitted to arXiv on: 6 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)

Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com original content)
Multivariate time series forecasting has recently gained popularity due to its practical applications, spurring the development of many deep forecasting models. However, real-world time series exhibit non-stationary distribution characteristics, including nested periodicity, the absence of periodic distributions, and hysteresis among time variables. This paper proposes TwinS, a Transformer-based model comprising three modules: Wavelet Convolution, Period-Aware Attention, and Channel-Temporal Mixed MLP. The Wavelet Convolution module models nested periods by scaling the convolution kernel size in the manner of a wavelet transform, while Period-Aware Attention guides the attention computation with period-relevance scores generated by a convolutional sub-network. The Channel-Temporal Mixed MLP captures overall relationships between time series through channel-time mixing learning. TwinS achieves state-of-the-art (SOTA) performance compared to mainstream time series models, with a maximum Mean Squared Error (MSE) improvement of 25.8% over PatchTST.
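To make the "scaling the kernel size like a wavelet transform" idea concrete, here is a minimal, hypothetical sketch in pure Python. It is not the authors' implementation: the function names (`conv1d_same`, `multiscale_features`), the simple averaging kernels, and the dyadic scale schedule are all illustrative assumptions; the point is only that kernels of doubling length respond to periods of different (nested) lengths.

```python
# Hypothetical sketch of a wavelet-style multi-scale 1D convolution.
# The kernel length doubles at each scale (dyadic scales, as in a
# wavelet transform), so each scale smooths over a different period length.

def conv1d_same(signal, kernel):
    """Naive 'same'-padded 1D convolution (pure Python, zero padding)."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * (k - 1 - pad)
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]

def multiscale_features(signal, n_scales=3):
    """Apply averaging kernels of dyadically growing length (2, 4, 8, ...)
    and return one smoothed view of the series per scale."""
    features = []
    for s in range(n_scales):
        k = 2 ** (s + 1)            # kernel size doubles per scale
        kernel = [1.0 / k] * k      # simple averaging kernel for illustration
        features.append(conv1d_same(signal, kernel))
    return features

# Toy series with nested periods of length 4 and 12.
series = [float((t % 4) + (t % 12)) for t in range(24)]
feats = multiscale_features(series)
```

In TwinS itself the kernels are learned rather than fixed averages, but the nesting of receptive fields across scales is the mechanism this sketch illustrates.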
Low Difficulty Summary (GrooveSquid.com original content)
This paper is about making predictions on real-world data that changes over time. Real-life data can be tricky because it doesn’t always follow the same pattern. The researchers created a new model called TwinS to better predict this type of data. They used three special parts: Wavelet Convolution, Period-Aware Attention, and Channel-Temporal Mixed MLP. These parts work together to capture the patterns in the data and make more accurate predictions. The TwinS model is better than other models at making these predictions, which can be important for things like predicting energy usage or stock prices.

Keywords

» Artificial intelligence  » Attention  » MSE  » Time series  » Transformer