Summary of GAS-Norm: Score-Driven Adaptive Normalization for Non-Stationary Time Series Forecasting in Deep Learning, by Edoardo Urettini et al.
GAS-Norm: Score-Driven Adaptive Normalization for Non-Stationary Time Series Forecasting in Deep Learning
by Edoardo Urettini, Daniele Atzeni, Reshawn J. Ramjattan, Antonio Carta
First submitted to arXiv on: 4 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The proposed method, GAS-Norm, improves the performance of deep neural networks (DNNs) in time series forecasting by addressing data non-stationarity. The authors first demonstrate the limitations of DNNs in simple non-stationary settings, then introduce a hybrid approach that combines Generalized Autoregressive Score (GAS) models with DNNs: the GAS model's statistics are used to adaptively normalize the input data, improving the DNN's performance when the data distribution shifts. The authors validate the proposal by comparing it against other state-of-the-art normalization methods and by combining it with popular DNN forecasting models on real-world datasets from the Monash open-access forecasting repository. |
| Low | GrooveSquid.com (original content) | GAS-Norm is a new way to make deep neural networks (DNNs) better at forecasting. Right now, DNNs often do worse than simpler statistical methods at this task, partly because many real-world processes change their patterns over time, which makes it hard for a DNN to learn what is happening. The authors first show how DNNs struggle even with simple changes in the data, and then introduce a fix that combines two pieces: one that estimates statistics of the data (its mean and variance) at each new time point, and one that uses those statistics to rescale the input before it reaches the DNN (a rough illustration of this idea follows the table). This helps the DNN cope with changing data. The authors test the method on real-world data and find that it improves forecasting results in 21 out of 25 cases. |
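The summaries above describe the idea only at a high level, so the sketch below is a minimal illustration rather than the authors' implementation. It assumes a Gaussian observation density, under which a score-driven (GAS-style) update nudges the running mean toward the prediction error and the running variance toward the squared error, and each observation is normalized with the current estimates before being fed to a forecasting model. The function name `gas_normalize` and the step sizes `alpha_mu` / `alpha_var` are hypothetical and not taken from the paper.

```python
import numpy as np

def gas_normalize(series, alpha_mu=0.05, alpha_var=0.05, eps=1e-8):
    """Illustrative score-driven adaptive normalization (not the paper's code).

    Assumes a Gaussian observation density: the mean estimate is updated in
    proportion to the prediction error, and the variance estimate in
    proportion to the squared error minus the current variance.
    alpha_mu / alpha_var are hypothetical step sizes.
    """
    mu = float(series[0])            # initialize the mean at the first observation
    var = float(np.var(series)) + eps  # crude initial variance estimate
    normalized, means, variances = [], [], []
    for x in series:
        # normalize with the current (one-step-ahead) estimates
        normalized.append((x - mu) / np.sqrt(var + eps))
        means.append(mu)
        variances.append(var)
        # score-driven updates: move the estimates toward the new observation
        err = x - mu
        mu = mu + alpha_mu * err
        var = var + alpha_var * (err ** 2 - var)
    return np.array(normalized), np.array(means), np.array(variances)

# Usage: normalize a non-stationary series before feeding it to a DNN forecaster.
t = np.arange(500)
series = 0.02 * t + np.sin(0.1 * t) * (1 + 0.005 * t) + np.random.randn(500)
z, mu_t, var_t = gas_normalize(series)
# Forecast on z with any DNN, then map the output back to the original scale,
# e.g. y_hat = z_hat * np.sqrt(var_t[-1]) + mu_t[-1].
```

Because the mean and variance are re-estimated at every time step, the normalized series stays roughly zero-mean and unit-variance even when the raw data trends or changes scale, which is the property the paper exploits to help the DNN.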
Keywords
» Artificial intelligence » Autoregressive » Time series