


DAM: Towards A Foundation Model for Time Series Forecasting

by Luke Darlow, Qiwen Deng, Ahmed Hassan, Martin Asenov, Rajkarn Singh, Artjom Joosen, Adam Barker, Amos Storkey

First submitted to arXiv on 25 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed DAM is a neural model that tackles universal forecasting: predicting accurate time series values across many domains and datasets with diverse characteristics. Existing methods struggle to generalize beyond their training scope because they assume regularly sampled data and fixed prediction horizons. The DAM instead takes randomly sampled histories and outputs an adjustable basis composition as a continuous function of time, which allows forecasting to non-fixed horizons. It consists of three key components: (1) a flexible approach for using randomly sampled histories, (2) a transformer backbone trained on these actively sampled histories to produce a representational output, and (3) the basis coefficients of a continuous function of time. The model is shown to outperform or closely match existing state-of-the-art (SoTA) models at multivariate long-term forecasting across 18 datasets, including 8 held out for zero-shot transfer. A minimal code sketch of the basis-composition idea follows the summaries below.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new way to predict future values from past data. The authors want predictions that work well not just for one type of data but for many different types. Right now, other methods are only good at making predictions for the kind of data they were trained on. The new method uses something called "randomly sampled histories" together with a special kind of AI model called a transformer. This lets it make good predictions even when data are missing or irregularly spaced. The authors tested their method on 25 different datasets and found that it worked well on many of them, including ones the model had never seen during training.

Keywords

  • Artificial intelligence
  • Time series
  • Transformer
  • Zero-shot