Summary of Only the Curve Shape Matters: Training Foundation Models for Zero-Shot Multivariate Time Series Forecasting through Next Curve Shape Prediction, by Cheng Feng et al.


Only the Curve Shape Matters: Training Foundation Models for Zero-Shot Multivariate Time Series Forecasting through Next Curve Shape Prediction

by Cheng Feng, Long Huang, Denis Krompass

First submitted to arXiv on: 12 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
GTT is a foundation model designed for zero-shot multivariate time series forecasting. The encoder-only architecture is pre-trained on 200M high-quality samples spanning diverse domains. In this framework, multivariate forecasting is framed as predicting next curve shapes based on past curves in a channel-wise manner. GTT outperforms state-of-the-art supervised baselines and demonstrates superior zero-shot capabilities on unseen datasets. Additionally, the impact of varying model parameters and training dataset scales is investigated, revealing a scaling law applicable to zero-shot multivariate time series forecasting.

Low Difficulty Summary (written by GrooveSquid.com, original content)
GTT is a special kind of computer program that helps predict what will happen in the future based on past events. It’s like trying to guess what shape a graph will make next based on how it looked before. GTT was trained on lots and lots of data from different places, which makes it really good at predicting things without needing more information. In fact, it even does better than other programs that were taught specifically for this task! Scientists are interested in seeing how changing certain settings or adding more training data affects its abilities.

Keywords

* Artificial intelligence  * Encoder  * Supervised  * Time series  * Zero shot