Summary of TOTEM: TOkenized Time Series EMbeddings for General Time Series Analysis, by Sabera Talukder, Yisong Yue, and Georgia Gkioxari
First submitted to arXiv on: 26 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper explores the idea of using generalist models, trained on multiple data domains, for time series analysis. By discretely tokenizing time series data from various datasets and training a single model to solve different tasks across these domains, the authors aim to create performant and adaptable models that can be applied to real-world scenarios. The proposed method, TOTEM (TOkenized Time Series EMbeddings), leverages self-supervision and minimal fine-tuning to produce strong zero-shot performance on various time series tasks, including imputation, anomaly detection, and forecasting. |
| Low | GrooveSquid.com (original content) | This research paper talks about making computers better at understanding and working with time series data. Currently, models are either made for one specific task or trained on a single dataset. The authors suggest doing things differently by training a model to work across many different datasets and tasks. This means the model can be used in many different situations without needing to be retrained. They call this approach “generalist” and show that it works well using real-world data and comparing their results to existing models. |
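The summaries above mention discretely tokenizing time series, i.e., mapping stretches of a continuous signal onto a fixed vocabulary of learned codes (the paper does this with a VQ-VAE). As a rough illustration of the quantization step only, here is a minimal NumPy sketch that assigns each window of a series to its nearest codebook vector; the codebook here is random for demonstration, whereas TOTEM learns it via self-supervision:

```python
import numpy as np

def tokenize(series, codebook, window=4):
    """Map a 1D series to discrete tokens.

    Splits the series into non-overlapping windows of length `window`
    and assigns each window the index of its nearest codebook vector
    (Euclidean distance), producing a discrete token sequence.
    """
    n = len(series) // window
    windows = series[: n * window].reshape(n, window)
    # Pairwise squared distances between every window and every code
    d = ((windows[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 4))  # 256 codes of length 4 (illustrative sizes)
series = rng.normal(size=128)
tokens = tokenize(series, codebook)
print(tokens.shape)  # 32 tokens for a 128-step series
```

Once a series is reduced to token indices like these, a single downstream model can be trained over token sequences drawn from many datasets, which is the "generalist" setup the summaries describe.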
Keywords
- Artificial intelligence
- Anomaly detection
- Fine tuning
- Time series
- Zero shot