

Self-Supervised Learning of Disentangled Representations for Multivariate Time-Series

by Ching Chang, Chiao-Tung Chan, Wei-Yao Wang, Wen-Chih Peng, Tien-Fu Chen

First submitted to arxiv on: 16 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces TimeDRL, a framework for multivariate time-series representation learning that addresses challenges such as high dimensionality and lack of labels. TimeDRL features disentangled embeddings at both the timestamp level and the instance level, achieved through a [CLS] token strategy, together with timestamp-predictive and instance-contrastive pretext tasks for representation learning. Additionally, TimeDRL avoids data-augmentation methods in order to eliminate the inductive biases they introduce. The framework is evaluated on forecasting and classification datasets, demonstrating improved performance over existing methods.

Low Difficulty Summary (original content by GrooveSquid.com)
TimeDRL is a new way to learn representations from complex data sets without labels. These data sets matter in fields like healthcare and industry because they help us make predictions and classify things. The problem is that they contain many variables (measurements taken at different times) and usually lack labels (a label assigns each example a specific category). TimeDRL solves this by creating two kinds of representations: one for each timestamp and one for each instance (the thing being measured). It also uses special learning tasks to train these representations. When tested, TimeDRL did better than other methods on forecasting and classification tasks.
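The [CLS]-token idea from the summaries above can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's implementation: the real TimeDRL uses a learned Transformer encoder, whereas here a random linear projection stands in for the encoder, and a dropout-style mask stands in for the paper's augmentation-free view construction. All variable names (`W_in`, `W_pred`, `masked_view`, the shapes `T`, `C`, `D`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multivariate series: T timestamps, C channels, D-dim embeddings
# (sizes chosen arbitrarily for illustration).
T, C, D = 8, 3, 16

x = rng.normal(size=(T, C))           # raw multivariate series
W_in = rng.normal(size=(C, D)) * 0.1  # stand-in "encoder" weights (random here)
cls_token = rng.normal(size=(1, D))   # [CLS] vector (learned in the real model)

# Embed each timestamp, then prepend the [CLS] token. The [CLS] slot yields the
# instance-level embedding; the remaining slots are timestamp-level embeddings,
# mirroring TimeDRL's disentanglement of the two levels.
timestamp_emb = x @ W_in                          # (T, D)
seq = np.concatenate([cls_token, timestamp_emb])  # (T+1, D)

instance_emb = seq[0]    # instance-level representation (from [CLS])
per_step_emb = seq[1:]   # timestamp-level representations

# Timestamp-predictive task (sketch): regress the next raw timestamp from the
# current timestamp embedding via a toy linear head.
W_pred = rng.normal(size=(D, C)) * 0.1
pred_next = per_step_emb[:-1] @ W_pred            # predicts x[1:], shape (T-1, C)
predictive_loss = np.mean((pred_next - x[1:]) ** 2)

# Instance-contrastive task (sketch): build two "views" of the same instance by
# dropout-like masking (no cropping or jittering, since TimeDRL avoids
# augmentations that inject inductive bias) and compare their [CLS] embeddings.
def masked_view(s, p=0.1):
    return s * (rng.random(s.shape) > p)

v1 = masked_view(seq)[0]
v2 = masked_view(seq)[0]
cos_sim = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
contrastive_loss = 1.0 - cos_sim
```

In a real training loop, the encoder, [CLS] token, and prediction head would be optimized jointly on `predictive_loss + contrastive_loss`; this sketch only shows how one sequence yields both an instance-level and per-timestamp representations.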

Keywords

» Artificial intelligence  » Classification  » Representation learning  » Time series  » Token