Summary of “From Orthogonality to Dependency: Learning Disentangled Representation for Multi-Modal Time-Series Sensing Signals,” by Ruichu Cai et al.
From Orthogonality to Dependency: Learning Disentangled Representation for Multi-Modal Time-Series Sensing Signals
by Ruichu Cai, Zhifang Jiang, Zijian Li, Weilin Chen, Xuexin Chen, Zhifeng Hao, Yifan Shen, Guangyi Chen, Kun Zhang
First submitted to arXiv on: 25 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper proposes a novel approach to multi-modal time-series representation learning, challenging existing methods that assume an orthogonal latent space. Instead, it introduces a framework with dependent modality-shared and modality-specific latent variables, dubbed MATE (Multi-modAl Temporal Disentanglement). The MATE model employs a temporally variational inference architecture that leverages prior networks to disentangle the latent variables. Key contributions include subspace identifiability results showing that the extracted representation is disentangled. Experimental studies on multi-modal sensor, human activity recognition, and healthcare datasets demonstrate improved performance on various downstream tasks, highlighting the effectiveness of MATE in real-world scenarios. (An illustrative architectural sketch follows the table.) |
| Low | GrooveSquid.com (original content) | This paper introduces a new way to learn from multiple types of data collected over time. Existing methods assume these different data types are unrelated to one another, but this paper shows that they can be connected and dependent on each other. The proposed method, called MATE, uses a variational modeling framework to find the connections between the different data types. This leads to better performance on various tasks, such as recognizing human activities and predicting healthcare outcomes. |
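To make the medium summary's description more concrete, here is a minimal sketch of the kind of temporally variational encoder it describes: each modality's latent state is split into a modality-shared block and a modality-specific block, and a learned prior network conditioned on the previous latent step replaces a fixed standard-normal prior. This is not the authors' MATE implementation; the module names, the GRU backbone, and the latent dimensions are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' MATE code): a temporally
# variational encoder for one modality that splits its latent state into
# modality-shared and modality-specific blocks and uses a learned prior
# network conditioned on the previous latent state.
import torch
import torch.nn as nn


class TemporalVariationalEncoder(nn.Module):
    """Hypothetical encoder for one modality of a time-series signal."""

    def __init__(self, input_dim, hidden_dim=64, shared_dim=8, specific_dim=8):
        super().__init__()
        self.shared_dim = shared_dim
        latent_dim = shared_dim + specific_dim
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Posterior q(z_t | x_{1:t}) parameters.
        self.post_mu = nn.Linear(hidden_dim, latent_dim)
        self.post_logvar = nn.Linear(hidden_dim, latent_dim)
        # Prior network p(z_t | z_{t-1}): consecutive latents stay dependent
        # instead of being pushed toward a fixed isotropic prior.
        self.prior_net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 2 * latent_dim),
        )

    def forward(self, x):
        # x: (batch, time, input_dim)
        h, _ = self.rnn(x)
        mu, logvar = self.post_mu(h), self.post_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        # Split the latent state into shared and specific blocks.
        z_shared, z_specific = z[..., :self.shared_dim], z[..., self.shared_dim:]
        # Condition the prior on the previous latent state (zeros at t = 0).
        z_prev = torch.cat([torch.zeros_like(z[:, :1]), z[:, :-1]], dim=1)
        prior_mu, prior_logvar = self.prior_net(z_prev).chunk(2, dim=-1)
        return z_shared, z_specific, (mu, logvar), (prior_mu, prior_logvar)


def gaussian_kl(q_mu, q_logvar, p_mu, p_logvar):
    """KL(q || p) between diagonal Gaussians, summed over latent dimensions."""
    return 0.5 * (
        p_logvar - q_logvar
        + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
        - 1.0
    ).sum(dim=-1)
```

In a multi-modal setup, one such encoder per modality could be trained jointly: the KL term against the learned prior keeps consecutive latents temporally dependent rather than orthogonal, while an additional alignment or reconstruction objective would encourage the `z_shared` blocks to carry cross-modal information and `z_specific` to capture what is unique to each sensor. This is only a sketch of the general idea in the summary, not the paper's actual objective.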
Keywords
» Artificial intelligence » Activity recognition » Inference » Latent space » Multi modal » Representation learning » Time series