Summary of C3T: Cross-modal Transfer Through Time for Human Action Recognition, by Abhi Kamboj et al.
C3T: Cross-modal Transfer Through Time for Human Action Recognition
by Abhi Kamboj, Anh Duy Nguyen, Minh Do
First submitted to arXiv on: 23 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC); Machine Learning (cs.LG); Signal Processing (eess.SP)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on the paper's arXiv page. |
Medium | GrooveSquid.com (original content) | This paper explores how to transfer knowledge between different sensors or modalities through a unified representation space for Human Action Recognition (HAR). The researchers formalize and investigate an understudied setting called Unsupervised Modality Adaptation (UMA), in which the modality used for testing is not used during training. They develop three methods to perform UMA: Student-Teacher (ST), Contrastive Alignment (CA), and Cross-modal Transfer Through Time (C3T). Extensive experiments on camera+IMU datasets compare these methods in the UMA setting against their empirical upper bound in the supervised setting. The results show that C3T is the most robust and highest-performing method, approaching supervised-setting performance even under temporal noise. C3T introduces a novel mechanism for aligning signals across time-varying latent vectors (see the illustrative sketch after this table). The findings suggest significant potential for developing generalizable models for time-series sensor data. |
Low | GrooveSquid.com (original content) | This research paper tries to solve a problem where sensors of different types or from different sources can't share knowledge with each other well. The authors are trying to find a way to transfer what one sensor learns to another, so they can work better together. They look at situations where there is no labeled data from the new sensor type. They test three different methods and find that one of them works really well, even when there is some noise or variation in the data. This breakthrough could lead to sensors that work together more effectively. |
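The medium-difficulty summary above describes aligning time-varying latent vectors from different modalities (e.g., camera and IMU) in a shared representation space. The sketch below is a minimal, hypothetical illustration of such a cross-modal alignment step using a contrastive loss over per-timestep embeddings; the encoder architectures, names, and loss details are assumptions for illustration, not the authors' exact implementation of C3T.

```python
# Minimal sketch (assumptions, not the paper's implementation): aligning
# per-timestep latent vectors from two modalities with a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalEncoder(nn.Module):
    """Hypothetical encoder mapping a sensor sequence to time-varying latents."""
    def __init__(self, in_dim, latent_dim=128):
        super().__init__()
        self.gru = nn.GRU(in_dim, latent_dim, batch_first=True)

    def forward(self, x):                    # x: (batch, time, in_dim)
        latents, _ = self.gru(x)             # (batch, time, latent_dim)
        return F.normalize(latents, dim=-1)  # unit-norm per-timestep vectors

def cross_modal_alignment_loss(z_cam, z_imu, temperature=0.07):
    """Contrastive loss pulling together latents of the two modalities at
    matching (sample, timestep) positions and pushing apart all others."""
    b, t, d = z_cam.shape
    z_a = z_cam.reshape(b * t, d)
    z_b = z_imu.reshape(b * t, d)
    logits = z_a @ z_b.T / temperature       # (b*t, b*t) similarity matrix
    targets = torch.arange(b * t)            # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Usage sketch: unlabeled paired camera+IMU windows drive the alignment, so a
# classifier trained on one modality's latents can be reused on the other.
cam_enc, imu_enc = TemporalEncoder(in_dim=512), TemporalEncoder(in_dim=6)
cam_batch = torch.randn(8, 20, 512)          # e.g., per-frame video features
imu_batch = torch.randn(8, 20, 6)            # e.g., accelerometer + gyroscope
loss = cross_modal_alignment_loss(cam_enc(cam_batch), imu_enc(imu_batch))
loss.backward()
```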
Keywords
» Artificial intelligence » Alignment » Supervised » Time series » Unsupervised