Transformer-Based Approaches for Sensor-Based Human Activity Recognition: Opportunities and Challenges
by Clayton Souza Leite, Henry Mauranen, Aziza Zhanabatyrova, Yu Xiao
First submitted to arXiv on: 17 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Transformers have been successful in natural language processing and computer vision, which has led to their application in sensor-based Human Activity Recognition (HAR). However, previous studies show that transformers outperform other models only when they are trained on abundant data or with computationally expensive optimization algorithms. Since neither condition holds for sensor-based HAR, given limited data availability and the need for efficient training and inference on resource-constrained devices, our research compares the performance of transformer-based and non-transformer-based methods for HAR with wearable sensors. Our extensive experiments, involving over 500 trials, confirm that transformer-based solutions require more computational resources, yield poorer results, and degrade significantly when quantized to fit resource-constrained devices (see the quantization sketch after this table). Furthermore, transformers exhibit lower robustness to adversarial attacks, potentially compromising user trust in HAR. |
Low | GrooveSquid.com (original content) | The paper looks at how well transformers work for recognizing human activities using data from wearable sensors. Transformers have been great at other tasks like language and image recognition. But for this activity recognition task, transformers only do better if they have a lot of data or use special algorithms that take up lots of computer power. Since we don't always have a lot of data or powerful computers for this task, the study set out to see how well the different approaches work. It found that transformer-based solutions need more computing power, don't perform as well, and get worse when they are simplified to run on devices with limited resources. Transformers are also worse at handling attacks that try to trick them into making mistakes. |
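The quantization mentioned above refers to compressing a trained model so it fits on resource-constrained wearables. The paper does not provide code here; the following is a minimal PyTorch sketch under assumptions of our own (a hypothetical `TinyHARTransformer` with illustrative sensor-channel, dimension, and class counts), showing post-training dynamic quantization of the sort whose accuracy impact such a study would measure.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's code): a tiny Transformer classifier over
# windows of wearable-sensor data, then post-training dynamic quantization of
# its nn.Linear layers to int8.

class TinyHARTransformer(nn.Module):
    def __init__(self, n_channels=6, d_model=64, n_classes=6):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)   # per-timestep embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2,
                                             enable_nested_tensor=False)
        self.head = nn.Linear(d_model, n_classes)    # activity logits

    def forward(self, x):                            # x: (time, batch, channels)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=0))              # average over time

model = TinyHARTransformer().eval()

# Post-training dynamic quantization: Linear weights are stored as int8 and
# dequantized on the fly, shrinking the model for edge deployment.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(128, 8, 6)    # 128 timesteps, batch of 8 windows, 6 sensor axes
print(model(x).shape)         # torch.Size([8, 6])
print(quantized(x).shape)     # torch.Size([8, 6])
```

Evaluating the float and int8 models on the same held-out HAR test set would yield the kind of post-quantization accuracy drop the summaries describe.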
Keywords
» Artificial intelligence » Activity recognition » Inference » Natural language processing » Optimization » Transformer