Summary of Generalizability Under Sensor Failure: Tokenization + Transformers Enable More Robust Latent Spaces, by Geeling Chau et al.
Generalizability Under Sensor Failure: Tokenization + Transformers Enable More Robust Latent Spaces
by Geeling Chau, Yujin An, Ahamed Raffey Iqbal, Soon-Jo Chung, Yisong Yue, Sabera Talukder
First submitted to arXiv on: 28 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper explores the challenge of discovering neural data representations that generalize across environments, subjects, and sensors. Recent studies have focused on generalizing across sessions and subjects but have neglected robustness to sensor failure, which is prevalent in neuroscience experiments. To address this gap, the authors collect their own electroencephalography (EEG) dataset spanning multiple sessions, subjects, and sensors, and evaluate two time series models: EEGNet, a widely used convolutional neural network, and TOTEM, a discrete time series tokenizer and transformer model. The results show that TOTEM matches or outperforms EEGNet in all generalizability cases, and inspection of its latent codebook suggests that tokenization is what enables this generalization. |
| Low | GrooveSquid.com (original content) | The paper is about how scientists try to understand brain signals from different people and in different situations. They want to make sure their methods still work when conditions change, for example when some sensors fail. The authors collect their own brain signal data with many sessions, people, and sensors, then test two models: EEGNet and TOTEM. TOTEM does as well as or better than EEGNet in all the tests. By looking at how TOTEM works, the authors can see that its tokenization step is what helps the results stay general. |
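The tokenization idea behind TOTEM can be pictured with a small sketch: a continuous signal is split into fixed-length patches, and each patch is replaced by the index of its nearest codebook vector, giving the transformer a discrete vocabulary to work with. The sketch below is not the paper's TOTEM implementation; the patch length, codebook size, and random codebook are illustrative assumptions only (a real codebook would be learned, e.g. with a VQ-style autoencoder).

```python
# Minimal sketch of discrete time-series tokenization (illustrative only,
# not the authors' TOTEM code): split a 1-D signal into patches and map each
# patch to the index of its nearest codebook entry.
import numpy as np

rng = np.random.default_rng(0)

patch_len = 8        # samples per token (hypothetical value)
codebook_size = 64   # number of discrete tokens (hypothetical value)

# Stand-in codebook; a real model would learn these vectors from data.
codebook = rng.normal(size=(codebook_size, patch_len))

def tokenize(series: np.ndarray) -> np.ndarray:
    """Map a 1-D series to a sequence of codebook indices (tokens)."""
    n_patches = len(series) // patch_len
    patches = series[: n_patches * patch_len].reshape(n_patches, patch_len)
    # Nearest-neighbour lookup: squared Euclidean distance to every code.
    dists = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Example: tokenize a single synthetic EEG-like channel.
signal = rng.normal(size=256)
tokens = tokenize(signal)
print(tokens.shape, tokens[:10])  # (32,) and the first ten token ids
```

Because every patch is reduced to a token id from a fixed vocabulary, a downstream transformer sees the same kind of input regardless of which sensors produced the signal, which is one intuition for why such a representation could be more robust to sensor failure.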
Keywords
- Artificial intelligence
- Generalization
- Neural network
- Time series
- Tokenization
- Tokenizer
- Transformer