Summary of An End-to-End Model for Time Series Classification in the Presence of Missing Values, by Pengshuai Yao et al.
An End-to-End Model for Time Series Classification in the Presence of Missing Values
by Pengshuai Yao, Mengna Liu, Xu Cheng, Fan Shi, Huan Li, Xiufeng Liu, Shengyong Chen
First submitted to arXiv on: 11 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper proposes an end-to-end neural network framework that tackles time series classification with missing data by unifying data imputation and representation learning. The traditional two-stage approach can yield sub-optimal performance because label information is not used during imputation, while a one-stage approach can limit feature representation through the propagation of imputation errors. In contrast, this study prioritizes classification performance over imputation accuracy and incorporates a multi-scale feature learning module to extract useful information from noisy imputed data. The proposed model outperforms state-of-the-art approaches on 68 univariate time series datasets and on real-world datasets with varying missing-data ratios (an illustrative code sketch of this idea follows the table below). |
Low | GrooveSquid.com (original content) | This paper helps solve the common problem of missing values in time series data. It proposes a new way to look at this problem by combining two steps, filling in the missing values and learning from the data, into a single step. This approach works better than previous methods because it focuses on getting the classification right rather than on making the filled-in values perfect. The researchers also created a special module that helps extract useful information from noisy data. They tested their method on many different datasets and found that it performs well, especially when a lot of data is missing. |
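
To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of a model in this spirit: it fills in missing steps with a small recurrent module, extracts multi-scale convolutional features from the completed series, and is trained end to end with the classification loss so that label information reaches the imputer. This is not the authors' architecture; the module choices (GRU imputer, parallel 1-D convolutions), names, and sizes are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class EndToEndClassifier(nn.Module):
    """Hypothetical sketch (not the paper's architecture): joint imputation
    and multi-scale classification trained with a single classification loss."""

    def __init__(self, n_classes, hidden=64, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Learned imputation: predict values at missing steps from the
        # observed series and its missingness mask.
        self.imputer = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.impute_head = nn.Linear(hidden, 1)
        # Multi-scale feature learning: parallel 1-D convolutions with
        # different kernel sizes capture patterns at several temporal scales.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, hidden, k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            for k in kernel_sizes
        ])
        self.classifier = nn.Linear(hidden * len(kernel_sizes), n_classes)

    def forward(self, x, mask):
        # x: (batch, time) with zeros at missing steps; mask: 1 = observed, 0 = missing.
        h, _ = self.imputer(torch.stack([x, mask], dim=-1))  # (B, T, hidden)
        filled = self.impute_head(h).squeeze(-1)             # (B, T) predicted values
        x_hat = mask * x + (1 - mask) * filled               # keep observed entries as-is
        feats = [b(x_hat.unsqueeze(1)).squeeze(-1) for b in self.branches]
        return self.classifier(torch.cat(feats, dim=-1)), x_hat


# Toy usage: the classification loss is the primary objective, so gradients
# from the labels flow back into the imputation module.
model = EndToEndClassifier(n_classes=5)
x = torch.randn(8, 128)                        # 8 series of length 128
mask = (torch.rand(8, 128) > 0.3).float()      # roughly 30% of steps missing
logits, x_hat = model(x * mask, mask)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (8,)))
loss.backward()
```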
Keywords
» Artificial intelligence » Classification » Neural network » Representation learning » Time series