Summary of Explaining Time Series Via Contrastive and Locally Sparse Perturbations, by Zichuan Liu et al.
Explaining Time Series via Contrastive and Locally Sparse Perturbations
by Zichuan Liu, Yingying Zhang, Tianchun Wang, Zefan Wang, Dongsheng Luo, Mengnan Du, Min Wu, Yi Wang, Chunlin Chen, Lunting Fan, Qingsong Wen
First submitted to arXiv on: 16 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed ContraLSP model tackles the complex task of explaining multivariate time series. It introduces counterfactual samples to build uninformative perturbations while preserving the data distribution through contrastive learning. The approach also incorporates sample-specific sparse gates that generate binary-skewed yet smooth masks, which integrate temporal trends and select salient features parsimoniously. Empirical studies on synthetic and real-world datasets demonstrate a substantial improvement in explanation quality for time series data, outperforming state-of-the-art models. |
| Low | GrooveSquid.com (original content) | The ContraLSP model helps us better understand complex patterns in time series data by identifying important locations in the series and matching temporal trends. By using counterfactual samples and contrastive learning, the model can handle distribution shifts in heterogeneous datasets. This means we can get a more accurate picture of what is happening in the data and why. |
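To make the idea of masked perturbations concrete, here is a minimal sketch of the general recipe the summaries describe: a near-binary but smooth mask selects salient time steps, and unselected positions are blended toward a counterfactual baseline. All function names, shapes, and the clipped-Gaussian gate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sparse_gate_mask(mu, sigma=0.5, rng=None):
    """Stochastic sparse gate: shift gate parameters by Gaussian noise,
    then clip to [0, 1]. Clipping pushes many entries to exactly 0 or 1,
    giving a binary-skewed yet smooth mask. (Illustrative, not the
    paper's exact gate.)"""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, sigma, size=mu.shape)
    return np.clip(mu + noise, 0.0, 1.0)

def perturb(x, counterfactual, mask):
    """Blend the input with a counterfactual according to the mask:
    salient positions (mask near 1) keep x; the rest drift toward the
    counterfactual, yielding an uninformative perturbation there."""
    return mask * x + (1.0 - mask) * counterfactual

# Toy multivariate series: 20 time steps, 3 features.
x = np.random.default_rng(1).normal(size=(20, 3))
counterfactual = np.zeros_like(x)   # a simple uninformative baseline
mu = np.full(x.shape, 0.5)          # gate parameters; learned in practice
mask = sparse_gate_mask(mu)
x_pert = perturb(x, counterfactual, mask)
```

In the actual method, the gate parameters would be optimized per sample so that the perturbed series changes the model's prediction as little (or as much) as desired, with a sparsity penalty keeping the mask parsimonious.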
Keywords
- Artificial intelligence
- Time series