


Towards Unbiased Evaluation of Time-series Anomaly Detector

by Debarpan Bhattacharya, Sumanta Mukherjee, Chandramouli Kamanchi, Vijay Ekambaram, Arindam Jati, Pankaj Dayama

First submitted to arXiv on: 19 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Applications (stat.AP); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a new evaluation protocol for time series anomaly detection (TSAD), which is critical in domains such as detecting seismic activity, spotting sensor failures, and predicting stock market crashes. The standard F1-score is ill-suited to time series data because of the dissociation between individual time points and anomalous events: a single event typically spans many points. To bridge this gap, point adjustments are usually applied to a detector's predictions before the F1-score is computed, but these heuristic adjustments are biased towards true-positive detection, leading to overestimated detector performance. The authors propose an alternative adjustment protocol, Balanced point adjustment (BA), which provides fairness guarantees grounded in axiomatic definitions of TSAD evaluation.
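The bias described above can be seen concretely with a small sketch. The code below is an illustrative implementation of the conventional point adjustment (PA) heuristic that the paper critiques, not the authors' proposed BA protocol: if a detector flags even one point inside a ground-truth anomaly segment, PA credits it with detecting the entire segment, which can inflate the F1-score dramatically.

```python
def f1_score(y_true, y_pred):
    """Plain point-wise F1 over binary label sequences."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def point_adjust(y_true, y_pred):
    """Conventional PA: one hit anywhere in a true anomaly segment
    marks the whole segment as detected."""
    adjusted = list(y_pred)
    n, i = len(y_true), 0
    while i < n:
        if y_true[i] == 1:
            j = i
            while j < n and y_true[j] == 1:  # find end of this segment
                j += 1
            if any(adjusted[i:j]):           # any hit in the segment...
                adjusted[i:j] = [1] * (j - i)  # ...credits every point
            i = j
        else:
            i += 1
    return adjusted

# One anomaly segment of length 6; the detector flags a single point of it.
y_true = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]

raw_f1 = f1_score(y_true, y_pred)
pa_f1 = f1_score(y_true, point_adjust(y_true, y_pred))
print(round(raw_f1, 3), round(pa_f1, 3))  # prints: 0.286 1.0
```

Detecting one point out of six yields a raw F1 of about 0.29, but after point adjustment the same detector scores a perfect 1.0, which is the overestimation the paper's BA protocol is designed to correct.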
Low Difficulty Summary (written by GrooveSquid.com, original content)
This research paper helps us judge more fairly how well detectors find unusual patterns in time series data. Time series anomaly detection is important for many applications, like spotting earthquakes or predicting stock market crashes. Right now, we use the F1-score to measure how well detectors do, but this score doesn't work well for time series because a single unusual event stretches over many time points. To fix this, people usually adjust a detector's predictions before calculating the F1-score, but these adjustments are biased towards counting true positives, so they can make detectors look better than they really are. This paper proposes a new way to make these adjustments that is fairer and gives a more accurate picture of detector performance.

Keywords

* Artificial intelligence
* Anomaly detection
* F1 score
* Time series