
Adversarial Attacks and Defenses in Multivariate Time-Series Forecasting for Smart and Connected Infrastructures

by Pooja Krishan, Rohan Mohapatra, Saptarshi Sengupta

First submitted to arXiv on 27 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Performance (cs.PF)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com original content)
The paper explores the impact of adversarial attacks on multivariate time-series forecasting and proposes methods to counter them. It demonstrates the feasibility of untargeted white-box attacks, such as Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM), which can mislead models with high confidence. The authors also develop robust models through adversarial training and model hardening, showcasing their transferability across different datasets, including electricity data and 10-year real-world data for predicting time-to-failure of hard disks.
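The FGSM and BIM attacks named above can be sketched with a toy linear "forecaster" whose input gradient has a closed form. This is purely illustrative; the model, data, and epsilon values here are made-up assumptions, not the paper's setup:

```python
import numpy as np

def fgsm_attack(x, grad, eps=0.1):
    """Fast Gradient Sign Method: one step of size eps in the
    direction of the sign of the loss gradient w.r.t. the input."""
    return x + eps * np.sign(grad)

def bim_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=5):
    """Basic Iterative Method: repeated small FGSM steps, with the
    total perturbation clipped to the eps-ball around the input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy stand-in for a forecaster: linear model y_hat = w @ x with
# squared-error loss, so the input gradient is known exactly.
w = np.array([0.5, -0.3, 0.8])
x = np.array([1.0, 2.0, 0.5])
y_true = 1.0

def loss_grad(x_in):
    # d/dx (w @ x - y)^2 = 2 * (w @ x - y) * w
    return 2.0 * (w @ x_in - y_true) * w

x_fgsm = fgsm_attack(x, loss_grad(x), eps=0.1)
x_bim = bim_attack(x, loss_grad, eps=0.1, alpha=0.02, steps=5)
```

Both are untargeted white-box attacks in the sense used above: they assume access to the model's gradients and simply push the loss upward within a small perturbation budget, rather than steering the output toward a specific target.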
Low Difficulty Summary (GrooveSquid.com original content)
The paper shows how to trick deep learning models into making wrong predictions. It uses special kinds of attacks that can make the model think something is true when it’s not. The goal is to understand how these attacks work and find ways to stop them. The authors try out different methods, like adversarial training, to make the models more secure.
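Adversarial training, the defense mentioned in these summaries, amounts to retraining the model on attacked inputs. A minimal sketch with a toy linear model and an FGSM-style perturbation (all names, sizes, and hyperparameters are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset standing in for a forecasting task: linear signal + noise.
true_w = np.array([0.5, -0.3, 0.8])
X = rng.normal(size=(200, 3))
y = X @ true_w + 0.01 * rng.normal(size=200)

def fgsm_batch(w, X, y, eps=0.1):
    """FGSM perturbation of a whole batch against the current model,
    using d/dx (w @ x - y)^2 = 2 * (w @ x - y) * w."""
    resid = X @ w - y
    return X + eps * np.sign(2.0 * resid[:, None] * w[None, :])

# Adversarial training loop: at each step, attack the batch with FGSM
# against the current weights, then take a gradient-descent step on
# the mixed clean + adversarial squared-error loss.
w = np.zeros(3)
lr = 0.01
for _ in range(300):
    X_mix = np.vstack([X, fgsm_batch(w, X, y)])
    y_mix = np.concatenate([y, y])
    grad_w = 2.0 * X_mix.T @ (X_mix @ w - y_mix) / len(y_mix)
    w -= lr * grad_w
```

Training on the mixture of clean and perturbed examples is what "hardens" the model: it is forced to fit targets even when inputs sit anywhere in the attack's perturbation ball.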

Keywords

» Artificial intelligence  » Deep learning  » Time series  » Transferability