
Summary of FIMBA: Evaluating the Robustness of AI in Genomics via Feature Importance Adversarial Attacks, by Heorhii Skovorodnikov et al.


FIMBA: Evaluating the Robustness of AI in Genomics via Feature Importance Adversarial Attacks

by Heorhii Skovorodnikov, Hoda Alkhzaimi

First submitted to arXiv on: 19 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Genomics (q-bio.GN)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
As AI algorithms increasingly influence biotechnical applications and genomics sequencing, their reliability is crucial for decision-making in drug discovery and clinical outcomes. This study reveals that AI models trained on public genomics datasets are vulnerable to attacks that compromise their robustness. The authors develop an attack method that mimics real data while confusing the model's decision-making process, significantly degrading its performance. To further undermine model robustness, they generate poisoned data using a variational-autoencoder-based model. The results show a decline in model accuracy, along with an increase in false positives and false negatives. Additionally, spectral analysis of the adversarial samples offers insights for developing countermeasures against such attacks.
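
The summary does not spell out how the feature-importance attack works, so the following is only a minimal sketch of what such an attack could look like. It assumes a scikit-learn classifier, permutation importance as the ranking method, and a simple bounded nudge of the top-ranked features toward the opposite class's mean; none of these choices are taken from the paper itself.

```python
# Minimal sketch of a feature-importance-guided adversarial attack.
# Assumptions (not from the paper): a scikit-learn classifier, permutation
# importance for ranking, and small bounded perturbations of the
# top-ranked features toward the opposite class's mean.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))            # stand-in for genomic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels

model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank features by how much shuffling each one hurts accuracy.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top_k = np.argsort(imp.importances_mean)[::-1][:5]

def attack(x, label, eps=0.5):
    """Nudge the most important features toward the other class's mean."""
    x_adv = x.copy()
    other_mean = X[y != label].mean(axis=0)
    x_adv[top_k] += eps * np.sign(other_mean[top_k] - x_adv[top_k])
    return x_adv

x0, y0 = X[0], y[0]
x_adv = attack(x0, y0)
print("clean pred:", model.predict(x0[None])[0],
      "adversarial pred:", model.predict(x_adv[None])[0])
```

Because the perturbation touches only a handful of highly important features and stays small, the adversarial sample remains statistically close to real data while still shifting the model's decision, which matches the behavior the summary describes.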
Low Difficulty Summary (written by GrooveSquid.com; original content)
AI algorithms are being used more and more in biotechnical applications and genomics sequencing. This study shows that these algorithms can be tricked into making bad decisions with fake data. The researchers created an attack method that makes the algorithm think fake data is real, causing it to make mistakes such as misidentifying samples or giving wrong answers. The study also shows how to create even more convincing fake data to confuse the algorithm further.
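
For the poisoned-data step, the summaries only say that a variational-autoencoder-based model is used. As a rough, hypothetical illustration of that idea, a small VAE can be trained on clean tabular features and then sampled from the prior to produce realistic-looking synthetic records, which are injected with deliberately wrong labels. The architecture and training details below are assumptions for the sketch, not the paper's actual model.

```python
# Rough illustration of VAE-based data poisoning (assumed details, not the
# paper's exact model): train a small VAE on clean tabular features, then
# decode prior samples into synthetic records and pair them with bad labels.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features=20, n_latent=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_latent)
        self.logvar = nn.Linear(64, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * noise.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def loss_fn(recon, x, mu, logvar):
    # Reconstruction error + KL divergence to the unit Gaussian prior.
    rec = ((recon - x) ** 2).sum()
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
    return rec + kld

X = torch.randn(500, 20)        # stand-in for clean genomic features
vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(200):            # short, full-batch toy training loop
    recon, mu, logvar = vae(X)
    loss = loss_fn(recon, X, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# Decode prior samples to get realistic-looking poisoned records, then
# attach deliberately wrong labels before injecting them into training data.
with torch.no_grad():
    poisoned_X = vae.dec(torch.randn(50, 4))
poisoned_y = torch.randint(0, 2, (50,))
```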

Keywords

  • Artificial intelligence
  • Variational autoencoder