


Untargeted Adversarial Attack on Knowledge Graph Embeddings

by Tianzhe Zhao, Jiaoyan Chen, Yanchi Ru, Qika Lin, Yuxia Geng, Jun Liu

First submitted to arXiv on: 8 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper explores the vulnerabilities of knowledge graph embedding (KGE) methods when they are trained on low-quality knowledge graphs, which are common in real-world applications. The authors introduce untargeted attacks that aim to reduce the global performance of KGE models on a set of unknown test triples, rather than targeting specific predictions as in previous studies. To enhance attack efficiency, they develop rule-based strategies that learn logic rules and apply them to score triple importance or to delete important triples. They also investigate adversarial addition attacks that corrupt the learned rules and use them to generate negative triples as perturbations (a hedged code sketch of both ideas follows the summaries below). The authors conduct extensive experiments on two datasets using three representative KGE methods, demonstrating the effectiveness of their proposed untargeted attacks in diminishing link prediction results. Interestingly, they find that different KGE methods exhibit varying robustness to untargeted attacks, with some methods being more susceptible to certain types of attacks than others.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This research paper looks at how well artificial intelligence (AI) systems called knowledge graph embeddings (KGE) perform when working with real-world data. Sometimes this data is not very good or accurate, which can hurt the AI's performance. The researchers developed new ways to test these KGE systems and found that they are vulnerable to certain kinds of attacks that make them perform worse. They also discovered that different KGE methods have different levels of resistance to these attacks. Overall, this paper helps us understand how to improve the accuracy and robustness of AI systems in real-world applications.
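
To make the deletion and addition ideas in the medium summary more concrete, here is a minimal Python sketch. It assumes a toy knowledge graph of (head, relation, tail) triples and a single hand-written chain rule; the rule format, confidence value, and importance-scoring heuristic are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged, minimal sketch of the two untargeted attack ideas summarized above:
# (1) rule-guided adversarial deletion, scoring each triple by how much it
# supports high-confidence rule groundings, and (2) adversarial addition,
# corrupting a learned rule to generate negative (perturbation) triples.
# The toy graph, the Rule structure, and the scoring heuristic are assumptions.
from collections import defaultdict, namedtuple

# Knowledge graph as a set of (head, relation, tail) triples.
kg = {
    ("alice", "worksFor", "acme"),
    ("acme", "locatedIn", "london"),
    ("alice", "livesIn", "london"),
    ("bob", "worksFor", "acme"),
}

# A learned length-2 chain rule: r1(x, y) AND r2(y, z) => r_head(x, z).
Rule = namedtuple("Rule", ["r1", "r2", "r_head", "confidence"])
rules = [Rule("worksFor", "locatedIn", "livesIn", 0.8)]

def groundings(rule, graph):
    """Yield (confidence, triples) for each grounding of `rule` whose
    conclusion is actually present in `graph`."""
    for (h1, rel1, t1) in graph:
        if rel1 != rule.r1:
            continue
        for (h2, rel2, t2) in graph:
            if rel2 != rule.r2 or h2 != t1:
                continue
            inferred = (h1, rule.r_head, t2)
            if inferred in graph:
                yield rule.confidence, [(h1, rel1, t1), (h2, rel2, t2), inferred]

# Adversarial deletion: score triples by the total confidence of the rule
# groundings they participate in, then remove the top-k "important" triples.
score = defaultdict(float)
for rule in rules:
    for conf, triples in groundings(rule, kg):
        for t in triples:
            score[t] += conf

k = 1
to_delete = sorted(score, key=score.get, reverse=True)[:k]
poisoned_kg = kg - set(to_delete)

# Adversarial addition: corrupt the head relation of a learned rule and add
# the spurious conclusions of its body groundings as perturbation triples.
def corrupted_additions(rule, graph, wrong_head):
    for (h1, rel1, t1) in graph:
        if rel1 != rule.r1:
            continue
        for (h2, rel2, t2) in graph:
            if rel2 == rule.r2 and h2 == t1:
                candidate = (h1, wrong_head, t2)
                if candidate not in graph:
                    yield candidate

additions = set(corrupted_additions(rules[0], kg, wrong_head="worksFor"))
print("deleted:", to_delete)
print("added:", additions)
```

In the paper, the rules are learned from the knowledge graph itself and the attack operates under a perturbation budget; this sketch only mirrors the overall flow of scoring, deleting, and generating corrupted additions.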

Keywords

» Artificial intelligence  » Embedding  » Knowledge graph