Summary of Resilience in Knowledge Graph Embeddings, by Arnab Sharma et al.


Resilience in Knowledge Graph Embeddings

by Arnab Sharma, N’Dah Jean Kouagou, Axel-Cyrille Ngonga Ngomo

First submitted to arXiv on: 28 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper focuses on knowledge graphs, which are widely applied across domains. Large-scale knowledge graphs effectively represent structured knowledge, making them useful for machine learning techniques such as knowledge graph embedding (KGE) models. KGE models transform entities and relations into vectors (a minimal illustrative sketch of this idea follows the summaries below), but they often face challenges such as noise, missing information, and distribution shift. The existing literature has concentrated on adversarial attacks against KGE models, while other critical aspects of resilience remain largely unexplored. This paper provides a unified definition of resilience in machine learning, encompassing generalisation, performance consistency, distribution adaptation, and robustness. The authors perform a systematic survey to identify gaps in existing work on resilience in knowledge graphs and categorize the surveyed works according to the aspects of resilience they address. The results show that most existing works focus on robustness.
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about how we can make sure computers understand large amounts of information, like what’s in a library or on the internet. This information is organized into “knowledge graphs” that help computers answer questions and make recommendations. However, these knowledge graphs can be noisy or missing important details, which makes it hard for computers to use them correctly. The paper looks at how to make computers good at using these knowledge graphs, even when the information is messy or incomplete.
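
For readers unfamiliar with knowledge graph embeddings, the sketch below illustrates what “transforming entities and relations into vectors” means, using a TransE-style translational scoring function. The entity names, dimensionality, and random vectors are purely illustrative assumptions and are not taken from the paper, which surveys resilience rather than proposing a specific embedding model.

```python
# Minimal sketch of a translational KGE scoring function (TransE-style).
# Entity/relation names, the dimensionality, and the random vectors are
# hypothetical, chosen only to illustrate the idea of embedding a graph.
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimensionality (illustrative choice)

# Each entity and each relation is mapped to a vector.
entities = {name: rng.normal(size=dim) for name in ["Berlin", "Germany", "Paris"]}
relations = {name: rng.normal(size=dim) for name in ["capital_of"]}

def transe_score(head: str, relation: str, tail: str) -> float:
    """Score a triple (head, relation, tail); higher = more plausible under TransE."""
    h, r, t = entities[head], relations[relation], entities[tail]
    # TransE treats a relation as a translation: h + r should be close to t.
    return -float(np.linalg.norm(h + r - t))

# A trained model would rank the true triple above corrupted ones;
# with random vectors the scores here are meaningless and only show the API shape.
print(transe_score("Berlin", "capital_of", "Germany"))
print(transe_score("Paris", "capital_of", "Germany"))
```

In practice these vectors are learned from the graph’s triples, and the paper’s point is that such learned embeddings must stay reliable under noise, missing information, and distribution shift, not just under adversarial attacks.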

Keywords

» Artificial intelligence  » Embedding  » Knowledge graph  » Machine learning