
Summary of eXpath: Explaining Knowledge Graph Link Prediction with Ontological Closed Path Rules, by Ye Sun et al.


by Ye Sun, Lei Shi, Yongxin Tong

First submitted to arXiv on: 6 Dec 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Databases (cs.DB); Information Retrieval (cs.IR); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Link prediction (LP) is a crucial task for knowledge graph (KG) completion, but it often lacks interpretability. Existing methods for explaining embedding-based LP models are limited to local explanations on the KG and fail to provide human-interpretable semantics. The proposed eXpath framework integrates relation paths with ontological closed-path rules to enhance both the efficiency and effectiveness of LP interpretation. eXpath explanations can also be combined with other single-link explanation approaches to achieve a better overall solution. Experimental results demonstrate that introducing eXpath improves explanation quality by 20% on two key metrics while reducing the required explanation time by 61.4%, outperforming the best existing method. Case studies show that eXpath provides more semantically meaningful explanations through path-based evidence.
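To make "closed-path rule" concrete, below is a minimal toy sketch (not the paper's actual algorithm) of how a relation path that closes into a predicted relation can serve as path-based evidence. The graph, entity and relation names, and the closed_paths helper are all hypothetical, invented for illustration.

```python
# Toy illustration of a closed-path rule on a tiny knowledge graph.
# A closed-path rule has the form r1(X,Z) ∧ r2(Z,Y) ⇒ r(X,Y): a relation
# path from X to Y that "closes" into the head relation r. Frameworks like
# eXpath use such rules as human-interpretable evidence for a predicted
# link; this sketch only shows the idea, not the paper's method.

from collections import defaultdict

# Hypothetical KG as (head, relation, tail) triples.
triples = [
    ("alice", "born_in", "paris"),
    ("paris", "city_of", "france"),
    ("alice", "nationality", "france"),
    ("bob", "born_in", "lyon"),
    ("lyon", "city_of", "france"),
]

# Index: head entity -> list of (relation, tail) outgoing edges.
out_edges = defaultdict(list)
for h, r, t in triples:
    out_edges[h].append((r, t))

def closed_paths(head, tail, max_len=2):
    """Enumerate relation paths of length <= max_len from head to tail."""
    paths = []

    def dfs(node, path):
        if len(path) > max_len:
            return
        if node == tail and path:
            paths.append(tuple(path))
        for rel, nxt in out_edges[node]:
            dfs(nxt, path + [rel])

    dfs(head, [])
    return paths

# Explain the predicted link (bob, nationality, france) with path evidence:
# the rule born_in(X,Z) ∧ city_of(Z,Y) ⇒ nationality(X,Y), which is also
# supported elsewhere in the graph by the known fact
# (alice, nationality, france).
for path in closed_paths("bob", "france"):
    print(" ∧ ".join(path), "⇒ nationality")
# prints: born_in ∧ city_of ⇒ nationality
```

In an explanation setting, such a path both justifies the prediction locally (bob was born in Lyon, which is a city of France) and carries ontological semantics that a purely embedding-level attribution cannot.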
Low Difficulty Summary (written by GrooveSquid.com, original content)
Link prediction is important for completing knowledge graphs, but it's hard to understand why a model makes certain predictions. eXpath is a new way to explain these predictions. It looks at the paths of relationships between things in the knowledge graph and uses rules based on those paths to make the explanations more meaningful. Tests show that this approach works well and can even be combined with other explanation methods to give better results.

Keywords

» Artificial intelligence  » Embedding  » Knowledge graph  » Semantics