Summary of "Predictive Multiplicity of Knowledge Graph Embeddings in Link Prediction", by Yuqicheng Zhu et al.
Predictive Multiplicity of Knowledge Graph Embeddings in Link Prediction
by Yuqicheng Zhu, Nico Potyka, Mojtaba Nayyeri, Bo Xiong, Yunjie He, Evgeny Kharlamov, Steffen Staab
First submitted to arXiv on: 15 Aug 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates the phenomenon of predictive multiplicity in knowledge graph embedding (KGE) models, which are used for link prediction on knowledge graphs. Although multiple KGE methods perform similarly well, they can provide conflicting predictions for unseen queries. The authors define predictive multiplicity and introduce evaluation metrics to measure its impact on commonly used benchmark datasets. Their empirical study reveals significant predictive multiplicity, with a large proportion of testing queries exhibiting conflicting predictions. To address this issue, the authors propose voting methods from social choice theory, which significantly reduce conflicts in their experiments. |
| Low | GrooveSquid.com (original content) | This paper is about how artificial intelligence models can give different answers to the same question, even if they are all good at answering questions. The authors call this "predictive multiplicity". They study how common AI models for searching through huge collections of information behave when asked new questions that haven't been seen before. They find that these models often disagree with each other. To solve this problem, the authors suggest using special voting rules to combine the predictions from different models. |
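The voting idea can be sketched as a simple plurality vote over each model's top-ranked answer for a query. This is a minimal illustration of the general approach, not the paper's exact aggregation rules; the model outputs and entity names below are hypothetical.

```python
from collections import Counter

def plurality_vote(predictions):
    """Return the answer predicted by the most models.

    `predictions` holds each model's top-ranked entity for the same
    link-prediction query. Ties are broken by first appearance order.
    """
    counts = Counter(predictions)
    best = max(counts.values())
    for p in predictions:  # preserve input order on ties
        if counts[p] == best:
            return p

# Three hypothetical KGE models answer the query (Berlin, capitalOf, ?):
models_top1 = ["Germany", "Germany", "Prussia"]
print(plurality_vote(models_top1))  # -> Germany
```

Under predictive multiplicity, the individual models disagree, but a voting rule like this yields a single consensus prediction instead of an arbitrary choice among equally accurate models.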
Keywords
* Artificial intelligence * Embedding * Knowledge graph