Summary of A Self-matching Training Method with Annotation Embedding Models for Ontology Subsumption Prediction, by Yukihiro Shiraishi et al.
A Self-matching Training Method with Annotation Embedding Models for Ontology Subsumption Prediction
by Yukihiro Shiraishi, Ken Kaneiwa
First submitted to arXiv on: 26 Feb 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Computation and Language (cs.CL); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper introduces a novel approach to ontology completion: two ontology embedding models, Inverted-index Matrix Embedding (InME) and Co-occurrence Matrix Embedding (CoME), which capture global and local information, respectively, from annotation axioms. A self-matching training method makes concept subsumption prediction more robust when dealing with similar and isolated entities. Experiments on three ontologies, GO, FoodOn, and HeLiS, show that the proposed approach outperforms existing methods.
Low | GrooveSquid.com (original content) | This research creates a way to understand how things relate to each other in a big database of information. The team develops two new tools that help with this understanding by looking at how words are used together in the database. These tools can make predictions about which group something belongs to, even if it is similar to or isolated from other groups. This is useful for making sense of big databases and can be applied to many different areas, such as biology and food.
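To make the two matrix types concrete: the summary above mentions an inverted-index matrix (a global view of which entities share an annotation word) and a co-occurrence matrix (a local view of which words appear together within one entity's annotation). The toy sketch below illustrates those two constructions on made-up annotation data; the entity names, word lists, and binary/count weighting are all illustrative assumptions, not the paper's exact InME/CoME formulation.

```python
# Toy sketch of the two matrix types behind InME and CoME.
# The annotation data and the simple binary/count weighting are
# illustrative assumptions, not the paper's actual construction.
import numpy as np

# Hypothetical annotation words for three ontology entities.
annotations = {
    "apple": ["edible", "fruit", "plant"],
    "banana": ["edible", "fruit"],
    "fern": ["plant"],
}

entities = sorted(annotations)                              # ['apple', 'banana', 'fern']
vocab = sorted({w for ws in annotations.values() for w in ws})
w_idx = {w: i for i, w in enumerate(vocab)}

# Inverted-index matrix: rows = words, columns = entities; an entry is 1
# if the word occurs in that entity's annotation. Each word's row shows
# which entities share it across the whole ontology (global information).
inv = np.zeros((len(vocab), len(entities)), dtype=int)
for j, e in enumerate(entities):
    for w in annotations[e]:
        inv[w_idx[w], j] = 1

# Co-occurrence matrix: rows = columns = words; an entry counts how often
# two distinct words appear together within a single entity's annotation
# (local information).
co = np.zeros((len(vocab), len(vocab)), dtype=int)
for ws in annotations.values():
    for a in ws:
        for b in ws:
            if a != b:
                co[w_idx[a], w_idx[b]] += 1

print(inv)
print(co)
```

Rows of these matrices (optionally reweighted and reduced in dimension) can then serve as word or entity embeddings for a downstream subsumption classifier.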
Keywords
* Artificial intelligence
* Embedding