Disentangled and Self-Explainable Node Representation Learning

by Simone Piaggesi, André Panisson, Megha Khosla

First submitted to arXiv on: 28 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper introduces DiSeNE (Disentangled and Self-Explainable Node Embedding), a framework that generates self-explainable node representations for graph data in an unsupervised manner. While previous efforts have focused on explaining the decisions of graph models, the interpretability of unsupervised node embeddings remains underexplored. The proposed method employs disentangled representation learning to produce dimension-wise interpretable embeddings, where each dimension is aligned with a distinct topological structure of the graph. To achieve this, the authors formalize novel desiderata for disentangled and interpretable embeddings, which drive new objective functions that simultaneously optimize for interpretability and disentanglement. The framework also includes several new metrics to evaluate representation quality and human interpretability. Extensive experiments across multiple benchmark datasets demonstrate the effectiveness of DiSeNE.
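To make dimension-wise interpretability concrete, below is a minimal, hypothetical sketch in PyTorch of what such an objective might look like. This is not the paper's actual loss: the per-dimension link-prediction term, the covariance-based disentanglement penalty, the function names, and the trade-off weight `lam` are all illustrative assumptions.

```python
import torch

# Hypothetical sketch of a DiSeNE-style objective (NOT the authors' exact
# formulation): learn node embeddings where each dimension should
# (a) explain graph edges on its own (interpretability), and
# (b) overlap little with the other dimensions (disentanglement).

def dimension_wise_scores(z_u: torch.Tensor, z_v: torch.Tensor) -> torch.Tensor:
    # One score per embedding dimension: dimension k "explains" the node
    # pair (u, v) through the product z_u[:, k] * z_v[:, k].
    return z_u * z_v  # shape: (batch, dim)

def toy_disene_loss(z_u, z_v, z_neg, lam=0.1):
    # Interpretability term: each dimension, on its own, should score true
    # edges (u, v) above negatively sampled pairs (u, neg).
    pos = torch.sigmoid(dimension_wise_scores(z_u, z_v))
    neg = torch.sigmoid(dimension_wise_scores(z_u, z_neg))
    interpretability = -(torch.log(pos + 1e-9).mean()
                         + torch.log(1.0 - neg + 1e-9).mean())

    # Disentanglement term: penalize off-diagonal covariance so that
    # different dimensions capture distinct topological structures.
    z = torch.cat([z_u, z_v], dim=0)
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / max(z.shape[0] - 1, 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    disentanglement = (off_diag ** 2).sum()

    return interpretability + lam * disentanglement  # lam: trade-off weight
```

The sketch only conveys the two competing pressures the abstract describes; the paper formalizes its desiderata, objectives, and evaluation metrics more carefully.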
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps us understand how computers can learn from graphs, like social networks or websites. Right now, we don't really know why these computer models make the decisions they do. The researchers created a new way to make these models more transparent by breaking complex information down into smaller pieces that are easier to understand. They developed a method called DiSeNE, which makes it possible for humans to see what's going on inside these models. This is important because it can help us build better models and use them in more responsible ways.

Keywords

» Artificial intelligence  » Embedding  » Representation learning  » Unsupervised