Explaining Graph Neural Networks with Large Language Models: A Counterfactual Perspective for Molecular Property Prediction

by Yinhan He, Zaiyi Zheng, Patrick Soga, Yaochen Zhu, Yushun Dong, Jundong Li

First submitted to arXiv on: 19 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL); Biomolecules (q-bio.BM)

Abstract of paper · PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed LLM-GCE method improves the transparency of Graph Neural Networks (GNNs) in molecular property prediction tasks such as toxicity analysis. Leveraging large language models (LLMs), LLM-GCE generates counterfactual graph topologies from text pairs and incorporates a dynamic feedback module to mitigate LLM hallucination. In extensive experiments, the approach outperforms existing graph counterfactual explanation methods.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Graph Neural Networks (GNNs) are good at predicting molecular properties like toxicity, but their black-box nature is a concern for high-stakes decisions. Graph Counterfactual Explanation (GCE) methods have emerged to improve transparency, but current GCE methods do not take domain-specific knowledge into account, so their outputs can be hard to interpret. The new LLM-GCE method uses an autoencoder and a feedback module to generate counterfactual graph topologies from text pairs, and it shows better results.

Keywords

» Artificial intelligence  » Autoencoder  » Hallucination