Summary of GreeDy and CoDy: Counterfactual Explainers for Dynamic Graphs, by Zhan Qu et al.
GreeDy and CoDy: Counterfactual Explainers for Dynamic Graphs
by Zhan Qu, Daniel Gomm, Michael Färber
First submitted to arXiv on: 25 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract, available on arXiv. |
| Medium | GrooveSquid.com (original content) | This paper proposes two novel methods, GreeDy and CoDy, for generating counterfactual explanations of Temporal Graph Neural Networks (TGNNs), which are crucial for understanding the decisions these models make. Both methods treat explanation generation as a search problem, seeking changes to the input graph that flip the model’s prediction (a simplified sketch of this search appears below the table). GreeDy uses a simple greedy strategy, while CoDy employs a more sophisticated Monte Carlo Tree Search algorithm. Experimental results show that both methods are effective in generating clear explanations, with CoDy outperforming GreeDy and existing factual methods by up to 59% in success rate. |
| Low | GrooveSquid.com (original content) | This paper helps us understand how Temporal Graph Neural Networks make decisions by finding simple examples that show why a model made a certain choice. The authors propose two new methods for this, called GreeDy and CoDy. Both look for changes to the input data that would change the model’s decision, but they take different approaches: one is simple and quick, while the other is more complex and more effective. This could help us trust these models by making their decisions clearer. |
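To make the search framing above more concrete, here is a minimal, illustrative sketch of a greedy counterfactual search over past graph events, in the spirit of GreeDy as described in the medium summary. It is not the authors’ implementation: `model_predict`, `candidate_events`, and the other names are hypothetical placeholders for a trained TGNN’s scoring function and the set of past interaction events that may be removed from the input graph.

```python
# Illustrative sketch only, not the authors' implementation of GreeDy.
# `model_predict` and `candidate_events` are hypothetical placeholders for a
# trained TGNN's scoring function and the past events eligible for removal.
from typing import Callable, Hashable, List, Set


def greedy_counterfactual(
    model_predict: Callable[[Set[Hashable]], float],  # P(positive) after removing the given events
    candidate_events: List[Hashable],                 # past interaction events that may be removed
    threshold: float = 0.5,                           # model's decision boundary
    max_removals: int = 10,                           # budget on explanation size
) -> Set[Hashable]:
    """Greedily remove events until the model's prediction flips or the budget runs out."""
    removed: Set[Hashable] = set()
    originally_positive = model_predict(removed) >= threshold

    for _ in range(max_removals):
        best_event, best_score = None, None
        for event in candidate_events:
            if event in removed:
                continue
            score = model_predict(removed | {event})
            # Keep the single removal that pushes the score hardest toward the opposite class.
            if best_score is None or (
                score < best_score if originally_positive else score > best_score
            ):
                best_event, best_score = event, score
        if best_event is None:
            break  # no candidates left to try
        removed.add(best_event)
        if (best_score >= threshold) != originally_positive:
            return removed  # counterfactual found: removing these events flips the prediction
    return removed  # budget exhausted; the prediction may not have flipped
```

CoDy, as summarized above, replaces this one-step greedy choice with a Monte Carlo Tree Search over sequences of removals, which explores more of the search space at higher computational cost; that variant is not sketched here.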