
Summary of WL meet VC, by Christopher Morris et al.


WL meet VC

by Christopher Morris, Floris Geerts, Jan Tönshoff, Martin Grohe

First submitted to arXiv on: 26 Jan 2023

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Discrete Mathematics (cs.DM); Data Structures and Algorithms (cs.DS); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Graph neural networks (GNNs) have been extensively studied for their expressive power, which is closely linked to the 1-dimensional Weisfeiler-Leman algorithm (1-WL). While this connection has led to significant advances in understanding and enhancing GNNs’ expressive power, it offers no insight into their generalization performance. This paper studies GNNs’ generalization ability through Vapnik-Chervonenkis (VC) dimension theory in two settings, focusing on graph-level prediction. We show that the bitlength of GNNs’ weights tightly bounds their VC dimension, and we derive an upper bound on GNNs’ VC dimension using the number of colors produced by the 1-WL. Our empirical study confirms the validity of our theoretical findings.
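The 1-WL algorithm mentioned above iteratively refines node colors: each node's new color is determined by its current color together with the multiset of its neighbors' colors, and the refinement stops once the partition stabilizes. The number of distinct colors it produces is the quantity the paper's upper bound is stated in terms of. Below is a minimal sketch of that refinement (the adjacency-list representation, function name, and integer color ids are illustrative choices, not from the paper):

```python
def wl_colors(adj, num_rounds=None):
    """1-dimensional Weisfeiler-Leman color refinement.

    adj: adjacency list, e.g. {0: [1], 1: [0, 2], 2: [1]}
    Returns the node coloring once the refinement stabilizes
    (or after num_rounds iterations, if given).
    """
    colors = {v: 0 for v in adj}          # uniform initial coloring
    rounds = num_rounds if num_rounds is not None else len(adj)
    for _ in range(rounds):
        # Signature = (own color, sorted multiset of neighbor colors)
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress each distinct signature to a small integer color id
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:          # partition is stable; stop
            break
        colors = new_colors
    return colors

# A triangle is color-uniform (1 color class), while a 3-node path
# splits into endpoint and midpoint classes (2 color classes).
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(len(set(wl_colors(triangle).values())))  # → 1
print(len(set(wl_colors(path).values())))      # → 2
```

Counting the distinct colors this procedure produces on a graph class, as in the example above, is the kind of quantity the paper uses to bound GNNs' VC dimension.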
Low Difficulty Summary (written by GrooveSquid.com, original content)
GNNs are a type of machine learning model that can learn patterns in graph data. In this paper, scientists studied how well these models do at making predictions about new graphs they haven’t seen before. They used a mathematical idea called Vapnik-Chervonenkis dimension to understand what makes GNNs good or bad at generalizing. The researchers found that the number of colors used by another math concept called the 1-dimensional Weisfeiler-Leman algorithm can help predict how well GNNs will do. They tested their ideas and saw that they were correct.

Keywords

  • Artificial intelligence
  • Generalization
  • Machine learning