
Polynormer: Polynomial-Expressive Graph Transformer in Linear Time

by Chenhui Deng, Zichao Yue, Zhiru Zhang

First submitted to arXiv on: 2 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com original content)
Polynormer is a novel graph transformer (GT) model designed to balance expressivity and scalability for large graphs. While traditional GTs have quadratic complexity, Polynormer boasts linear complexity, making it suitable for processing massive datasets. The model integrates a polynomial-expressive base with local and global equivariant attention mechanisms, enabling the learning of high-degree polynomials that adapt to graph topology and node features. Polynormer outperforms state-of-the-art GNN and GT baselines on 13 homophilic and heterophilic datasets, including large graphs with millions of nodes, even without using nonlinear activation functions.

Low Difficulty Summary (GrooveSquid.com original content)
Imagine a way for computers to understand complex networks like social media or the internet. This paper presents a new model called Polynormer that can learn about these networks quickly and accurately. Unlike other models, Polynormer can handle very large networks with millions of users or nodes. It does this by using special math tricks to make it efficient. The results show that Polynormer is better than existing models at predicting what will happen in these networks.
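The summaries above note that Polynormer replaces quadratic-cost attention with a linear-time alternative. As a rough illustration of how attention over N nodes can be computed in O(N) rather than O(N²), here is a generic kernelized-attention sketch; this is not Polynormer's actual equivariant attention, and the feature map `phi` (elu + 1) is an assumption borrowed from the broader linear-attention literature.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized attention: cost is O(N * d^2) instead of O(N^2 * d).

    Q, K: (N, d) query/key matrices; V: (N, d_v) value matrix.
    Illustrative sketch only, not Polynormer's attention mechanism.
    """
    # phi(x) = elu(x) + 1 keeps features strictly positive,
    # so the normalizer below never divides by zero.
    def phi(x):
        return np.where(x > 0, x + 1.0, np.exp(x))

    Qp, Kp = phi(Q), phi(K)
    # Aggregate keys and values once over all N nodes ...
    KV = Kp.T @ V                 # (d, d_v)
    # ... then each query reads from that fixed-size summary.
    Z = Qp @ Kp.sum(axis=0)       # (N,) per-query normalizer
    return (Qp @ KV) / Z[:, None]
```

Because the N-by-N attention matrix `phi(Q) @ phi(K).T` is never materialized, memory and compute grow linearly with the number of nodes, which is what lets models of this family scale to graphs with millions of nodes.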

Keywords

* Artificial intelligence  * Attention  * GNN  * Transformer