Summary of Can Large Language Models Act as Ensembler for Multi-GNNs?, by Hanqi Duan et al.
Can Large Language Models Act as Ensembler for Multi-GNNs?
by Hanqi Duan, Yao Cheng, Jianxiang Yu, Xiang Li
First submitted to arXiv on: 22 Oct 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Graph Neural Networks (GNNs) have shown great promise in learning from graph-structured data. However, they cannot exploit the semantics of rich textual node attributes, which limits their effectiveness. Moreover, no existing GNN model consistently outperforms the others across diverse datasets. To address this, the paper proposes LensGNN, a model that combines multiple GNNs with Large Language Models (LLMs). LensGNN first aligns the representations of the different GNNs into a common space, then uses LoRA fine-tuning to inject graph tokens together with textual information into an LLM (a minimal code sketch follows this table). This lets LensGNN ensemble multiple GNNs while leveraging the strengths of LLMs. Experiments show that LensGNN outperforms existing models, advancing text-attributed graph ensemble learning with a robust way to integrate semantic and structural information. |
Low | GrooveSquid.com (original content) | Imagine trying to learn from a bunch of connected pieces of information, like how different people are connected on social media. Graph Neural Networks (GNNs) can do this, but they’re limited because they don’t understand the meaning behind each piece of information. Researchers found that different GNN models didn’t consistently perform well across different datasets. To solve this problem, they created a new model called LensGNN. This model takes multiple GNNs and combines them with Large Language Models (LLMs), which are really good at understanding text. The result is a more powerful model that can learn from both the connections between pieces of information and the meaning behind each piece. |
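To make the ensembling recipe in the medium summary concrete, here is a minimal, self-contained PyTorch sketch of the idea as described there. Everything in it is an illustrative assumption, not the authors' implementation: the class name `LensGNNSketch`, the per-GNN linear projectors, and the tiny transformer encoder standing in for a LoRA-tuned LLM are all hypothetical. The sketch shows the core pattern: precomputed embeddings from several GNNs are projected into a shared space, treated as "graph tokens", and prepended to the node's text tokens before the language-model backbone.

```python
# Schematic sketch of the multi-GNN-to-LLM ensembling idea (hypothetical
# names and sizes; not the paper's code). Per-GNN projectors align each
# GNN's embedding into one space; the aligned embeddings become "graph
# tokens" prepended to text token embeddings before a stand-in backbone.
import torch
import torch.nn as nn


class LensGNNSketch(nn.Module):
    def __init__(self, gnn_dims, llm_dim, vocab_size, num_classes):
        super().__init__()
        # One linear projector per GNN maps its embedding into the LLM space.
        self.projectors = nn.ModuleList(nn.Linear(d, llm_dim) for d in gnn_dims)
        self.token_emb = nn.Embedding(vocab_size, llm_dim)
        # A small transformer encoder stands in for the (LoRA-tuned) LLM.
        layer = nn.TransformerEncoderLayer(d_model=llm_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(llm_dim, num_classes)

    def forward(self, gnn_embeddings, text_token_ids):
        # gnn_embeddings: list of (batch, d_i) tensors, one per GNN.
        # text_token_ids: (batch, seq_len) token ids of the node's text.
        graph_tokens = torch.stack(
            [proj(e) for proj, e in zip(self.projectors, gnn_embeddings)],
            dim=1,
        )  # (batch, num_gnns, llm_dim): one graph token per GNN
        text_tokens = self.token_emb(text_token_ids)
        seq = torch.cat([graph_tokens, text_tokens], dim=1)
        hidden = self.backbone(seq)
        # Classify the node from the first graph token's final hidden state.
        return self.classifier(hidden[:, 0])


# Toy usage: two GNNs with different embedding sizes, a batch of 3 nodes.
model = LensGNNSketch(gnn_dims=[64, 128], llm_dim=32, vocab_size=1000, num_classes=7)
embs = [torch.randn(3, 64), torch.randn(3, 128)]
ids = torch.randint(0, 1000, (3, 16))
print(model(embs, ids).shape)  # torch.Size([3, 7])
```

In the actual paper, the backbone would be a pretrained LLM fine-tuned with LoRA adapters rather than the tiny encoder used here; the sketch only illustrates how multiple GNN outputs can be fed to a single language-model ensembler.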
Keywords
» Artificial intelligence » Fine-tuning » GNN » LoRA