Summary of LLM-Powered Ensemble Learning for Paper Source Tracing: A GPU-Free Approach, by Kunlong Chen et al.
LLM-Powered Ensemble Learning for Paper Source Tracing: A GPU-Free Approach
by Kunlong Chen, Junjun Wang, Zhaoqun Chen, Kunjin Chen, Yitian Chen
First submitted to arXiv on: 14 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The abstract describes the authors' participation in the KDD CUP 2024 paper source tracing competition, in which they placed third. The task was to identify the reference sources of given academic papers. Unlike most other teams, which fine-tuned pre-trained neural language models such as BERT or ChatGLM, the authors used closed-source large language models (LLMs) for zero-shot and few-shot reasoning, generating predicted reference sources directly from the provided papers without any GPU-based model training. The predictions were then refined through ensemble learning (see the sketch after this table). The authors' code is available on GitHub. |
| Low | GrooveSquid.com (original content) | This paper is about a team that took part in a competition to identify the original sources of academic papers. To do this, they used special computer models called large language models (LLMs), which are very good at understanding and processing text, and they did not need powerful hardware (GPUs) to train anything. Their approach was different from other teams' because instead of fine-tuning pre-trained models like BERT or ChatGLM, they asked the LLMs directly to make predictions. They then combined the results from several runs to get even better answers. |
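
The medium summary above outlines the core recipe: prompt closed-source LLMs in a zero-shot or few-shot fashion to pick out a paper's reference sources, then combine the answers with ensemble learning, all without GPU training. The sketch below is only a minimal illustration of that general idea, not the authors' code (which is on GitHub): `call_llm` is a hypothetical placeholder for a closed-source LLM API call, and the prompt wording, index parsing, and vote threshold are assumptions made for the example.

```python
import re
from collections import Counter


def build_prompt(paper_title, paper_abstract, references):
    """Assemble a zero-shot prompt asking an LLM to pick the reference
    sources (the most influential references) of a paper."""
    ref_list = "\n".join(f"[{i}] {r}" for i, r in enumerate(references))
    return (
        "Given the paper below, identify which of its references are the "
        "paper's source papers (the references it builds most directly on). "
        "Answer with the reference indices only.\n\n"
        f"Title: {paper_title}\n"
        f"Abstract: {paper_abstract}\n\n"
        f"References:\n{ref_list}"
    )


def call_llm(prompt, model):
    """Hypothetical placeholder for a call to a closed-source LLM API.
    It should return the model's raw text answer, e.g. '[2], [7]'."""
    raise NotImplementedError


def parse_indices(answer, n_refs):
    """Extract valid reference indices from the model's free-text answer."""
    return {int(i) for i in re.findall(r"\d+", answer) if int(i) < n_refs}


def ensemble_predict(paper_title, paper_abstract, references, models, min_votes=2):
    """Query several models (or prompt variants) and keep the references
    that at least `min_votes` of them agree on: a simple vote-based
    ensemble, with no GPU training or fine-tuning involved."""
    prompt = build_prompt(paper_title, paper_abstract, references)
    votes = Counter()
    for model in models:
        answer = call_llm(prompt, model)
        votes.update(parse_indices(answer, len(references)))
    return sorted(i for i, v in votes.items() if v >= min_votes)
```

A vote threshold across several models or prompt variants is one simple way to realize the "ensemble learning" step described above; the authors' actual prompts and ensembling strategy may differ and are available in their GitHub repository.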
Keywords
» Artificial intelligence » BERT » Few-shot » Zero-shot