
Summary of LLMs for Literature Review: Are We There Yet?, by Shubham Agarwal et al.


LLMs for Literature Review: Are we there yet?

by Shubham Agarwal, Gaurav Sahu, Abhay Puri, Issam H. Laradji, Krishnamurthy DJ Dvijotham, Jason Stanley, Laurent Charlin, Christopher Pal

First submitted to arXiv on: 15 Dec 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Digital Libraries (cs.DL); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (original content by GrooveSquid.com)
Recent Large Language Models (LLMs) have shown promise in assisting with literature review writing, particularly when the task is decomposed into retrieval and planning components. This study explores the zero-shot abilities of recent LLMs both in retrieving related work given a query abstract and in generating a literature review from the retrieved results. The authors introduce a two-step search strategy that first uses an LLM to extract keywords from the abstract and then queries an external knowledge base for relevant papers. They also propose a prompting-based re-ranking mechanism with attribution, which doubles normalized recall compared to naive search methods. For the generation phase, the authors suggest a two-step approach: the model first outlines a review plan and then executes the plan's steps to produce the actual review. The authors also release their evaluation protocol, designed for rolling use with newly released LLMs, to promote further research and development. Empirical results show that LLMs have promising potential for writing literature reviews when the task is decomposed into smaller components, and that the planning-based approach yields higher-quality reviews by minimizing hallucinated references.
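To make the described pipeline concrete, here is a minimal Python sketch of the kind of two-step retrieval, prompt-based re-ranking, and plan-then-write generation the summary outlines. It is not the authors' implementation: the model name, the prompts, and the use of the Semantic Scholar search API as the external knowledge base are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the pipeline described above:
# (1) an LLM extracts keywords from a query abstract, (2) an external knowledge
# base is searched for candidate papers, (3) a prompt-based re-ranker with
# attribution orders them, and (4) a review is generated by first drafting a
# plan and then executing it. Model choice, prompts, and the knowledge base
# (Semantic Scholar) are assumptions made for illustration only.
import requests
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"      # hypothetical model choice


def ask(prompt: str) -> str:
    """Single-turn helper around the chat completions endpoint."""
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content.strip()


def extract_keywords(abstract: str) -> str:
    # Step 1: zero-shot keyword extraction from the query abstract.
    return ask("Extract 3-5 comma-separated search keywords for finding "
               f"work related to this abstract:\n\n{abstract}")


def search_knowledge_base(keywords: str, limit: int = 20) -> list[dict]:
    # Step 2: query an external knowledge base (Semantic Scholar, as an example).
    r = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": keywords, "limit": limit, "fields": "title,abstract"},
        timeout=30,
    )
    r.raise_for_status()
    return r.json().get("data", [])


def rerank_with_attribution(abstract: str, candidates: list[dict]) -> str:
    # Step 3: prompt-based re-ranking; the model cites the index of each
    # candidate it keeps, which provides the attribution.
    listing = "\n".join(f"[{i}] {c['title']}" for i, c in enumerate(candidates))
    return ask("Rank these candidate papers by relevance to the query abstract "
               "and justify each pick by citing its [index].\n\n"
               f"Abstract:\n{abstract}\n\nCandidates:\n{listing}")


def plan_then_write(abstract: str, ranked_refs: str) -> str:
    # Step 4: two-step generation -- outline a review plan first, then execute
    # it, constraining citations to the retrieved references.
    plan = ask("Outline a literature-review plan for this abstract, using only "
               f"these references:\n{ranked_refs}\n\nAbstract:\n{abstract}")
    return ask("Write the literature review by following this plan, citing only "
               f"the listed references:\n{plan}\n\nReferences:\n{ranked_refs}")
```

Chaining these four calls on a query abstract mirrors the decomposition the summary describes: retrieval, re-ranking, and planning are handled as separate steps rather than asking a single model call to do everything at once.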
Low Difficulty Summary (original content by GrooveSquid.com)
Scientists write important papers called literature reviews to summarize what others have discovered in their field of study. This paper looks at whether a type of artificial intelligence called Large Language Models (LLMs) can help with writing these reviews. The authors found that LLMs are good at helping find relevant papers when given an abstract to work from, and they can even generate a review based on what they find. They developed new methods for searching through papers and generating reviews, which show promising results. This research could lead to better tools for scientists in the future.

Keywords

» Artificial intelligence  » Knowledge base  » Prompting  » Recall  » Zero shot