Summary of EVOR: Evolving Retrieval for Code Generation, by Hongjin Su et al.
EVOR: Evolving Retrieval for Code Generation
by Hongjin Su, Shuyang Jiang, Yuhang Lai, Haoyuan Wu, Boao Shi, Che Liu, Qian Liu, Tao Yu
First submitted to arXiv on 19 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper presents a novel pipeline called EVOR for retrieval-augmented code generation (RACG) that leverages the synchronous evolution of both queries and diverse knowledge bases. Existing RACG pipelines rely on static knowledge bases drawn from a single source, which limits the ability of Large Language Models (LLMs) to adapt to domains where they have insufficient knowledge. The authors develop EVOR to address this limitation, demonstrating its effectiveness in two realistic settings where external knowledge is required to solve code generation tasks. Experimental results show that EVOR achieves 2-4 times higher execution accuracy than other methods such as Reflexion and DocPrompting. The study highlights the benefits of synchronously evolving queries and documents and of diverse information sources in the knowledge base, paving the way for future research on advanced RACG pipelines. |
| Low | GrooveSquid.com (original content) | Code generation with Large Language Models (LLMs) has been successful with the help of retrieval-augmented generation (RAG). However, existing pipelines for code generation rely on static knowledge bases from a single source. This makes it difficult for LLMs to adapt to new domains where they lack sufficient knowledge. A new pipeline called EVOR solves this problem by using multiple sources of information and letting queries and documents evolve together. In two real-life scenarios, EVOR was tested and showed better results than other methods. |
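The summaries above describe EVOR's core loop: retrieve knowledge, generate code, execute it, then let the query and the knowledge base evolve together based on the outcome. The toy sketch below illustrates that loop under stated assumptions; the retriever, generator, and query rewriter are all hypothetical stand-ins (simple word-overlap ranking and hard-coded rules) for the LLM-based components the paper uses, and none of it is the authors' implementation.

```python
# Toy sketch of an EVOR-style evolving RACG loop. All components are
# illustrative placeholders, not the paper's actual system.

def retrieve(query, knowledge_base, k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(knowledge_base,
                  key=lambda doc: len(q_words & set(doc.lower().split())),
                  reverse=True)[:k]

def generate(query, docs):
    """Placeholder for an LLM: emit code guided by the retrieved docs."""
    if any("sorted" in doc for doc in docs):
        return "def solve(xs): return sorted(xs)"
    return "def solve(xs): return list(xs)"  # naive first attempt

def run_tests(code, test_input, expected):
    """Execute the candidate code and return (passed, feedback)."""
    ns = {}
    exec(code, ns)
    result = ns["solve"](test_input)
    if result == expected:
        return True, ""
    return False, f"got {result}, expected {expected}"

def rewrite_query(task, feedback):
    """Placeholder for an LLM query rewriter; a real one would fold the
    execution feedback into a refined query. Hard-coded for the demo."""
    return f"{task}, sorted into ascending order"

def evor_loop(task, knowledge_base, test_input, expected, max_iters=3):
    query = task
    for _ in range(max_iters):
        docs = retrieve(query, knowledge_base)
        code = generate(query, docs)
        passed, feedback = run_tests(code, test_input, expected)
        if passed:
            return code
        # Synchronous evolution: the query absorbs execution feedback
        # while the knowledge base grows with what the failure revealed.
        query = rewrite_query(task, feedback)
        knowledge_base.append(f"execution feedback: {feedback}")
    return None

kb = [
    "a list stores items in insertion order",
    "call sorted on a list to get its items in ascending order",
]
code = evor_loop("given a list of items return them in order",
                 kb, [3, 1, 2], [1, 2, 3])
print(code)  # the candidate that passed the test
```

In this run the first retrieval surfaces an unhelpful document and the naive candidate fails; the rewritten query then pulls in the `sorted` document and the second candidate passes, which is the evolving-retrieval behavior the summaries describe.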
Keywords
» Artificial intelligence » Knowledge base » RAG » Retrieval-augmented generation