
Summary of Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks, by Linyuan Gong et al.


Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks

by Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung

First submitted to arXiv on: 7 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Software Engineering (cs.SE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for evaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM) task. SAFIM focuses on syntax-aware completions of program structures like code blocks and conditional expressions, using 17,720 examples from multiple programming languages sourced from recent code submissions after April 2022. The benchmark provides a robust framework with various prompt designs and novel syntax-aware post-processing techniques, facilitating accurate comparisons across LLMs. Our evaluation of 15 LLMs shows that FIM pretraining enhances FIM proficiency and improves Left-to-Right (L2R) inference using LLMs. Our findings challenge conventional beliefs, suggesting that pretraining methods and data quality have more impact than model size. SAFIM serves as a foundational platform for future research in effective pretraining strategies for code LLMs.
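
To make the FIM task concrete, the sketch below constructs a hypothetical fill-in-the-middle instance in Python. The sentinel tokens (<PRE>, <SUF>, <MID>) and the example program are illustrative assumptions, not taken from the benchmark; SAFIM's actual prompt designs are described in the paper.

    # A hypothetical FIM instance: the model is shown a prefix and a
    # suffix and must generate the missing middle (here, a code block).
    prefix = (
        "def fibonacci(n):\n"
        "    if n <= 1:\n"
    )
    suffix = "    return fibonacci(n - 1) + fibonacci(n - 2)\n"
    ground_truth_middle = "        return n\n"  # what the model should produce

    # One common prompt layout (sentinel tokens vary by model): the
    # prefix and suffix are given, and the model fills in the middle.
    prompt = f"<PRE>{prefix}<SUF>{suffix}<MID>"
    print(prompt)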
Low Difficulty Summary (written by GrooveSquid.com; original content)
We created a new way to test Large Language Models (LLMs) on filling in code. This benchmark is called Syntax-Aware Fill-In-the-Middle (SAFIM). SAFIM focuses on completing program structures like blocks of code and conditional statements using 17,720 examples from different programming languages. We tested 15 LLMs and found that pretraining them for this task improves their ability to fill in code correctly. Our results challenge what we thought we knew about how well these models perform. SAFIM is a useful tool for studying how to make the most of Large Language Models.
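
As an illustration of what "syntax-aware" might mean in practice, the sketch below truncates a raw model generation once it dedents out of the enclosing code block. This is a minimal sketch under assumed Python-style indentation rules, not SAFIM's actual post-processing, and the function name and parameters are hypothetical.

    # Minimal sketch of syntax-aware truncation: keep generated lines
    # only while they stay inside the target block, i.e. until a
    # non-empty line drops below the block's indentation level.
    def truncate_to_block(generation: str, block_indent: int = 8) -> str:
        kept = []
        for line in generation.splitlines():
            stripped = line.strip()
            # Stop once a non-empty line dedents out of the block.
            if stripped and (len(line) - len(line.lstrip())) < block_indent:
                break
            kept.append(line)
        return "\n".join(kept)

    raw = "        return n\n    print('done')  # spills past the block"
    print(truncate_to_block(raw))  # keeps only "        return n"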

Keywords

  • Artificial intelligence
  • Inference
  • Pretraining
  • Prompt
  • Syntax