Summary of Evaluating Language Models for Generating and Judging Programming Feedback, by Charles Koutcheme et al.


Evaluating Language Models for Generating and Judging Programming Feedback

by Charles Koutcheme, Nicola Dainese, Arto Hellas, Sami Sarsa, Juho Leinonen, Syed Ashraf, Paul Denny

First submitted to arXiv on: 5 Jul 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)

This research paper evaluates how well open-source large language models (LLMs) generate high-quality feedback for programming assignments, comparing them with proprietary models. The study is situated in computing education research (CER), where learning to program has drawn significant attention amid the transformative impact of LLMs. Using a dataset of introductory Python programming exercises, the evaluations suggest that state-of-the-art open-source LLMs are nearly on par with proprietary models at both generating and assessing programming feedback. The paper also demonstrates that smaller LLMs handle these tasks efficiently, and it highlights the wide range of LLMs, including freely available ones, now accessible to educators and practitioners.

Low Difficulty Summary (original content by GrooveSquid.com)

This study compares open-source large language models (LLMs) with proprietary ones at giving feedback on programming assignments. It looks at how well these models do their job, which is important because learning to program can be tricky. The researchers used a big dataset of Python exercises for beginners and found that open-source LLMs are almost as good as the proprietary ones. They also showed that smaller LLMs can still do a great job in this area. Overall, this study is helpful for people who teach programming or want to learn it themselves.
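To make the two tasks concrete, below is a minimal sketch of prompting an open-source LLM first to generate feedback on a student's Python submission, and then to judge that feedback. It assumes the Hugging Face transformers library; the model name, prompt wording, and judging criteria are illustrative placeholders, not the paper's actual experimental setup.

    # Minimal sketch (assumptions, not the paper's pipeline): use an
    # open-source LLM to (1) generate feedback on a buggy student
    # submission and (2) judge that feedback against simple criteria.
    from transformers import pipeline

    # Placeholder model choice; the paper benchmarks several open models.
    llm = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

    exercise = "Write a function mean(xs) that returns the average of a list of numbers."
    submission = "def mean(xs):\n    return sum(xs) / (len(xs) - 1)  # buggy denominator\n"

    # 1) Generate feedback for the submission.
    gen_prompt = (
        "You are a programming tutor. Point out what is wrong with the "
        "student's code in one or two sentences, without giving away the "
        "full solution.\n\n"
        f"Exercise: {exercise}\nStudent code:\n{submission}\nFeedback:"
    )
    feedback = llm(gen_prompt, max_new_tokens=120,
                   return_full_text=False)[0]["generated_text"]

    # 2) Judge the generated feedback with the same (or another) model.
    judge_prompt = (
        "Rate the following feedback for a student programmer. Answer YES "
        "or NO for each criterion: (a) identifies the actual bug, "
        "(b) is concise, (c) does not reveal the complete fix.\n\n"
        f"Exercise: {exercise}\nStudent code:\n{submission}\n"
        f"Feedback: {feedback}\nRatings:"
    )
    ratings = llm(judge_prompt, max_new_tokens=60,
                  return_full_text=False)[0]["generated_text"]

    print("Feedback:", feedback)
    print("Judgement:", ratings)

Using one model for both roles is just a simplification here; in practice, generator and judge could be different models, which mirrors the paper's interest in evaluating LLMs both as feedback generators and as feedback judges.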

Keywords

» Artificial intelligence  » Attention  » CER