Summary of JavaBench: A Benchmark of Object-Oriented Code Generation for Evaluating Large Language Models, by Jialun Cao, Zhiyong Chen, Jiarong Wu, Shing-Chi Cheung and Chang Xu
JavaBench: A Benchmark of Object-Oriented Code Generation for Evaluating Large Language Models
by Jialun Cao, Zhiyong Chen, Jiarong Wu, Shing-Chi Cheung, Chang Xu
First submitted to arXiv on: 10 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Programming Languages (cs.PL); Software Engineering (cs.SE)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Code generation benchmarks such as HumanEval are commonly used to evaluate the capabilities of large language models (LLMs). After analyzing 24 recent benchmarks, the researchers identified three significant imbalances. First, there is a substantial disparity between programming languages: 95.8% of benchmarks use Python, while only a small fraction involve Java. Second, code granularity is imbalanced: function- and statement-level benchmarks account for over 83.3% of the total, leaving few opportunities to assess class- or project-level coding skills, and those that exist are mostly limited to Python. Finally, existing benchmarks predominantly evaluate basic coding skills, neglecting advanced Object-Oriented Programming (OOP) features such as encapsulation, inheritance, and polymorphism. |
Low | GrooveSquid.com (original content) | Code generation benchmarks are used to test large language models. Researchers looked at 24 of these benchmarks and found some problems. Most of the tests use Python, not Java. The tests also focus on small pieces of code, not bigger projects. This means we don’t get to see how well the models can handle more complex coding tasks. |
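
For readers unfamiliar with the OOP features the summaries mention (encapsulation, inheritance, polymorphism), here is a minimal Java sketch illustrating them. The class names and code are illustrative only and do not come from the JavaBench paper or its benchmark tasks.

```java
// Illustrative example (not from the paper): encapsulation, inheritance, polymorphism.

// Encapsulation: state is private and exposed only through methods.
class Shape {
    private final String name;

    Shape(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }

    double area() {
        return 0.0;
    }
}

// Inheritance: Circle reuses Shape's interface and adds its own state.
class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        super("circle");
        this.radius = radius;
    }

    // Polymorphism: the overriding method is selected at runtime.
    @Override
    double area() {
        return Math.PI * radius * radius;
    }
}

public class OopDemo {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);  // base-type reference to a subclass instance
        System.out.println(s.getName() + ": " + s.area());  // prints "circle: 12.56..."
    }
}
```

Evaluating these features requires class-level tasks (method overriding across a hierarchy, access control, dispatch through a base type), which is why function- and statement-level benchmarks, as the summary notes, leave them largely untested.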