Summary of CLR-Bench: Evaluating Large Language Models in College-level Reasoning, by Junnan Dong et al.
CLR-Bench: Evaluating Large Language Models in College-level Reasoning
by Junnan Dong, Zijin Hong, Yuanchen Bei, Feiran Huang, Xinrun Wang, Xiao Huang
First submitted to arXiv on: 23 Oct 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here. |
Medium | GrooveSquid.com (original content) | Large language models (LLMs) have been shown to excel at a variety of language understanding tasks. However, existing benchmarks only measure accuracy in predicting the final answer to multiple-choice questions, leaving a gap in verifying whether an LLM genuinely understands the answer it chooses. To address this, we propose CLR-Bench, which comprehensively evaluates LLMs' complex college-level reasoning abilities. Specifically, the dataset spans 16 challenging computer science and artificial intelligence disciplines and includes 5 types of questions, each accompanied by expert explanations. We formalize the evaluation criteria with two novel metrics: Q→A for direct answer prediction, and Q→AR for the joint ability to answer correctly and provide a sound rationale. Extensive experiments are conducted with 40 LLMs on 1,018 discipline-specific questions. The results show that even top-performing closed-source LLMs like GPT-4 Turbo tend to "guess" college-level answers: accuracy drops sharply from 63.31% under Q→A to 39.00% under Q→AR, indicating unsatisfactory reasoning ability. (A minimal sketch of how such metrics could be computed appears after this table.) |
Low | GrooveSquid.com (original content) | Large language models are super smart at understanding text! But right now, we don't have good ways to test whether they really understand what they're saying. That's why this paper introduces a new way to check how well these models can reason about complex topics like computer science and AI. They made a big dataset with lots of questions and expert explanations. Then they came up with special rules to see how the models do on these tasks. They tested many different models and found that even the best ones don't really understand what they're saying: they just make good guesses! |
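To make the two metrics more concrete, here is a minimal, illustrative Python sketch of how Q→A and Q→AR accuracy could be computed. This is not the paper's official scoring code: the record format, the field names, and the rule that Q→AR credits a question only when both the answer and the rationale are judged correct are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's scorer): computing Q->A and Q->AR
# accuracy from per-question results. The data model and the "both answer
# and rationale must be correct" rule are assumptions for this example.
from dataclasses import dataclass
from typing import List


@dataclass
class QuestionResult:
    answer_correct: bool     # did the model pick the right final answer?
    rationale_correct: bool  # was its explanation judged correct?


def q_to_a_accuracy(results: List[QuestionResult]) -> float:
    """Q->A: fraction of questions with a correct final answer."""
    return sum(r.answer_correct for r in results) / len(results)


def q_to_ar_accuracy(results: List[QuestionResult]) -> float:
    """Q->AR: credit a question only when answer AND rationale are both correct."""
    return sum(r.answer_correct and r.rationale_correct for r in results) / len(results)


if __name__ == "__main__":
    # Toy example: 4 questions, 3 answered correctly but only 1 with a sound rationale.
    demo = [
        QuestionResult(True, True),
        QuestionResult(True, False),
        QuestionResult(True, False),
        QuestionResult(False, False),
    ]
    print(f"Q->A  accuracy: {q_to_a_accuracy(demo):.2%}")   # 75.00%
    print(f"Q->AR accuracy: {q_to_ar_accuracy(demo):.2%}")  # 25.00%
```

In the toy example, three of four answers are correct but only one rationale is sound, so Q→A is 75% while Q→AR drops to 25%, mirroring the kind of gap the paper reports between answering and reasoning.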
Keywords
» Artificial intelligence » GPT » Language understanding