CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark

by Zachary S. Siegel, Sayash Kapoor, Nitya Nadgir, Benedikt Stroebl, Arvind Narayanan

First submitted to arXiv on: 17 Sep 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Multiagent Systems (cs.MA)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high-difficulty summary is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper introduces CORE-Bench, a benchmark designed to measure the accuracy of AI agents on computational reproducibility tasks. The benchmark consists of 270 tasks based on 90 scientific papers across three disciplines: computer science, social science, and medicine. Tasks come in three difficulty levels and include both language-only and vision-language variants. The evaluation system measures agent accuracy in a fast, parallelizable way, saving days of evaluation time per run. Two baseline agents, AutoGPT and CORE-Agent, were tested with two underlying language models: GPT-4o and GPT-4o-mini. The best agent achieved an accuracy of 21% on the hardest task, highlighting the scope for improvement in automating routine scientific tasks.
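
The medium-difficulty summary describes a suite of per-paper tasks scored by a fast, parallelizable evaluation harness. Below is a rough sketch of what such a setup could look like in Python; it is not the actual CORE-Bench code, and the names ReproTask, run_agent, and evaluate are hypothetical, invented only for illustration.

# Illustrative sketch only -- not the actual CORE-Bench harness.
# ReproTask, run_agent, and evaluate are hypothetical names.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass

@dataclass
class ReproTask:
    paper_id: str        # one of the 90 underlying papers
    discipline: str      # "computer science", "social science", or "medicine"
    difficulty: str      # "easy", "medium", or "hard"
    questions: dict      # question -> answer expected from reproducing the code

def run_agent(task: ReproTask) -> dict:
    # Placeholder for an agent attempt (e.g., AutoGPT or CORE-Agent backed
    # by GPT-4o or GPT-4o-mini); should return the agent's answers.
    raise NotImplementedError

def solved(task: ReproTask) -> bool:
    # A task counts as solved only if every question is answered correctly.
    try:
        answers = run_agent(task)
    except Exception:
        return False
    return all(answers.get(q) == a for q, a in task.questions.items())

def evaluate(tasks: list[ReproTask], workers: int = 8) -> float:
    # Score tasks in separate processes; accuracy is the fraction solved.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(solved, tasks))
    return sum(results) / len(tasks)

With a real agent plugged into run_agent, evaluate(tasks) would return the kind of accuracy figure quoted above, and scoring tasks in separate processes is what keeps a full run fast.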
Low Difficulty Summary (original content by GrooveSquid.com)
AI researchers have built a new benchmark called CORE-Bench. It tests AI agents that try to reproduce results from science papers, since scientists use code and data to verify their findings. CORE-Bench has 270 tasks based on 90 papers in three fields: computer science, social science, and medicine. It includes easy, medium, and hard tasks, some using only text and some using images too. The paper tests agents built on two language models. The best agent solved only a small share of the hardest tasks, so there is still plenty of room for improvement.

Keywords

» Artificial intelligence  » GPT