
Summary of DiVERT: Distractor Generation with Variational Errors Represented as Text for Math Multiple-choice Questions, by Nigel Fernandez et al.


DiVERT: Distractor Generation with Variational Errors Represented as Text for Math Multiple-choice Questions

by Nigel Fernandez, Alexander Scarlatos, Wanyong Feng, Simon Woodhead, Andrew Lan

First submitted to arXiv on: 27 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Computers and Society (cs.CY); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces DiVERT, a novel approach for generating high-quality distractors for multiple-choice questions (MCQs), particularly in math. Distractors are crucial to the assessment and pedagogical value of MCQs, but crafting distractors that anticipate students’ knowledge deficiencies or misconceptions is challenging. Existing methods, even those using large language models (LLMs), struggle to identify plausible distractors, let alone understand the errors behind them. DiVERT uses a variational approach to learn an interpretable representation of errors and generates distractors of quality comparable to human-authored ones. The approach outperforms state-of-the-art methods, including GPT-4o, on a real-world math MCQ dataset used by hundreds of thousands of students. This research has significant implications for improving the assessment and pedagogical value of MCQs.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about creating better distractors for multiple-choice questions (MCQs). Distractors are important because they help us understand what students know and don’t know, and how we can improve their learning. The problem is that it’s hard to create distractors that reflect real students’ mistakes or misunderstandings. This paper introduces a new method called DiVERT, which uses computer models to learn from examples of correct and wrong answers. It does better than other methods at generating good distractors, and even comes close to what humans would write.

Keywords

  • Artificial intelligence
  • GPT