


Comparison of Large Language Models for Generating Contextually Relevant Questions

by Ivo Lodovico Molina, Valdemar Švábenský, Tsubasa Minematsu, Li Chen, Fumiya Okubo, Atsushi Shimada

First submitted to arXiv on: 30 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The study investigates how effectively Large Language Models (LLMs) can automatically generate questions from university slide text without fine-tuning. Three models, GPT-3.5, Llama 2-Chat 13B, and Flan T5 XXL, are compared on their ability to create questions using a two-step pipeline (a minimal code sketch follows the summaries below). The generated questions are evaluated by students on five metrics: clarity, relevance, difficulty, slide relation, and question-answer alignment. Results show that GPT-3.5 and Llama 2-Chat 13B outperform Flan T5 XXL, particularly in clarity and question-answer alignment.

Low Difficulty Summary (original content by GrooveSquid.com)
This study looks at how well Large Language Models (LLMs) can create questions from university slide text without needing extra training. Three models are tested to see which one does the best job of making questions that are clear and easy to understand. Students who tried out the questions rated them favorably, especially for how well each question matched its answer.
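
As a rough illustration of what a two-step question-generation pipeline can look like, here is a minimal Python sketch. The prompts, the generate_questions function, and the generic llm callable are assumptions made for this example only; the paper's actual prompts and implementation may differ.

```python
from typing import Callable, List

def generate_questions(slide_text: str,
                       llm: Callable[[str], str],
                       n_questions: int = 3) -> List[str]:
    """Two-step sketch: (1) extract key concepts from the slide text,
    (2) turn each concept into a question. Prompts and the `llm` callable
    are illustrative placeholders, not the paper's actual implementation."""
    # Step 1: ask the model for the main concepts covered by the slide.
    concept_prompt = (
        f"List the {n_questions} most important concepts in the following "
        f"lecture slide text, one per line:\n\n{slide_text}"
    )
    concepts = [c.strip() for c in llm(concept_prompt).splitlines() if c.strip()]

    # Step 2: ask the model to write one clear question per concept,
    # grounded in the original slide text.
    questions: List[str] = []
    for concept in concepts[:n_questions]:
        question_prompt = (
            f"Based on this slide text:\n\n{slide_text}\n\n"
            f"Write one clear, answerable study question about: {concept}"
        )
        questions.append(llm(question_prompt).strip())
    return questions
```

In a setup like this, each compared model (GPT-3.5, Llama 2-Chat 13B, or Flan T5 XXL) would be plugged in as the llm callable, and the resulting questions would then be rated by students on the five metrics listed above.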

Keywords

» Artificial intelligence  » Alignment  » Fine tuning  » Gpt  » Llama  » T5