Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses

by Gabriele Sarti, Tommaso Caselli, Malvina Nissim, Arianna Bisazza

First submitted to arXiv on: 1 Aug 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
A novel collection of verbalized rebuses in Italian is introduced to assess the capabilities of state-of-the-art large language models. The study finds that while general-purpose systems struggle with rebus-solving, fine-tuning can improve performance. However, the gains from training are largely due to memorization rather than actual linguistic proficiency or sequential instruction-following skills.

Low Difficulty Summary (original content by GrooveSquid.com)
Rebuses are puzzles that require you to figure out a hidden phrase by looking at images and letters. Researchers created a big collection of rebuses in Italian to test how well large language models can solve them. They found that some models, like LLaMA-3 and GPT-4o, aren’t very good at it, but if they’re taught specifically for this task, they get better. However, the improvement is mostly because the models are memorizing the answers rather than actually understanding what’s going on.
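
For readers who want a concrete picture of what evaluating a model on a verbalized rebus could involve, here is a minimal sketch: a prompt is built from the verbalized puzzle and its solution key, and the model's answer is scored by exact match against the gold phrase. The prompt wording, the placeholder rebus, and the scoring function are illustrative assumptions, not the authors' actual data, prompt format, or evaluation code.

```python
# Minimal illustrative sketch: the prompt wording, the placeholder rebus,
# and the exact-match scoring are assumptions for explanation only,
# not the paper's dataset format or evaluation pipeline.

def build_prompt(verbalized_rebus: str, solution_key: str) -> str:
    """Compose a toy instruction asking a model to solve a verbalized rebus.

    In a verbalized rebus, the puzzle's images are replaced by the words
    they depict, interleaved with loose letters; the solution key gives
    the word lengths of the hidden phrase.
    """
    return (
        "Solve the following verbalized rebus.\n"
        f"Rebus: {verbalized_rebus}\n"
        f"Key: {solution_key}\n"
        "Answer only with the hidden phrase."
    )


def exact_match(prediction: str, gold: str) -> bool:
    """Score a model answer by case- and whitespace-insensitive exact match."""
    def normalize(text: str) -> str:
        return " ".join(text.lower().split())
    return normalize(prediction) == normalize(gold)


if __name__ == "__main__":
    # Placeholder puzzle and answer, not taken from the paper's data.
    rebus = "AB [word-for-image] C [word-for-image] DE"
    key = "5 8"
    gold_solution = "hidden phrase"      # placeholder gold answer
    model_answer = "Hidden  phrase"      # stand-in for an LLM response

    print(build_prompt(rebus, key))
    print("Exact match:", exact_match(model_answer, gold_solution))
```

Exact match is only one plausible metric; a real evaluation could also score partial credit per word of the hidden phrase, which is why the scoring function above is kept deliberately simple and clearly separated from the prompt construction.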

Keywords

» Artificial intelligence  » Fine-tuning  » GPT  » LLaMA