
Summary of Evaluating LLMs on Entity Disambiguation in Tables, by Federico Belotti and Fabio Dadda and Marco Cremaschi and Roberto Avogadro and Matteo Palmonari


Evaluating LLMs on Entity Disambiguation in Tables

by Federico Belotti, Fabio Dadda, Marco Cremaschi, Roberto Avogadro, Matteo Palmonari

First submitted to arXiv on: 12 Aug 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper presents an extensive evaluation of four state-of-the-art (SOTA) approaches to table annotation, combining deep learning and heuristic-based methods: Alligator, Dagobah, TURL, and TableLlama. Each approach is evaluated on its ability to solve the entity disambiguation task. Two Large Language Models (LLMs), GPT-4o and GPT-4o-mini, are also included in the evaluation because of their strong performance on public benchmarks. The primary objective is to measure the performance of these approaches in a common evaluation setting and to assess their computational and cost requirements.
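To make the task concrete, here is a minimal illustrative sketch (not the paper's code, and far simpler than any of the evaluated systems): entity disambiguation in tables means linking a cell mention to the correct entity in a knowledge graph. The toy ranker below picks the candidate whose label best overlaps the mention plus its row context, then measures accuracy against gold annotations; the entity IDs and labels are hypothetical examples.

```python
# Illustrative sketch only -- not the paper's code. A toy entity
# disambiguator for table cells: rank knowledge-graph candidates by
# word overlap with the cell mention and its row context, then score
# predictions against gold entity links.

def overlap_score(mention, context, candidate_label):
    """Count shared words between the candidate label and mention+context."""
    words = set(mention.lower().split()) | set(context.lower().split())
    cand = set(candidate_label.lower().split())
    return len(words & cand)

def disambiguate(mention, context, candidates):
    """Return the candidate entity ID whose label scores highest."""
    return max(candidates, key=lambda c: overlap_score(mention, context, candidates[c]))

# Toy rows: (cell mention, row context, candidate entities, gold entity ID)
rows = [
    ("Paris", "France capital city",
     {"Q90": "Paris France", "Q167646": "Paris Texas"}, "Q90"),
    ("Jaguar", "animal big cat",
     {"Q35694": "Jaguar cat animal", "Q26742": "Jaguar Cars"}, "Q35694"),
]

predictions = [disambiguate(m, ctx, cands) for m, ctx, cands, _ in rows]
gold = [g for *_, g in rows]
accuracy = sum(p == g for p, g in zip(predictions, gold)) / len(gold)
print(f"accuracy = {accuracy:.2f}")
```

Real systems such as those evaluated in the paper replace the word-overlap heuristic with learned representations or LLM prompting, but the evaluation loop (predict a link per cell, compare to gold) has the same shape.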
Low Difficulty Summary (original content by GrooveSquid.com)
This paper compares different ways to understand tables using artificial intelligence. It looks at four special methods that are very good at this task, called Alligator, Dagobah, TURL, and TableLlama. These methods use a combination of deep learning and clever tricks to figure out what the information in the table means. The researchers also tested two language models, GPT-4o and GPT-4o-mini, because they do well on certain tests. The goal is to see which method works best and how much it costs to use.

Keywords

» Artificial intelligence  » Deep learning  » GPT