
Summary of TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data, by Fengbin Zhu et al.


TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data

by Fengbin Zhu, Ziyang Liu, Fuli Feng, Chao Wang, Moxin Li, Tat-Seng Chua

First submitted to arXiv on: 24 Jan 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles question answering (QA) over hybrid data that combines tabular and textual content, a common scenario on the Web. The authors leverage large language models (LLMs), building on their recent advances in multi-step reasoning, and propose a step-wise pipeline made up of extractor, reasoner, and executor components (a toy sketch of this flow appears after these summaries). The pipeline is first validated with GPT-4 as the underlying model, where it already outperforms existing methods. However, relying on online LLMs raises concerns about cost, latency, and data security, which motivates specializing a smaller LLM for this task. The resulting TAT-LLM model is created by fine-tuning LLaMA 2 on training data generated automatically by following the same step-wise pipeline. Experiments show that TAT-LLM outperforms baseline models, including large-scale LLMs such as GPT-4, on the FinQA, TAT-QA, and TAT-DQA benchmarks.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps computers answer questions better when they are given a mix of table-like data and text. This matters because many online documents contain both kinds of content. The authors use powerful language models to solve the problem. They design a step-by-step process for answering questions: extract the key information, reason over it, and then execute that reasoning to produce the answer. They first tested this approach with GPT-4, which did well. However, using such large online models can be costly, slow, and raise security concerns, so the authors built a smaller language model specialized for this task. After training it on a large amount of automatically generated data, they found it performed better than other approaches on several benchmark tests.

Keywords

» Artificial intelligence  » Fine-tuning  » GPT  » Language model  » LLaMA  » Prompting  » Question answering