Summary of gTBLS: Generating Tables from Text by Conditional Question Answering, by Anirudh Sundar et al.


gTBLS: Generating Tables from Text by Conditional Question Answering

by Anirudh Sundar, Christopher Richardson, Larry Heck

First submitted to arxiv on: 21 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Information Retrieval (cs.IR); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
This paper presents a novel approach called Generative Tables (gTBLS) for automatically generating tables from unstructured text. Unlike previous methods that rely on a single-stage Transformer-based model, gTBLS uses a two-stage process: it first infers the table structure (row and column headers), then formulates questions from these headers and fine-tunes a causal language model to answer them. This design also allows pre-trained Large Language Models to be used in a zero-shot configuration, making the approach suitable for situations where fine-tuning is not feasible. The authors demonstrate the effectiveness of gTBLS by achieving up to 10% improvement in BERTScore on table construction and up to 20% improvement on table content generation across the E2E, WikiTableText, WikiBio, and RotoWire datasets.
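The two-stage idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the helper names (`formulate_questions`, `build_table`, `answer_fn`) and the question template are hypothetical, and a toy lookup stands in for the fine-tuned or zero-shot causal language model that would actually answer each question.

```python
# Sketch of the two-stage gTBLS idea (hypothetical helper names).
# Stage 1 would infer row/column headers from text; here we assume
# they are already given. Stage 2 turns each (row, column) pair into
# a question, which a QA model would answer to fill that cell.

def formulate_questions(row_headers, col_headers):
    """Build one question per table cell from the inferred headers."""
    return {
        (row, col): f"What is the {col} of {row}?"
        for row in row_headers
        for col in col_headers
    }

def build_table(row_headers, col_headers, answer_fn):
    """Fill the table by answering each cell's question.

    `answer_fn` stands in for a causal language model that maps a
    question (plus the source text) to an answer string.
    """
    questions = formulate_questions(row_headers, col_headers)
    return {cell: answer_fn(q) for cell, q in questions.items()}

# Toy answer function in place of a real language model.
toy_answers = {
    "What is the points of Team A?": "102",
    "What is the rebounds of Team A?": "45",
}
table = build_table(["Team A"], ["points", "rebounds"],
                    lambda q: toy_answers.get(q, "unknown"))
print(table)
```

Because table content is produced one cell at a time by question answering, the structure inference (stage 1) and content generation (stage 2) can use different models, which is what enables the zero-shot configuration the summary mentions.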
Low Difficulty Summary (GrooveSquid.com, original content)
This paper is about making computers better at turning text into organized tables, a task that currently requires human help. The researchers propose a two-step method: first, figure out what the table should look like (what goes in each row and column), and then use that structure to fill in the actual table. The method can even use pre-trained language models without extra training, which makes it useful in situations where fine-tuning is not possible. The results show that this approach creates tables from text better than previous methods.

Keywords

* Artificial intelligence  * Causal language model  * Fine-tuning  * Transformer  * Zero-shot