

A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding

by Jinghui Lu, Haiyang Yu, Yanjie Wang, Yongjie Ye, Jingqun Tang, Ziwei Yang, Binghong Wu, Qi Liu, Hao Feng, Han Wang, Hao Liu, Can Huang

First submitted to arXiv on: 2 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Multimedia (cs.MM)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
The paper's original abstract.

Medium Difficulty Summary (GrooveSquid.com, original content)
The paper introduces LayTextLLM, a novel approach for document understanding that combines spatial layouts with large language models (LLMs). Unlike existing methods, LayTextLLM efficiently integrates layout and textual data by projecting each bounding box to a single embedding and interleaving it with text. This allows the model to leverage the autoregressive traits of LLMs while avoiding long-sequence issues. The approach is evaluated on key information extraction (KIE) and visual question answering (VQA) tasks, showing significant improvements over previous state-of-the-art models.

Low Difficulty Summary (GrooveSquid.com, original content)
LayTextLLM is a new way to understand documents by combining text with information about where that text sits on the page. Normally, methods that combine these two things have limitations, like producing really long input sequences or not using the language model's special abilities. LayTextLLM gets around this by turning each text box's position into a single piece of information and mixing it in with the words. This makes the language model work better on tasks like finding important information in documents and answering questions about what a document shows.
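The core idea summarized above ("a bounding box is worth one token") can be sketched in a few lines: project each OCR span's 4-d bounding box through a linear layer into the model's embedding space, then interleave that single layout embedding with the span's word embeddings. The sketch below is illustrative only, with hypothetical dimensions and random (untrained) weights; it is not the paper's actual projector or tokenizer.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 64   # hypothetical LLM embedding width
VOCAB = 100   # hypothetical vocabulary size

# Hypothetical "learned" parameters (random here): a token embedding
# table, plus the layout projector that maps one 4-d bounding box
# (x0, y0, x1, y1, normalized to [0, 1]) to a single embedding.
token_table = rng.normal(size=(VOCAB, HIDDEN))
W_box = rng.normal(size=(4, HIDDEN))
b_box = np.zeros(HIDDEN)

def embed_box(box):
    """Project one bounding box to one token-sized embedding."""
    return np.asarray(box) @ W_box + b_box

def interleave(ocr_items):
    """Build the interleaved input sequence: for every OCR span,
    emit its layout embedding followed by its word embeddings."""
    seq = []
    for token_ids, box in ocr_items:
        seq.append(embed_box(box))                 # one box -> one token
        seq.extend(token_table[t] for t in token_ids)
    return np.stack(seq)

# Two OCR spans with made-up token ids and normalized boxes.
items = [([5, 17, 2], [0.10, 0.05, 0.40, 0.10]),
         ([42, 8],    [0.70, 0.05, 0.95, 0.10])]
seq = interleave(items)
print(seq.shape)  # (7, 64): 2 layout tokens + 5 text tokens
```

Because each box costs exactly one embedding, the sequence grows by only one position per OCR span, which is how the approach avoids the long inputs produced by serializing coordinates as text.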

Keywords

» Artificial intelligence  » Autoregressive  » Bounding box  » Embedding  » Language model  » Question answering