
Summary of DANIEL: A fast Document Attention Network for Information Extraction and Labelling of handwritten documents, by Thomas Constum et al.


DANIEL: A fast Document Attention Network for Information Extraction and Labelling of handwritten documents

by Thomas Constum, Pierrick Tranouez, Thierry Paquet

First submitted to arXiv on: 12 Jul 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces DANIEL, a fully end-to-end architecture that integrates language models with the Document Attention Network (DAN) to comprehensively understand handwritten documents. The model simultaneously performs document layout analysis, handwritten text recognition, and named entity recognition on full-page documents across multiple languages, layouts, and tasks. For named entity recognition, the ontology can be specified via an input prompt. The architecture pairs a convolutional encoder for image processing with an autoregressive decoder based on a transformer language model (a minimal illustrative sketch of this pipeline follows the summaries below). DANIEL achieves competitive results on four datasets, including state-of-the-art performance on RIMES 2009, M-POPP, and IAM NER.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper makes it easier to understand handwritten documents by using one powerful tool instead of three separate steps. The new model, called DANIEL, can recognize what's written in the document, where it is written, and who or what it's about. It works well across many languages and tasks, and it's much faster than other methods.
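The medium difficulty summary above outlines the general pipeline: a convolutional encoder turns the page image into a sequence of visual features, and an autoregressive transformer decoder generates the output tokens, with the named entity ontology supplied as a prompt. The snippet below is a minimal, hypothetical PyTorch sketch of that encoder-decoder idea only; the module names, dimensions, and dummy prompt tokens are illustrative assumptions and do not reproduce the authors' DANIEL implementation.

```python
# Hypothetical sketch of the encoder-decoder idea described above (not the authors' code).
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Toy convolutional encoder: maps a page image to a sequence of visual features."""
    def __init__(self, d_model=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, images):                    # images: (B, 3, H, W)
        feats = self.backbone(images)              # (B, d_model, H/8, W/8)
        return feats.flatten(2).transpose(1, 2)    # (B, H/8 * W/8, d_model)

class PromptedDecoder(nn.Module):
    """Toy autoregressive transformer decoder; prompt tokens (e.g. an NER ontology)
    are simply prepended to the token sequence being decoded."""
    def __init__(self, vocab_size=1000, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, memory):          # token_ids: (B, T), memory: (B, S, d_model)
        t = token_ids.size(1)
        causal_mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        x = self.decoder(self.embed(token_ids), memory, tgt_mask=causal_mask)
        return self.lm_head(x)                     # (B, T, vocab_size) next-token logits

# Usage: one greedy decoding step on a dummy page image with a dummy prompt.
encoder, decoder = ConvEncoder(), PromptedDecoder()
memory = encoder(torch.randn(1, 3, 128, 96))       # stand-in for a full-page image
prompt = torch.tensor([[1, 2, 3]])                 # stand-in for ontology prompt token ids
logits = decoder(prompt, memory)
next_token = logits[:, -1].argmax(dim=-1)          # first predicted output token
print(next_token)
```

In the actual system described by the paper, the decoder would build on a pretrained transformer language model and the prompt would encode the target named entity ontology; the sketch only illustrates how image features feed autoregressive token generation.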

Keywords

» Artificial intelligence  » Attention  » Autoregressive  » Decoder  » Encoder  » Language model  » Named entity recognition  » Ner  » Prompt  » Transformer