LOCR: Location-Guided Transformer for Optical Character Recognition
by Yu Sun, Dongzhan Zhou, Chen Lin, Conghui He, Wanli Ouyang, Han-Sen Zhong
First submitted to arXiv on: 4 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | LOCR is a novel end-to-end Optical Character Recognition (OCR) model for accurately recognizing text, equations, tables, and figures in academic documents. Existing end-to-end methods suffer from significant repetition issues, particularly on pages with complex layouts. To address this, LOCR integrates location guidance into the transformer architecture during autoregressive decoding. The model is trained on a dataset of over 77M text-location pairs from 125K academic document pages, including bounding boxes for words, tables, and mathematical symbols. Results show that LOCR outperforms existing methods on edit distance, BLEU, METEOR, and F-measure, while reducing repetition frequency across various datasets. |
| Low | GrooveSquid.com (original content) | OCR technology recognizes text in documents. A new model called LOCR does this job well. It uses a neural network design called the transformer architecture, and it is guided by information about where words and other elements are located on the page. The model was trained on over 77 million examples from academic papers. It recognizes text better than other models and makes fewer repetition errors when reading documents. |
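One of the evaluation metrics named in the medium summary is edit distance. As a minimal illustration (not code from the paper; the function names here are our own), OCR output is typically scored against the ground-truth text with the Levenshtein edit distance, often normalized by the reference length so that 0.0 means a perfect match:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, and substitutions needed
    # to turn string a into string b. Uses a rolling row for O(len(b)) memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca -> cb (free if equal)
            ))
        prev = curr
    return prev[-1]

def normalized_edit_distance(pred: str, ref: str) -> float:
    # OCR papers commonly report edit distance normalized by the
    # reference length, so lower is better and 0.0 is an exact match.
    return levenshtein(pred, ref) / max(len(ref), 1)
```

For example, `levenshtein("kitten", "sitting")` is 3, and a prediction identical to its reference gets a normalized score of 0.0.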
Keywords
» Artificial intelligence » BLEU » Transformer