Summary of Improving OCR Quality in 19th Century Historical Documents Using a Combined Machine Learning Based Approach, by David Fleischhacker et al.
Improving OCR Quality in 19th Century Historical Documents Using a Combined Machine Learning Based Approach
by David Fleischhacker, Wolfgang Goederle, Roman Kern
First submitted to arXiv on: 15 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary This paper tackles a significant challenge in historical research on the 19th century, leveraging machine learning models to recognize and extract complex data structures from a high-value historical primary source, the Schematismus. The goal is to improve OCR quality and enable comprehensive analysis of the administrative and social structure of the late Habsburg Empire. To achieve this, the study uses Faster R-CNN as the basis of its structure-recognition architecture and synthesizes Hof- und Staatsschematismus-style data to train the model. The results show a significant improvement in OCR performance: a 71.98% decrease in character error rate (CER) and a 52.49% decrease in word error rate (WER). |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper helps historians study the past better by using computers to understand old documents. Historians have lots of new digital copies of important papers from the 19th century, but they need ways to automatically read and analyze these documents. The researchers tried different computer vision techniques to make this happen. They used a special kind of AI model called Faster R-CNN to recognize patterns in the documents. To teach the model what to look for, they created fake data that looked like real historical documents. Then, they tested the model on some real old documents and saw big improvements in how well it could read them. |
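The gains in the table above are reported as decreases in character error rate (CER) and word error rate (WER). These metrics are conventionally defined as the edit (Levenshtein) distance between the OCR output and the ground-truth text, divided by the length of the ground truth. A minimal Python sketch of that conventional definition (illustrative only, not the paper's evaluation code):

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (strings or word lists) via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (0 if match)
        prev = cur
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance over reference length."""
    return levenshtein(reference, hypothesis) / len(reference)

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference word count."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return levenshtein(ref_words, hyp_words) / len(ref_words)
```

For example, `cer("abcd", "abed")` is 0.25 (one substituted character out of four), and `wer("the cat sat", "the dog sat")` is one third (one substituted word out of three). A "71.98% decrease in CER" then means the CER after the structure-recognition step is 71.98% lower than the baseline OCR's CER on the same ground truth.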
Keywords
* Artificial intelligence * CER * CNN * Machine learning