
Summary of WeCromCL: Weakly Supervised Cross-Modality Contrastive Learning for Transcription-only Supervised Text Spotting, by Jingjing Wu et al.


WeCromCL: Weakly Supervised Cross-Modality Contrastive Learning for Transcription-only Supervised Text Spotting

by Jingjing Wu, Zhengyao Fang, Pengyuan Lyu, Chengquan Zhang, Fanglin Chen, Guangming Lu, Wenjie Pei

First submitted to arXiv on: 28 Jul 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents Transcription-only Supervised Text Spotting, a task that aims to learn text spotters using only transcription annotations, without any location annotations. The authors formulate this problem as Weakly Supervised Cross-modality Contrastive Learning and design a simple yet effective model called WeCromCL. Unlike typical methods, WeCromCL performs atomistic contrastive learning that models the character-wise appearance consistency between a text transcription and its correlated region in a scene image, and thereby detects an anchor point indicating where each transcription appears. These detected anchor points are then used as pseudo location labels to guide the learning of the text spotter. Extensive experiments on four benchmarks demonstrate that the proposed model outperforms other methods.
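
As a rough illustration of the idea described above (not the authors' implementation), the sketch below shows a generic weakly supervised cross-modality contrastive objective in PyTorch: a transcription embedding is scored against every location of an image feature map, the soft attention over locations acts as a weak localization signal (a loose analogue of the anchor points mentioned above), and an InfoNCE-style loss pulls each transcription toward its own image. All function names, tensor shapes, and the attention-based pooling are illustrative assumptions.

    # Minimal sketch (assumed architecture, not the paper's actual model):
    # weakly supervised cross-modality contrastive scoring between a
    # transcription embedding and dense image features.
    import torch
    import torch.nn.functional as F

    def transcription_image_score(text_emb, image_feats):
        """Score one transcription against one image.

        text_emb:     (D,)     embedding of the transcription (e.g. from a text encoder)
        image_feats:  (H*W, D) per-location features from an image encoder
        Returns a scalar similarity and the attention map, which serves as a
        soft localization signal without any location annotation.
        """
        sims = image_feats @ text_emb                       # (H*W,) location-wise similarities
        attn = torch.softmax(sims, dim=0)                   # soft attention over image locations
        pooled = (attn.unsqueeze(-1) * image_feats).sum(0)  # attention-pooled region feature
        score = F.cosine_similarity(pooled, text_emb, dim=0)
        return score, attn

    def contrastive_loss(text_embs, image_feats_batch, temperature=0.07):
        """InfoNCE-style loss: each transcription should match its own image
        more strongly than any other image in the batch.

        text_embs:          (B, D)
        image_feats_batch:  (B, H*W, D)
        """
        B = text_embs.shape[0]
        logits = torch.stack([
            torch.stack([transcription_image_score(text_embs[i], image_feats_batch[j])[0]
                         for j in range(B)])
            for i in range(B)
        ]) / temperature                                    # (B, B) similarity matrix
        targets = torch.arange(B)                           # the matching image for each text
        return F.cross_entropy(logits, targets)

In this sketch, the only supervision is which transcription belongs to which image; the attention map that emerges from the contrastive objective is what would be turned into pseudo location labels for a downstream text spotter.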
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about teaching computers to recognize text in pictures without knowing exactly where the text is located. It’s a big challenge because the computer doesn’t have any hints about the location of the text. The researchers came up with a new way to solve this problem by using a technique called contrastive learning. They created a model that looks at individual characters in the picture and compares them to the corresponding text. This helps the computer learn where the text is located without needing exact coordinates. The results show that their method works better than other approaches.

Keywords

  • Artificial intelligence
  • Supervised