Summary of SimLTD: Simple Supervised and Semi-Supervised Long-Tailed Object Detection, by Phi Vu Tran
SimLTD: Simple Supervised and Semi-Supervised Long-Tailed Object Detection
by Phi Vu Tran
First submitted to arXiv on: 28 Dec 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Recent advances in visual recognition have driven significant progress in modern object detection models. However, these models still struggle to learn from only a few exemplars, a low-shot (few-shot) learning problem. This paper tackles the long-tailed distribution of object classes in detection tasks, where many categories have very few training instances. Existing approaches augment the rare classes with external labeled datasets such as ImageNet, which is impractical and of limited utility in real-world scenarios. Instead, the proposed SimLTD framework leverages optional, easy-to-collect unlabeled images. It proceeds in three steps: pre-training on the abundant head classes, transfer learning on the scarce tail classes, and fine-tuning on a sampled set of both head and tail instances (see the sketch after this table). This approach sets new record results on the challenging LVIS v1 benchmark in both supervised and semi-supervised settings. |
Low | GrooveSquid.com (original content) | Imagine you’re trying to teach a computer to recognize objects in pictures. Computers are already good at recognizing things they’ve seen many times, but they struggle when they have only a few examples to learn from. This paper is about helping computers learn from just a few examples of each object. The authors propose a method that uses lots of easy-to-find images without labels to improve the computer’s ability to recognize objects. It works by first training the computer on common objects, then using those skills to learn about less common objects. By combining these steps, the computer becomes better at recognizing objects even when it has few examples to learn from. |
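To make the three-step recipe from the medium summary concrete, here is a minimal, self-contained Python sketch of that training schedule. It is not the authors' code: the `Detector` class, the `sample_balanced` helper, and the head/tail/unlabeled lists are hypothetical placeholders standing in for a real detection model and the LVIS data splits.

```python
# Minimal sketch of a SimLTD-style three-stage training schedule.
# All names here (Detector, sample_balanced, the toy data lists) are
# hypothetical placeholders, not the paper's actual code or API.

import random


class Detector:
    """Stand-in for an object detection model; train() just records each stage."""

    def __init__(self):
        self.history = []

    def train(self, stage, labeled, unlabeled=None):
        self.history.append((stage, len(labeled), len(unlabeled or [])))
        return self


def sample_balanced(head, tail, per_class=4):
    """Draw a small, roughly balanced subset of head and tail instances."""
    k = min(per_class, len(head), len(tail))
    return random.sample(head, k) + random.sample(tail, k)


# Toy stand-ins for the abundant head classes, scarce tail classes,
# and the optional unlabeled images mentioned in the summary.
head_images = [f"head_{i}" for i in range(100)]
tail_images = [f"tail_{i}" for i in range(8)]
unlabeled_images = [f"unlabeled_{i}" for i in range(50)]

model = Detector()

# Step 1: pre-train on the abundant head classes.
model.train("pretrain_head", head_images)

# Step 2: transfer-learn on the scarce tail classes, optionally mixing in
# easy-to-collect unlabeled images (the semi-supervised setting).
model.train("transfer_tail", tail_images, unlabeled=unlabeled_images)

# Step 3: fine-tune on a small sampled set covering both head and tail classes.
model.train("finetune_mixed", sample_balanced(head_images, tail_images))

print(model.history)
```

In the purely supervised setting, the `unlabeled` argument in step 2 would simply be omitted; the semi-supervised variant described in the summary is the one that mixes in the optional unlabeled images.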
Keywords
» Artificial intelligence » Fine-tuning » Object detection » Semi-supervised » Supervised » Transfer learning