Distill-then-prune: An Efficient Compression Framework for Real-time Stereo Matching Network on Edge Devices
by Baiyu Pan, Jichao Jiao, Jianxing Pang, Jun Cheng
First submitted to arXiv on: 20 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper proposes a compression framework for real-time stereo matching that keeps accuracy high without sacrificing speed. The approach combines knowledge distillation and model pruning to ease the trade-off between accuracy and speed: a lightweight model is first designed by removing redundant modules from existing efficient models; knowledge is then distilled into this lightweight model using an efficient model as the teacher; finally, the distilled model is pruned to obtain the final network (see the sketch after this table). Experiments on the SceneFlow and KITTI benchmarks demonstrate state-of-the-art performance. |
Low | GrooveSquid.com (original content) | This paper tries to make computers better at matching pairs of images in real time. Right now, the fastest methods aren't very accurate. The researchers improved accuracy with two techniques: having a small model learn from another model that is already good at this task, and then removing the parts of the small model that aren't needed. They tested their method on two standard sets of images and got better results than before. |
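
To make the distill-then-prune idea concrete, here is a minimal, self-contained PyTorch sketch of the two stages. It is not the authors' implementation: the toy teacher/student networks, the L1 distillation loss, and the 30% per-layer pruning ratio below are all illustrative assumptions.

```python
# Minimal sketch of a distill-then-prune pipeline (illustrative only; not the
# paper's code). Assumes a regression-style output such as a disparity map,
# distilled with an L1 loss against a frozen teacher.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

teacher = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1)).eval()  # stand-in teacher
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1))          # lightweight student

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
distill_loss = nn.L1Loss()

# Stage 1: knowledge distillation -- train the lightweight student to mimic
# the frozen teacher's predictions.
for _ in range(10):  # toy loop; real training runs for many epochs
    x = torch.randn(4, 3, 64, 64)  # stand-in for rectified image batches
    with torch.no_grad():
        target = teacher(x)
    loss = distill_loss(student(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Stage 2: pruning -- remove low-magnitude weights from the distilled student.
# A 30% unstructured L1 prune per conv layer is an arbitrary choice here.
for module in student.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruned weights permanent
```

In practice the teacher would be a pretrained efficient stereo network and the pruning ratio would be tuned against the accuracy and latency budget of the target edge device.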
Keywords
» Artificial intelligence » Knowledge distillation » Pruning