Summary of "Distilling Generative-Discriminative Representations for Very Low-Resolution Face Recognition" by Junzheng Zhang et al.
Distilling Generative-Discriminative Representations for Very Low-Resolution Face Recognition
by Junzheng Zhang, Weijia Guo, Bochao Liu, Ruixin Shi, Yong Li, Shiming Ge
First submitted to arXiv on: 10 Sep 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Multimedia (cs.MM)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: A generative-discriminative representation distillation approach is proposed for very low-resolution face recognition. Two distillation modules jointly transfer knowledge into a student model from a diffusion model's encoder, acting as the generative teacher, and a pretrained face recognizer, acting as the discriminative teacher. The resulting student model is both robust and discriminative for very low-resolution face recognition, recovering facial detail and improving recognition accuracy on face datasets. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: Very low-resolution face recognition is challenging because most facial details are lost. This new approach uses a generative-discriminative representation distillation method that combines two types of teacher models. First, a pre-trained model that can generate detailed images serves as the teacher for the student model's backbone, which is then frozen. Second, another pre-trained model that recognizes faces serves as the teacher to supervise the learning of the student's head. The result is a robust and accurate model for recognizing very low-resolution faces. |
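The two-stage scheme the summaries describe (distill the backbone from a generative teacher, then freeze it and distill the head from a discriminative teacher) can be sketched in a few lines. The sketch below is a toy illustration only: the networks are stand-in linear maps trained with a plain feature-matching (MSE) loss, and all names (`W_backbone`, `gen_teacher_feat`, `disc_teacher_logits`, etc.) are hypothetical placeholders for the paper's diffusion-encoder teacher, face-recognizer teacher, and student network.

```python
# Toy sketch of two-stage generative-discriminative distillation.
# Real networks are replaced by linear maps; losses are simple feature MSE.
import numpy as np

rng = np.random.default_rng(0)

def feature_mse(student_feat, teacher_feat):
    """Feature-level distillation loss: mean squared error."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

# --- Stage 1: distill generative representations into the student backbone.
# The generative teacher (a diffusion model's encoder in the paper) provides
# target features; the student backbone sees the very low-resolution input.
W_backbone = rng.normal(size=(16, 8))        # student backbone (trainable)
lr_face = rng.normal(size=(16,))             # very low-res input features
gen_teacher_feat = rng.normal(size=(8,))     # generative teacher's features

stage1_init = feature_mse(lr_face @ W_backbone, gen_teacher_feat)
for _ in range(200):                         # plain gradient descent on MSE
    feat = lr_face @ W_backbone
    grad = np.outer(lr_face, 2.0 * (feat - gen_teacher_feat) / feat.size)
    W_backbone -= 0.01 * grad
stage1_loss = feature_mse(lr_face @ W_backbone, gen_teacher_feat)

# --- Stage 2: freeze the backbone, train only the student head under the
# discriminative teacher (a pretrained face recognizer in the paper).
frozen_backbone = W_backbone.copy()          # kept fixed from here on
W_head = rng.normal(size=(8, 4))             # student head (trainable)
disc_teacher_logits = rng.normal(size=(4,))  # discriminative teacher output
backbone_feat = lr_face @ frozen_backbone    # no gradient flows to backbone

stage2_init = feature_mse(backbone_feat @ W_head, disc_teacher_logits)
for _ in range(200):
    logits = backbone_feat @ W_head
    grad = np.outer(backbone_feat,
                    2.0 * (logits - disc_teacher_logits) / logits.size)
    W_head -= 0.01 * grad
stage2_loss = feature_mse(backbone_feat @ W_head, disc_teacher_logits)
```

Both distillation losses shrink over their respective stages, while the backbone weights stay untouched during stage 2, which is the freezing behavior the low-difficulty summary describes.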
Keywords
» Artificial intelligence » Diffusion model » Distillation » Encoder » Face recognition » Student model