
Model Compression Techniques in Biometrics Applications: A Survey

by Eduarda Caldeira, Pedro C. Neto, Marco Huber, Naser Damer, Ana F. Sequeira

First submitted to arXiv on: 18 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The paper's original abstract serves as the high difficulty summary; it is available on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper provides a comprehensive overview of model compression techniques for deep learning models in biometrics applications. The development of these models has brought significant performance improvements, but at the cost of increased complexity, making them ill-suited for resource-constrained devices. To address this, researchers have proposed various compression methods, including quantization, knowledge distillation, and pruning. The survey reviews the current literature on these techniques, weighing their advantages and disadvantages, and suggests future research directions aimed at improving model fairness. By analyzing the comparative value of these techniques, the authors highlight the need for a balanced approach that considers both performance and fairness when compressing models.
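Two of the compression families named above, pruning and quantization, can be illustrated in a few lines. The following is a minimal NumPy sketch (not taken from the paper; the sparsity level, bit width, and function names are illustrative) of magnitude-based weight pruning and uniform post-training quantization applied to a weight matrix:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def uniform_quantize(weights, bits=8):
    """Round weights to a uniform grid of 2**bits levels spanning their range."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (2 ** bits - 1)
    return np.round((weights - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.5)      # about half the entries become zero
quantized = uniform_quantize(w, bits=4)        # each weight snaps to one of 16 levels
```

In practice, pruned weights can be stored in sparse formats and quantized weights in low-bit integers, which is where the memory and latency savings on constrained devices come from.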
Low Difficulty Summary (original content by GrooveSquid.com)
This research looks at how we can make the computer models used in biometrics (like face recognition) smaller and more efficient without losing accuracy. These models have gotten very good at their tasks, but they’ve also become very complicated and take up a lot of memory and processing power. To fix this, researchers have come up with ways to shrink these models, like cutting out parts that aren’t important or making them do their calculations in a simpler way. This paper reviews the different methods people are using to do this and tries to figure out which ones work best. It also talks about how we need to make sure these compressed models don’t have biases built into them.
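A third technique the summaries mention, knowledge distillation, trains a small "student" model to imitate a large "teacher". The sketch below (plain NumPy; the temperature value and all names are illustrative, not the paper's formulation) shows the core of the standard distillation objective: a cross-entropy between temperature-softened teacher and student output distributions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher T produces softer distributions."""
    z = logits / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between softened teacher targets and student predictions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1).mean()
```

The loss is minimized when the student's softened distribution matches the teacher's, which is how the small model inherits the large model's behavior; in full training pipelines this term is typically combined with an ordinary cross-entropy on the ground-truth labels.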

Keywords

» Artificial intelligence  » Deep learning  » Face recognition  » Knowledge distillation  » Model compression  » Pruning  » Quantization