Summary of AUFormer: Vision Transformers Are Parameter-Efficient Facial Action Unit Detectors, by Kaishen Yuan et al.
AUFormer: Vision Transformers are Parameter-Efficient Facial Action Unit Detectors
by Kaishen Yuan, Zitong Yu, Xin Liu, Weicheng Xie, Huanjing Yue, Jingyu Yang
First submitted to arXiv on: 7 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper proposes a novel approach to Facial Action Unit (AU) detection, a crucial task in affective computing. Existing methods tend to overfit because they train a large number of learnable parameters on scarce AU-annotated datasets, or they rely heavily on substantial additional relevant data. To address this challenge, the authors introduce Parameter-Efficient Transfer Learning (PETL) to AU detection and propose a novel Mixture-of-Knowledge Expert (MoKE) collaboration mechanism. Each MoKE consists of multiple experts that integrate personalized multi-scale and correlation knowledge, and the MoKEs collaborate with one another to achieve parameter-efficient AU detection. Additionally, the authors design a Margin-truncated Difficulty-aware Weighted Asymmetric Loss (MDWA-Loss) that encourages the model to focus on activated AUs, differentiate unactivated AUs, and discard potentially mislabeled samples. The proposed approach achieves state-of-the-art performance across within-domain, cross-domain, data-efficiency, and micro-expression-domain experiments. |
| Low | GrooveSquid.com (original content) | The paper is about a new way to detect Facial Action Units (AUs), which help computers understand human emotions. Current methods have problems because they train too many learnable parameters on small datasets or rely on lots of extra data. The authors instead use an approach called Parameter-Efficient Transfer Learning (PETL). They also created a collaboration system called Mixture-of-Knowledge Expert (MoKE), made up of many experts that work together to make better decisions while using very few trainable parameters. The authors also designed a special loss function that helps the model focus on the important cases and ignore likely labeling mistakes. The new approach works really well in many different tests, including ones with very little training data and ones on new kinds of data. |
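The MDWA-Loss described in the summaries combines three ideas: an asymmetric focusing term for activated vs. unactivated AUs, per-AU difficulty weighting, and a probability margin that is truncated at zero so that likely mislabeled negatives contribute almost no loss. The sketch below is an illustrative reconstruction of that recipe, not the authors' implementation: the function name, the exact form of the weights, and the default hyperparameters (`gamma_pos`, `gamma_neg`, `margin`) are assumptions for demonstration.

```python
import numpy as np

def mdwa_loss_sketch(probs, labels, weights,
                     gamma_pos=0.0, gamma_neg=4.0, margin=0.05):
    """Illustrative sketch of a margin-truncated, difficulty-aware
    weighted asymmetric loss in the spirit of AUFormer's MDWA-Loss.

    probs:   predicted AU activation probabilities, shape (batch, num_AUs)
    labels:  binary ground-truth activations, same shape
    weights: per-AU difficulty weights (assumed here, e.g. larger for rarer AUs)
    """
    probs = np.clip(probs, 1e-7, 1.0 - 1e-7)
    # Positive (activated-AU) term: standard focal-style focusing.
    pos = labels * (1.0 - probs) ** gamma_pos * np.log(probs)
    # Negative term: shift the probability down by a margin and truncate
    # at ~0, so confident negatives (possibly mislabeled) are discarded.
    p_m = np.clip(probs - margin, 1e-7, 1.0 - 1e-7)
    neg = (1.0 - labels) * p_m ** gamma_neg * np.log(1.0 - p_m)
    # Difficulty-aware per-AU weighting, averaged over batch and AUs.
    return float(-np.mean(weights * (pos + neg)))
```

With `gamma_neg > gamma_pos`, easy unactivated AUs are strongly down-weighted while activated AUs keep near-full gradient, which matches the summary's "focus on activated AUs, differentiate unactivated AUs" behavior.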
Keywords
» Artificial intelligence » Loss function » Overfitting » Parameter efficient » Transfer learning