Summary of Achieve Fairness without Demographics for Dermatological Disease Diagnosis, by Ching-Hao Chiu et al.
Achieve Fairness without Demographics for Dermatological Disease Diagnosis
by Ching-Hao Chiu, Yu-Jen Chen, Yawen Wu, Yiyu Shi, Tsung-Yi Ho
First submitted to arXiv on: 16 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes a method for ensuring fair predictions in medical image diagnosis, particularly for dermatological disease images, by addressing prediction biases with respect to demographic groups such as gender, age, and race. Recent research mitigates unfairness by using demographic information during training, but such approaches are tied to the specific sensitive attributes seen in training and may not generalize to others. To overcome this limitation, the proposed method enhances model features by capturing the relationships between sensitive and target attributes while regularizing feature entanglement between the corresponding classes, so that the model classifies using only target-attribute-related features, improving both fairness and accuracy. |
Low | GrooveSquid.com (original content) | The paper is about making sure AI models used in medicine are fair and don't make mistakes because of someone's age, gender, or race. Right now, models can be unfair when they pick up on attributes that have nothing to do with the disease being diagnosed, like a patient's eye color. The authors suggest a new way to train models so they work fairly for any attribute, without needing the sensitive information in advance. They tested the method on two sets of skin disease images and showed it improves fairness compared to other methods. |
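The medium-difficulty summary mentions "regularizing feature entanglement between corresponding classes." The paper's exact loss is not given here, so the following is only a minimal illustrative sketch of one common way to discourage entanglement: penalizing the cosine similarity between the mean feature vectors of different target classes. The function name, the use of class-mean features, and the cosine-similarity penalty are all assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def entanglement_penalty(features, labels):
    """Illustrative stand-in for a feature-entanglement regularizer.

    Penalizes cosine similarity between the mean feature vectors of
    different target classes, pushing class-specific features apart.
    (Assumed formulation; the paper's actual loss may differ.)

    features: (n_samples, d) array of model features
    labels:   (n_samples,) integer target-class labels
    """
    classes = np.unique(labels)
    # Mean feature vector per class, L2-normalized.
    means = []
    for c in classes:
        m = features[labels == c].mean(axis=0)
        means.append(m / (np.linalg.norm(m) + 1e-8))
    means = np.stack(means)
    # Pairwise cosine similarities between distinct classes.
    sim = means @ means.T
    off_diag = sim[~np.eye(len(classes), dtype=bool)]
    # Penalty grows as class features overlap (become "entangled").
    return float(np.mean(np.abs(off_diag)))
```

In training, a term like this would be added to the classification loss so the model is nudged toward features that separate target classes rather than tracking incidental (e.g., demographic) attributes.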