The Impact of Generalization Techniques on the Interplay Among Privacy, Utility, and Fairness in Image Classification

by Ahmad Hassanpour, Amir Zarei, Khawla Mallat, Anderson Santana de Oliveira, Bian Yang

First submitted to arXiv on: 16 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This study explores the delicate balance among fairness, privacy, and utility in image classification with machine learning. It investigates how techniques such as sharpness-aware training (SAT) and its differentially private counterpart (DP-SAT) can improve this balance. The research also examines fairness in both private and non-private models trained on datasets with synthetic and real-world biases. To measure privacy risk, membership inference attacks (MIAs) are performed, and the consequences of eliminating high-privacy-risk samples (outliers) are explored. Finally, a new metric called the harmonic score is introduced, combining accuracy, privacy, and fairness into a single measure.
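The abstract does not give the harmonic score's exact formula, but the name suggests a harmonic mean of the three components, each normalized to [0, 1]. Below is a minimal Python sketch under that assumption; the function name, the zero-handling, and the example numbers are illustrative, not taken from the paper.

```python
def harmonic_score(accuracy: float, privacy: float, fairness: float) -> float:
    """Combine three scores in [0, 1] into one number via the harmonic mean.

    The harmonic mean is dominated by the weakest component, so a model
    cannot score well on accuracy alone while failing privacy or fairness.
    """
    scores = [accuracy, privacy, fairness]
    if any(s <= 0.0 for s in scores):
        return 0.0  # harmonic mean collapses when any component is zero
    return len(scores) / sum(1.0 / s for s in scores)

# A balanced model beats an unbalanced one with the same arithmetic average:
print(harmonic_score(0.75, 0.75, 0.75))  # 0.75
print(harmonic_score(0.95, 0.90, 0.40))  # ~0.64, dragged down by fairness
```

The harmonic mean's sensitivity to the weakest score makes it a natural fit for a metric meant to discourage trading fairness or privacy for raw accuracy.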
Low Difficulty Summary (original content by GrooveSquid.com)
This study looks at how machine learning can be fair, private, and useful for image classification. It tries to balance these goals by using special techniques like sharpness-aware training (SAT) and differential privacy (DP-SAT). The research also checks whether models are fair when trained on datasets with real-world biases. To see whether the models are safe, it runs attacks called membership inference attacks (MIAs). The study also explores what happens when you remove samples that are most at risk of being identified. Finally, it comes up with a new way to measure how well a model does by combining accuracy, privacy, and fairness into one score.
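To make the attack idea concrete, here is a minimal sketch of the classic loss-threshold MIA (in the style of Yeom et al., 2018): guess that a sample was a training member when the model's loss on it falls below a threshold. The paper may use stronger attacks; the loss values below are synthetic stand-ins, not results from the paper.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Predict 'training member' when a sample's loss is below the threshold.

    Intuition: models fit their training data more tightly, so member
    samples tend to have lower loss than unseen (non-member) samples.
    Returns the attack advantage: true positive rate minus false positive rate.
    """
    tpr = np.mean(np.asarray(member_losses) < threshold)     # members correctly flagged
    fpr = np.mean(np.asarray(nonmember_losses) < threshold)  # non-members wrongly flagged
    return tpr - fpr  # 0 = random guessing, 1 = perfect attack

# Toy demonstration with synthetic losses:
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # lower loss on training data
nonmember_losses = rng.exponential(scale=0.6, size=1000)  # higher loss on held-out data
print(loss_threshold_mia(member_losses, nonmember_losses, threshold=0.3))  # roughly 0.38
```

Samples on which such attacks succeed most reliably are the high-privacy-risk outliers whose removal the study examines.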

Keywords

» Artificial intelligence  » Image classification  » Inference  » Machine learning