


Double-Dip: Thwarting Label-Only Membership Inference Attacks with Transfer Learning and Randomization

by Arezoo Rajabi, Reeya Pimple, Aiswarya Janardhanan, Surudhi Asokraj, Bhaskar Ramasubramanian, Radha Poovendran

First submitted to arXiv on 2 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)
Double-Dip is a systematic empirical study of combining transfer learning (TL) with randomization to thwart membership inference attacks (MIAs) on overfitted DNNs without degrading classification accuracy. The study examines the roles of the feature space and parameter values shared between source and target models, the number of frozen layers, and the complexity of the pretrained model. Double-Dip is evaluated on three dataset pairs: (CIFAR-10, ImageNet), (GTSRB, ImageNet), and (CelebA, VGGFace2), using four publicly available pretrained DNNs: VGG-19, ResNet-18, Swin-T, and FaceNet. Against adversaries with white-box or black-box model access carrying out state-of-the-art (SOTA) label-only MIAs, the experiments show that Stage-1 (transfer learning) reduces adversary success while increasing the classification accuracy of non-members. After Stage-2 (randomization), the adversary's success is further reduced to near 50%, close to a random guess, demonstrating the effectiveness of Double-Dip.
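The role of frozen layers in the transfer-learning stage can be illustrated with a toy sketch (this is not the paper's code; the list-of-scalars "model" and the `fine_tune_step` helper are hypothetical): during fine-tuning of a pretrained model, parameter updates are simply skipped for the first `num_frozen` layers, so those layers retain their pretrained values.

```python
def fine_tune_step(weights, grads, lr, num_frozen):
    """One SGD-style update that leaves the first `num_frozen` layers untouched.

    `weights` is a toy stand-in for the per-layer parameters of a
    pretrained model; each scalar represents one layer.
    """
    return [
        w if i < num_frozen else w - lr * g
        for i, (w, g) in enumerate(zip(weights, grads))
    ]

# Pretend each scalar is one layer of a pretrained network.
pretrained = [1.0, 2.0, 3.0, 4.0]
grads = [0.5, 0.5, 0.5, 0.5]

updated = fine_tune_step(pretrained, grads, lr=0.1, num_frozen=2)
print(updated)  # the two frozen layers are unchanged; the last two move
```

Varying `num_frozen` is one of the knobs the study examines: freezing more layers keeps the target model closer to the pretrained source model.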
Low Difficulty Summary (GrooveSquid.com, original content)
Double-Dip is a new way to protect deep neural networks (DNNs) from attack. The study shows that combining transfer learning with randomization makes DNNs more secure without hurting their ability to classify images correctly. This matters because DNNs are currently vulnerable to attacks that can leak private information about their training data.
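As a minimal sketch of why driving attack success toward 50% matters, the simulation below (with made-up accuracy numbers, not results from the paper) implements the standard label-only "gap" attack: guess that a sample is a training-set member if and only if the model classifies it correctly. Shrinking the member/non-member accuracy gap, as Double-Dip aims to do, pushes the attacker toward a coin flip.

```python
import random

random.seed(0)

def gap_attack_accuracy(p_member_correct, p_nonmember_correct, n=100_000):
    """Accuracy of the label-only 'gap' MIA: guess 'member' iff the model
    classifies the queried sample correctly.

    The two probabilities are a toy model of the classifier's accuracy on
    training members vs. unseen non-members (hypothetical values).
    """
    correct_guesses = 0
    for _ in range(n):
        is_member = random.random() < 0.5          # balanced member/non-member pool
        p_correct = p_member_correct if is_member else p_nonmember_correct
        guessed_member = random.random() < p_correct  # attacker guesses from the label
        correct_guesses += guessed_member == is_member
    return correct_guesses / n

# Overfitted model: large member/non-member gap, so the attack beats chance.
print(gap_attack_accuracy(0.99, 0.70))
# Defended model: gap nearly closed, so the attack is near a 50% coin flip.
print(gap_attack_accuracy(0.90, 0.88))
```

The first call lands well above 0.5 while the second sits near 0.5, mirroring the qualitative effect the summary describes: once members and non-members are classified about equally well, the label alone carries almost no membership signal.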

Keywords

  • Artificial intelligence
  • Classification
  • Inference
  • ResNet
  • Transfer learning