
Exploring User-level Gradient Inversion with a Diffusion Prior

by Zhuohang Li, Andrew Lowy, Jing Liu, Toshiaki Koike-Akino, Bradley Malin, Kieran Parsons, Ye Wang

First submitted to arXiv on: 11 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
We investigate a new attack vector in distributed learning called user-level gradient inversion, which allows an attacker to infer private information about a user beyond reconstructing individual training samples. Existing attacks fail to achieve good reconstruction quality, particularly at large batch sizes and high image resolutions. To address this limitation, we propose a novel approach that uses a denoising diffusion model as a strong image prior to enhance recovery (an illustrative sketch of this idea follows these summaries). Unlike traditional attacks, our method focuses on recovering representative images that capture the sensitive, shared semantic information of the underlying user. Our experiments with face images demonstrate that our method can recover realistic facial images along with private user attributes.

Low Difficulty Summary (original content by GrooveSquid.com)
Imagine you’re trying to steal someone’s secret identity from a computer program. Attacks like this usually try to reconstruct individual pieces of training data, but they often fail when many people are involved or the images are large. We came up with a new kind of attack that instead looks for the patterns that make each person unique. Our method uses a special kind of math tool, one that is good at producing realistic images, to make educated guesses about someone’s identity. In our tests, we were able to guess facial features and private information.
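
The attack described in the medium-difficulty summary can be pictured as gradient matching regularized by a generative image prior: the attacker optimizes a candidate image so that the gradients it induces match the gradients observed from the user, while a pretrained diffusion model keeps the image realistic. The PyTorch snippet below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the `denoiser` argument, the simple denoising-based prior term, and all names, shapes, and hyperparameters are assumptions made for this sketch.

```python
# Hypothetical sketch of gradient inversion regularized by a diffusion-style prior.
# NOT the paper's implementation: `denoiser`, the prior term, and all shapes and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F


def observed_user_gradient(model, images, labels):
    """Gradient the server would observe from a user's local update."""
    loss = F.cross_entropy(model(images), labels)
    return [g.detach() for g in torch.autograd.grad(loss, list(model.parameters()))]


def gradient_matching_loss(model, dummy_images, dummy_labels, target_grads):
    """Squared distance between gradients induced by the dummy image and the observed ones."""
    loss = F.cross_entropy(model(dummy_images), dummy_labels)
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    return sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))


def invert_with_prior(model, denoiser, target_grads, dummy_labels,
                      steps=500, lr=0.1, prior_weight=0.01,
                      image_shape=(1, 3, 64, 64)):
    """Optimize a representative image that reproduces the observed gradients while
    staying close to what a pretrained denoising model considers a natural image.
    dummy_labels are assumed known or inferred in advance for this sketch."""
    x = torch.randn(image_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        match = gradient_matching_loss(model, x, dummy_labels, target_grads)
        # Simplified prior: pull x toward the denoiser's clean estimate, a crude
        # stand-in for guidance from a denoising diffusion model.
        with torch.no_grad():
            x_clean = denoiser(x)
        prior = ((x - x_clean) ** 2).mean()
        (match + prior_weight * prior).backward()
        opt.step()
    return x.detach()
```

In a usage scenario, `target_grads` would come from `observed_user_gradient` applied to the user's real batch (or from an intercepted model update), and the optimized image is read as a representative of the user's shared visual semantics rather than as an exact copy of any single training sample.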

Keywords

  • Artificial intelligence
  • Diffusion model