Privacy-preserving Universal Adversarial Defense for Black-box Models

by Qiao Li, Cong Wu, Jing Chen, Zijun Zhang, Kun He, Ruiying Du, Xinxin Wang, Qingchuang Zhao, Yang Liu

First submitted to arXiv on: 20 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
DUCD is a novel universal black-box defense that hardens deep neural networks (DNNs) against a wide range of adversarial attacks in critical applications such as identity authentication and autonomous driving. Unlike traditional defenses, DUCD requires no access to the target model’s parameters or architecture, making it a privacy-preserving approach. It first distills the target model into a white-box surrogate by querying it with data, without exposing the target’s internals or training data. The surrogate is then hardened with certified defense techniques based on randomized smoothing and optimized noise selection, yielding robust protection against a broad range of adversarial attacks. Comparative evaluations show that DUCD outperforms existing black-box defenses, matches the accuracy of white-box defenses, preserves data privacy, and lowers the success rate of membership inference attacks.
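
For readers who want to see the two stages in code, below is a minimal sketch under stated assumptions: the target model is reachable only through a probability-returning query function, PyTorch is the framework, and every name (query_target, distill_surrogate, smoothed_predict) is hypothetical rather than taken from the paper.

```python
# Minimal sketch of the two DUCD stages summarized above. All names and
# hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def distill_surrogate(query_target, surrogate, loader, epochs=10, lr=1e-3):
    """Stage 1: train a white-box surrogate from black-box query responses.

    query_target: a callable returning the target model's output
    probabilities for a batch of inputs -- the only access assumed.
    """
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    surrogate.train()
    for _ in range(epochs):
        for x, _ in loader:  # ground-truth labels are never used
            with torch.no_grad():
                teacher_probs = query_target(x)  # black-box query
            student_log_probs = F.log_softmax(surrogate(x), dim=1)
            loss = F.kl_div(student_log_probs, teacher_probs,
                            reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate


def smoothed_predict(model, x, num_classes, sigma=0.25, n=100):
    """Stage 2: randomized-smoothing prediction on the surrogate.

    Classifies a single input x (shape [1, ...]) by majority vote over
    n Gaussian-noise copies; sigma is the noise level the paper
    optimizes, fixed here for illustration.
    """
    model.eval()
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n):
            noisy = x + sigma * torch.randn_like(x)
            counts[model(noisy).argmax(dim=1)] += 1
    return int(counts.argmax())
```

In standard randomized smoothing, the same vote statistics also yield a certified robustness radius; the sketch shows only the prediction step, not the certification or the paper’s noise-selection procedure.
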
Low Difficulty Summary (original content by GrooveSquid.com)
DUCD is a new way to keep deep neural networks (DNNs) safe from attackers. It helps protect important systems like identity authentication and self-driving cars from being fooled by small changes to images or data. Most existing defenses need to know how the model works inside, but that’s not always possible or desirable. DUCD is different: it never needs the target model’s internal details. Instead, it builds a copy of the model just by asking it questions, and then makes that copy stronger by adding random noise, so it can stand up to many kinds of attacks.

Keywords

» Artificial intelligence  » Inference