Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment

by Jie Zhu, Leye Wang, Xiao Han, Anmin Liu, Tao Xie

First submitted to arXiv on 2 Jan 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Cryptography and Security (cs.CR); Software Engineering (cs.SE)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty: the medium- and low-difficulty versions are original summaries by GrooveSquid.com, while the high-difficulty version is the paper’s own abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel approach to compressing the deep learning models inside AI software while jointly optimizing for safety and performance. The authors introduce SafeCompress, a test-driven sparse-training framework that automatically compresses a large model into a smaller one via dynamic sparse training. They instantiate the framework three times: BMIA-SafeCompress and WMIA-SafeCompress defend against black-box and white-box membership inference attacks, respectively, while MMIA-SafeCompress defends against both at once. Extensive experiments on five datasets spanning computer vision and natural language processing tasks demonstrate the framework’s effectiveness and generalizability.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper tackles a big problem in AI software: how to make deep learning models small enough to run on devices with limited resources, like smartphones. But it’s not just about making them smaller; the authors also want the compressed models to stay safe from attacks. They create a special training method called SafeCompress that uses a “test-driven development” approach, much like how software developers test their code before releasing it. The authors show that this method works well on five different datasets and can even defend against multiple types of attacks at once.

Keywords

» Artificial intelligence  » Deep learning  » Inference  » Natural language processing  » Optimization