Summary of Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment, by Jie Zhu et al.


Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment

by Jie Zhu, Leye Wang, Xiao Han

First submitted to arxiv on: 11 Aug 2022

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Software Engineering (cs.SE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper tackles the pressing issue of rapidly growing deep learning model sizes in AI software, which hinder large-scale deployment on resource-restricted devices. To address this challenge, AI software compression plays a vital role, aiming to shrink models while maintaining high performance. However, compressed models may inherit defects from their larger counterparts, which attackers can exploit once the models are deployed without adequate protection. This paper proposes a test-driven sparse training framework called SafeCompress, inspired by the test-driven development (TDD) paradigm in software engineering, to address the safe model compression problem. By simulating attack mechanisms as safety tests, SafeCompress automatically compresses big models into small ones using dynamic sparse training. Additionally, this paper develops a concrete safe model compression mechanism against membership inference attacks (MIA), called MIA-SafeCompress. The results of extensive experiments on five datasets across computer vision and natural language processing tasks verify the effectiveness and generalization of SafeCompress.
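The test-driven loop described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: `magnitude_prune`, the utility proxy, and the placeholder safety predicate are all assumptions standing in for real model training, task accuracy, and a simulated membership inference attack.

```python
import random

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(len(weights) * sparsity)
    smallest = sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k]
    pruned = list(weights)
    for i in smallest:
        pruned[i] = 0.0
    return pruned

def utility(weights):
    """Placeholder performance metric: total retained weight magnitude.
    A real system would measure task accuracy on a validation set."""
    return sum(abs(w) for w in weights)

def mia_safety_test(weights, capacity_limit):
    """Placeholder safety test. A real implementation would simulate a
    membership inference attack against the candidate model; here we
    cap the nonzero-weight count as a crude proxy for memorization."""
    return sum(1 for w in weights if w != 0.0) <= capacity_limit

def safe_compress(weights, sparsity=0.8, rounds=10, seed=0):
    """Test-driven compression loop: propose a sparse candidate each
    round, and keep it only if it passes the safety test AND improves
    the performance proxy (the bi-objective criterion)."""
    rng = random.Random(seed)
    n = len(weights)
    limit = n - int(n * sparsity)  # nonzeros allowed at target sparsity
    best = None
    for _ in range(rounds):
        # Dynamic-sparse-style proposal: jitter weights, then re-prune,
        # so different sparse connectivity patterns get explored.
        jittered = [w + rng.gauss(0, 0.01) for w in weights]
        cand = magnitude_prune(jittered, sparsity)
        if mia_safety_test(cand, limit) and (
            best is None or utility(cand) > utility(best)
        ):
            best = cand
    return best
```

The key structural idea carried over from the summary is that the attack simulation acts like a unit test gating each candidate, so safety and performance are optimized together rather than safety being checked only after compression finishes.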
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is all about making AI models smaller and more secure! Right now, deep learning models are getting really big and hard to use on devices like smartphones. To fix this problem, researchers want to shrink these models without losing their ability to work well. But they also need to make sure that the shrunken models can’t be easily hacked or exploited by bad guys. To do this, they created a new way of training models called SafeCompress, which is inspired by a method used in software development. This new approach simulates different kinds of attacks on the model and makes sure it’s protected against those attacks. The researchers tested their method on five different datasets and showed that it works well for both computer vision and natural language processing tasks.

Keywords

» Artificial intelligence  » Deep learning  » Generalization  » Inference  » Model compression  » Natural language processing