
Summary of AdCorDA: Classifier Refinement via Adversarial Correction and Domain Adaptation, by Lulan Shen et al.


AdCorDA: Classifier Refinement via Adversarial Correction and Domain Adaptation

by Lulan Shen, Ali Edalati, Brett Meyer, Warren Gross, James J. Clark

First submitted to arXiv on: 24 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com; original content)
The proposed AdCorDA method refines a pretrained classifier network by modifying the training set, exploiting the duality between network weights and layer inputs. This input-space training proceeds in two stages: adversarial correction, in which adversarial attacks are used to correct the inputs the network misclassifies, followed by domain adaptation from the corrected set back to the original training set. Experimental validation on the CIFAR-100 dataset shows accuracy boosts of over 5%. The technique also applies to weight-quantized neural networks, yielding substantial performance gains and improved robustness against adversarial attacks.
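The two stages described above can be sketched in code. The following is an illustrative toy example, not the authors' implementation: a small softmax classifier on synthetic data stands in for the pretrained network, a targeted FGSM-style gradient step on the inputs stands in for the adversarial-correction attack, and continued training on the original data approximates the domain-adaptation stage.

```python
# Hypothetical sketch of the AdCorDA idea on a toy problem (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, W, b, lr=0.5, steps=100):
    """Plain gradient descent on softmax cross-entropy (stands in for fine-tuning)."""
    n, k = len(X), W.shape[1]
    Y = np.eye(k)[y]
    for _ in range(steps):
        P = softmax(X @ W + b)
        G = (P - Y) / n                      # gradient w.r.t. logits
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

def adversarial_correction(X, y, W, b, eps=0.5, steps=20):
    """Stage 1: nudge each misclassified *input* toward its true class
    with small targeted, FGSM-like gradient steps (an assumption here;
    the paper uses stronger adversarial attacks)."""
    Xc, k = X.copy(), W.shape[1]
    for _ in range(steps):
        P = softmax(Xc @ W + b)
        wrong = P.argmax(axis=1) != y
        if not wrong.any():
            break
        G = (P - np.eye(k)[y]) @ W.T         # grad of cross-entropy w.r.t. input
        Xc[wrong] -= (eps / steps) * np.sign(G[wrong])
    return Xc

# Synthetic two-class data; a weakly trained classifier plays "pretrained model".
X = rng.normal(size=(400, 2)) + np.repeat([[2.0, 0.0], [-2.0, 0.0]], 200, axis=0)
y = np.repeat([0, 1], 200)
W, b = rng.normal(scale=0.1, size=(2, 2)), np.zeros(2)
W, b = train(X, y, W, b, steps=50)
acc_before = (softmax(X @ W + b).argmax(1) == y).mean()

Xc = adversarial_correction(X, y, W, b)      # stage 1: correct misclassified inputs
W, b = train(Xc, y, W, b, steps=100)         # refine on the corrected training set
W, b = train(X, y, W, b, lr=0.1, steps=50)   # stage 2: adapt back to original data
acc_after = (softmax(X @ W + b).argmax(1) == y).mean()
print(acc_before, acc_after)
```

The key design point mirrored here is that correction happens in input space rather than weight space: the attack moves each misclassified sample just far enough to be classified correctly, and the final pass back over the original data keeps the refined network anchored to the true training distribution.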
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper presents a new way to improve a trained AI model. The method, called AdCorDA, takes an existing trained model and makes it better by changing the data it was trained on. This happens in two steps: first, incorrect predictions are fixed using special attacks that try to trick the model, and then the model is fine-tuned again using the corrected data. Tested on a popular dataset, the process showed a big improvement in accuracy of over 5%. The method can also improve models that store their weights in a compact form, which matters for making AI more efficient.

Keywords

  • Artificial intelligence
  • Domain adaptation