Towards Understanding Dual BN In Hybrid Adversarial Training

by Chenshuang Zhang, Chaoning Zhang, Kang Zhang, Axi Niu, Junmo Kim, In So Kweon

First submitted to arXiv on: 28 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates the use of batch normalization (BN) in adversarial training (AT), particularly when models are trained on both adversarial and clean samples (Hybrid-AT). The authors challenge the common belief that dual BN, where one BN branch handles adversarial samples and a separate BN branch handles clean samples, improves robustness because it disentangles the statistics of the two sample types. Instead, they find that disentangling the affine parameters plays a more significant role than disentangling the statistics during training. Building on this finding, which aligns with prior work, they further investigate Hybrid-AT and propose a two-task hypothesis as an empirical foundation for improving it. Finally, the authors examine dual BN at test time and show that the affine parameters characterize robustness during inference.
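To make the dual-BN idea concrete, below is a minimal PyTorch sketch, not code from the paper. The first module is standard dual BN, with fully separate clean and adversarial branches; the second is a variant in the spirit of the paper's finding, sharing normalization statistics across branches while keeping per-branch affine parameters. The class names (DualBN2d, SharedStatsDualAffineBN2d) and the adversarial routing flag are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DualBN2d(nn.Module):
    """Standard dual BN: separate statistics AND separate affine
    parameters for the clean and adversarial branches (sketch)."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)  # clean-branch BN
        self.bn_adv = nn.BatchNorm2d(num_features)    # adversarial-branch BN

    def forward(self, x: torch.Tensor, adversarial: bool = False) -> torch.Tensor:
        # Route the mini-batch through the branch matching its sample type.
        return self.bn_adv(x) if adversarial else self.bn_clean(x)


class SharedStatsDualAffineBN2d(nn.Module):
    """Hypothetical variant in the spirit of the paper's finding:
    share normalization statistics, learn per-branch affine parameters."""

    def __init__(self, num_features: int):
        super().__init__()
        # Shared statistics: affine=False disables BN's own scale/shift.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma_clean = nn.Parameter(torch.ones(num_features))
        self.beta_clean = nn.Parameter(torch.zeros(num_features))
        self.gamma_adv = nn.Parameter(torch.ones(num_features))
        self.beta_adv = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor, adversarial: bool = False) -> torch.Tensor:
        x = self.bn(x)  # normalize with the shared statistics
        gamma = self.gamma_adv if adversarial else self.gamma_clean
        beta = self.beta_adv if adversarial else self.beta_clean
        # Apply the branch-specific affine transform channel-wise (NCHW).
        return x * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1)


if __name__ == "__main__":
    layer = DualBN2d(16)
    x_clean = torch.randn(8, 16, 32, 32)
    x_adv = torch.randn(8, 16, 32, 32)  # stand-in for adversarial examples
    print(layer(x_clean, adversarial=False).shape)  # torch.Size([8, 16, 32, 32])
    print(layer(x_adv, adversarial=True).shape)
```

The second module isolates the paper's point: if robustness gains persist when only the affine parameters (gamma, beta) are branch-specific, then disentangled affine parameters, not disentangled statistics, are doing the work.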
Low Difficulty Summary (original content by GrooveSquid.com)
This study looks at how batch normalization (BN) behaves when models are trained on both attacked (adversarial) and normal (clean) data, a setup called Hybrid-AT. People thought that using two separate BN layers, one for each kind of data, would make models more robust. But the researchers found that what matters most is not keeping the data statistics separate, but keeping the small learned scaling parameters separate. This fits with what other scientists have discovered before. The team also came up with a new idea about how Hybrid-AT works and tested whether it makes models more robust.

Keywords

* Artificial intelligence
* Batch normalization
* Inference