
Summary of TTAQ: Towards Stable Post-training Quantization in Continuous Domain Adaptation, by Junrui Xiao et al.


TTAQ: Towards Stable Post-training Quantization in Continuous Domain Adaptation

by Junrui Xiao, Zhikai Li, Lianwei Yang, Yiduo Mei, Qingyi Gu

First submitted to arXiv on: 13 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes TTAQ, a novel post-training quantization (PTQ) method that tackles the performance degradation traditional PTQ suffers in dynamically evolving test domains. TTAQ uses Perturbation Error Mitigation (PEM) and Perturbation Consistency Reconstruction (PCR) to reduce the impact of input perturbations and to ensure stable predictions for the same samples. In addition, an Adaptive Balanced Loss (ABL) adjusts logits based on class frequency and complexity, effectively addressing class imbalance. Extensive experiments on multiple datasets with generic test-time adaptation (TTA) methods show that TTAQ outperforms existing baselines and improves the accuracy of low-bit PTQ models in continually changing test domains.
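To make the PCR and ABL ideas more concrete, the sketch below shows what such components could look like in PyTorch. This is a minimal illustration, not the paper’s implementation: the Gaussian input perturbation, the KL-based consistency term, the temperature tau, and the log-prior logit adjustment are all assumptions standing in for the paper’s exact formulations.

    import torch
    import torch.nn.functional as F

    def perturbation_consistency_loss(model, x, noise_std=0.01):
        # Illustrative stand-in for PCR: penalize the divergence between
        # predictions on a sample and on a lightly perturbed copy, so the
        # quantized model's outputs stay stable under small input shifts.
        log_p_clean = F.log_softmax(model(x), dim=-1)
        p_perturbed = F.softmax(model(x + noise_std * torch.randn_like(x)), dim=-1)
        return F.kl_div(log_p_clean, p_perturbed, reduction="batchmean")

    def adaptive_balanced_loss(logits, targets, class_counts, tau=1.0):
        # Illustrative stand-in for ABL, using the standard logit-adjustment
        # trick for class imbalance: adding tau * log(prior) to the logits
        # during training forces the model to overcome the head classes'
        # prior boost, enlarging the effective margin for rare classes.
        # The tau parameter and the log-prior form are assumptions, not the
        # paper's definition of ABL.
        counts = class_counts.float()
        prior = counts / counts.sum()
        adjusted = logits + tau * torch.log(prior + 1e-12)
        return F.cross_entropy(adjusted, targets)

In a continual-adaptation loop, class_counts would plausibly be running counts of the labels seen so far, and losses like these would be combined with the reconstruction objective used during PTQ calibration.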
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper finds a way to make AI models work better in real-world situations where the data is constantly changing. The authors created a new method called TTAQ that reduces errors caused by unexpected changes in the data. It uses techniques called PEM and PCR to keep the model’s predictions stable, and ABL to balance out classes that are more or less common. The results show that this new approach outperforms previous methods and makes AI models more reliable.

Keywords

» Artificial intelligence  » Logits  » Quantization