Summary of GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation, by Andrey V. Galichin et al.
GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation, by Andrey V. Galichin, Mikhail Pautov, Alexey Zhavoronkin, …
MH-pFLID: Model Heterogeneous Personalized Federated Learning via Injection and Distillation for Medical Data Analysis, by Luyuan…
Distilling Diffusion Models into Conditional GANs, by Minguk Kang, Richard Zhang, Connelly Barnes, Sylvain Paris, Suha…
Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving, by Lingdong Kong, Xiang Xu, Jiawei Ren, Wenwei…
A Generalization Theory of Cross-Modality Distillation with Contrastive Learning, by Hangyu Lin, Chen Liu, Chengming Xu, …
Contrastive Dual-Interaction Graph Neural Network for Molecular Property Prediction, by Zexing Zhao, Guangsi Shi, Xiaopeng Wu, …
Practical Dataset Distillation Based on Deep Support Vectors, by Hyunho Lee, Junhoo Lee, Nojun Kwak. First submitted…
On Improving the Algorithm-, Model-, and Data-Efficiency of Self-Supervised Learning, by Yun-Hao Cao, Jianxin Wu. First…
Let's Focus: Focused Backdoor Attack against Federated Transfer Learning, by Marco Arazzi, Stefanos Koffas, Antonino Nocera, …
Noisy Node Classification by Bi-level Optimization based Multi-teacher Distillation, by Yujing Liu, Zongqian Wu, Zhengyu Lu, …