Summary of Revisiting Machine Unlearning with Dimensional Alignment, by Seonguk Seo et al.
Revisiting Machine Unlearning with Dimensional Alignment, by Seonguk Seo, Dongwan Kim, Bohyung Han. First submitted to arxiv…