Summary of Small But Funny: A Feedback-Driven Approach to Humor Distillation, by Sahithya Ravi et al.
Small But Funny: A Feedback-Driven Approach to Humor Distillation, by Sahithya Ravi, Patrick Huber, Akshat Shrivastava,…