Summary of A Probabilistic Perspective on Unlearning and Alignment for Large Language Models, by Yan Scholten et al.
A Probabilistic Perspective on Unlearning and Alignment for Large Language Models, by Yan Scholten, Stephan Günnemann, …