Summary of Duo-LLM: A Framework for Studying Adaptive Computation in Large Language Models, by Keivan Alizadeh et al.
Duo-LLM: A Framework for Studying Adaptive Computation in Large Language Models by Keivan Alizadeh, Iman Mirzadeh,…
FreqMark: Frequency-Based Watermark for Sentence-Level Detection of LLM-Generated Text by Zhenyu Xu, Kun Zhang, Victor S.…
When Attention Sink Emerges in Language Models: An Empirical View by Xiangming Gu, Tianyu Pang, Chao…
Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts by Xu Liu, Juncheng Liu,…
‘Quis custodiet ipsos custodes?’ Who will watch the watchmen? On Detecting AI-generated peer-reviews by Sandeep Kumar,…
Nudging: Inference-time Alignment via Model Collaboration by Yu Fei, Yasaman Razeghi, Sameer Singh. First submitted to arXiv…
Exact Byte-Level Probabilities from Tokenized Language Models for FIM-Tasks and Model Ensembles by Buu Phan, Brandon…
Semantic Token Reweighting for Interpretable and Controllable Text Embeddings in CLIP by Eunji Kim, Kyuhong Shim,…
ElasticTok: Adaptive Tokenization for Image and Video by Wilson Yan, Volodymyr Mnih, Aleksandra Faust, Matei Zaharia,…
A Closer Look at Machine Unlearning for Large Language Models by Xiaojian Yuan, Tianyu Pang, Chao…