Summary of Efficiently Distilling LLMs for Edge Applications, by Achintya Kundu et al.
Efficiently Distilling LLMs for Edge Applications by Achintya Kundu, Fabian Lim, Aaron Chew, Laura Wynter, Penny…