Summary of Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark, by Yihua Zhang et al.
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark, by Yihua Zhang, Pingzhi Li, Junyuan Hong, …
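As context for the title above (and not code from the benchmark itself), the core memory-saving idea in zeroth-order LLM fine-tuning is to replace backpropagation with a two-forward-pass, SPSA-style gradient estimate along a random perturbation direction. The sketch below illustrates that estimator on a toy quadratic loss; the function names, step size, and loss are illustrative assumptions, not the paper's setup.

import numpy as np

def zo_gradient_estimate(loss_fn, params, mu=1e-3, seed=0):
    # SPSA-style two-point zeroth-order gradient estimate:
    # perturb all parameters along one random direction z and use
    # (L(theta + mu*z) - L(theta - mu*z)) / (2*mu) * z as the gradient,
    # so only forward passes (loss evaluations) are needed, no backprop.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)   # shared random direction
    loss_plus = loss_fn(params + mu * z)    # forward pass 1
    loss_minus = loss_fn(params - mu * z)   # forward pass 2
    projected_grad = (loss_plus - loss_minus) / (2 * mu)
    return projected_grad * z               # rank-1 gradient estimate

# Toy usage: ZO-SGD on a quadratic stand-in for a fine-tuning loss.
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: float(np.sum((w - target) ** 2))
w = np.zeros(3)
for step in range(500):
    g = zo_gradient_estimate(loss, w, seed=step)  # fresh direction per step
    w -= 0.05 * g
print("final params:", w, "loss:", loss(w))

Because the perturbation direction can be regenerated from its seed, an implementation of this style needs to keep only a few scalars per step rather than activations and optimizer states, which is where the memory advantage over first-order fine-tuning comes from.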
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models, by Yifan Yang, Jiajun…
A Curious Case of Searching for the Correlation between Training Data and Adversarial Robustness of…
ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs, by Yuhan Li, Peisong Wang, Zhixun Li, Jeffrey Xu…
Aligning Large Language Models by On-Policy Self-Judgment, by Sangkyu Lee, Sungdong Kim, Ashkan Yousefpour, Minjoon Seo, …
Model Editing by Standard Fine-Tuning, by Govind Gangadhar, Karl Stratos. First submitted to arXiv on: 16 Feb…
Speculative Streaming: Fast LLM Inference without Auxiliary Models, by Nikhil Bhendawade, Irina Belousova, Qichen Fu, Henry…
TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks, by Benjamin Feuer, Robin Tibor Schirrmeister, Valeriia Cherepanova, …
Instruction Diversity Drives Generalization To Unseen Tasks, by Dylan Zhang, Justin Wang, Francois Charton. First submitted to…
DAEDRA: A language model for predicting outcomes in passive pharmacovigilance reporting, by Chris von Csefalvay. First submitted…