Summary of Sub-goal Distillation: A Method to Improve Small Language Agents, by Maryam Hashemzadeh et al.
Sub-goal Distillation: A Method to Improve Small Language Agents by Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar,…