Summary of Winning Amazon KDD Cup’24, by Chris Deotte et al.
Winning Amazon KDD Cup’24 by Chris Deotte, Ivan Sorokin, Ahmet Erdem, Benedikt Schifferer, Gilberto Titericz Jr,…
BA-LoRA: Bias-Alleviating Low-Rank Adaptation to Mitigate Catastrophic Inheritance in Large Language Models by Yupeng Chang, Yi…
Abstractive summarization from Audio Transcription by Ilia Derkach. First submitted to arXiv on: 30 Jul 2024. Categories, Main: Computation…
Leveraging Parameter Efficient Training Methods for Low Resource Text Classification: A Case Study in Marathi by…
Pre-trained Language Models Improve the Few-shot Prompt Ability of Decision Transformer by Yu Yang, Pan Xu. First…
Tensor Train Low-rank Approximation (TT-LoRA): Democratizing AI with Accelerated LLMs by Afia Anjum, Maksim E. Eren,…
A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation by Mothilal Asokan, Joseph…
CELLM: An Efficient Communication in Large Language Models Training for Federated Learning by Raja Vavekanand, Kira…
Parameter-Efficient Fine-Tuning via Circular Convolution by Aochuan Chen, Jiashun Cheng, Zijing Liu, Ziqi Gao, Fugee Tsung,…
Stay Tuned: An Empirical Study of the Impact of Hyperparameters on LLM Tuning in Real-World…