Summary of Pre-trained Language Models Improve the Few-shot Prompt Ability of Decision Transformer, by Yu Yang et al.
Pre-trained Language Models Improve the Few-shot Prompt Ability of Decision Transformer, by Yu Yang, Pan Xu
Mission Impossible: A Statistical Perspective on Jailbreaking LLMs, by Jingtong Su, Julia Kempe, Karen Ullrich
Tensor Train Low-rank Approximation (TT-LoRA): Democratizing AI with Accelerated LLMs, by Afia Anjum, Maksim E. Eren, …
A Natural Language Processing Framework for Hotel Recommendation Based on Users’ Text Reviews, by Lavrentia Aravani, …
Coarse Correspondences Boost Spatial-Temporal Reasoning in Multimodal Language Model, by Benlin Liu, Yuhao Dong, Yiqin Wang, …
Tamper-Resistant Safeguards for Open-Weight LLMs, by Rishub Tamirisa, Bhrugu Bharathi, Long Phan, Andy Zhou, Alice Gatti, …
Memorization Capacity for Additive Fine-Tuning with Small ReLU Networks, by Jy-yong Sohn, Dohyun Kwon, Seoyeon An, …
OmniParser for Pure Vision Based GUI Agent, by Yadong Lu, Jianwei Yang, Yelong Shen, Ahmed Awadallah
A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation, by Mothilal Asokan, Joseph …
Zero Shot Health Trajectory Prediction Using Transformer, by Pawel Renc, Yugang Jia, Anthony E. Samir, Jaroslaw …