Summary of Transforming and Combining Rewards for Aligning Large Language Models, by Zihao Wang et al.
Transforming and Combining Rewards for Aligning Large Language Models by Zihao Wang, Chirag Nagpal, Jonathan Berant,…
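The listing gives only the paper's title, so as a purely illustrative sketch of what "transforming and combining" reward signals can mean in an alignment pipeline, the snippet below applies a log-sigmoid transform to each raw reward-model score and sums the transformed values. The transform, the reference-score centering, and every function name here are assumptions for illustration, not details taken from this page or confirmed as the authors' method.

    import math

    def log_sigmoid(x: float) -> float:
        # Numerically stable log(sigmoid(x)), split by sign to avoid overflow.
        return x - math.log1p(math.exp(x)) if x < 0 else -math.log1p(math.exp(-x))

    def transform_reward(r: float, r_ref: float) -> float:
        # Hypothetical transform: log-sigmoid of the raw reward centered at a
        # reference score r_ref, so gains saturate once a response is "good enough".
        return log_sigmoid(r - r_ref)

    def combine_rewards(rewards: list[float], refs: list[float]) -> float:
        # Hypothetical combination rule: summing the transformed rewards favors
        # responses that are acceptable on every axis over responses that max out
        # one axis while failing another.
        return sum(transform_reward(r, ref) for r, ref in zip(rewards, refs))

    # Example: two reward-model scores (say, helpfulness and harmlessness),
    # each compared against a per-axis reference score.
    print(combine_rewards([2.1, 0.3], [1.0, 0.5]))

Because the log-sigmoid flattens out for high scores, this kind of combination penalizes a low score on any one axis more than it credits an already-high score on another, which is one common rationale for transforming rewards before summing them.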
SpeechComposer: Unifying Multiple Speech Tasks with Prompt Composition by Yihan Wu, Soumi Maiti, Yifan Peng, Wangyou…
A Linguistic Comparison between Human and ChatGPT-Generated Conversations by Morgan Sandler, Hyesun Choung, Arun Ross, Prabu…
Tradeoffs Between Alignment and Helpfulness in Language Models with Representation Engineering by Yotam Wolf, Noam Wies,…
LCV2: An Efficient Pretraining-Free Framework for Grounded Visual Question Answering by Yuhan Chen, Lumei Su, Lihua…
Do LLMs Dream of Ontologies? by Marco Bombieri, Paolo Fiorini, Simone Paolo Ponzetto, Marco Rospocher. First submitted…
Airavata: Introducing Hindi Instruction-tuned LLM by Jay Gala, Thanmay Jayakumar, Jaavid Aktar Husain, Aswanth Kumar M,…
Unlearning Traces the Influential Training Data of Language Models by Masaru Isonuma, Ivan Titov. First submitted to…
UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for Personalized Dialogue Systems by Hongru Wang, Wenyu Huang, Yang…
Towards Explainable Harmful Meme Detection through Multimodal Debate between Large Language Models by Hongzhan Lin, Ziyang…