Summary of Repetition Improves Language Model Embeddings, by Jacob Mitchell Springer et al.
Repetition Improves Language Model Embeddings, by Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, Aditi…
Transformers are Expressive, But Are They Expressive Enough for Regression?, by Swaroop Nath, Harshad Khadilkar, Pushpak…
Fine-tuning Large Language Models for Domain-specific Machine Translation, by Jiawei Zheng, Hanghai Hong, Feiyan Liu, Xiaoli…
Asynchronous and Segmented Bidirectional Encoding for NMT, by Jingpu Yang, Zehua Han, Mengyu Xiang, Helin Wang,…
ProPD: Dynamic Token Tree Pruning and Generation for LLM Parallel Decoding, by Shuzhang Zhong, Zebin Yang,…
AlgoFormer: An Efficient Transformer Framework with Algorithmic Structures, by Yihang Gao, Chuanyang Zheng, Enze Xie, Han…
UMBCLU at SemEval-2024 Task 1A and 1C: Semantic Textual Relatedness with and without machine translation, by…
Improving Deep Generative Models on Many-To-One Image-to-Image Translation, by Sagar Saxena, Mohammad Nayeem Teli. First submitted to…
Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT, by Zhengfu…
Advancing Translation Preference Modeling with RLHF: A Step Towards Cost-Effective Solution, by Nuo Xu, Jun Zhao,…