Summary of Reducing the Scope of Language Models with Circuit Breakers, by David Yunis et al.
Reducing the Scope of Language Models with Circuit Breakers by David Yunis, Siyu Huo, Chulaka Gunasekara, …
Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans… by …
Can Stories Help LLMs Reason? Curating Information Space Through Narrative by Vahid Sadiri Javadi, Johanne R. …
Ferret-UI 2: Mastering Universal User Interface Understanding Across Platforms by Zhangheng Li, Keen You, Haotian Zhang, …
Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification by Xiaoyu Tao, Tingyue …
Prompting and Fine-Tuning of Small LLMs for Length-Controllable Telephone Call Summarization by David Thulke, Yingbo Gao, …
Learning Versatile Skills with Curriculum Masking by Yao Tang, Zhihui Xie, Zichuan Lin, Deheng Ye, Shuai …
A Theoretical Understanding of Chain-of-Thought: Coherent Reasoning and Error-Aware Demonstration by Yingqian Cui, Pengfei He, Xianfeng …
A Simple Model of Inference Scaling Laws by Noam Levi. First submitted to arXiv on: 21 Oct …
Is Less More? Exploring Token Condensation as Training-free Test-time Adaptation by Zixin Wang, Dong Gong, Sen …