Summary of How Likely Do LLMs with CoT Mimic Human Reasoning?, by Guangsheng Bao et al.
How Likely Do LLMs with CoT Mimic Human Reasoning? by Guangsheng Bao, Hongbo Zhang, Cunxiang Wang,…