Summary of More Is More: Addition Bias in Large Language Models, by Luca Santagata et al.
More is More: Addition Bias in Large Language Models, by Luca Santagata, Cristiano De Nobili. First submitted…