Summary of Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning?, by Mingqian Feng et al.
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? by Mingqian Feng, Yunlong Tang, …
Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention by Wenbin…
Beyond Under-Alignment: Atomic Preference Enhanced Factuality Tuning for Large Language Models by Hongbang Yuan, Yubo Chen, …
Post-hoc Utterance Refining Method by Entity Mining for Faithful Knowledge Grounded Conversations by Yoonna Jang, Suhyune…
Ask-EDA: A Design Assistant Empowered by LLM, Hybrid RAG and Abbreviation De-hallucination by Luyao Shi, Michael…
Confabulation: The Surprising Value of Large Language Model Hallucinations by Peiqi Sui, Eamon Duede, Sophie Wu, …
CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models by Junho Kim, Hyunjun Kim, …
Enhancing Trust in LLMs: Algorithms for Comparing and Interpreting LLMs by Nik Bear Brown. First submitted to…
Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low…
Decompose, Enrich, and Extract! Schema-aware Event Extraction using LLMs by Fatemeh Shiri, Van Nguyen, Farhad Moghimifar, …