Summary of A Survey of Hallucination in Large Visual Language Models, by Wei Lan et al.
A Survey of Hallucination in Large Visual Language Models, by Wei Lan, Wenyi Chen, Qingfeng Chen, …
Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model Training, by Shahrad Mohammadzadeh, Juan David Guerra, …
Coarse-to-Fine Highlighting: Reducing Knowledge Hallucination in Large Language Models, by Qitan Lv, Jie Wang, Hanzhu Chen, …
Enabling Scalable Evaluation of Bias Patterns in Medical LLMs, by Hamed Fayyaz, Raphael Poulain, Rahmatollah Beheshti. First…
Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding, by Kyungmin Min, Minbeom Kim, Kang-il Lee, …
MCQG-SRefine: Multiple Choice Question Generation and Evaluation with Iterative Self-Critique, Correction, and Comparison Feedback, by Zonghai…
FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs, by Forrest Sheng Bao, Miaoran Li, …
A Claim Decomposition Benchmark for Long-form Answer Verification, by Zhihao Zhang, Yixing Fan, Ruqing Zhang, Jiafeng…
On a Scale From 1 to 5: Quantifying Hallucination in Faithfulness Evaluation, by Xiaonan Jing, Srinivas…
Controlled Automatic Task-Specific Synthetic Data Generation for Hallucination Detection, by Yong Xie, Karan Aggarwal, Aitzaz Ahmad, …