Summary of Magnifier Prompt: Tackling Multimodal Hallucination Via Extremely Simple Instructions, by Yuhan Fu et al.
Magnifier Prompt: Tackling Multimodal Hallucination via Extremely Simple Instructions, by Yuhan Fu, Ruobing Xie, Jiazhen Liu,…
Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning, by Huiwen Wu, Xiaohan…
On the Capacity of Citation Generation by Large Language Models, by Haosheng Qian, Yixing Fan, Ruqing…
Can Structured Data Reduce Epistemic Uncertainty?, by Shriram M S, Sushmitha S, Gayathri K S, Shahina…
Honest AI: Fine-Tuning “Small” Language Models to Say “I Don’t Know”, and Reducing Hallucination in…
Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study Over Open-ended Question…
Pap2Pat: Benchmarking Outline-Guided Long-Text Patent Generation with Patent-Paper Pairs, by Valentin Knappich, Simon Razniewski, Anna Hätty,…
EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment, by Yifei Xing, Xiangyuan Lan, Ruiping Wang,…
Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models, by Yufang Liu, Tao Ji, Changzhi…
MedHalu: Hallucinations in Responses to Healthcare Queries by Large Language Models, by Vibhor Agarwal, Yiqiao Jin,…