Summary of Towards a Systematic Evaluation of Hallucinations in Large-Vision Language Models, by Ashish Seth et al.
Towards a Systematic Evaluation of Hallucinations in Large-Vision Language Models by Ashish Seth, Dinesh Manocha, Chirag…
Is Your Text-to-Image Model Robust to Caption Noise? by Weichen Yu, Ziyan Yang, Shanchuan Lin, Qi…
From Hallucinations to Facts: Enhancing Language Models with Curated Knowledge Graphs by Ratnesh Kumar Joshi, Sagnik…
ReXTrust: A Model for Fine-Grained Hallucination Detection in AI-Generated Radiology Reports by Romain Hardy, Sung Eun…
Dehallucinating Parallel Context Extension for Retrieval-Augmented Generation by Zexiong Ma, Shengnan An, Zeqi Lin, Yanzhen Zou,…
Are LLMs Good Literature Review Writers? Evaluating the Literature Review Writing Ability of Large Language…
Task-Oriented Dialog Systems for the Senegalese Wolof Language by Derguene Mbaye, Moussa Diallo. First submitted to arXiv…
RAC3: Retrieval-Augmented Corner Case Comprehension for Autonomous Driving with Vision-Language Models by Yujin Wang, Quanfeng Liu,…
Multi-Task Learning with LLMs for Implicit Sentiment Analysis: Data-level and Task-level Automatic Weight Learning by Wenna…
Delve into Visual Contrastive Decoding for Hallucination Mitigation of Large Vision-Language Models by Yi-Lun Lee, Yi-Hsuan…