Summary of Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning?, by Mingqian Feng et al.
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? by Mingqian Feng, Yunlong Tang, …