Summary of Towards a Systematic Evaluation of Hallucinations in Large-Vision Language Models, by Ashish Seth et al.
Towards a Systematic Evaluation of Hallucinations in Large-Vision Language Models, by Ashish Seth, Dinesh Manocha, Chirag…
CAG: Chunked Augmented Generation for Google Chrome’s Built-in Gemini Nano, by Vivek Vellaiyappan Surulimuthu, Aditya Karnam…
Multilingual Mathematical Reasoning: Advancing Open-Source LLMs in Hindi and English, by Avinash Anand, Kritarth Prasad, Chhavi…
Retention Score: Quantifying Jailbreak Risks for Vision Language Models, by Zaitang Li, Pin-Yu Chen, Tsung-Yi Ho
Visual Prompting with Iterative Refinement for Design Critique Generation, by Peitong Duan, Chin-Yi Chen, Bjoern Hartmann,…
Mining Math Conjectures from LLMs: A Pruning Approach, by Jake Chuharski, Elias Rojas Collins, Mark Meringolo
Multi-modal and Multi-scale Spatial Environment Understanding for Immersive Visual Text-to-Speech, by Rui Liu, Shuwei He, Yifan…
Codenames as a Benchmark for Large Language Models, by Matthew Stephenson, Matthew Sidji, Benoît Ronval
Seeing the Forest and the Trees: Solving Visual Graph and Tree Based Data Structure Problems…
Leveraging Audio and Text Modalities in Mental Health: A Study of LLMs Performance, by Abdelrahman A.…