Summary of "Text-Guided Attention Is All You Need for Zero-Shot Robustness in Vision-Language Models," by Lu Yu et al.
Diffusion as Reasoning: Enhancing Object Goal Navigation with LLM-Biased Diffusion Model, by Yiming Ji, Yang Liu, …
A Fresh Look at Generalized Category Discovery through Non-negative Matrix Factorization, by Zhong Ji, Shuo Yang, …
Advancing Efficient Brain Tumor Multi-Class Classification – New Insights from the Vision Mamba Model in…
Building Altruistic and Moral AI Agent with Brain-inspired Affective Empathy Mechanisms, by Feifei Zhao, Hui Feng, …
Beyond Text: Optimizing RAG with Multimodal Inputs for Industrial Applications, by Monica Riedler, Stefan Langer. First submitted…
Path-based Summary Explanations for Graph Recommenders (extended version), by Danae Pla Karidi, Evaggelia Pitoura. First submitted to…
From Explicit Rules to Implicit Reasoning in an Interpretable Violence Monitoring System, by Wen-Dong Jiang, Chih-Yung…
Sing It, Narrate It: Quality Musical Lyrics Translation, by Zhuorui Ye, Jinhan Li, Rongwu Xu. First submitted…
Mapping the Neuro-Symbolic AI Landscape by Architectures: A Handbook on Augmenting Deep Learning Through Symbolic…