Summary of Ask LLMs Directly, "What shapes your bias?": Measuring Social Bias in Large Language Models, by Jisu Shin et al.
Are language models rational? The case of coherence norms and belief revision, by Thomas Hofweber, Peter…
Degrees of Freedom Matter: Inferring Dynamics from Point Trajectories, by Yan Zhang, Sergey Prokudin, Marko Mihajlovic,…
Attribute-Aware Implicit Modality Alignment for Text Attribute Person Search, by Xin Wang, Fangfang Liu, Zheng Li,…
Efficient Knowledge Infusion via KG-LLM Alignment, by Zhouyu Jiang, Ling Zhong, Mengshu Sun, Jun Xu, Rui…
Prompt-based Visual Alignment for Zero-shot Policy Transfer, by Haihan Gao, Rui Zhang, Qi Yi, Hantao Yao,…
CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models, by Junho Kim, Hyunjun Kim,…
Multimodal Reasoning with Multimodal Knowledge Graph, by Junlin Lee, Yequan Wang, Jing Li, Min Zhang
No Captions, No Problem: Captionless 3D-CLIP Alignment with Hard Negatives via CLIP Knowledge and LLMs, by…
FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models, by Tao Fan, Guoqiang Ma,…