Summary of Can Large Language Model Agents Simulate Human Trust Behavior?, by Chengxing Xie et al.
Can Large Language Model Agents Simulate Human Trust Behavior? by Chengxing Xie, Canyu Chen, Feiran Jia, …