Summary of "Improved Emotional Alignment of AI and Humans: Human Ratings of Emotions Expressed by Stable Diffusion v1, DALL-E 2, and DALL-E 3" by James Derek Lomas et al.
Improved Emotional Alignment of AI and Humans: Human Ratings of Emotions Expressed by Stable Diffusion v1, DALL-E 2, and DALL-E 3
by James Derek Lomas, Willem van der Maden, Sohhom Bandyopadhyay, Giovanni Lion, Nirmal Patel, Gyanesh Jain, Yanna Litowsky, Haian Xue, Pieter Desmet
First submitted to arXiv on: 28 May 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Generative AI systems are increasingly capable of expressing emotions via text and imagery, and this capability will play a major role in their efficacy, particularly for systems designed to support human mental health and wellbeing. To measure how well AI-expressed emotions align with human perceptions, we designed a survey. We used three generative image models (DALL-E 2, DALL-E 3, and Stable Diffusion v1) to generate 240 images from prompts expressing five positive and five negative emotions in humans and robots. Participants rated how well each AI-generated emotional expression aligned with its text prompt. The results show that generative AI models can produce emotional expressions that align well with human emotions, but the degree of alignment depends on the AI model used and the emotion itself. We analyze variations in performance to identify gaps for future improvement. |
| Low | GrooveSquid.com (original content) | Generative AI systems are getting better at expressing emotions like happiness or sadness. To see how good they really are, we asked people to rate whether the emotions expressed by these systems matched the emotions they were asked to express. We looked at three different AI models and found that they can all express emotions that match human feelings, but some do it better than others. The emotions that were easiest for the AI models to get right were happy and sad ones, while more complex emotions like amusement or fear were harder for them to capture accurately. This research helps us understand how well AI systems can mimic human emotions, which is important for creating AI systems that can help people with their mental health. |
Keywords
» Artificial intelligence » Alignment » Diffusion » Prompt