Summary of Should Agentic Conversational AI Change How We Think About Ethics? Characterising an Interactional Ethics Centred on Respect, by Lize Alberts et al.
Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect
by Lize Alberts, Geoff Keeling, Amanda McCroskery
First submitted to arXiv on: 17 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper addresses the ethical considerations of large language models (LLMs) used in conversational agents. While existing work focuses on making outputs helpful, honest, and free from harm, it neglects the pragmatic factors that determine whether an interaction comes across as tactful or inconsiderate. As AI systems become more proactive and agentic, relational and situational ethics become crucial. The paper explores what it means for a system to treat an individual respectfully across a series of interactions, highlighting unexplored risks at the level of situated social interaction, and offers practical suggestions for ensuring that LLM technologies interact with people respectfully. |
Low | GrooveSquid.com (original content) | Conversational agents powered by large language models (LLMs) need to behave ethically and appropriately. Researchers focus on making outputs helpful, honest, and free from harm, but they often overlook how interactions change depending on the situation. As AI gets better at doing things on its own, it's important to consider how a system treats people across different situations. This paper looks at what it means for an AI system to treat someone respectfully when interacting with them multiple times. It points out some overlooked risks and gives tips on how to make LLM technologies interact respectfully with people. |