Summary of Inducing Human-like Biases in Moral Reasoning Language Models, by Artem Karpov et al.


Inducing Human-like Biases in Moral Reasoning Language Models

by Artem Karpov, Seong Hah Cho, Austin Meek, Raymond Koopmanschap, Lucy Farnik, Bogdan-Ionut Cirstea

First submitted to arXiv on: 23 Nov 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computers and Society (cs.CY); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates the alignment of large language models (LLMs) with human moral reasoning by fine-tuning them on behavioral and brain data. The study uses several LLMs (BERT, RoBERTa, DeBERTa) and trains them on an ethical decision-making dataset (the ETHICS benchmark) and functional magnetic resonance imaging (fMRI) data from Koster-Hale et al. [2013]. The results show that larger models generally perform better in both behavioral accuracy and brain alignment (BrainScore). However, fine-tuning LLMs on fMRI data does not significantly improve BrainScores. A rough code sketch of this fine-tuning and alignment setup follows the summaries below.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This study looks at how well large language models can understand human moral reasoning. Researchers tested different models on tasks that require ethical decision-making and also used brain scans to see whether the models align with what people think is right or wrong. The results show that bigger models are generally better, but training them on brain data doesn't make a big difference.
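As a rough illustration of the setup described in the medium difficulty summary, the sketch below fine-tunes a BERT-family classifier on a few toy moral-acceptability sentences and then computes a simplified BrainScore-like alignment metric by linearly mapping hidden states onto fMRI responses. This is not the authors' code: the example sentences, the random fmri array, the choice of the last hidden layer, and the ridge-regression scoring are all placeholder assumptions standing in for the ETHICS benchmark, the Koster-Hale et al. [2013] recordings, and the paper's actual evaluation pipeline.

```python
# Minimal sketch (not the authors' code): fine-tune a BERT-family classifier on
# toy moral-acceptability sentences, then compute a simplified BrainScore-style
# alignment by ridge-regressing hidden states onto (placeholder) fMRI responses.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

model_name = "bert-base-uncased"  # the paper also evaluates RoBERTa/DeBERTa variants
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy stand-ins for ETHICS-style scenarios (label 1 = morally acceptable).
scenarios = [
    "I returned the wallet I found to its owner.",
    "I read my coworker's private messages without asking.",
    "I helped an elderly neighbor carry her groceries.",
    "I took credit for a report my teammate wrote.",
]
labels = torch.tensor([1, 0, 1, 0])

# Behavioral fine-tuning: a single illustrative gradient step on the toy batch.
enc = tokenizer(scenarios, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**enc, labels=labels).loss
loss.backward()
optimizer.step()

# BrainScore-style alignment (simplified): map [CLS] activations from the last
# hidden layer to fMRI voxel responses with ridge regression and report the
# mean Pearson correlation. The fmri array is random noise, used only for shape.
model.eval()
with torch.no_grad():
    hidden = model(**enc, output_hidden_states=True).hidden_states[-1][:, 0, :].numpy()

fmri = np.random.randn(len(scenarios), 100)  # placeholder: (stimuli, voxels)
pred = Ridge(alpha=1.0).fit(hidden, fmri).predict(hidden)
brain_score = np.mean([pearsonr(pred[:, v], fmri[:, v])[0] for v in range(fmri.shape[1])])
print(f"toy BrainScore: {brain_score:.3f}")
```

In the paper's actual experiments, held-out stimuli and cross-validated regressions would replace the in-sample fit used here, and the fMRI targets would come from recorded voxel responses rather than random noise.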

Keywords

» Artificial intelligence  » Alignment  » BERT  » Fine-tuning