
Summary of A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns, by Vibhhu Sharma et al.


A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns

by Vibhhu Sharma, Shantanu Gupta, Nil-Jana Akpinar, Zachary C. Lipton, Liu Leqi

First submitted to arXiv on: 20 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Information Retrieval (cs.IR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper presents a novel approach to auditing recommender systems, which are increasingly influential in shaping users’ beliefs and preferences. The authors propose a general framework for defining auditing metrics from a causal lens, aiming to support the continuous improvement of recommendation algorithms while safeguarding against biases and ethical concerns. Specifically, they identify gaps in existing auditing metrics, particularly around user agency, and propose two new classes of metrics: future- and past-reachability, and stability. Reachability measures a user’s ability to influence their own recommendations, while stability measures the extent to which users can influence the recommendations of others. The authors provide both gradient-based and black-box approaches for computing these metrics, so auditors can compute them under different levels of access to the recommender system.
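To make the black-box setting concrete, here is a minimal illustrative sketch (not the paper's actual algorithm or code): treating the recommender as an opaque function of the user's history, one can estimate a future-reachability-style quantity by searching over a user's possible next actions and recording the best rank a target item can attain. The `toy_recommender`, its similarity table, and the function names are all hypothetical stand-ins for illustration.

```python
import itertools

def toy_recommender(history):
    """Stand-in black-box recommender: ranks unseen items by their
    summed similarity to the items in the user's history."""
    similarity = {
        ("a", "b"): 0.9, ("a", "c"): 0.2, ("a", "d"): 0.1,
        ("b", "c"): 0.7, ("b", "d"): 0.4, ("c", "d"): 0.8,
    }

    def sim(x, y):
        return similarity.get((x, y), similarity.get((y, x), 0.0))

    candidates = {"a", "b", "c", "d"} - set(history)
    scores = {i: sum(sim(i, h) for h in history) for i in candidates}
    return sorted(scores, key=scores.get, reverse=True)

def future_reachability(recommend, history, target, actions, horizon=1):
    """Best achievable rank of `target` (0 = top slot) over all
    sequences of up to `horizon` user actions, using only black-box
    query access to `recommend`."""
    best = None
    for seq in itertools.product(actions, repeat=horizon):
        ranking = recommend(list(history) + list(seq))
        if target in ranking:
            rank = ranking.index(target)
            best = rank if best is None else min(best, rank)
    return best
```

For example, a user with history `["a"]` who can rate `"b"` or `"c"` next can lift item `"d"` at best to rank 1 in one step, but to the top slot with two actions; a lower best rank indicates greater user agency over reaching that item. A gradient-based variant would instead require internal access to the model and differentiate the target's recommendation score with respect to the user's actions.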
Low Difficulty Summary (original content by GrooveSquid.com)
Recommender systems are changing how we discover new music, movies, and products. But have you ever wondered whether these systems are fair? This paper is all about making sure the things we’re recommended are chosen fairly. The authors propose a new way to measure this fairness, by looking at what can happen in the future and what has happened in the past. They also provide two ways to calculate it: one for when you can look inside the recommender system, and another for when you can only observe what it recommends.

Keywords

* Artificial intelligence