
Summary of Towards a Fairer Non-negative Matrix Factorization, by Lara Kassab et al.


Towards a Fairer Non-negative Matrix Factorization

by Lara Kassab, Erin George, Deanna Needell, Haowen Geng, Nika Jafar Nia, Aoxi Li

First submitted to arXiv on: 14 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it on the paper's arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper examines the biases that standard Non-negative Matrix Factorization (NMF) can introduce when it is used for topic modeling and dimensionality reduction. The authors point out that traditional NMF can represent some data groups, such as those defined by demographics or other protected attributes, less faithfully than others. To mitigate this, they propose Fairer-NMF, which minimizes the maximum reconstruction loss across groups, with each group's loss taken relative to its size and intrinsic complexity. Two algorithms are presented, an alternating minimization (AM) scheme and a multiplicative updates (MU) scheme, with MU offering better computational efficiency while maintaining comparable performance; a rough code sketch of the min-max idea follows these summaries. The paper evaluates Fairer-NMF on both synthetic and real datasets.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Fairer-NMF is a new approach that helps topic modeling and dimensionality reduction techniques treat groups of data more fairly. Right now, these methods can be biased against certain groups, such as groups defined by demographics or other protected attributes, which means some groups may not be represented as well as others. Fairer-NMF tries to fix this by making the worst reconstruction loss among the groups as small as possible, after accounting for how large and how complex each group is. Two ways to solve this problem are presented, alternating minimization (AM) and multiplicative updates (MU), and the second one takes less time while still working well. The paper tests Fairer-NMF on both synthetic and real datasets.

Keywords

  • Artificial intelligence
  • Dimensionality reduction