
Summary of Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales, by Lucas E. Resck et al.


Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales

by Lucas E. Resck, Marcos M. Raimundo, Jorge Poco

First submitted to arXiv on: 3 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper introduces a methodology for enhancing the plausibility of post-hoc explanations of NLP text classifiers by incorporating rationales, the text annotations with which humans explain their decisions. The approach is agnostic to model architecture and explainability method, and it improves plausibility while preserving faithfulness. During training, a novel loss function inspired by contrastive learning is added to the classification objective to balance performance and plausibility. The trade-off between the two objectives is explored with a multi-objective optimization algorithm, which generates a Pareto-optimal frontier of models balancing explanation plausibility and original predictive performance.
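To make the idea concrete, here is a minimal PyTorch-style sketch of what such a combined objective could look like. The hinge-style contrastive plausibility term, the `token_scores` and `rationale_mask` tensors, the margin of 1.0, and the weight `lam` are illustrative assumptions for this summary, not the paper's exact loss.

```python
import torch.nn.functional as F


def combined_loss(logits, labels, token_scores, rationale_mask, lam=0.5):
    """Illustrative combined objective (assumed, not the paper's exact loss):
    a standard classification loss plus a contrastive-style plausibility term
    that pushes the model's token-importance scores toward the tokens that
    humans annotated as rationales.

    logits:          (batch, n_classes) classifier outputs
    labels:          (batch,) gold class indices
    token_scores:    (batch, seq_len) model importance score per token
    rationale_mask:  (batch, seq_len) 1.0 where a token is a human rationale
    lam:             weight trading performance against plausibility
    """
    # Model-performance objective: ordinary cross-entropy classification loss.
    task_loss = F.cross_entropy(logits, labels)

    # Plausibility objective, contrastive in spirit: importance mass on
    # rationale tokens should exceed the mass on non-rationale tokens by a
    # margin, analogous to pulling positives together and pushing negatives
    # apart in contrastive learning.
    pos = (token_scores * rationale_mask).sum(dim=1)
    neg = (token_scores * (1.0 - rationale_mask)).sum(dim=1)
    plausibility_loss = F.relu(1.0 - (pos - neg)).mean()

    # Scalarized trade-off between the two objectives.
    return task_loss + lam * plausibility_loss
```

Sweeping `lam` over a grid and retraining would trace an approximate performance-plausibility frontier; the paper instead employs a dedicated multi-objective optimization algorithm to generate the Pareto-optimal set of models.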
Low Difficulty Summary (original content by GrooveSquid.com)
This paper makes AI more understandable! It helps computers explain their decisions by adding special notes called “rationales” to their thinking. This way, we can see why the computer made certain choices, making its explanations more believable. The approach is super flexible, working with different models and ways of explaining things. By mixing in a new kind of math problem, the researchers created computers that balance being right with giving good reasons for their answers.

Keywords

» Artificial intelligence  » Loss function  » NLP  » Optimization