
Summary of Pruning Literals for Highly Efficient Explainability at Word Level, by Rohan Kumar Yadav et al.


Pruning Literals for Highly Efficient Explainability at Word Level

by Rohan Kumar Yadav, Bimal Bhattarai, Abhik Jana, Lei Jiao, Seid Muhie Yimam

First submitted to arXiv on: 7 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper presents a novel approach to explainable natural language processing (NLP): a post-hoc pruning technique for Tsetlin Machines (TMs). TMs can provide word-level explanations using propositional logic, but the complexity of their clauses can make them difficult to interpret. The authors' pruning method eliminates randomly placed literals in each clause, yielding a more compact and efficiently interpretable model. Evaluated on the YELP-HAT dataset, the pruned TM outperforms attention-map-based neural network models on a pairwise similarity measure. Moreover, accuracy does not degrade: the pruned TM improves performance by 4% to 9% compared to the vanilla TM.
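To make the idea concrete, here is a minimal illustrative sketch of pruning redundant literals from a conjunctive clause. The clause representation and the redundancy criterion below are assumptions for illustration, not the paper's exact algorithm: a clause is modeled as an AND over (possibly negated) binarized word features, and a literal is dropped whenever removing it leaves the clause's output unchanged on the data.

```python
# Illustrative sketch only: a toy model of post-hoc literal pruning for a
# Tsetlin-Machine-style clause. The data structures and the redundancy
# criterion are assumptions, not the authors' published algorithm.

def clause_output(clause, x):
    """Evaluate a conjunctive clause on a binary feature vector x.
    clause: list of (feature_index, negated) literals; the clause fires
    only if every literal holds (an empty clause always fires)."""
    return all((not x[k]) if negated else bool(x[k]) for k, negated in clause)

def prune_clause(clause, samples):
    """Greedily drop literals whose removal leaves the clause's output
    unchanged on every sample, i.e. literals that never decide the
    outcome on this data (a hypothetical redundancy test)."""
    pruned = list(clause)
    for literal in list(pruned):
        trial = [lit for lit in pruned if lit != literal]
        if all(clause_output(trial, x) == clause_output(pruned, x)
               for x in samples):
            pruned = trial  # literal is redundant on this data
    return pruned
```

For example, on samples where feature 2 is always 1, the literal on feature 2 never changes the output of the clause x0 AND x1 AND x2, so it would be pruned, leaving x0 AND x1. The surviving literals are the words (or negated words) that actually carry the explanation.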
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps us understand how machines can better explain their decisions in natural language processing. Right now, most top-performing machine learning models don't give a clear reason for their predictions. To fix this, researchers are working on "explainable" models that can break their reasoning down into simple steps. One promising approach is the Tsetlin Machine (TM), which uses propositional logic to provide word-level explanations for its predictions. However, these explanations can be long and hard for humans to follow. To solve this problem, the authors developed a way to "prune," or simplify, the TM's clauses, making it easier to see how the model reasons. They tested their approach on a dataset called YELP-HAT and found that its explanations align better with human attention than other methods, without sacrificing accuracy.

Keywords

» Artificial intelligence  » Attention  » Machine learning  » Natural language processing  » Neural network  » NLP  » Pruning