Summary of Neural Reasoning Networks: Efficient Interpretable Neural Networks with Automatic Textual Explanations, by Stephen Carrow et al.


Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations

by Stephen Carrow, Kyle Harper Erwin, Olga Vilenskaia, Parikshit Ram, Tim Klinger, Naweed Ahmad Khan, Ndivhuwo Makondo, Alexander Gray

First submitted to arXiv on: 10 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes Neural Reasoning Networks (NRNs), a novel neuro-symbolic architecture for tabular dataset classification that generates logically sound textual explanations for its predictions. NRNs consist of connected layers of logical neurons implementing real-valued logic, and are trained using an extension to PyTorch that leverages GPU scaling and batched training. The proposed R-NRN algorithm learns the network’s weights and structure simultaneously using gradient descent optimization with backpropagation. Evaluation on 22 open-source datasets demonstrates improved performance (measured by ROC AUC) compared to multi-layer perceptrons, while offering faster training and fewer parameters. Additionally, NRNs’ explanations are shorter and more accurate than those of comparable approaches.
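The summary above says NRNs stack layers of logical neurons that implement real-valued logic, but it does not spell out the neuron equations. As a minimal sketch, assuming Łukasiewicz-style weighted conjunction and disjunction (a common choice in related neuro-symbolic work; the function names and the bias parameter `beta` here are illustrative, not taken from the paper), a logical neuron maps truth values in [0, 1] to a truth value in [0, 1] and reduces to classical AND/OR on crisp 0/1 inputs:

```python
import numpy as np

def logical_and(x, w, beta=1.0):
    """Weighted real-valued conjunction (Lukasiewicz-style sketch).

    x: truth values in [0, 1], one per input
    w: non-negative importance weights, one per input
    With w = 1 and beta = 1, this matches classical AND on {0, 1} inputs.
    """
    return float(np.clip(beta - np.sum(w * (1.0 - x)), 0.0, 1.0))

def logical_or(x, w, beta=1.0):
    """Weighted real-valued disjunction, the dual of logical_and."""
    return float(np.clip(1.0 - beta + np.sum(w * x), 0.0, 1.0))

# Crisp inputs behave like Boolean logic:
w = np.array([1.0, 1.0])
print(logical_and(np.array([1.0, 1.0]), w))  # 1.0
print(logical_and(np.array([1.0, 0.0]), w))  # 0.0
print(logical_or(np.array([1.0, 0.0]), w))   # 1.0
```

Because both operations are piecewise-linear in the weights, gradients flow through them, which is what lets an algorithm like R-NRN learn weights (and, by pruning near-zero weights, structure) with ordinary backpropagation; the learned weights also indicate which inputs matter to each neuron, supporting textual explanations.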
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about a new way to use computers to help us understand why they’re making certain decisions. Right now, many computer systems make predictions without explaining how they got those answers. This can be a problem when we need to know why the system made that decision. The authors of this paper propose a new type of computer model called Neural Reasoning Networks (NRNs) that can explain their decisions in a way that makes sense. NRNs use a combination of neural networks and logical rules to make predictions, and they’re really good at it! In fact, the authors tested their system on 22 different datasets and found that it performed almost as well as some of the best computer systems out there, but was faster and required fewer calculations.

Keywords

» Artificial intelligence  » AUC  » Backpropagation  » Classification  » Gradient descent  » Optimization