Going Beyond Approximation: Encoding Constraints for Explainable Multi-hop Inference via Differentiable Combinatorial Solvers

by Mokanarangan Thayaparan, Marco Valentino, André Freitas

First submitted to arXiv on: 5 Aug 2022

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)

This paper proposes a novel neuro-symbolic architecture called Diff-Comb Explainer to improve the performance of end-to-end differentiable multi-hop inference models. The authors aim to integrate Integer Linear Programming (ILP) with Transformers to overcome the limitations of existing hybrid frameworks, which rely on convex relaxation and can produce sub-optimal solutions. The proposed model uses Differentiable BlackBox Combinatorial Solvers (DBCS) to directly integrate ILP formulations without transformation or relaxation. Experimental results show that Diff-Comb Explainer achieves improved accuracy and explainability compared to non-differentiable solvers, Transformers, and existing differentiable constraint-based multi-hop inference frameworks.

Low Difficulty Summary (original content by GrooveSquid.com)

This paper is about a new way to make computers understand sentences by breaking them down into smaller parts and combining the answers. It uses two important tools: Integer Linear Programming (ILP) and Transformers. The current way of doing this isn’t perfect, so the authors created a new model called Diff-Comb Explainer that works better. This new model can answer questions more accurately and explain its thought process better than other methods.

Keywords

  • Artificial intelligence
  • Inference