
Summary of InterroGate: Learning to Share, Specialize, and Prune Representations for Multi-task Learning, by Babak Ehteshami Bejnordi et al.


InterroGate: Learning to Share, Specialize, and Prune Representations for Multi-task Learning

by Babak Ehteshami Bejnordi, Gaurav Kumar, Amelie Royer, Christos Louizos, Tijmen Blankevoort, Mohsen Ghafoorian

First submitted to arXiv on: 26 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper presents InterroGate, a novel multi-task learning (MTL) architecture that addresses task interference when jointly learning multiple tasks. The authors propose a learnable gating mechanism that automatically balances shared and task-specific representations while preserving performance across all tasks. The approach optimizes inference-time computational efficiency: patterns of parameter sharing and specialization are learned dynamically during training and then fixed at inference. The results demonstrate state-of-the-art (SoTA) performance on three MTL benchmarks, CelebA, NYUD-v2, and PASCAL-Context, using both convolutional and transformer-based backbones.
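To make the gating idea concrete, here is a minimal sketch of a per-task gating layer that softly mixes a shared branch with task-specific branches. The class name, layer shapes, and the plain sigmoid gate are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Illustrative sketch (not the authors' implementation): a learnable gate
# per (task, channel) that mixes shared and task-specific features.
import torch
import torch.nn as nn

class GatedTaskLayer(nn.Module):
    def __init__(self, dim: int, num_tasks: int):
        super().__init__()
        self.shared = nn.Linear(dim, dim)                    # branch shared by all tasks
        self.specific = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_tasks)]  # one branch per task
        )
        # One learnable gate logit per (task, channel); sigmoid(logit) near 1
        # selects the task-specific unit, near 0 the shared unit.
        self.gate_logits = nn.Parameter(torch.zeros(num_tasks, dim))

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        g = torch.sigmoid(self.gate_logits[task])            # soft gate in (0, 1)
        return g * self.specific[task](x) + (1 - g) * self.shared(x)

# After training, the gates can be thresholded (e.g. g > 0.5) and the
# unselected branches pruned, fixing the sharing pattern at inference.
```

In this spirit, the sharing/specialization decision is a trainable parameter rather than a hand-designed split, which is what lets the pattern be frozen and pruned once training ends.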
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about a new way to learn multiple things together using the same model. When we try to do this, it can get tricky because different tasks might compete with each other for resources. The authors have come up with a solution called InterroGate that helps solve this problem by learning how much to share and specialize in what’s being learned. This makes the model more efficient while still doing well on all the tasks. They tested it on some big datasets and got the best results.

Keywords

* Artificial intelligence  * Inference  * Multi-task  * Transformer