
Summary of Counting Network for Learning from Majority Label, by Kaito Shiku et al.


Counting Network for Learning from Majority Label

by Kaito Shiku, Shinnosuke Matsuo, Daiki Suehiro, Ryoma Bise

First submitted to arXiv on: 20 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A novel problem in multi-class Multiple-Instance Learning (MIL) is proposed, called Learning from the Majority Label (LML). In LML, the majority class among the instances in a bag is assigned as the bag's label. Existing MIL methods are unsuitable for LML because they aggregate instance confidences, which can make the bag-level output inconsistent with the label obtained by counting the instances of each class. A novel counting network is therefore proposed that estimates the bag-level majority label by counting the instances of each class, ensuring consistency between the network's output and the count-based label. Experimental results demonstrate that the counting network outperforms conventional MIL methods on four datasets.
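The counting idea described above can be sketched as follows. This is a minimal NumPy illustration of count-based bag aggregation, not the authors' implementation; the function names and the toy data are my own:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bag_majority_prediction(instance_logits):
    """Estimate a bag's majority label by counting per-class instance predictions.

    instance_logits: array of shape (num_instances, num_classes).
    Instead of averaging confidences (as conventional MIL pooling does),
    each instance's class probabilities are summed over the bag, so the
    bag-level output is a soft per-class instance count. The majority
    label is then the class with the largest count, which by construction
    agrees with counting the instance-level predictions.
    """
    probs = softmax(instance_logits, axis=1)   # (N, C) per-instance class probabilities
    counts = probs.sum(axis=0)                 # (C,) soft count of instances per class
    return counts, int(np.argmax(counts))      # majority label = most-counted class

# Toy bag: 3 instances, 2 classes; two instances lean toward class 1.
logits = np.array([[2.0, 0.0],
                   [0.0, 3.0],
                   [0.0, 1.0]])
counts, label = bag_majority_prediction(logits)  # label is 1, the majority class
```

Because the soft counts sum to the number of instances in the bag, the bag-level output stays consistent with instance-level counting, which is the inconsistency the paper identifies in confidence-averaging MIL methods.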
Low Difficulty Summary (written by GrooveSquid.com, original content)
Learning from the Majority Label (LML) is a new problem in machine learning. Imagine you have a bunch of things, each belonging to a category like "dog" or "cat". The majority label for the bunch is the most common category, for example "dog" if most of the things are dogs. LML tries to predict which category each individual thing belongs to, using only the majority label of its bunch. Previous methods didn't work well because they mixed up confidence scores across categories. This new method uses a special network that counts how many things fall into each category and makes sure the counts agree with the majority label. It works better than other methods and can be used in many applications.

Keywords

  • Artificial intelligence
  • Machine learning