Summary of Mixture of Experts based Multi-task Supervised Learning from Crowds, by Tao Han et al.
Mixture of Experts based Multi-task Supervised Learning from Crowds
by Tao Han, Huaixuan Shi, Xinyi Ding, Xiao Ma, Huamao Gu, Yili Fang
First submitted to arXiv on: 18 Jul 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the paper's original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | In this paper, the researchers tackle the challenge of improving truth inference in crowdsourcing by proposing a new approach to modeling worker behavior. Traditional methods rely on statistical or deep learning-based models that treat the ground truth as hidden variables; however, these models overlook the item feature level, leading to imprecise characterizations of worker behavior and poor-quality truth inference. The proposed MMLC (Mixture of Experts based Multi-task Supervised Learning from Crowds) framework addresses this limitation by modeling worker behavior at the item feature level (a minimal sketch follows this table). Two truth inference strategies are presented within MMLC: MMLC-owf, which uses clustering to identify the projection vector of the oracle worker, and MMLC-df, which uses the MMLC model to fill in crowdsourced data. Experimental results show that MMLC-owf outperforms state-of-the-art methods, while MMLC-df improves the quality of existing truth inference methods. |
Low | GrooveSquid.com (original content) | This paper is about improving how we figure out what's true when lots of people on the internet are giving their opinions. Right now, we use special computer models to try to find the right answer, but these models don't really understand why people give certain answers or what makes them good or bad. The researchers came up with a new way to make these models better by looking at how people behave when they answer questions about specific things. They tested this new method and found that it works much better than the old ways. |
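The medium-difficulty summary describes a mixture-of-experts, multi-task architecture in which each crowd worker's labeling behavior is modeled from item features. The paper's actual MMLC implementation is not given here, so the following is only a minimal illustrative sketch of that general idea: shared experts encode item features, a gate mixes them, and one head per worker predicts that worker's (noisy) labels. All class names, layer sizes, and the gating design below are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch (not the authors' MMLC code): shared mixture-of-experts
# backbone over item features, with one "task" head per crowd worker.
import torch
import torch.nn as nn


class MoEWorkerModel(nn.Module):
    def __init__(self, feat_dim, num_experts, num_workers, num_classes, hidden=64):
        super().__init__()
        # Shared experts: each maps item features to a hidden representation.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
             for _ in range(num_experts)]
        )
        # Gate: produces per-item mixing weights over the experts.
        self.gate = nn.Linear(feat_dim, num_experts)
        # Multi-task heads: one classifier per worker, predicting that
        # worker's label for the item.
        self.worker_heads = nn.ModuleList(
            [nn.Linear(hidden, num_classes) for _ in range(num_workers)]
        )

    def forward(self, x):
        # x: (batch, feat_dim) item features
        weights = torch.softmax(self.gate(x), dim=-1)               # (batch, E)
        expert_out = torch.stack([e(x) for e in self.experts], 1)   # (batch, E, H)
        mixed = (weights.unsqueeze(-1) * expert_out).sum(dim=1)     # (batch, H)
        # One set of logits per worker head.
        return [head(mixed) for head in self.worker_heads]


# Training would minimize, for each head, the cross-entropy against the labels
# that worker actually provided, skipping items the worker did not annotate.
model = MoEWorkerModel(feat_dim=32, num_experts=4, num_workers=10, num_classes=3)
logits_per_worker = model(torch.randn(8, 32))
```

On top of a worker model like this, truth inference could then proceed along the lines the summary mentions: identifying a near-oracle worker representation (as MMLC-owf does via clustering) or using the model's predictions to fill in sparse crowdsourced labels before applying an existing inference method (as MMLC-df does).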
Keywords
» Artificial intelligence » Clustering » Deep learning » Inference » Mixture of experts » Multi-task » Supervised