
Summary of Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification, by Yuxuan Hu et al.


Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification

by Yuxuan Hu, Chenwei Zhang, Min Yang, Xiaodan Liang, Chengming Li, Xiping Hu

First submitted to arXiv on: 20 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This version is the paper's original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Deep learning approaches to text classification have achieved impressive accuracy, but the resulting models struggle when applied to new, unseen domains. To tackle this challenge, researchers have developed frameworks that use multiple seen (source) domains to train a model that remains accurate in an unseen domain. This paper proposes a multi-source meta-learning framework that simulates the process of generalizing to an unseen domain while extracting sufficient domain-related features. A memory mechanism, coordinated with the meta-learning framework, stores domain-specific features, and a "jury" mechanism encourages the model to learn sufficient domain-invariant features. Experimental results show that this approach outperforms state-of-the-art methods on multi-source text classification datasets. (A hedged sketch of one way such an episodic meta-training loop can be organized is given after the summaries below.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
Text classification models have achieved high accuracy rates using deep learning approaches. However, these models struggle when applied to new domains without labeled data. To overcome this challenge, researchers propose a framework that uses multiple seen domains to train a model capable of high accuracy in an unseen domain. The approach simulates the process of model generalization and extracts domain-related features. The results show that this approach can improve the ability of models to generalize to new domains.
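The summaries above describe an episodic meta-learning setup over multiple source domains. The sketch below is not the authors' implementation: it only illustrates a common way to structure such training, where one source domain is held out per episode as a simulated "unseen" domain and a first-order meta-update is applied. The encoder, domain names, hyperparameters, and helper functions are illustrative assumptions, and the paper's memory and jury mechanisms are not modeled here.

```python
# Hypothetical sketch (not the authors' code): leave-one-domain-out episodic
# meta-training for text classification with a first-order MAML-style update.
import copy
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextClassifier(nn.Module):
    """Toy bag-of-embeddings encoder plus linear head (stand-in for the paper's model)."""
    def __init__(self, vocab_size=5000, emb_dim=128, num_classes=2):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, emb_dim)
        self.head = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids, offsets):
        return self.head(self.emb(token_ids, offsets))

def meta_train_step(model, domain_batches, meta_opt, inner_lr=1e-2):
    """One episode: hold one source domain out as the simulated unseen domain,
    adapt a clone of the model on the remaining domains, evaluate on the held-out
    domain, and apply a first-order meta-update to the original model."""
    domains = list(domain_batches.keys())
    meta_test_dom = random.choice(domains)
    meta_train_doms = [d for d in domains if d != meta_test_dom]

    # Inner loop: adapt a deep copy of the model on the meta-train domains.
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for dom in meta_train_doms:
        tokens, offsets, labels = domain_batches[dom]
        loss = F.cross_entropy(adapted(tokens, offsets), labels)
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()

    # Outer loop: the adapted model should also perform well on the held-out domain.
    adapted.zero_grad()
    tokens, offsets, labels = domain_batches[meta_test_dom]
    meta_loss = F.cross_entropy(adapted(tokens, offsets), labels)
    meta_loss.backward()

    # First-order approximation: copy the adapted model's gradients back and step.
    meta_opt.zero_grad()
    for p, p_adapted in zip(model.parameters(), adapted.parameters()):
        if p_adapted.grad is not None:
            p.grad = p_adapted.grad.clone()
    meta_opt.step()
    return meta_loss.item()

# Illustrative usage with random data for three hypothetical source domains.
model = TextClassifier()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fake_batch(batch=8, tokens_per_doc=20):
    ids = torch.randint(0, 5000, (batch * tokens_per_doc,))
    offsets = torch.arange(0, batch * tokens_per_doc, tokens_per_doc)
    labels = torch.randint(0, 2, (batch,))
    return ids, offsets, labels

batches = {dom: fake_batch() for dom in ["reviews", "news", "forums"]}
print(meta_train_step(model, batches, meta_opt))
```

The leave-one-domain-out split is the assumed stand-in for "simulating generalization to an unseen domain"; in the paper this episodic loop additionally coordinates with the memory and jury mechanisms, which the sketch omits.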

Keywords

» Artificial intelligence  » Deep learning  » Generalization  » Meta learning  » Text classification