


Contrastive Learning Subspace for Text Clustering

by Qian Yong, Chen Chen, Xiabing Zhou

First submitted to arXiv on: 26 Aug 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
See the paper's original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
Contrastive learning has been applied to text clustering, but existing methods focus solely on instance-wise semantic similarity, neglecting contextual information and the relationships among all the instances to be clustered. This paper proposes Subspace Contrastive Learning (SCL), a novel text clustering approach that models cluster-wise relationships among instances. SCL consists of two modules: a self-expressive module that constructs virtual positive samples, and a contrastive learning module that learns a discriminative subspace. The method also uses graph attention networks to model relationships among texts. Experiments on multiple text clustering datasets show that SCL outperforms existing approaches while requiring less complexity in positive-sample construction.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper explores new ways to group similar texts together. Current methods only look at how each text relates to another one pair at a time; they don't consider the bigger picture of how all the texts are connected. The authors propose a new method called Subspace Contrastive Learning (SCL) that takes these broader relationships into account. SCL has two parts: one creates "virtual" positive examples, and the other learns patterns that separate the groups. This approach does better than previous methods on various text clustering tasks, while also being simpler to use.
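The paper itself isn't reproduced here, so the following is only a toy illustration of the two ideas the summaries describe: building "virtual" positive samples by self-expression (reconstructing each embedding as a combination of the others) and scoring them with a contrastive objective. The coefficient matrix `C`, the InfoNCE-style loss, and the temperature value are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def self_expressive_positives(Z, C):
    """Build virtual positives: reconstruct each embedding z_i as a
    combination of the other embeddings (z_i ~ sum_j c_ij * z_j).
    Zeroing the diagonal of C forbids the trivial identity solution."""
    C = C - np.diag(np.diag(C))
    return C @ Z

def info_nce(Z, Z_pos, temperature=0.5):
    """InfoNCE-style contrastive loss: each reconstruction is the
    positive for its own anchor; all other rows act as negatives."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)      # normalize anchors
    Pn = Z_pos / np.linalg.norm(Z_pos, axis=1, keepdims=True)  # normalize positives
    logits = Zn @ Pn.T / temperature                       # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                    # -log p(positive | anchor)

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 16))             # toy stand-in for text embeddings
C = rng.normal(scale=0.1, size=(8, 8))   # self-expression coefficients (learned in practice)
Z_pos = self_expressive_positives(Z, C)
loss = info_nce(Z, Z_pos)
```

In a real pipeline both `C` and the encoder producing `Z` would be trained jointly so that the loss pulls each text toward its self-expressed reconstruction, which draws texts from the same subspace (cluster) together.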

Keywords

» Artificial intelligence  » Attention  » Clustering