Summary of Investigating Semi-Supervised Learning Algorithms in Text Datasets, by Himmet Toprak Kesgin et al.
Investigating Semi-Supervised Learning Algorithms in Text Datasets
by Himmet Toprak Kesgin, Mehmet Fatih Amasyali
First submitted to arXiv on: 3 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract on arXiv. |
| Medium | GrooveSquid.com (original content) | In this paper, the researchers investigate how large training datasets improve neural networks and explore semi-supervised learning (SSL), which is particularly useful when labeled data is scarce but unlabeled data is plentiful. The study compares SSL algorithms that do not rely on data augmentation: self-training, co-training, tri-training, and tri-training with disagreement (a minimal self-training sketch follows this table). The researchers evaluate these methods on four text datasets covering different tasks, analyze each algorithm's performance from multiple angles, and suggest potential improvements. |
| Low | GrooveSquid.com (original content) | This study is about making computer programs learn better. Researchers tried to figure out how to make neural networks smarter using lots of data. They found that, even without changing or augmenting the data, some learning methods work better than others on text-based tasks. The team compared different learning techniques on four kinds of text to test them. They looked at how well each method did and gave ideas for making them even better. |
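To give a concrete sense of how one of the compared algorithms works, below is a minimal self-training sketch in Python using scikit-learn. It is an illustration only, not the paper's implementation: the toy texts, the confidence threshold, and the number of rounds are assumptions made for this example.

```python
# A minimal self-training sketch, assuming scikit-learn is available.
# The toy texts, the confidence threshold, and the round count are
# illustrative assumptions, not values from the paper.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Small labeled set plus an unlabeled pool (hypothetical data).
labeled_texts = ["great movie", "terrible plot", "loved the acting", "boring and slow"]
labels = np.array([1, 0, 1, 0])
unlabeled_texts = ["an amazing film", "what a waste of time",
                   "really enjoyable", "utterly dull"]

vectorizer = TfidfVectorizer().fit(labeled_texts + unlabeled_texts)
X_l = vectorizer.transform(labeled_texts)
X_u = vectorizer.transform(unlabeled_texts)

threshold = 0.6  # minimum predicted probability to accept a pseudo-label
for _ in range(5):  # a few self-training rounds
    clf = LogisticRegression().fit(X_l, labels)
    if X_u.shape[0] == 0:
        break
    probs = clf.predict_proba(X_u)
    confident = np.flatnonzero(probs.max(axis=1) >= threshold)
    if confident.size == 0:
        break  # no confident predictions left; stop early
    # Move confidently pseudo-labeled examples into the labeled set.
    X_l = vstack([X_l, X_u[confident]])
    labels = np.concatenate([labels, probs[confident].argmax(axis=1)])
    remaining = np.setdiff1d(np.arange(X_u.shape[0]), confident)
    X_u = X_u[remaining]

print(f"Final labeled-set size: {X_l.shape[0]}")
```

Co-training and tri-training extend the same idea by training two or three classifiers and letting them supply pseudo-labels for one another; in tri-training with disagreement, a classifier only accepts an unlabeled example when the other two agree on its label while it predicts differently.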
Keywords
* Artificial intelligence * Data augmentation * Self-training * Semi-supervised