Summary of STENCIL: Submodular Mutual Information Based Weak Supervision for Cold-Start Active Learning, by Nathan Beck et al.
STENCIL: Submodular Mutual Information Based Weak Supervision for Cold-Start Active Learning
by Nathan Beck, Adithya Iyer, Rishabh Iyer
First submitted to arXiv on: 21 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper introduces STENCIL ("Submodular Mutual Information Based Weak Supervision for Cold-Start Active Learning"), a method for reducing the annotation cost of cold-start, class-imbalanced text classification in natural language processing (NLP). Using a small set of text exemplars and the submodular mutual information between those exemplars and the unlabeled pool, STENCIL selects a weakly labeled set of rare-class instances that is then strongly labeled by an annotator. Across multiple text classification datasets, this improves overall accuracy by 10-18% and rare-class F-1 score by 17-40% compared to common active learning methods (a hedged code sketch of the selection step follows the table). |
| Low | GrooveSquid.com (original content) | Imagine you're trying to teach a computer to recognize different types of text, like emails or news articles. Right now, it takes a lot of work to label all the data so the computer can learn. A new approach called STENCIL makes this process faster and more accurate. It uses a few example texts and a math formula to pick out the most useful cases for humans to label first, especially examples from rare categories that would otherwise be overlooked. This helps computers get better at recognizing different types of text and makes them easier to train. |
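For readers who want a concrete picture of the selection step described in the medium summary, here is a minimal, hypothetical sketch of submodular-mutual-information-based selection. It is not the authors' released code: it assumes the candidate texts and rare-class exemplars are already embedded as vectors, uses a facility-location-style instantiation of submodular mutual information with a simple greedy maximizer, and all names (`greedy_smi_select`, `budget`, the cosine-similarity helper) are illustrative.

```python
# Hypothetical sketch of SMI-based selection for cold-start active learning.
# Assumes embeddings are precomputed; all function names are illustrative.
import numpy as np

def cosine_similarity(x, y):
    """Pairwise cosine similarity between rows of x (n, d) and y (m, d)."""
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    y = y / (np.linalg.norm(y, axis=1, keepdims=True) + 1e-12)
    return x @ y.T

def greedy_smi_select(candidate_emb, exemplar_emb, budget):
    """Greedily pick `budget` candidates that maximize a facility-location-style
    submodular mutual information with the rare-class exemplar set."""
    sim = cosine_similarity(candidate_emb, exemplar_emb)  # (n_candidates, n_exemplars)
    n = sim.shape[0]
    selected = []
    # Best coverage of each exemplar by the current selection (starts at zero).
    exemplar_cover = np.zeros(sim.shape[1])
    remaining = set(range(n))
    for _ in range(budget):
        best_i, best_gain = None, -np.inf
        for i in remaining:
            # Marginal gain: how much candidate i improves exemplar coverage,
            # plus how well candidate i itself is represented by the exemplars.
            gain = np.maximum(sim[i], exemplar_cover).sum() - exemplar_cover.sum()
            gain += sim[i].max()
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        exemplar_cover = np.maximum(exemplar_cover, sim[best_i])
        remaining.remove(best_i)
    return selected  # indices of instances to send to the annotator for strong labels

# Toy usage: 100 unlabeled texts, 5 rare-class exemplars, pick 10 to label.
rng = np.random.default_rng(0)
picked = greedy_smi_select(rng.normal(size=(100, 64)),
                           rng.normal(size=(5, 64)), budget=10)
print(picked)
```

The greedy loop reflects the standard way a submodular objective is maximized under a labeling budget; swapping in a different submodular mutual information instantiation would only change the marginal-gain computation inside the loop.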
Keywords
* Artificial intelligence
* Active learning
* Natural language processing
* NLP
* Text classification