Towards Cross-Lingual Audio Abuse Detection in Low-Resource Settings with Few-Shot Learning
by Aditya Narayan Sankaran, Reza Farahbakhsh, Noel Crespi
First submitted to arXiv on: 2 Dec 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper explores the use of pre-trained audio representations for detecting online abusive content in low-resource languages, specifically Indian languages, using Few-Shot Learning (FSL). The authors leverage powerful representations from models like Wav2Vec and Whisper to detect abusive speech across 10 languages. They experiment with various shot sizes to evaluate the impact of limited data on performance, and conduct a feature visualization study to understand model behavior. |
| Low | GrooveSquid.com (original content) | The paper is about using machines to find mean words online in languages that don't have much data available. The researchers use powerful audio models to detect mean language in 10 Indian languages. They test how well this works with different amounts of training data and also look at what the model is doing. |
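To make the idea concrete, here is a minimal sketch of one common few-shot approach: classify each query clip by comparing its embedding to per-class "prototypes" (mean embeddings of the few labeled support examples). This is an illustrative nearest-class-mean classifier over generic embeddings, not the paper's exact pipeline; the embeddings would normally come from a model like Wav2Vec or Whisper, but here they are just placeholder vectors.

```python
import numpy as np

def fewshot_prototype_classify(support_embs, support_labels, query_embs):
    """Classify query embeddings by cosine similarity to class prototypes.

    support_embs: (n_support, dim) array of embeddings for the few labeled clips
    support_labels: list of class labels, one per support embedding
    query_embs: (n_query, dim) array of embeddings to classify
    """
    classes = sorted(set(support_labels))
    # Prototype = mean embedding of each class's support examples
    protos = np.stack([
        np.mean([e for e, y in zip(support_embs, support_labels) if y == c], axis=0)
        for c in classes
    ])
    # Cosine similarity between normalized queries and prototypes
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sims = q @ p.T
    return [classes[i] for i in sims.argmax(axis=1)]
```

In a real cross-lingual setting, the support set would contain only a handful of labeled clips per class (the "shot size" the paper varies), while the embeddings carry over knowledge from the pre-trained model.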
Keywords
» Artificial intelligence » Few-shot