
Summary of "Deep Prompt Multi-task Network for Abuse Language Detection" by Jian Zhu et al.


Deep Prompt Multi-task Network for Abuse Language Detection

by Jian Zhu, Yuping Ruan, Jingfei Chang, Wenhui Sun, Hui Wan, Jian Long, Cheng Luo

First submitted to arXiv on: 8 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed Deep Prompt Multi-task Network (DPMN) is a novel approach to detecting abusive language online. Existing detection methods rely on fine-tuning pre-trained language models (PLMs) and therefore struggle to leverage the PLMs' general knowledge. DPMN addresses this issue by designing two forms of prompt tuning for PLMs: deep prompt tuning and light prompt tuning. The effects of different prompt lengths, tuning strategies, and prompt initialization methods on detecting abusive language are explored. Additionally, a Task Head based on a Bi-LSTM and a feed-forward network (FFN) is proposed as a short-text classifier, and multi-task learning is used to further improve detection metrics. Experimental results show that DPMN outperforms state-of-the-art methods on three public datasets: OLID, SOLID, and AbuseAnalyzer.
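To make the prompt-tuning idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of light prompt tuning in PyTorch: a small set of trainable continuous prompt vectors is prepended to the frozen PLM's input embeddings, so only the prompts (and the task head) are updated during training. Deep prompt tuning extends the same idea by injecting prompts at every transformer layer rather than only at the input. All sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LightPromptEncoder(nn.Module):
    """Prepends trainable prompt vectors to input embeddings (illustrative sketch)."""

    def __init__(self, hidden_size=768, prompt_length=16):
        super().__init__()
        # Trainable continuous prompts, randomly initialized here; the paper
        # also studies the effect of other prompt initialization strategies.
        self.prompts = nn.Parameter(torch.randn(prompt_length, hidden_size) * 0.02)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, hidden_size) from the frozen PLM's embedding layer
        batch = input_embeds.size(0)
        expanded = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the prompts; the concatenated sequence is fed to the frozen PLM.
        return torch.cat([expanded, input_embeds], dim=1)

enc = LightPromptEncoder()
out = enc(torch.randn(4, 32, 768))
print(out.shape)  # sequence grows by the prompt length: (4, 48, 768)
```

Because the PLM's weights stay frozen, the number of trainable parameters is tiny compared with full fine-tuning, which is what lets prompt tuning preserve the PLM's general knowledge.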
Low Difficulty Summary (original content by GrooveSquid.com)
Detecting abusive language online is a tough problem that needs solving. Right now, existing methods aren’t very good at it. We think this is because they’re relying too much on fine-tuning pre-trained language models (PLMs). So we came up with a new approach called Deep Prompt Multi-task Network (DPMN) to help detect abusive language better. DPMN tries out different ways of tweaking the PLMs’ prompts and sees how that affects detection accuracy. We also designed a special Task Head for short text classification. And because it’s hard to get good results on just one task, we used multi-task learning to make our approach even better. In tests, DPMN did way better than existing methods on three big datasets.
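The Task Head described above (a Bi-LSTM followed by an FFN acting as a short-text classifier) can be sketched as follows. This is an illustrative PyTorch implementation under assumed dimensions, not the authors' released code: the bidirectional LSTM reads the PLM's token representations, the states are mean-pooled, and the feed-forward network maps the pooled vector to class logits.

```python
import torch
import torch.nn as nn

class TaskHead(nn.Module):
    """Bi-LSTM + FFN short-text classification head (illustrative sketch)."""

    def __init__(self, hidden_size=768, lstm_size=256, num_classes=2):
        super().__init__()
        # Bidirectional LSTM over the PLM's per-token output states.
        self.bilstm = nn.LSTM(hidden_size, lstm_size,
                              batch_first=True, bidirectional=True)
        # Feed-forward network mapping the pooled state to class logits.
        self.ffn = nn.Sequential(
            nn.Linear(2 * lstm_size, lstm_size),
            nn.ReLU(),
            nn.Linear(lstm_size, num_classes),
        )

    def forward(self, token_states):
        # token_states: (batch, seq_len, hidden_size)
        out, _ = self.bilstm(token_states)   # (batch, seq_len, 2 * lstm_size)
        pooled = out.mean(dim=1)             # mean-pool over the token dimension
        return self.ffn(pooled)              # (batch, num_classes) logits

head = TaskHead()
logits = head(torch.randn(4, 32, 768))
print(logits.shape)  # (4, 2): one abusive/non-abusive logit pair per example
```

In a multi-task setup, one such head per task (e.g., one per dataset) would typically sit on top of a shared prompt-tuned PLM, with the task losses combined during training.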

Keywords

* Artificial intelligence  * Fine tuning  * Lstm  * Multi task  * Prompt  * Text classification