Summary of Securing Social Spaces: Harnessing Deep Learning to Eradicate Cyberbullying, by Rohan Biswas et al.


Securing Social Spaces: Harnessing Deep Learning to Eradicate Cyberbullying

by Rohan Biswas, Kasturi Ganguly, Arijit Das, Diganta Saha

First submitted to arXiv on: 1 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed deep learning-based approach uses BERT and BiLSTM architectures to detect cyberbullying in online spaces. By analyzing large volumes of posts, the approach can flag potential instances of cyberbullying. The hateBERT model, a variant of BERT retrained for abusive-language detection, achieves an accuracy of 89.16%, outperforming four other models. This research contributes to “Computational Intelligence for Social Transformation,” promising a safer and more inclusive digital landscape.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Cyberbullying is a serious problem that affects people’s mental and physical health when they use social media. It’s important to find better ways to detect cyberbullying so online spaces can be safer. To do this, researchers introduced a new approach using deep learning. This approach looks at lots of posts online and flags ones that may be bullying. The best model, called hateBERT, spotted cyberbullying correctly about 89% of the time.
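To make the model comparison above concrete, a BiLSTM text classifier of the kind the paper evaluates alongside hateBERT can be sketched in PyTorch. This is an illustrative skeleton, not the authors' implementation: the vocabulary size, embedding and hidden dimensions, and the two-class (bullying / not bullying) output head are all assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Illustrative BiLSTM post classifier (hyperparameters are assumptions,
    not taken from the paper)."""
    def __init__(self, vocab_size=10000, embed_dim=128,
                 hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)        # (batch, seq, embed_dim)
        _, (hidden, _) = self.bilstm(embedded)      # hidden: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states.
        pooled = torch.cat([hidden[0], hidden[1]], dim=1)
        return self.classifier(pooled)              # (batch, num_classes)

model = BiLSTMClassifier()
batch = torch.randint(1, 10000, (4, 32))  # 4 dummy posts, 32 token ids each
logits = model(batch)                     # shape: (4, 2)
```

In practice a transformer model such as hateBERT replaces the embedding and BiLSTM layers with pretrained contextual encodings, which is what the summaries credit for the higher accuracy.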

Keywords

  • Artificial intelligence
  • BERT
  • Deep learning