
Summary of Identifying Cyberbullying Roles in Social Media, by Manuel Sandoval et al.


Identifying Cyberbullying Roles in Social Media

by Manuel Sandoval, Mohammed Abuhamad, Patrick Furman, Mujtaba Nazari, Deborah L. Hall, Yasin N. Silva

First submitted to arXiv on: 21 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL); Computers and Society (cs.CY); Social and Information Networks (cs.SI)

Abstract of paper · PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it via the abstract link above.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates the use of machine learning models to detect the roles involved in cyberbullying interactions. The authors fine-tune four language models, BERT, RoBERTa, T5, and GPT-2, on oversampled data from the AMiCA dataset and evaluate them with several metrics, showing that oversampling improves model accuracy. The best-performing model, a fine-tuned RoBERTa, achieves an overall F1 score of 83.5%. A per-class analysis shows that the models perform well on roles with many training samples but struggle on roles with few samples and high contextual ambiguity. The study highlights the current limitations and challenges in developing accurate models for detecting cyberbullying roles. (A rough code sketch of this fine-tuning and evaluation setup appears after the summaries below.)

Low Difficulty Summary (original content by GrooveSquid.com)
This research aims to help detect and stop online bullying by using special computer programs called machine learning models. These models can learn from examples and get better at recognizing different types of behavior online, like who is being mean or who is trying to help. The researchers tested four different models on a dataset of cyberbullying incidents and found that one model, based on RoBERTa, was the most accurate. The study shows how machine learning can help stop online bullying and describes the challenges it still faces.
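As a rough illustration of the pipeline described in the medium difficulty summary, the sketch below fine-tunes RoBERTa as a role classifier on randomly oversampled text and reports a macro F1 score. It uses the Hugging Face transformers and datasets libraries with scikit-learn; the role labels, example texts, and hyperparameters are illustrative assumptions, not the authors' actual configuration or the AMiCA data.

```python
# Minimal sketch (not the authors' code): fine-tune RoBERTa for cyberbullying-role
# classification on randomly oversampled text and report macro F1.
from collections import Counter
import random

import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder role labels; the actual label set comes from the AMiCA annotations.
ROLES = ["harasser", "victim", "bystander_defender", "bystander_assistant"]

def oversample(texts, labels, seed=0):
    """Duplicate minority-class examples at random until every class matches the majority count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_texts, out_labels = list(texts), list(labels)
    for cls, n in counts.items():
        idx = [i for i, lab in enumerate(labels) if lab == cls]
        for _ in range(target - n):
            i = rng.choice(idx)
            out_texts.append(texts[i])
            out_labels.append(labels[i])
    return out_texts, out_labels

# Toy stand-in data; a real run would load the AMiCA conversations instead.
train_texts = ["you are worthless", "please leave them alone", "are you ok?", "lol just ignore it"]
train_labels = [0, 2, 2, 3]
train_texts, train_labels = oversample(train_texts, train_labels)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=len(ROLES))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = Dataset.from_dict({"text": train_texts, "label": train_labels}).map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"macro_f1": f1_score(labels, preds, average="macro")}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-roles", num_train_epochs=3,
                           per_device_train_batch_size=8, logging_steps=10),
    train_dataset=train_ds,
    eval_dataset=train_ds,          # a held-out split would be used in practice
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```

Random duplication is only the simplest class-balancing strategy; the paper evaluates oversampling more broadly, and a faithful replication would use proper train/validation/test splits of the AMiCA data rather than the toy examples above.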

Keywords

» Artificial intelligence  » Bert  » F1 score  » Fine tuning  » Gpt  » Machine learning  » T5