
Automated essay scoring in Arabic: a dataset and analysis of a BERT-based system

by Rayed Ghazawi, Edwin Simpson

First submitted to arXiv on: 15 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces AR-AES, a benchmark dataset for Arabic Automated Essay Scoring (AES) consisting of 2046 undergraduate essays with gender information, scores, and transparent rubric-based evaluation guidelines. The dataset covers four diverse courses, spanning both traditional and online exams, and provides comprehensive insight into the scoring process. The study also pioneers the use of AraBERT for AES, exploring its performance on different question types. The results are encouraging, particularly for Environmental Chemistry and source-dependent essay questions. The paper further examines the scale of errors made by a BERT-based AES system, finding that 96.15% of its predictions fall within one point of the first human marker’s score, and 79.49% match it exactly. These findings highlight the subjectivity inherent in essay grading and underscore the potential for current AES technology to help human markers grade consistently across large classes.
Low Difficulty Summary (original content by GrooveSquid.com)
This paper creates a big dataset to help computers score essays written in Arabic. This is important because data like this is hard to find, even though many students learn in Arabic. The researchers also try using a special computer model called AraBERT to see if it can do a good job scoring essays. They find that it works pretty well, especially for questions about environmental chemistry and questions that ask students to write about a source they have read. But what’s even more interesting is that the computer makes mistakes sometimes – just like humans do! In fact, most of the time the computer’s score is within one point of what a human marker would give. This shows that computers could be really helpful in grading essays, but they’re not perfect and need to work together with humans.
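
To make the approach in the summaries above more concrete, here is a minimal sketch of how a BERT-based Arabic AES system could be assembled with the Hugging Face transformers library, together with the exact-match and within-one-point agreement statistics quoted above. The AraBERT checkpoint name, the regression formulation, the rounding of predictions to whole marks, and the helper functions (score_essays, agreement_stats) are illustrative assumptions, not the authors’ actual implementation.

```python
# Illustrative sketch (not the paper's exact pipeline) of a BERT-based Arabic AES setup:
# AraBERT with a single regression output, plus exact and within-one-point agreement.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # a publicly available AraBERT checkpoint (assumed)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=1 gives a single-output head, i.e. score prediction framed as regression.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)


def score_essays(essays, batch_size=8):
    """Predict a continuous score for each essay.

    Note: the freshly loaded regression head is untrained, so real use requires
    fine-tuning on scored essays (e.g. with the Hugging Face Trainer) first.
    """
    model.eval()
    preds = []
    with torch.no_grad():
        for i in range(0, len(essays), batch_size):
            batch = tokenizer(
                essays[i:i + batch_size],
                truncation=True, max_length=512, padding=True, return_tensors="pt",
            )
            logits = model(**batch).logits.squeeze(-1)  # shape: (batch,)
            preds.extend(logits.tolist())
    return preds


def agreement_stats(predicted, human, tolerance=1.0):
    """Exact-match rate and within-`tolerance` rate after rounding predictions to whole marks."""
    predicted = np.round(np.asarray(predicted, dtype=float))
    human = np.asarray(human, dtype=float)
    exact = float(np.mean(predicted == human))
    adjacent = float(np.mean(np.abs(predicted - human) <= tolerance))
    return exact, adjacent


if __name__ == "__main__":
    essays = ["نص المقال الأول ...", "نص المقال الثاني ..."]  # placeholder Arabic essays
    human_scores = [4, 3]                                      # placeholder first-marker scores
    predictions = score_essays(essays)
    exact, within_one = agreement_stats(predictions, human_scores)
    print(f"exact agreement: {exact:.2%}, within one point: {within_one:.2%}")
```

Framing scoring as regression with a single output is one common design choice for BERT-based AES; a classifier over the discrete mark range would be an equally plausible setup, and the paper’s system may differ from this sketch in either direction.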

Keywords

» Artificial intelligence  » BERT