Summary of Explainable Automatic Grading with Neural Additive Models, by Aubrey Condor et al.
Explainable Automatic Grading with Neural Additive Models
by Aubrey Condor, Zachary Pardos
First submitted to arXiv on: 1 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL); Applications (stat.AP)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper but is written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the paper’s original abstract on arXiv |
Medium | GrooveSquid.com (original content) | A new approach to automatic short answer grading (ASAG) is proposed, aiming to provide both accuracy and interpretability. The authors experiment with Neural Additive Models (NAMs), which combine the predictive power of large neural networks (NNs) with the explainability of additive models (see the sketch after this table). Feature engineering is guided by a Knowledge Integration (KI) framework from the learning sciences, producing inputs that indicate whether a student’s response includes specific ideas. The authors hypothesize that these inclusion/exclusion indicators give NAMs both good predictive power and interpretability. NAM performance is compared against an explainable logistic regression model using the same features and against a non-explainable DeBERTa model. |
Low | GrooveSquid.com (original content) | Automated grading can help teachers focus on more important tasks while students learn from feedback. However, current AI-powered grading systems are often “black boxes,” making it difficult for students to understand why they received a certain score. To change this, the researchers used a type of AI model that explains its decisions. This “Neural Additive Model” combines the power of neural networks with the ability to show which important ideas a student’s response includes. The goal is an AI system that not only grades answers accurately but also helps students learn by showing them how their answers were scored. |
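The summaries describe a Neural Additive Model whose inputs are binary indicators of whether a response includes specific KI ideas: each feature is passed through its own small network, and the per-feature outputs are summed into a grade logit, so every idea’s contribution to the score can be inspected. The PyTorch sketch below illustrates that general architecture under stated assumptions; the feature count, network sizes, and names are hypothetical and not the authors’ implementation.

```python
# Minimal sketch of a Neural Additive Model (NAM) over binary "idea inclusion"
# features. Sizes and feature names are illustrative assumptions.
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Small MLP applied to a single scalar feature."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # x: (batch, 1) -> (batch, 1) contribution of this one feature
        return self.net(x)


class NAM(nn.Module):
    """Sum of per-feature networks plus a bias; each term is inspectable."""

    def __init__(self, num_features: int, hidden: int = 16):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            [FeatureNet(hidden) for _ in range(num_features)]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, num_features), e.g. 0/1 indicators of KI idea inclusion
        contributions = torch.cat(
            [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)],
            dim=1,
        )  # (batch, num_features)
        logit = contributions.sum(dim=1) + self.bias
        return logit, contributions


# Toy usage: 5 hypothetical idea-indicator features for 3 student responses.
model = NAM(num_features=5)
responses = torch.tensor(
    [[1, 0, 1, 1, 0],
     [0, 0, 1, 0, 0],
     [1, 1, 1, 1, 1]],
    dtype=torch.float32,
)
logit, contributions = model(responses)
prob_correct = torch.sigmoid(logit)  # per-response probability of a correct grade
# contributions[j] shows how much each included/excluded idea pushed response j's score.
```

In a grading setup like the one summarized here, such a model would typically be trained with a binary cross-entropy loss against human-assigned scores; reading off each feature’s learned contribution is what provides the interpretability the summaries describe.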
Keywords
» Artificial intelligence » Feature engineering » Logistic regression