Using Generative Text Models to Create Qualitative Codebooks for Student Evaluations of Teaching

by Andrew Katz, Mitchell Gerhardt, Michelle Soledad

First submitted to arXiv on: 18 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed method leverages natural language processing (NLP) and large language models (LLMs) to analyze student evaluations of teaching (SETs). The approach enables the extraction, embedding, clustering, and summarization of SETs to identify recurring themes. This is demonstrated by applying the method to a corpus of 5,000 SETs from a large public university. The resulting codebook can be used to generate actionable insights for educators, administrators, and researchers. By combining NLP techniques with LLMs, this work provides a novel framework for analyzing SETs and other types of student writing.
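
To make the pipeline concrete, here is a minimal sketch of the embed, cluster, and summarize steps on a few toy comments. The library choices (sentence-transformers, scikit-learn, and the OpenAI client), the model names, and the cluster count are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: embed -> cluster -> summarize student comments into
# candidate codebook themes. All libraries, models, and parameters here
# are assumptions for illustration, not the paper's implementation.
from collections import defaultdict

from openai import OpenAI
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Toy stand-ins for extracted SET comments (the paper analyzes ~5,000).
comments = [
    "The instructor explained concepts clearly with good examples.",
    "Homework took far too long relative to the credit hours.",
    "Office hours were very helpful whenever I got stuck.",
    "Lectures just read off the slides with no interaction.",
]

# 1. Embed each comment as a dense vector.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(comments)

# 2. Cluster the embeddings so thematically similar comments group together.
n_clusters = 2  # in practice, chosen with a cluster-quality heuristic
labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(embeddings)

clusters = defaultdict(list)
for comment, label in zip(comments, labels):
    clusters[label].append(comment)

# 3. Ask an LLM to summarize each cluster into a candidate codebook entry.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
for label, members in sorted(clusters.items()):
    prompt = (
        "These student evaluation comments were grouped together. Propose a "
        "short qualitative code (theme name plus one-sentence definition) "
        "covering them:\n" + "\n".join(f"- {m}" for m in members)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Cluster {label}: {response.choices[0].message.content}")
```

In a real analysis, a researcher would typically review and merge the proposed codes before finalizing the codebook.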

Low Difficulty Summary (written by GrooveSquid.com, original content)
Feedback is important for improvement, but when there’s lots of feedback from many sources, it’s hard to make sense of it all. Student evaluations can be helpful for teachers and administrators, but when there are thousands of them, they’re difficult to analyze. This paper proposes a new way to use natural language processing (NLP) and large language models to understand student evaluations. The method helps extract important themes from these evaluations. By applying this approach to 5,000 student evaluations from a large university, the researchers show that it works well. This work can help teachers, administrators, and researchers make better sense of student feedback.

Keywords

» Artificial intelligence  » Clustering  » Embedding  » Natural language processing  » NLP  » Summarization