Summary of “A Computational Method for Measuring ‘Open Codes’ in Qualitative Analysis,” by John Chen et al.


A Computational Method for Measuring “Open Codes” in Qualitative Analysis

by John Chen, Alexandros Lotsos, Lexie Zhao, Caiyi Wang, Jessica Hullman, Bruce Sherin, Uri Wilensky, Michael Horn

First submitted to arXiv on: 19 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC); Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
Medium Difficulty Summary: This study addresses the need to systematically measure and evaluate generative AI (GAI) outcomes in qualitative data analysis, specifically in open coding. Drawing on Grounded Theory and Thematic Analysis, the authors develop a computational method to identify potential biases in “open codes.” Unlike previous studies that treated human expert results as ground truth, this approach frames coding as a team-based collaboration between human and machine coders. The method’s reliability is established on two HCI datasets by comparing its results with human analysis and by examining the stability of its outputs. The study concludes with evidence-based suggestions and example workflows for integrating ML/GAI into open coding.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Low Difficulty Summary: This research aims to improve how computers analyze human data in social sciences such as sociology. It wants to make sure computer results are fair and unbiased by comparing them to what human experts think is correct. To do this, it combines human experts’ work with machine learning (ML) tools. The study tests its new approach on two big datasets from the field of Human-Computer Interaction (HCI). By doing this, it hopes to provide a better way for computers to help humans analyze data in social sciences.
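The summaries above describe comparing machine-generated open codes with human analysis. As a rough illustration of what such a comparison can look like (this is a generic sketch, not the paper’s actual method, and all code labels below are invented examples), one simple way to quantify agreement between two coders’ code sets is the Jaccard index:

```python
# Illustrative sketch only: a generic overlap metric between two coders'
# open-code sets. NOT the metric from the paper; example codes are invented.
def jaccard_similarity(codes_a, codes_b):
    """Jaccard index between two sets of open codes.

    Returns 0.0 for disjoint sets and 1.0 for identical sets.
    Codes are normalized (stripped, lowercased) before comparison.
    """
    a = {c.strip().lower() for c in codes_a}
    b = {c.strip().lower() for c in codes_b}
    if not a and not b:
        return 1.0  # two empty code sets count as fully agreeing
    return len(a & b) / len(a | b)

# Hypothetical codes from a human coder and a machine (GAI) coder
human_codes = ["trust in automation", "privacy concern", "ease of use"]
machine_codes = ["Privacy concern", "ease of use", "learning curve"]

print(jaccard_similarity(human_codes, machine_codes))  # 0.5
```

A set-overlap metric like this only captures exact (normalized) label matches; comparing semantically similar but differently worded codes would need a softer measure, such as embedding similarity.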

Keywords

  • Artificial intelligence
  • Machine learning