Sharingan: Extract User Action Sequence from Desktop Recordings
by Yanting Chen, Yi Ren, Xiaoting Qin, Jue Zhang, Kehong Yuan, Lu Han, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, Qi Zhang
First submitted to arXiv on: 13 Nov 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper proposes two novel Vision-Language Model (VLM)-based methods for extracting user actions from desktop video recordings. The Direct Frame-Based Approach (DF) inputs sampled frames directly into VLMs, while the Differential Frame-Based Approach (DiffF) incorporates explicit frame differences detected via computer vision techniques. Evaluation on a self-curated dataset and an adapted benchmark shows that the DF approach achieves 70-80% accuracy in identifying user actions, with the extracted action sequences being replayable through Robotic Process Automation (a rough code sketch of both approaches follows this table).
Low | GrooveSquid.com (original content) | This paper helps us better understand how people use computers by automatically analyzing video recordings of what they do on their screens. The authors develop two new ways to extract information from these videos using special models that combine computer vision and language processing. They test these methods on data they collected themselves and on a dataset from previous research. The results show that one method is pretty good at identifying actions, like clicking or typing. This work can help us create more automated systems that mimic human behavior.
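As a rough illustration of the pipeline the medium summary describes, here is a minimal Python sketch of the two approaches. The sampling rate, the pixel-difference threshold, and the `call_vlm()` helper are all hypothetical placeholders; the paper's actual prompts, models, and change-detection method are not specified in this summary.

```python
# Minimal sketch of the DF and DiffF ideas described above.
# Assumptions: frames are sampled at a fixed stride, UI changes are detected
# with a crude mean pixel difference, and call_vlm() stands in for whatever
# VLM interface is actually used (not specified here).
import cv2
import numpy as np

def sample_frames(video_path, every_n=30):
    """Sample every n-th frame from a desktop recording."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def frame_differences(frames, threshold=10.0):
    """Flag consecutive frame pairs whose pixel difference suggests a UI change."""
    flags = []
    for prev, curr in zip(frames, frames[1:]):
        delta = cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                            cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY))
        flags.append(float(np.mean(delta)) > threshold)  # crude change detector
    return flags

frames = sample_frames("recording.mp4")

# DF: feed sampled frames directly to a VLM (call_vlm is a placeholder).
# actions = call_vlm(prompt="List the user actions shown.", images=frames)

# DiffF: additionally pass an explicit change signal so the VLM can focus
# on the frames where an action likely occurred.
# actions = call_vlm(prompt="List the user actions shown.",
#                    images=frames, change_flags=frame_differences(frames))
```

The extracted action sequence (e.g. "click button X, type Y") could then be handed to a Robotic Process Automation tool for replay, as the evaluation in the paper demonstrates.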
Keywords
» Artificial intelligence » Language model