
Summary of Exploring Human-AI Perception Alignment in Sensory Experiences: Do LLMs Understand Textile Hand?, by Shu Zhong et al.


Exploring Human-AI Perception Alignment in Sensory Experiences: Do LLMs Understand Textile Hand?

by Shu Zhong, Elia Gatti, Youngjun Cho, Marianna Obrist

First submitted to arXiv on: 5 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)
This paper explores how well large language models (LLMs) align with human experiences of touch. The researchers designed an interaction in which participants handled two textile samples without seeing them and then described the differences between them to the LLM. The LLM attempted to identify the target textile by assessing similarity in its high-dimensional embedding space (a rough illustrative sketch of this matching step follows after the summaries). While a degree of perceptual alignment exists, it varies significantly across textile samples: LLM predictions were well aligned for silk satin, for example, but not for cotton denim. This study paves the way for future research on human-AI perceptual alignment and its potential benefits for everyday tasks.

Low Difficulty Summary (GrooveSquid.com, original content)
This paper is about how computers understand what we feel when we touch things. The researchers wanted to see if large language models can match what we feel when we handle different textures, like silk or denim. They asked people to describe the differences between two fabrics without looking at them, and the language model then used these descriptions to try to guess which fabric was which. The results show that there is some connection between what humans feel and what computers understand, but it is not perfect. Some fabrics, like silk satin, are easier for computers to understand than others, like cotton denim. This study helps us learn more about how humans and computers can work together better.

Keywords

  • Artificial intelligence
  • Alignment
  • Embedding space