


Can Pre-trained Language Models Understand Chinese Humor?

by Yuyan Chen, Zhixu Li, Jiaqing Liang, Yanghua Xiao, Bang Liu, Yunwen Chen

First submitted to arXiv on: 4 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
A recent paper investigates the ability of pre-trained language models (PLMs) to understand humor, a challenging task in natural language processing. Although previous work has applied PLMs to humor recognition and generation, it has not addressed whether PLMs can truly comprehend humor. This study provides the first comprehensive evaluation framework for assessing PLM-based humor understanding, comprising three evaluation steps and four tasks. The authors also construct a novel Chinese humor dataset to meet the data requirements of this framework. Their empirical study on this dataset yields valuable insights that can inform future optimization of PLMs for humor understanding and generation.

Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper looks at whether pre-trained language models (PLMs) can understand humor, which is a tricky task in computer science. Some people have tried using PLMs to recognize and create funny jokes, but they didn't answer the main question: can PLMs really get the joke? This study tries to answer that by creating a special way to test how well PLMs understand humor, with three steps and four tasks. To support these tests, the researchers built a big dataset of Chinese jokes. What they found can help researchers build better language models for recognizing and making funny jokes in the future.

Keywords

  • Artificial intelligence
  • Natural language processing