
Summary of PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action, by Yijia Shao et al.


PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action

by Yijia Shao, Tianshi Li, Weiyan Shi, Yanchen Liu, Diyi Yang

First submitted to arXiv on: 29 Aug 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper addresses the pressing issue of ensuring that language models (LMs) comply with contextual privacy norms in personalized communication. LMs such as GPT-4 and Llama-3-70B are increasingly used for email and social media writing tasks, yet their grasp of privacy norms remains limited. To evaluate privacy risk in LM-mediated communication, the authors propose PrivacyLens, a framework that extends privacy-sensitive seeds into expressive vignettes and then into agent trajectories, enabling multi-level evaluation of privacy leakage. Instantiating PrivacyLens with a dataset grounded in the privacy literature and crowdsourced seeds reveals a discrepancy between LMs' performance when answering probing questions and their actual behavior when executing user instructions: state-of-the-art LMs such as GPT-4 and Llama-3-70B leak sensitive information in 25.68% and 38.69% of cases, respectively, even when prompted with privacy-enhancing instructions. The paper also demonstrates the dynamic nature of PrivacyLens by extending each seed into multiple trajectories to red-team LM privacy leakage risk.
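To make the pipeline concrete, the sketch below shows how a PrivacyLens-style multi-level evaluation might be wired together: a seed (who shares what, with whom, through which channel) is expanded into a vignette, the vignette drives an agent run, and the agent's final action is checked for leakage. All names here (PrivacySeed, expand_to_vignette, run_agent, leaks_sensitive_info) are illustrative assumptions, not the paper's actual API; the real framework uses LM-based expansion, a sandboxed agent environment, and an LM judge rather than the template and substring stand-ins used in this toy version.

```python
# Minimal sketch of a PrivacyLens-style multi-level evaluation loop.
# All class and function names are hypothetical illustrations, not the
# paper's actual API.
from dataclasses import dataclass

@dataclass
class PrivacySeed:
    """A privacy-sensitive seed: who shares what, with whom, and how."""
    data_type: str     # e.g. "medical diagnosis"
    data_subject: str  # whose information it is
    recipient: str     # who would receive it
    transmission: str  # e.g. "work email"

@dataclass
class Trajectory:
    """One simulated agent run: the instruction plus the final action."""
    instruction: str
    final_action: str  # e.g. the email body the agent drafted

def expand_to_vignette(seed: PrivacySeed) -> str:
    """Level 1: turn an abstract seed into a concrete story (vignette).
    A template stands in for the LM-based expansion used in the paper."""
    return (f"A colleague asks {seed.data_subject}'s assistant to send an "
            f"update to {seed.recipient} via {seed.transmission}; the "
            f"assistant also knows about {seed.data_subject}'s {seed.data_type}.")

def run_agent(vignette: str, instruction: str) -> Trajectory:
    """Level 2: execute an LM agent on the scenario. Stubbed here; in
    practice this would call the model under test with tool access."""
    final_action = f"Draft based on: {vignette}"  # placeholder model output
    return Trajectory(instruction=instruction, final_action=final_action)

def leaks_sensitive_info(trajectory: Trajectory, seed: PrivacySeed) -> bool:
    """Level 3: judge whether the final action exposes the seed's
    sensitive data. A substring check stands in for the LM-based judge."""
    return seed.data_type in trajectory.final_action

if __name__ == "__main__":
    seeds = [PrivacySeed("medical diagnosis", "Alice",
                         "the whole team", "work email")]
    leaked = 0
    for seed in seeds:
        vignette = expand_to_vignette(seed)
        trajectory = run_agent(vignette, "Send the weekly update email.")
        leaked += leaks_sensitive_info(trajectory, seed)
    print(f"Leak rate: {leaked / len(seeds):.2%}")
```

Because each level is a separate step, the same seed can be probed in two ways: as a question about the vignette ("Is it OK to share Alice's diagnosis with the team?") or as an executed instruction whose final action is inspected, which is exactly the gap between stated awareness and actual behavior that the paper measures.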
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine you’re using an AI chatbot to help you send emails or write social media posts. But what if that chatbot shares your personal info without asking? That’s the problem this paper tackles. The authors created a tool called PrivacyLens to see how well language models (the technology behind chatbots) understand privacy norms in different situations. They found that even the best models don’t always follow the rules, sharing sensitive information in roughly 25-40% of cases! This is a big deal because it means our personal info might be at risk without us knowing.

Keywords

» Artificial intelligence  » GPT  » Llama