Summary of Jailbreaking LLMs with Arabic Transliteration and Arabizi, by Mansour Al Ghanim et al.
Jailbreaking LLMs with Arabic Transliteration and Arabizi
by Mansour Al Ghanim, Saleh Almohaimeed, Mengxin Zheng, Yan Solihin, Qian Lou
First submitted to arXiv on: 26 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: This study examines the vulnerability of Large Language Models (LLMs) to "jailbreak" attacks in Arabic, extending existing research on English-based prompt manipulation. Initial testing with the AdvBench benchmark in standardized Arabic script shows limited success in eliciting unsafe content through prompt injection, but rewriting the same prompts in Arabic transliteration and chatspeak (arabizi) makes LLMs such as OpenAI GPT-4 and Anthropic Claude 3 Sonnet noticeably more likely to produce hazardous output (see the transliteration sketch below the table). The study highlights the need for comprehensive safety training across all written forms of a language, since LLMs may learn associations between specific words across different writing systems. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: This paper looks at how big language models can be tricked into saying bad things. It focuses on Arabic and finds that it is hard to make the models misbehave when requests are written in normal Arabic script, even with clever prompts. But when the same requests are written in a special Latin-letter style of Arabic called arabizi, the models are more likely to say something bad. This means someone could use arabizi to trick a model into saying things it shouldn't, which could be a problem. |
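To make the rewriting step concrete, below is a minimal, hypothetical Python sketch of an Arabizi-style transliteration like the one the paper evaluates. This is not the authors' implementation: the character map is an illustrative subset (Arabizi spellings vary by dialect and writer), and the example input is a harmless greeting rather than an AdvBench prompt.

```python
# Minimal sketch (not the authors' code) of rewriting Arabic text in
# Arabizi-style chatspeak before sending it to a model.
# The character map is an illustrative subset, not a complete standard.

ARABIZI_MAP = {
    "ا": "a", "ب": "b", "ت": "t", "ث": "th", "ج": "j",
    "ح": "7", "خ": "5", "د": "d", "ذ": "th", "ر": "r",
    "ز": "z", "س": "s", "ش": "sh", "ص": "9", "ض": "dh",
    "ط": "6", "ظ": "z", "ع": "3", "غ": "3'", "ف": "f",
    "ق": "8", "ك": "k", "ل": "l", "م": "m", "ن": "n",
    "ه": "h", "و": "w", "ي": "y", "ء": "2", "ة": "a",
}

def to_arabizi(text: str) -> str:
    """Map each Arabic character to a Latin/digit equivalent, leaving
    everything else (spaces, punctuation, Latin text) untouched."""
    return "".join(ARABIZI_MAP.get(ch, ch) for ch in text)

if __name__ == "__main__":
    # Harmless example: "مرحبا كيف حالك" ("hello, how are you").
    print(to_arabizi("مرحبا كيف حالك"))  # -> mr7ba kyf 7alk
```

The point of the sketch is that the transformation is trivial and purely surface-level, yet, per the paper's findings, it can shift a prompt outside the distribution the models' safety training covered most thoroughly.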
Keywords
» Artificial intelligence » Claude » GPT » Prompt