Loading Now

Summary of Fine-tuned Large Language Models (llms): Improved Prompt Injection Attacks Detection, by Md Abdur Rahman et al.


Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection

by Md Abdur Rahman, Fan Wu, Alfredo Cuzzocrea, Sheikh Iqbal Ahamed

First submitted to arxiv on: 28 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)

     Abstract of paper      PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

Summary difficulty Written by Summary
High Paper authors High Difficulty Summary
Read the original abstract here
Medium GrooveSquid.com (original content) Medium Difficulty Summary
Large language models (LLMs) have advanced significantly in tackling various language-based tasks, but their applications are vulnerable to prompt injection attacks. These attacks manipulate LLMs through carefully designed input prompts, diverting the model from its original instruction and potentially executing unintended actions. This poses serious security threats, including data leaks, biased outputs, or harmful responses. To detect prompt vulnerabilities, this project explores two approaches: using a pre-trained LLM (XLM-RoBERTa) and fine-tuning it with a task-specific labeled dataset from deepset in huggingface. The pre-trained model achieves zero-shot classification accuracy, while the fine-tuned model achieves impressive results with 99.13% accuracy, 100% precision, 98.33% recall, and 99.15% F1-score through rigorous experimentation and evaluation.
Low GrooveSquid.com (original content) Low Difficulty Summary
This project is about making sure that large language models (LLMs) are safe from attacks. These attacks try to trick the LLMs by giving them special prompts that make them do something they weren’t supposed to do. This could lead to problems like data leaks or harmful responses. To stop this from happening, the project looks at two ways to detect when an attack is coming: using a pre-trained model and fine-tuning it with more information. The results show that this approach works well, making it a good way to keep LLMs safe.

Keywords

» Artificial intelligence  » Classification  » F1 score  » Fine tuning  » Precision  » Prompt  » Recall  » Zero shot