Summary of How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs, by Yi Zeng et al.