Summary of Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk, by Zhangheng Li et al.
Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk by Zhangheng Li, Junyuan…