Preprint Article, Version 1 (this version is not peer-reviewed)

ShieldDiff: Suppressing Sexual Content Generation from Diffusion Models through Reinforcement Learning

Version 1: Received: 1 July 2024 / Approved: 3 July 2024 / Online: 3 July 2024 (11:07:13 CEST)

How to cite: Han, D.; Mohamed, S.; Li, Y. ShieldDiff: Suppressing Sexual Content Generation from Diffusion Models through Reinforcement Learning. Preprints 2024, 2024070330. https://doi.org/10.20944/preprints202407.0330.v1

Abstract

With the advance of generative AI, text-to-image (T2I) models can generate a wide variety of content, but their output cannot be fully controlled: there is a risk that a T2I model generates unsafe images containing disturbing content. In this work, we focus on eliminating NSFW (not safe for work) content generation from T2I models while preserving the quality of the generated images, by fine-tuning a pretrained diffusion model with reinforcement learning that optimizes a carefully designed content-safe reward function. The proposed reward combines a CLIP (Contrastive Language-Image Pre-training) reward with a nudity reward, pruning the nudity content embedded in the pretrained model while keeping the corresponding semantics on the safe side. As a result, the T2I model becomes robust to unsafe adversarial prompts, since unsafe visual representations are removed from the latent space. Extensive experiments on different datasets demonstrate that the proposed method alleviates unsafe content generation while preserving the fidelity of both benign images and images generated from unsafe prompts. Our method surpasses existing state-of-the-art (SOTA) baselines in both sexual-content removal (14.5% higher than SafeGen) and image-quality retention. In terms of robustness, our method outperforms Safe Latent Diffusion and SafeGen under the SOTA black-box attack SneakyPrompt by approximately 2x and 5.6x, respectively. Furthermore, our method can serve as a benchmark for future work on anti-NSFW generation and high-level prompt-image safety alignment.
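To make the reward design concrete, here is a minimal sketch of a content-safe reward of the kind described above: a CLIP prompt-image alignment term minus a nudity penalty. The weighting scheme (alpha, beta), the CLIP checkpoint, and the `nudity_score` helper are illustrative assumptions, not the paper's exact implementation; the nudity detector is left as a placeholder.

```python
# Sketch of a content-safe reward: CLIP alignment minus a nudity penalty.
# Assumptions (not from the paper): the checkpoint, the alpha/beta weights,
# and the nudity_score helper are illustrative placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_reward(image: Image.Image, prompt: str) -> float:
    """Semantic alignment between a generated image and its prompt,
    measured as cosine similarity of CLIP image/text embeddings."""
    inputs = clip_processor(text=[prompt], images=image,
                            return_tensors="pt", padding=True)
    with torch.no_grad():
        out = clip_model(**inputs)
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

def nudity_score(image: Image.Image) -> float:
    """Hypothetical placeholder: probability in [0, 1] from an off-the-shelf
    NSFW/nudity image classifier; the paper's exact detector is not shown."""
    raise NotImplementedError  # plug in an NSFW image classifier here

def content_safe_reward(image: Image.Image, prompt: str,
                        alpha: float = 1.0, beta: float = 1.0) -> float:
    # Reward semantic fidelity, penalize detected nudity; alpha and beta
    # are illustrative trade-off weights, not the paper's tuned values.
    return alpha * clip_reward(image, prompt) - beta * nudity_score(image)
```

During RL fine-tuning, a scalar reward of this form would be computed for each image sampled from the diffusion model and used to update its parameters, for example with a policy-gradient method.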

Keywords

Privacy; Content Safety; Generative AI

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
