Warfare: Breaking the Watermark Protection of AI-Generated Content
Abstract: AI-Generated Content (AIGC) is rapidly expanding, with services using advanced generative models to create realistic images and fluent text. Regulating such content is crucial to prevent policy violations, such as unauthorized commercialization or the distribution of unsafe material. Watermarking is a promising solution for content attribution and verification, but we demonstrate its vulnerability to two key attacks: (1) watermark removal, where adversaries erase embedded marks to evade regulation, and (2) watermark forging, where they generate illicit content carrying forged watermarks, leading to misattribution. We propose Warfare, a unified attack framework that leverages a pre-trained diffusion model for content processing and a generative adversarial network for watermark manipulation. Evaluations across datasets and embedding setups show that Warfare achieves high success rates while preserving content quality. We further introduce Warfare-Plus, which improves efficiency without compromising effectiveness. The code can be found at https://github.com/GuanlinLee/warfare.
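The abstract describes a two-stage pipeline: a diffusion-style noise-and-denoise pass that disrupts the embedded watermark, followed by a generator that writes an attacker-chosen bit string back into the image. The sketch below is a purely illustrative toy, not the paper's implementation: a box-filter "denoiser" stands in for the pre-trained diffusion model, and a fixed additive pattern stands in for the learned GAN generator; all function names and parameters are assumptions for illustration.

```python
import numpy as np

def noise_and_denoise(img: np.ndarray, sigma: float = 0.1, rng=None) -> np.ndarray:
    """Stage 1 (removal): add Gaussian noise, then 'denoise' with a 3x3 box
    filter. A real attack would run a pre-trained diffusion model here."""
    rng = rng or np.random.default_rng(0)
    noisy = np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
    padded = np.pad(noisy, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy: 1 + dy + img.shape[0],
                          1 + dx: 1 + dx + img.shape[1]]
    return out / 9.0

def embed_bits(img: np.ndarray, bits: np.ndarray, strength: float = 0.02) -> np.ndarray:
    """Stage 2 (forging): tile the target bit string as a faint +/- pattern.
    A real attack would use the trained GAN generator instead."""
    pattern = np.resize(bits.astype(np.float64) * 2 - 1, img.shape)
    return np.clip(img + strength * pattern, 0.0, 1.0)

rng = np.random.default_rng(42)
watermarked = rng.random((32, 32))                    # pretend this carries a watermark
cleaned = noise_and_denoise(watermarked)              # removal stage
forged = embed_bits(cleaned, rng.integers(0, 2, 64))  # forging stage
```

The design point the toy preserves: the removal stage never needs the watermark key — it only needs a perturbation strong enough to destroy the mark while a denoiser restores visual quality — and the forging stage needs only examples of watermarked outputs to learn (or, here, fake) the embedding.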