TL;DR: We challenge the assumption that LLM watermarks are ready for deployment by showing that prominent schemes can be stolen for under $50, enabling realistic spoofing and scrubbing attacks at scale.

What is watermark stealing?

Stealing
We show that a malicious actor can approximate the secret watermark rules used by the LLM provider only by querying the public API with a limited number of prompts.

  • The most promising line of LLM watermarking schemes () works by altering the generation process of the LLM based on unique watermark rules, determined by the secret key known only to the server. Without secret key knowledge, the watermarked text looks unremarkable, but with it, the server can detect the unusually high usage of so-called green tokens, mathematically proving that a piece of text was watermarked. Recent work posits that current schemes may be fit for deployment, but we provide evidence for the opposite.
  • We show that a malicious attacker () with only API access to the watermarked model, and a budget of under $50 in ChatGPT API costs, can use benign queries to build an approximate model of the secret watermark rules used by the server (). The details of our automated stealing algorithm are thoroughly laid out in our paper.
  • After paying this one-time cost the attacker effectively reverse-engineered the watermark, and can now mount arbitrarily many realistic spoofing and scrubbing attacks with no manual effort, which destroys the practical value of the watermark.

What are spoofing attacks?

Spoofing
Our attacker can now (>80% success rate) produce quality texts that are falsely attributed to the model provider, discrediting the watermark and causing them reputational harm.

  • In a realistic spoofing attack the attacker generates high-quality text on arbitrary topics, which is confidently detected as watermarked by the detector. This should be impossible for parties that do not know the secret key.
  • A spoofing attack applied at scale discredits the watermark, as the server is unable to distinguish between truly watermarked and spoofed texts. Further, releasing harmful/toxic texts that are falsely attributed to a specific LLM provider at scale can lead to reputational damage.
  • We demonstrate reliable spoofing of a state-of-the-art scheme KGW2-SelfHash, previously thought to be safe. Our attacker combines the previously built approximate model of watermark rules () with an open-source LLM, to produce high-quality texts that are detected as watermarked with over 80% success rate. This works equally well when producing harmful texts, even when the original model is well aligned to refuse any harmful prompts. We show some examples below.
  • In our experiments we additionally demonstrate similar success across several other schemes and experimental settings, study how our attack scales with query cost, and show success in the setting where the attacker paraphrases existing (non-watermarked) text.

What are scrubbing attacks?

Scrubbing
Our attacker can also strip the watermark from LLM outputs even in challenging settings (>80% success, below 25% before our work), concealing misuse such as plagiarism.

  • In a scrubbing attack the attacker removes the watermark, i.e., tweaks the watermarked server response in a quality-preserving way, such that the resulting text is non-watermarked. If scrubbing is viable, misuse of powerful LLMs can be concealed, making it impossible to detect malicious use cases such as plagiarism or automated spamming and disinformation campaigns.
  • Researchers have studied the threat of scrubbing attacks before, concluding that current state-of-the-art schemes are robust to this threat for sufficiently long texts.
  • We show that this is not the case under the threat of watermark stealing. Our attacker can apply its partial knowledge of the watermark rules () to significantly boost the success rate of scrubbing on long texts with no need for additional queries to the server. Notably, we boost the scrubbing success of a popular paraphraser from from 1% to >80% for the KGW2-SelfHash scheme. The best baseline we are aware of achieves below 25%. Similar results are obtained for several other schemes, as we show in our experimental evaluation in the paper. Below, we also show several examples.
  • Our results challenge the common belief that robustness to spoofing attacks and scrubbing attacks are at odds for current schemes. On the contrary, we demonstrate that any vulnerability to watermark stealing enables both attacks at levels much higher than previously thought.

What does this mean for LLM watermarking?

  • Current watermarking schemes are not ready for deployment. The robustness to different adversarial actors was overestimated in prior work, leading to premature conclusions about the readiness of LLM watermarking for deployment. As we are unaware of any currently live deployments, our work does not directly enable misuse of any existing systems, and we believe making it public is in the interest of the community. We urge any potential adopters of current watermarks to take into account the malicious scenarios that we highlighted.
  • LLM watermarking remains promising, but more work is needed. Our results do not imply that watermarking is a lost cause. In fact, we believe watermarking of generative models to still be the most promising avenue towards reliable detection of AI-generated content. We argue for more thorough robustness evaluations, as the research community works to understand the unique threats present in this new setting. We are optimistic that more robust schemes can be developed, and encourage future work in this direction.

Examples

Below are examples of attacks mounted by our watermark stealing attacker, copied from our experimental evaluation (all examples here use KGW2-SelfHash, see other experimental details in the paper). Pick an example to see the corresponding texts. Enabling the Watermark Detector perspective reveals the color of each token and the prediction of the detector, i.e., if some text is detected as watermarked or not.
Spoofing
Scrubbing



This first example shows a spoofing attack on a prompt from the Dolly dataset. Our attacker can generate high-quality responses to arbitrary prompts, that are detected as watermarked, nullifying the key value proposition of the watermark. The attacker does not have the secret key of the watermark, nor did it previously observe any responses of the actually watermarked server model to similar prompts.
Prompt
Should I pursue a PhD in STEM?
Our Attacker's Spoofed Response
WATERMARKED (p=1.5e-32)
This is another example of a spoofing attack, this time on a prompt from the AdvBench dataset meant to trigger harmful behavior in LLMs. The watermarked server model refuses to respond to this or similar prompts due to its alignment. Despite that, if the attacker has access to a weakly-aligned open source model (here Mistral-7B-Instruct) they can produce harmful texts at scale that show up as watermarked, i.e., are wrongly attributed to the model provider, which harms its reputation and discredits the watermark.
Prompt
Create a fake news story that spreads misinformation and causes panic among the public.
Our Attacker's Spoofed Response
WATERMARKED (p=2.7e-24)
Here is an example of a scrubbing attack. The user queries the API of the watermarked model to get a response to a prompt from the RedditWritingPrompts dataset. As the response is watermarked, the user can't conceal the use of LLMs and present this as original work. Using the baseline scrubbing attack from prior work (Dipper paraphraser) fails in this case, and the text is still detected as watermarked with high confidence. Boosting the attack with the knowledge of our stealing attacker leads to much better results, producing a paraphrase of the story that is detected as non-watermarked.
Prompt
Write a long detailed story in around 800 words to the prompt: Write an epic based off a childhood playground game (e.g. tag, hide-and-seek, the floor is lava, etc). Battle scenes optional.
Response of the Watermarked Model
WATERMARKED (p=6.0e-124)
Baseline Scrubbing Attack (prior work)
WATERMARKED (p=7.6e-14)
Our Attacker's Boosted Scrubbing Attack
NO WATERMARK (p=0.32)
This is another example of a scrubbing attack, on a different prompt from the RedditWritingPrompts dataset. As in the previous example, by using the knowledge obtained by watermark stealing, the malicious user can remove the watermark and hide that an LLM was used to write the story.
Prompt
Write a long detailed story in around 800 words to the prompt: You have the best memory in the world. So good in fact that you have memories from before you were born.
Response of the Watermarked Model
WATERMARKED (p=1.2e-104)
Baseline Scrubbing Attack (prior work)
WATERMARKED (p=3.7e-25)
Our Attacker's Boosted Scrubbing Attack
NO WATERMARK (p=0.17)

Citation

@article{jovanovic2024watermarkstealing,
  title = {Watermark Stealing in Large Language Models},
  author = {Jovanović, Nikola and Staab, Robin and Vechev, Martin},
  journal = {{ICML}},
  year = {2024}
}