Anthropic Accuses Three Firms of Using Sophisticated Distillation Attacks


Technology Today
2026-02-25 01:03:00

Artificial intelligence firm Anthropic has accused three AI companies of illicitly using its large language model Claude to improve their own models via a technique known as a “distillation” attack.

In a blog post on Sunday, Anthropic said that it had identified these “attacks” by DeepSeek, Moonshot, and MiniMax, which involve training a less capable model on the outputs of a stronger one.

Anthropic accused the trio of collectively generating “over 16 million exchanges” with its Claude AI across “approximately 24,000 fraudulent accounts.”

“Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers,” Anthropic wrote, adding: 

“But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
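At its core, the distillation Anthropic describes means training a student model to reproduce a stronger teacher model's outputs rather than learning from raw data. A minimal, purely illustrative sketch of that idea (the "teacher" and "student" here are toy softmax distributions, not real language models, and all names are hypothetical):

```python
import numpy as np

def softmax(z):
    """Convert raw logits into a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical "teacher": a fixed output distribution over 4 tokens,
# standing in for a stronger model's response probabilities.
teacher_logits = np.array([2.0, 0.5, -1.0, 0.1])
teacher_probs = softmax(teacher_logits)

# "Student": starts uninformative and is trained ONLY on the
# teacher's outputs -- the essence of distillation.
student_logits = np.zeros(4)

lr = 0.5
for _ in range(500):
    p = softmax(student_logits)
    # Gradient of cross-entropy H(teacher, student) w.r.t. the
    # student's logits is simply (student_probs - teacher_probs).
    student_logits -= lr * (p - teacher_probs)

student_probs = softmax(student_logits)
# student_probs now closely matches teacher_probs: the student has
# absorbed the teacher's behavior without ever seeing training data.
```

Scaled up to millions of prompt-response "exchanges" like those Anthropic cites, the same principle lets a weaker model inherit a frontier model's capabilities at a fraction of the original training cost.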

Anthropic said that the attacks focused on scraping Claude for a wide range of purposes, including agentic reasoning, coding and data analysis, rubric-based grading tasks, and computer vision. 

“Each campaign targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding,” the multi-billion-dollar AI firm said. 

Source: Anthropic

Anthropic says it was able to identify the trio via “IP address correlation, request metadata, infrastructure indicators, and in some cases corroboration from industry partners who observed the same actors and behaviors on their platforms.”

DeepSeek, Moonshot, and MiniMax are all AI companies based in China. All three have estimated valuations in the multi-billion-dollar range, with DeepSeek the most internationally recognized of the three.

Beyond the intellectual property implications, Anthropic argued that distillation campaigns from foreign competitors present genuine geopolitical risks. 

“Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” the firm said.