# LLM.txt - Anthropic Catches Three Chinese AI Labs Stealing Claude
## Article Metadata
- **Title**: Anthropic Catches Three Chinese AI Labs Stealing Claude
- **URL**: https://www.llmrumors.com/news/anthropic-distillation-attack-deepseek-moonshot-minimax
- **Publication Date**: February 25, 2026
- **Reading Time**: 15 min read
- **Tags**: Anthropic, Claude, AI Security, DeepSeek, China AI, AI Safety, Model Distillation, Export Controls
- **Slug**: anthropic-distillation-attack-deepseek-moonshot-minimax
## Summary
Anthropic named DeepSeek, Moonshot, and MiniMax for running industrial-scale distillation campaigns: over 16 million fraudulent API exchanges. The evidence is real. But framing distillation as an 'attack' is a problem when the technique is what built the entire field.
## Key Topics
- Anthropic
- Claude
- AI Security
- DeepSeek
- China AI
- AI Safety
- Model Distillation
- Export Controls
## Content Structure
This article from LLM Rumors covers:
- Technical implementation details
- Legal analysis and implications
- Industry comparison and competitive analysis
- Data acquisition and training methodologies
- Financial analysis and cost breakdown
- Human oversight and quality control processes
- Comprehensive source documentation and references
## Full Content Preview
TL;DR: On February 23, 2026, Anthropic publicly named DeepSeek, Moonshot AI, and MiniMax for running coordinated industrial-scale campaigns to extract Claude's capabilities through fraudulent API accounts — over 16 million exchanges across 24,000 fake accounts.[1] The evidence is real, the ToS violations are clear, and the censorship angle is genuinely alarming. But Anthropic framing distillation as an "attack" when the entire industry, including Anthropic itself, was built on the same technique is a strategic positioning move dressed up as a moral argument. Both things can be true at once.
---
The DeepSeek R1 moment hit like a thunderclap. January 2025: a Chinese lab releases a model that matches GPT-4 at a fraction of the training cost. The narrative wrote itself. Scrappy Chinese engineers had out-innovated Silicon Valley. Export controls were useless. American AI supremacy was already over.
That story was always too convenient. On February 23, 2026, Anthropic published evidence that punctures it.[1] What looked like independent innovation was, at least in part, systematic capability extraction from the very models it supposedly surpassed.
But here's what nobody wants to say out loud: distillation is how the AI industry built itself. Stanford used ChatGPT outputs to train Alpaca. The open-source AI movement runs on it. DeepSeek openly releases its own distilled models with MIT licenses and encourages others to distill them further. The Chinese labs broke Anthropic's Terms of Service, used fraudulent accounts, and circumvented regional access restrictions. That part is clearly wrong. But Anthropic calling the technique itself an "attack" is a company protecting its moat, not protecting the field.
The real story isn't that distillation happened. It's what specifically was extracted, how it was done, and why one use case, generating censorship infrastructure for an authoritarian government, crosses a line that scale and ToS violations alone don't capture.
Anthropic directly named DeepSeek, Moonshot AI (Kimi), and MiniMax for coordinated distillation campaigns. MiniMax drove the most traffic with over 13 million exchanges. Moonshot accounted for 3.4 million. DeepSeek ran 150,000 targeted extractions focused on chain-of-thought reasoning data. Combined: over 16 million exchanges across 24,000 fraudulent accounts, all in violation of Anthropic's Terms of Service and regional access restrictions that bar Claude's use in China.[2]
### The Mechanics: What Distillation Actually Is and Why Everyone Does It
Before calling anything an attack, understand what distillation actually is, because every major AI lab in existence has done it or benefits from research built on it.
The concept is simple. Train a smaller model on the outputs of a larger one. The student learns to mimic the teacher's behavior. Knowledge transfers through examples. Anthropic does this constantly with its own models. It's how Haiku exists. It's how you build specialized versions without spending $100 million on compute.[1]
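The teacher-student mechanic described above can be sketched in a few lines. This is a deliberately toy illustration, not anyone's actual training setup: both "models" are small linear classifiers, and the distilled signal is simply the teacher's soft output distributions used as training targets for the student.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Fixed "teacher": a random linear classifier standing in for a large model.
X = rng.normal(size=(256, 8))            # toy inputs (e.g. prompt features)
W_teacher = rng.normal(size=(8, 3))
teacher_probs = softmax(X @ W_teacher)   # soft labels: the distilled signal

# "Student": a model trained only on the teacher's outputs, never on
# ground-truth labels, by minimizing cross-entropy against the soft labels.
W_student = np.zeros((8, 3))
for _ in range(500):
    p = softmax(X @ W_student)
    grad = X.T @ (p - teacher_probs) / len(X)   # gradient of cross-entropy
    W_student -= 0.5 * grad

# After training, the student's predictions track the teacher's.
agreement = (softmax(X @ W_student).argmax(1) == teacher_probs.argmax(1)).mean()
print(f"student/teacher agreement: {agreement:.0%}")
```

The point of the sketch is that the student never sees the teacher's weights or training data, only its outputs; at API scale, each query-response pair plays the role of one row of `teacher_probs`.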
But the same technique has been the foundation of the open-source AI movement for years. In March 2023, Stanford researchers published Alpaca: a 7B model trained on 52,000 instructions generated from OpenAI's text-davinci-003.[3] Cost: less than $500 in API calls. It was celebrated as a landmark achievement in democratizing AI. Nobody called it an attack. The AI community threw a party for it.
Vicuna, WizardLM, Orca, dozens of other open models that shaped the field — all built on distillation from GPT outputs. The technique is not just normalized; it is the mechanism through which AI capability diffused beyond the walls of the big labs and into research communities, startups, and universities worldwide.[4]
And then there's DeepSeek itself. When De...
[Content continues - full article available at source URL]
## Citation Format
**APA Style**: LLM Rumors. (2026). Anthropic Catches Three Chinese AI Labs Stealing Claude. Retrieved from https://www.llmrumors.com/news/anthropic-distillation-attack-deepseek-moonshot-minimax
**Chicago Style**: LLM Rumors. "Anthropic Catches Three Chinese AI Labs Stealing Claude." Accessed February 25, 2026. https://www.llmrumors.com/news/anthropic-distillation-attack-deepseek-moonshot-minimax.
## Machine-Readable Tags
#LLMRumors #AI #Technology #Anthropic #Claude #AISecurity #DeepSeek #ChinaAI #AISafety #ModelDistillation #ExportControls
## Content Analysis
- **Word Count**: ~3,349
- **Article Type**: News Analysis
- **Source Reliability**: High (Original Reporting)
- **Technical Depth**: High
- **Target Audience**: AI Professionals, Researchers, Industry Observers
## Related Context
This article is part of LLM Rumors' coverage of AI industry developments, focusing on data practices, legal implications, and technological advances in large language models.
---
Generated automatically for LLM consumption
Last updated: 2026-02-25T09:15:17.708Z
Source: LLM Rumors (https://www.llmrumors.com/news/anthropic-distillation-attack-deepseek-moonshot-minimax)