Anthropic Raps 3 Chinese Firms for Using Its Tools to Train Their AI Models
- By The Financial District

Anthropic has accused three prominent Chinese artificial intelligence firms of using its Claude chatbot on a massive scale to secretly train rival models — an unexpected development in a yearslong global debate over where fraud ends and industry-standard practice begins, Nick Lichtenberg reported for Fortune.

In a blog post Monday, San Francisco-based Anthropic alleged that the Chinese labs DeepSeek, Moonshot AI, and MiniMax violated its usage rules through their interactions with Claude, its market-shaping coding tool.
“We have identified industrial-scale campaigns by three AI laboratories — DeepSeek, Moonshot, and MiniMax — to illicitly extract Claude’s capabilities to improve their own models,” the company said.
“These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”
According to Anthropic, the Chinese companies relied on a technique known as “distillation,” in which one model is trained on the outputs of another, often more capable system.
Anthropic said that while distillation is a widely used and legitimate training method, the firms’ alleged use of sprawling networks of fake accounts to replicate a competitor’s proprietary model violates its terms of service and undermines US export controls aimed at constraining China’s access to cutting-edge AI.
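The "distillation" technique described above can be illustrated with a toy sketch: a "student" model is fitted purely to the outputs of a "teacher" model, rather than to original labeled data. This is an illustrative assumption-laden example, not the pipeline of Anthropic or any of the named labs; the teacher here is just a fixed function standing in for a more capable system.

```python
# Toy illustration of distillation: training one model on another's outputs.
# Not any real lab's pipeline; the "teacher" is a stand-in function.
import random

random.seed(0)

def teacher(x):
    # Stand-in for a capable proprietary model.
    return 2.0 * x + 1.0

# Step 1: query the teacher to collect input/output "exchanges".
inputs = [random.uniform(-1.0, 1.0) for _ in range(1000)]
targets = [teacher(x) for x in inputs]

# Step 2: fit a student (y = w*x + b) to imitate the teacher's outputs
# via gradient descent on mean squared error.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(200):
    grad_w = grad_b = 0.0
    for x, y in zip(inputs, targets):
        err = (w * x + b) - y
        grad_w += 2 * err * x
        grad_b += 2 * err
    w -= lr * grad_w / len(inputs)
    b -= lr * grad_b / len(inputs)

# The student now closely reproduces the teacher's behavior (w ≈ 2, b ≈ 1).
print(round(w, 2), round(b, 2))
```

At production scale the same idea applies with language models: millions of prompts are sent to the teacher, and its responses become the student's training corpus, which is why Anthropic points to the volume of exchanges as evidence.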
The company urged “rapid, coordinated action among industry players, policymakers, and the global AI community.”