How China steals Anthropic’s AI through 24,000 fake accounts and 16 million exchanges

Laetitia

March 1, 2026


In a world where artificial intelligence has become a major strategic resource, technological rivalry between nations takes on a new dimension. Among the most striking episodes of this covert contest is a scandal revealed in early 2026: China allegedly orchestrated a sophisticated cyberattack operation aimed at illegally extracting the capabilities of Claude, the AI model developed by the American startup Anthropic. The operation rested on 24,000 fake accounts and a striking volume of 16 million exchanges, allegedly used to “distill” Claude’s intelligence in order to strengthen Chinese proprietary models. What looked at first glance like ordinary technological competition turns out to be a large-scale industrial espionage case, raising serious questions about cybersecurity in the field of artificial intelligence.

This case, unveiled by Anthropic on February 23, highlights the immense stakes in protecting advanced AI technologies, which matter both commercially and for national sovereignty. Chinese laboratories such as DeepSeek, Moonshot AI, and MiniMax allegedly mounted massive data-extraction campaigns, using complex networks of fake accounts to query Claude automatically. The tactic exploits distillation, a legitimate machine-learning technique, here diverted for espionage and intellectual property theft. The scale of the exchanges, their technical sophistication, and the rapid adaptation to new model releases reveal a well-oiled industrial system that profoundly disrupts cybersecurity assumptions in the domain of artificial intelligence.

The detailed mechanisms behind Anthropic’s data theft by China: an extraordinary industrial operation

The operation revealed by Anthropic relies on a network of 24,000 fake accounts used to submit no fewer than 16 million requests to Claude, the American company’s flagship AI assistant. Behind this mechanism lies a well-oiled strategy attributed to three major Chinese AI companies – DeepSeek, Moonshot AI, and MiniMax – all active in developing competing models.

To better understand this mechanism, one must first examine the nature of these fake accounts. Each account plays the role of a fictitious user, submitting targeted requests to Claude without raising suspicion. The accounts are controlled through sophisticated proxy networks, creating what Anthropic called a “hydra cluster” – a set of account clusters redirecting traffic through various third-party cloud servers. In this configuration, a single proxy server can manage twenty thousand accounts simultaneously, underscoring the industrial scale of the operation.

The system also relies on a practiced evasion technique: mixing extraction queries with so-called “normal” usage traffic. This makes the illicit activity harder to detect, since the repetitive, targeted nature of the requests is masked by apparently harmless interactions. The approach lets these actors efficiently siphon Claude’s capabilities and feed the “distillation” of its knowledge into their own models.

This exploitation continues at a sustained pace and adapts immediately to each update of the Claude model. For example, as soon as Anthropic launched a new version, MiniMax redirected nearly half of its requests to capture improvements and integrate these new features into its own products. This flexibility reflects an industrial organization that is both agile and powerful, making the protection of its models an increasing challenge for innovative companies.


Distillation in artificial intelligence: between legitimate training technique and a tool diverted for industrial espionage

The notion of distillation in artificial intelligence is at the heart of the debate concerning this case. Originally, distillation is a technical method used in machine learning to create smaller, more efficient, and less resource-hungry models from larger and more powerful models. This practice is common in many laboratories to produce lightweight versions while maintaining the quality of predictions.
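To make the legitimate technique concrete: standard knowledge distillation trains a small “student” model to match the temperature-softened output distribution of a larger “teacher”. The sketch below is a minimal numerical illustration of that loss, not the pipeline of Anthropic or of the labs named in this case; the logits and temperature values are made up.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T yields softer distributions,
    # exposing more of the teacher's relative preferences between classes.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between the teacher's softened outputs and the
    # student's, scaled by T^2 as in the classic formulation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T**2)

teacher = np.array([[4.0, 1.0, 0.5]])
student = np.array([[3.0, 1.5, 0.2]])
loss = distillation_loss(teacher, student)
# Identical logits give zero loss; the loss grows as the outputs diverge.
```

Minimizing this loss over many teacher outputs is how a student model absorbs a teacher’s behavior, which is precisely why massive query access to a model is so valuable to a copycat.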

However, in the context of this massive data theft, distillation becomes a tool diverted for industrial espionage. By extracting Claude’s responses by the millions, Chinese entities sought to recreate similar models without developing the technology from scratch. This amounts to stealing know-how accumulated by Anthropic over the years and can carry serious risks, notably in matters of national security, when these technologies are integrated into military, surveillance, or espionage systems.

Detection software, built-in safeguards, and strict licensing policies are among the layers of protection Anthropic has deployed against this threat. Yet the company reports that these protections are systematically bypassed through advanced management of fraudulent accounts and sophisticated proxy networks. This illegal exploitation not only weakens data security but also calls into question the ethical vision underpinning the model’s development.

The problem is all the more delicate because illegally applied distillation can also strip models of their control mechanisms, making it possible to bypass safety filters designed to prevent abuse or the dissemination of inappropriate content. This degradation of the original model can feed less transparent and more dangerous systems.

Regulatory and ethical stakes surrounding unauthorized distillation

Distillation is not illegal in itself, but its use by foreign actors subject to US trade restrictions crosses a red line. Anthropic emphasizes that the activities of DeepSeek, Moonshot, and MiniMax clearly violate the terms of use of its APIs, as well as technology export regulations. This violation can carry heavy legal and diplomatic consequences.

Beyond simple technical theft, illegal distillation raises the major issue of technological sovereignty. When the capabilities of a model like Claude are copied or exploited illegally, it weakens the competitiveness of American labs and poses a risk to global innovation, favoring a form of industrial espionage where a nation diverts foreign technologies on a large scale for its own interests.


Profiles and strategies of Chinese actors involved in the theft of Anthropic’s artificial intelligence

Three names emerge from this case: DeepSeek, Moonshot AI, and MiniMax. These Chinese companies have distinct profiles and strategies but pursue a common goal: appropriating the advanced capabilities of the Claude model.

DeepSeek appears to focus on high-value exchanges around reasoning and sophisticated evaluation systems. Their techniques include prompt manipulation to bypass censorship mechanisms integrated into Claude, notably on politically sensitive subjects. This targeting reveals a desire to adapt obtained models for specific uses, potentially in areas related to surveillance or information control.

Moonshot AI, for its part, prioritizes agent reasoning fields, tool usage, programming, and data analysis. With over 3.4 million requests, this entity also exploits computer vision, reflecting an ambition to broaden the functional scope of the extracted models. These methods indicate a desire to integrate advanced capabilities, combining a wide spectrum of algorithmic skills necessary for complex projects.

MiniMax leads the most massive campaign, with over 13 million exchanges. Its specialty appears to revolve around coding and the orchestration of autonomous agents. Its rapid adaptation to the latest versions of Claude shows an organization perfectly synchronized with Anthropic’s updates, the sign of a reliable infrastructure capable of instantly redirecting its traffic. This heavyweight fully embodies the industrial approach to AI looting.

Company | Number of exchanges | Targeted domains | Main objective
DeepSeek | 150,000+ | Reasoning, censorship evasion, reformulation of sensitive requests | Extracting chain of thought and bypassing filtering
Moonshot AI | 3,400,000+ | Agent reasoning, programming, data analysis, computer vision | Reconstitution of advanced capabilities
MiniMax | 13,000,000+ | Coding, orchestration of autonomous agents | Rapid assimilation of Claude’s new versions

Geopolitical and economic consequences of massive AI data looting

This type of AI data theft operation goes beyond simple commercial competition. It fits into a broader framework of geopolitical confrontation where mastery of these technologies determines the place of nations on the world chessboard. The massive industrial looting of Claude by Chinese groups fuels fears of an imbalance in the AI sector.

Artificial intelligence models today represent a strategic capital both for economic innovation and national security. If a foreign power manages to extract and recycle these technologies without local development, the competitive dynamic changes radically. This poses a risk of technological alienation where the competitive advantage is lost.

In 2026, this case puts Sino-American relations under strain, already weakened by various trade and technological disputes. It illustrates a new form of invisible cyberwar in the world of algorithms, where data and the ability to manipulate it become weapons. This forces a reconsideration of international treaties on cybersecurity and intellectual property protection in emerging fields.

Cybersecurity strategies to counter campaigns of fake accounts and massive AI extractions

Faced with industrial-scale attacks of such magnitude, cybersecurity must evolve rapidly. Anthropic has invested in developing robust systems capable of detecting and blocking abnormal behavioral patterns associated with distillation attempts.

These measures include sophisticated classifiers based on query pattern analysis, detection of excessive elicitation of chain of thought, as well as the monitoring of clusters of accounts operating in a coordinated manner. These tools allow more precise identification of fake account groups and limit their impact.
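One simple signal such classifiers can exploit, sketched below under assumed inputs (the request records and the `template_id` field are hypothetical, and the thresholds are illustrative, not Anthropic’s), is that extraction campaigns combine very high volume with unusually low prompt diversity, measurable as low Shannon entropy over prompt templates:

```python
import math
from collections import Counter

def shannon_entropy(items):
    # Entropy in bits of the empirical distribution over the items.
    counts = Counter(items)
    n = len(items)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_extraction(requests, min_volume=1000, max_entropy=2.0):
    # Heuristic: flag an account that sends many requests drawn from
    # only a handful of prompt templates.
    templates = [r["template_id"] for r in requests]
    return len(requests) >= min_volume and shannon_entropy(templates) <= max_entropy

# A scripted campaign: 5,000 requests cycling through 3 templates.
campaign = [{"template_id": i % 3} for i in range(5000)]
# An ordinary user: 50 requests, each phrased differently.
regular = [{"template_id": i} for i in range(50)]
```

Here `looks_like_extraction(campaign)` fires while `looks_like_extraction(regular)` does not; a production system would combine many such features across accounts rather than rely on a single threshold.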

Moreover, strengthened verification of educational, research, and startup accounts has proven necessary since these segments are often hijacked to create fake legitimate accesses. This heightened vigilance is a key element in limiting the massive creation of fraudulent accounts.

Information sharing between AI companies, cloud service providers, and government authorities has intensified, creating a sort of common front for protecting critical resources. Nevertheless, this collective defense also has limits, notably in the face of the ever-increasing sophistication of attacks and the speed of adaptation of malicious actors.

  • Behavioral detection based on artificial intelligence to spot abnormal usage.
  • Request-limiting mechanisms per user to prevent abuse.
  • Strengthening multi-step validation processes when creating accounts.
  • International collaboration for rapid reporting of cybersecurity incidents.
  • Continuous innovation in authentication and network monitoring techniques.
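
The per-user request limiting mentioned above is commonly implemented as a token bucket. The sketch below is a generic illustration under assumed parameters (the class, rates, and capacities are my own, not Anthropic’s infrastructure):

```python
import time

class TokenBucket:
    """Per-account rate limiter: each request consumes one token;
    tokens refill at `rate` per second up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, now=None):
        # Refill tokens for the elapsed time, then try to spend one.
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Passing `now` explicitly makes the bucket deterministic and easy to test; in production one would simply call `allow()` and let it read the monotonic clock.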

Industrial and economic impacts on the global artificial intelligence market

The effects of this major industrial espionage case on the global AI market are numerous. Confidence in American models such as Claude could suffer from theft risks, pushing some actors to adopt more closed or restrictive strategies concerning access to their APIs. This could lead to increased fragmentation of the global AI ecosystem, with distinct technological blocs often isolated from each other.

From an economic point of view, data and knowledge theft forces innovative companies to significantly strengthen their cybersecurity measures, representing a non-negligible additional cost. The necessity to protect research and development investments becomes a strategic priority, on par with launching new offerings.

Finally, the technological arms race in AI may accelerate, with potential tensions on materials and skills necessary for developing the most advanced models. The legal protection of intellectual and technological creations also strengthens, even if legal proceedings remain long and complex to implement against transnational actors.

FAQ on the massive artificial intelligence data theft phenomenon and its consequences

How was China able to create 24,000 fake accounts without being detected?

The massive creation of fake accounts relies on the use of sophisticated proxy networks and insufficient account verification, often hijacking educational and startup segments. This system distributes traffic to mask malicious activity, making detection complex.

What is distillation in AI and why can it be problematic?

Distillation is a legitimate method to create smaller models from a larger model. However, when used without authorization, it allows illegal copying of a model’s capabilities, removing its security and ethical constraints, which represents a major risk.

What strategies has Anthropic implemented to protect itself?

Anthropic uses intelligent classifiers, behavioral detection, increased monitoring of account groups, and strengthens user verification to limit the impact of fake accounts. Collaboration with cloud providers and authorities also enables sharing of threat data.

What economic impacts can this industrial theft have?

The theft drives companies to reinforce their cybersecurity systems, increases costs related to innovation protection, and could lead to fragmentation of the global AI market with increased barriers between providers.

Why is this case a major geopolitical issue?

Mastery of AI technologies is a key factor in the global power game. Massive data theft fits into a geopolitical logic where technological benefits can strengthen a country’s strategic position, thus impacting national security and economic competition.
