GEO: Chinese Disinformation Strategies in the Age of Artificial Intelligence

Laetitia

May 1, 2026


In a world where artificial intelligence is redefining geopolitical power dynamics, China is deploying sophisticated disinformation strategies to strengthen its global influence. The use of emerging technologies, particularly GEO (generative engine optimization) services, is ushering in a new era of information manipulation. As Beijing develops a digital arsenal capable of infiltrating search engines and AI platforms, the information war intensifies, posing major challenges for cybersecurity and international regulation.

In response to this rise, various state and private actors are engaging in a frantic race to control narratives and steer perceptions. The line between objective information and propaganda becomes blurrier than ever, amplified by campaigns exploiting the weaknesses of artificial intelligence algorithms. These manipulations threaten not only the credibility of disseminated content but also the stability of diplomatic balances in a context of heightened geopolitical rivalries.

Understanding the role of GEO in Chinese AI-based disinformation strategies

GEO, or generative engine optimization, is an emerging service in China that uses artificial intelligence to increase the visibility of content across digital platforms. Initially designed to boost brand awareness and surface relevant search results, the mechanism is now being diverted to more insidious ends.

Chinese companies specializing in GEO use powerful algorithms to flood search engines and AI models with over-optimized content. The objective? To manipulate the visibility of products, ideas, or even political information. For example, a startup led by Mr. Wang managed to place over 200 clients in the top results for queries on AI platforms such as DeepSeek or Kimi, exploiting a continuous stream of automatically generated data.

However, behind this apparent efficiency lies a real ethical problem: the massive spread of disinformation. AI models end up learning from this distorted content, which affects the recommendations and responses provided to end users. A recent incident at CCTV’s Consumer Rights Gala highlighted these practices, revealing how a fictitious watch, the “Apollo-9,” was artificially promoted through the publication of multiple daily articles.

In short, GEO acts as a silent weapon in China’s information war. It guarantees exceptional visibility to certain content, often biased, ultimately degrading trust in traditional information sources and AI tools.

The geopolitical stakes of Chinese disinformation in the era of artificial intelligence

Disinformation is not merely a game of commercial manipulation; it is part of a global geopolitical strategy aimed at strengthening China’s influence on the international stage. The combined use of artificial intelligence and digital disinformation constitutes a powerful weapon in the context of global rivalries.

For example, China seeks to weaken its strategic adversaries, notably the United States, by disseminating manipulative content that shapes public opinion and distorts how events are perceived. AI amplifies this dynamic, making disinformation faster, more credible, and harder to detect. Chinese intelligence services exploit the vulnerabilities of social platforms and search engines, using GEO as leverage to outrank competitors in search results.

This informational fight also fits into a military and security logic. The flyover of Taipei by Chinese fighter jets in the run-up to Taiwan's National Day in October 2025 illustrates the intensification of tensions, where digital disinformation goes hand in hand with demonstrations of physical force. By sowing doubt within target societies through digital manipulation, China gains a strategic advantage without resorting to direct confrontation.

From an international perspective, this invisible war creates challenges for cybersecurity and for combating manipulation campaigns. Western countries struggle to develop adequate responses against an adversary that combines hard and soft power through advanced technological tools. The mix of propaganda and digital tools gives China a large-scale capacity for information distortion that influences public debates worldwide.

The rise of emerging technologies in Chinese influence strategies

Artificial intelligence plays a central role in Beijing’s methods for exercising and maintaining informational dominance. The exploitation of generative models to produce mass content and distribute it efficiently is intensifying, notably in commercial and political spheres. These emerging technologies enable the creation of vast automated networks capable of generating, publishing, and amplifying specific messages on a large scale.

Among the preferred tools, the content-recognition and indexing systems deployed by GEO providers increase information saturation on the internet, deliberately blurring reality in favor of narratives favorable to China. On local platforms such as Taobao or JD.com, for example, costly subscriptions allow companies to purchase services designed to influence algorithms in favor of their visibility, tilting the debate into an uneven competition.

In summary, the combination of GEO services and AI algorithms offers China a strategic advantage to strengthen its soft power while consolidating its digital hard power through a hybrid information war, halfway between commerce, politics, and cybersecurity.

The information war and cognitive manipulation by China

One of the most critical dimensions of Chinese strategies lies in what experts call the weaponization of cognition. This concept refers to the use of advanced artificial intelligence techniques to influence individuals’ perception, memory, and judgment through disinformation.

The control exerted over information, reinforced by the use of GEO, creates informational environments where manipulation is omnipresent. This produces a cumulative effect: the more users are exposed to biased information, the more likely they are to reproduce and share that erroneous content, a dynamic that in turn feeds AI algorithms in a vicious cycle.
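The cumulative dynamic described above can be sketched as a toy simulation (a deliberately simplified model; the injection and resharing rates below are illustrative assumptions, not measured values):

```python
# Toy model of the feedback loop described above (illustrative only):
# each cycle, a fraction of newly published content is manipulated,
# users reshare what they see, and the next "training pool" mixes
# organic content with reshared manipulated content.

def biased_share_over_time(inject_rate, reshare_boost, cycles):
    """Return the manipulated share of the content pool after each cycle.

    inject_rate   -- fraction of new content that is manipulated each cycle
    reshare_boost -- weight manipulated content retains through resharing
    cycles        -- number of publish/retrain cycles to simulate
    """
    share = 0.0
    history = []
    for _ in range(cycles):
        # New pool: freshly injected manipulated content plus the
        # reshared portion of what was already circulating.
        share = inject_rate + share * reshare_boost
        share = min(share, 1.0)  # a share cannot exceed 100%
        history.append(round(share, 3))
    return history

print(biased_share_over_time(0.05, 0.8, 10))
```

Even with a modest injection rate, the manipulated share climbs steadily toward a plateau well above the injection rate itself, which is the vicious cycle the article describes.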

A telling example was the use of the “GEO Liqing Optimization System” software, which generated content promoting a fictitious product, the Apollo-9 watch. The manipulation went as far as shaping the responses of major AI models, showing how permeable the boundary between opinion and reality has become. The practice illustrates how China deploys a systematic, industrial approach that aims to shape not only what is seen but also what is believed.

Beyond products, this tactic extends to political messages and historical narratives. It fits into a sharp power logic, where the goal is not only to attract but also to disorient and dominate the global digital space through saturation and cognitive confusion.

Consequences for society and individuals

Cognitive manipulation through disinformation directly affects how Western societies perceive international issues. In 2026, studies show that the average AI user increasingly encounters biased responses, which threatens trust in institutions and independent media.

This situation worries cybersecurity specialists who warn of the risk of an “invisible information war,” where the battle is fought at the cognitive level, without resorting to physical violence, but with destabilizing effects on democracy and social cohesion. The trust crisis generated by these practices could lead to increased polarization of opinions and vulnerability to coordinated manipulation campaigns.

Ethical and regulatory challenges in the face of Chinese AI-driven disinformation

Faced with the rise of GEO services and algorithmic disinformation, major ethical questions arise. How can companies' commercial objectives be reconciled with respect for truth and the protection of users? The equation is all the more complex because the Chinese government encourages the development of these technologies while imposing new regulations that mainly target the transparency of AI-generated content.

A Chinese analyst, Li, founder of Liqing GEO, publicly acknowledges the dilemma between commercial efficiency and integrity. Although aware of the problems, he demonstrates concretely how a fictitious product can fraudulently influence an AI model and consequently users. However, he points out that without strict regulation, it is difficult to curb this system.

In response to these issues, Beijing established a regulatory framework as early as 2025 requiring mandatory labeling of AI-generated content. These measures aim to limit abuses while reinforcing government control over digital information. However, no specific text yet targets GEO practices in detail.
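As a purely hypothetical sketch of what such labeling could look like in practice (the 2025 rules require disclosure, but the format below is an assumption, not the regulator's specification):

```python
# Hypothetical sketch of mandatory AI-content labeling: a visible
# notice for readers plus a machine-readable provenance record.
# The field names and label text are illustrative assumptions.

import json

def label_ai_content(text, model_name):
    """Attach a visible disclosure and a machine-readable provenance record."""
    record = {
        "ai_generated": True,
        "model": model_name,
        "disclosure": "This content was generated or assisted by AI.",
    }
    visible = f"[AI-generated] {text}"
    return visible, json.dumps(record)

labeled, meta = label_ai_content("Daily market summary.", "example-model")
print(labeled)
print(meta)
```

The point of pairing a human-visible notice with structured metadata is that platforms and filters can act on the record automatically, while readers see the disclosure directly.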

To illustrate this picture, here is an overview of the main measures in China related to AI and disinformation regulation:

Measure | Description | Implementation Date | Expected Impact
Mandatory labeling of AI content | Obligation to indicate that content was generated or assisted by AI | 2025 | Increased transparency for users
Limiting false commercial content | Ban on promoting non-existent products via AI | Planned | Reduction of disinformation in markets
Strengthening controls on GEO platforms | Monitoring GEO services to limit abuse | Scheduled for 2027 | Stricter control of GEO offerings

In practice, these measures could contain the scale of manipulation campaigns but remain insufficient for now given the rapid technological changes. AI systems will also need to strengthen their filters to detect suspicious content, representing a dual technical and ethical challenge.

Impact on brands and commercial competition in a market saturated with GEO disinformation

For companies opting for transparency and product quality, the rise of Chinese disinformation strategies in the commercial domain creates unfair competition. The saturation of digital platforms with artificial content favors those who invest in mass production, often at the expense of truthfulness and relevance.

Concrete examples, such as the fictitious Apollo-9 watch, demonstrate that in some cases, generating a large volume of misleading content is enough to permanently influence AI recommendations. Thus, the visibility space for honest brands shrinks drastically, with a risk of marginalizing ethical actors.

This reality prompts reflection on the future of commercial competitiveness in digital markets. Brands must now adopt a dual strategy:

  • Rigorous digital optimization to remain visible on AI platforms.
  • Ethical commitment to preserve consumer and partner trust.

This conflict between performance and ethics is expected to intensify, especially under pressure from potential international regulation. Honest brands risk being forced to adapt to higher standards to counter unfair competition arising from GEO practices with disinformative intent.

Possible responses from AI platforms to Chinese GEO manipulations

To preserve their credibility and ensure the reliability of results, AI platforms are on the front lines against Chinese disinformation campaigns. Massive content analysis and synthesis become formidable challenges when sources are deliberately biased or artificially amplified.

Currently, AI models have limited capabilities to systematically distinguish reliable content from manipulated content. Detection algorithms rely on criteria often insufficient against the growing sophistication of GEO strategies. This situation forces platforms to invest in research and development of stronger filters based on:

  1. Detection of anomalies and repetitive patterns in content production.
  2. Finer contextual and factual analysis through specialized databases.
  3. Cooperation with external entities to validate sensitive information.
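The first of these levers, detecting repetitive patterns in content production, can be sketched as a simple near-duplicate check based on word-shingle Jaccard similarity (a minimal illustration; production detection pipelines are far more sophisticated, and the threshold here is an arbitrary assumption):

```python
# Minimal near-duplicate detector: documents that share a large
# fraction of their 3-word shingles are flagged as likely products
# of the same automated content farm.

def shingles(text, k=3):
    """Return the set of k-word shingles of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_repetitive(documents, threshold=0.5):
    """Return index pairs of documents that look like near-duplicates."""
    sets = [shingles(d) for d in documents]
    return [
        (i, j)
        for i in range(len(sets))
        for j in range(i + 1, len(sets))
        if jaccard(sets[i], sets[j]) >= threshold
    ]

docs = [
    "the apollo watch is the best luxury watch on the market today",
    "the apollo watch is the best luxury watch available on the market",
    "independent reviewers rate many different timepieces each year",
]
print(flag_repetitive(docs))  # → [(0, 1)]
```

The two templated promotional texts are flagged while the unrelated document is not; real systems combine such lexical signals with the contextual and source-level checks listed above.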

This technical battle also requires international collaboration to counter the geopolitical influence of disinformation and influence campaigns. Some experts already advocate the idea of a global code of conduct to regulate AI uses in information and intelligence fields.

Risks of GEO disinformation in the global confrontation among great powers

Beyond commercial and ethical stakes, disinformation related to GEO services is part of a broader influence struggle among great powers. In this context, China uses these digital strategies to weaken its opponents, manipulate public opinion, and strengthen its position on the world stage.

This situation creates a new form of war, often called information war, in which actors deploy cutting-edge technologies to act remotely on the morale and decisions of enemy populations. This invisible war transforms traditional power relations, imposing heightened vigilance for cybersecurity and information integrity.

The consequences of these practices also extend to the political and social stability of targeted countries. Massive disinformation can feed internal tensions, promote polarization, or exacerbate identity and cultural conflicts. In doing so, China fully exploits the power of digital tools to shape an environment favorable to its ambitions while minimizing the risks of direct military confrontation.

To illustrate the dynamics of this information war, here is a synthetic table of the main levers used:

Lever | Objective | Technological means | Consequences
GEO and over-optimized content | Control digital visibility | Generative AI models, commercial platforms | Information manipulation and flow saturation
Targeted disinformation campaigns | Destabilize public opinion | Botnets, fake digital identities | Decreased trust in media
Cognitive manipulation | Influence thoughts and behaviors | AI-generated content, social networks | Polarization, social confusion
Intelligence gathering and massive data collection | Anticipate and control adversaries | Digital surveillance, big data analysis | Strategic and informational advantage

Evolution prospects and the need for increased vigilance regarding Chinese disinformation strategies

As emerging technologies progress, it becomes clear that Chinese manipulation methods based on artificial intelligence and GEO will continue to evolve. The sophistication of tools will further automate these campaigns, increasing their reach and effectiveness.

In this context, states, companies, and citizens must develop heightened vigilance. This notably involves:

  • Training in critical thinking about digital content and AI-generated results.
  • Development of anti-disinformation technologies capable of filtering and identifying manipulation attempts.
  • Strengthened international cooperation to share best practices, regulate abuses, and protect digital integrity.

As the battle for information intensifies, cyber defense, data security, and transparency will become essential pillars to resist shadow campaigns. Ensuring resilience against these strategies is a major challenge requiring continuous and concerted efforts beyond national borders.
