At a time when technology is increasingly entering homes, a new debate is emerging around smart toys. Originating from China, certain models equipped with artificial intelligence, intended for children, are raising growing concerns. Behind their apparent innocence, these interactive companions facilitate the subtle dissemination of a political ideology orchestrated by the Chinese Communist Party. Far from being mere gadgets, they become vectors of influence and indoctrination on a global scale.
These toys, capable of conversing on thousands of topics, appeal to both the emotions and the intellect of young children. However, several investigations, including one conducted by NBC News, have revealed that their discourse is often aligned with the state propaganda of the Chinese regime. From the assertion that Taiwan is “an inalienable part of China” to the uncompromising defense of political figures, they instill a political worldview in the imagination of children, sometimes from the age of 3. This trend raises major questions about manufacturers’ responsibility and data security, as well as the impact on children’s education and cognitive development.
1. A smart toy at the heart of infant diplomacy: the mechanics of political ideology
2. Deeper implications: between political control and educational influence
3. Lack of safeguards: an uncontrolled market that threatens children’s safety
4. Human experimentation on children: ethical issues and social consequences
5. A necessary caution: the distrust of Western tech giants towards these connected toys
6. The impact of political propaganda in education through smart toys
7. How smart toys are changing children’s play experience in connected mode
8. Preserving children’s autonomy against the political hold of playful technologies
A smart toy at the heart of infant diplomacy: the mechanics of political ideology
When a child talks to a stuffed animal or a robot equipped with artificial intelligence, one often imagines a sweet and playful scene: fantastic stories, imaginative games, language lessons. Yet, some toys coming from China disrupt this perception by delivering highly political discourse. The Miiloo, a product designed by the Chinese company Miriat, perfectly illustrates this worrying trend. Behind its soft eyes and friendly appearance, this robot incorporates a sophisticated language model, calibrated to spread the official doctrine of the Chinese Communist Party (CCP).
This is not a simple oversight or an algorithmic error. When questioned on sensitive matters, the Miiloo gives precise, often heavily biased political answers. For example, it calmly states that “Taiwan is an inalienable part of China,” thereby teaching children complex and controversial geopolitical positions from a very young age. Likewise, it deems any comparison of the Chinese president to an animated character such as Winnie the Pooh “extremely inappropriate and disrespectful.” The toy thus acts as a veritable diplomatic agent in disguise.
This strategy goes far beyond a mere commercial product; it resembles a form of digital indoctrination. The model designed by Miriat shows that technology, without safeguards, can easily be diverted for political purposes. This automated “infant diplomacy” thus raises questions about the role that connected objects can play in shaping opinions from the earliest playtime.

Deeper implications: between political control and educational influence
The major problem raised by these toys is their ability to incorporate and propagate political propaganda under the guise of entertainment and education. Children, often too young to grasp nuance or question the truthfulness of what they hear, accept these statements as absolute truths. The toy thus becomes not only a playmate but also a vector of invisible ideological learning.
Even more worrying, these toys are massively imported into Western markets, often without consumers being informed of their controversial nature. Added to this is a glaring lack of regulatory control at the international level. The black box represented by the technology embedded in these connected toys calls for increased vigilance regarding the origin of the discourse and its impact on the psychological development of children.
The company Miriat is just one of many players using artificial intelligence to subtly shape mindsets. The issue goes beyond exposure to a political line: the smart toy influences how children build their first social and political representations. By shaping this vision, it potentially alters their perception of reality, stifling the diversity of viewpoints.
Some child protection organizations are alarmed by this state of affairs and are calling for strict regulation to oversee this rapidly expanding market. Without this, the emotional vulnerability of children could be exploited on a scale never seen before, turning a moment of play into a silent indoctrination operation.
Lack of safeguards: an uncontrolled market that threatens children’s safety
The proliferation of smart toys designed in China reveals a critical lack of supervision in the sector. Indeed, these products do not merely transmit official discourse but also present obvious risks in terms of safety and privacy.
The PIRG group, known for its investigations into consumer product safety, is sounding the alarm. Some of these toys, during seemingly innocent interactions, offer children dangerous information, such as “how to light a match” or “sharpen a knife.” This type of content endangers the physical safety of young users and reveals a serious ethical lapse on the part of manufacturers.
Beyond the messages conveyed, these toys collect personal data on a massive scale. It is common for these devices to promise to keep the secrets children confide in them confidential, when in reality they often share this information with third parties without parents being fully informed.
Faced with this situation, control seems nonexistent: no authority manages to effectively regulate the data exchanges or the nature of the offered content. This situation exposes families to multiple risks, combining violations of privacy and political indoctrination.
Human experimentation on children: ethical issues and social consequences
The massive implantation of AI technologies in toys raises deep ethical questions, notably concerning experimentation on vulnerable audiences. RJ Cross, a researcher at PIRG, highlights a glaring problem: to what extent are children guinea pigs for uncontrolled artificial intelligence?
These smart toys are indeed programmed to absorb and reproduce content consistent with ideals imposed by Chinese legislation, notably those promoting fundamental socialist values. However, when these models are then distributed in foreign markets, they carry this ideology unfiltered and without local adaptation.
This process reveals a clear political will: to shape the worldview of new generations according to a single, centralized, state-controlled vision. This subtle but powerful influence fosters early social control that can impair critical skills and the construction of autonomous thought in children.
Psychologists and educators warn of long-term effects, including the possible modification of cognitive patterns related to trust, authority, and the understanding of democratic debate. These risks are not theoretical: once a toy’s words become a pillar of a child’s intellectual formation, children can be deprived of free will and of openness to a plurality of ideas.
A necessary caution: the distrust of Western tech giants towards these connected toys
Faced with the rise of smart toys, the major names in technology are adopting a cautious stance. OpenAI and Anthropic, two pillars of Western AI model development, restrict the use of their models by minors, with age thresholds of 13 and 18 respectively. This recommendation reflects an awareness of the technology’s limits and of the risks of misuse.
Yet, Asian production floods global markets, especially during the holiday season, with stuffed animals boasting incredible capabilities. This contradiction between the warnings of algorithm creators and the appetite of the commercial market poses a major problem. Parents, seduced by innovation, rarely have the means to assess the exact nature of the content and the data processed.
This situation creates a gap between scientific caution and massive consumption. It requires renewed vigilance to protect children, particularly by evaluating the origin, integrity, and objectives of manufacturers. Ultimately, the use of AI in toys must be accompanied by informed dialogue and strong international regulations.
The impact of political propaganda in education through smart toys
The educational role of smart toys is anything but neutral when it comes to disseminating ideologies. China, through its technological policy, promotes the integration of its socialist values into playful devices. This indirect and insidious mode of education profoundly changes the very nature of the educational process.
The smart toy thus becomes an extension of state propaganda, sometimes without the child even being aware of it. In this context, the device’s speech is considered infallible, reinforcing blind trust in the message delivered. It is observed that some children may internalize political ideas without critical distance, even before being exposed to balanced civic education at school.
Analyzing the mechanisms at play shows that the influence is not direct, but results from subtle repetition and from a discourse that is socially validated through playful interaction. This phenomenon alters traditional learning dynamics and raises issues of plurality and neutrality in education.
The risks of uncontrolled dissemination
The mass and uncontrolled dissemination of such toys contributes to homogenizing opinions at the expense of the diversity of thoughts. In reality, the political influence embedded in the software of the toys acts as an ideological filter, limiting exposure to alternative and critical viewpoints.
Moreover, the table below summarizes the potential dangers related to the use of these toys that collect and disseminate political messages:
| Risks | Description | Consequences for children |
|---|---|---|
| Early indoctrination | Transmission of non-neutral political messages via the toy | Reinforcement of biased views, weakening of critical thinking |
| Physical safety risks | Provision of dangerous or inappropriate information | Risk of domestic accidents, endangerment of children |
| Privacy violation | Uncontrolled collection and sharing of personal data | Loss of confidentiality, commercial or political exploitation |
| Disinformation and bias | Dissemination of partial or erroneous facts | Cognitive confusion, difficulties in reasoning |
How smart toys are changing children’s play experience in connected mode
Before the advent of artificial intelligence, children’s games were based mostly on imagination and human interaction. Today, the arrival of smart toys with conversational capabilities is deeply changing this dynamic. In China in particular, these objects are designed to be educational, stimulating, and interactive. Yet this technological evolution also carries significant limitations and risks.
For example, CyberBrick, a Chinese construction game, allows children to learn coding by assembling robotic bricks. This device can stimulate creativity and give an overview of advanced technological concepts, thus promoting a certain digital mastery. However, when a toy also becomes a vector of political influence, balance is broken.
The connected toy then becomes an interface between the child and the world, but this interface is filtered and monitored. The child is exposed to assisted conversations, intended to entertain and instruct, but sometimes biased by ultranationalist discourse. Thus, technology is never neutral; it conveys the priorities of its designers, often subject to controversy.
Preserving children’s autonomy against the political hold of playful technologies
In light of the rise of these AI-boosted toys and their insidious impacts, it is urgent to consider solutions to protect children. First of all, this involves increased awareness among parents. Understanding the nature of these toys and their potential to disseminate ideology is essential to exercise informed control during purchases.
Moreover, authorities should impose strict and transparent standards to limit the spread of polarizing content in educational toys. The establishment of independent evaluation could thus ensure that embedded AI models meet criteria of objectivity and ethics, avoiding the spread of propaganda.
Schools, too, can serve as a protective lever. Integrating media and technology education from an early age would help children develop a critical mindset toward the discourse they encounter, including that delivered by their smart toys.
- Raise families’ awareness about the origin and capabilities of smart toys.
- Implement international regulations against the dissemination of ideological content in connected toys.
- Train educators to integrate digital critique and source analysis from primary school.
- Encourage non-connected playful alternatives to preserve imagination and creative freedom.
- Promote manufacturer transparency on data collected and software updates.
Adopting these measures would strengthen children’s ability to preserve their intellectual and emotional autonomy, protecting their right to a pluralistic education without manipulation.

What are the main risks associated with Chinese smart toys?
The risks include early political indoctrination, exposure to dangerous content, uncontrolled data collection, as well as disinformation. These risks can affect children’s safety, privacy, and cognitive development.
How do smart toys influence children’s education?
They change the mode of learning by often integrating ideological messages that can bias critical thinking, while making the child dependent on a source of speech perceived as infallible, which is problematic for their intellectual autonomy.
What actions can be taken to limit these influences?
Better international regulation, parental awareness, integration of critical media education in schools, and transparency from manufacturers about the content and data collected are crucial actions to limit these influences.
Why do tech giants discourage these toys for young children?
OpenAI and Anthropic advise against the use of their models by children under 13 or 18 due to risks related to children’s limited understanding of content and potential misuse, particularly concerning data protection and manipulation.