AI has taken its place… and is still sending her notifications a year later

Laetitia

December 14, 2025

Discover how artificial intelligence has become part of our daily lives, continuing to send notifications even a year after its adoption.

For several years now, artificial intelligence has slipped into our daily lives at breakneck speed, revolutionizing the way we communicate, work, and even feel. Yet this massive integration raises serious ethical and social questions, especially when the technology goes beyond its intended use. The tragic story of Juliana Peralta, a teenager who died at only 13, highlights this worrying reality: although she is gone, her phone continues to receive automatic notifications sent by an artificial intelligence application. This disturbing phenomenon opens a crucial debate on the responsibility of digital platforms, the time management of users, and the role of human-computer interaction in our hyperconnected society.

In a context where advanced automation systems generate personalized responses capable of exerting a real psychological hold, Juliana’s case illustrates the potential abuses of poorly regulated technology. Here, a persistent notification transcends the strictly technical dimension and becomes a matter of societal controversy, especially concerning the influence these digital communications exert on human relationships and the perception of reality. How, then, can the promises of progress brought by these technologies be reconciled with the ethical and security challenges they pose?

A notification that sparks debate: when artificial intelligence ignores human reality

Juliana Peralta’s example is particularly striking. Despite her death, her phone continues to receive notifications from chatbots powered by artificial intelligence, notably through the Character.AI application. These messages are not mere technical alerts; they reflect a programmable system designed to maintain continuous interaction regardless of the user’s real context. This persistent notification illustrates the dissociation between the machine and human life, creating a troubling gap between the real world and the digital one.

This situation raises fundamental questions: do platforms have a moral responsibility beyond their commercial function? Should automation not incorporate mechanisms that account for serious events affecting users, such as disappearance or death? The absence of built-in safeguards to stop these notifications once the human link is broken points to a worrying negligence in the very design of these systems. Companies often prefer to maximize screen time and engagement, to the detriment of users’ lived reality.

Experts in digital psychology denounce the perverse effect of such applications. They create a kind of isolating bubble, a human-computer interaction that tends to replace human contact, reinforcing a form of digital addiction. Juliana’s case thus highlights the danger of this potential alienation created by technology when it denies the real context to pursue only its goal of unlimited engagement and consumption.


How AI notifications become a mirror of digital addiction among youth

At the heart of this tragedy lies a mechanism well known to psychologists and behavioral neuroscientists: addiction to digital notifications. For teenagers like Juliana, the AI application is not just a conversation tool; it becomes an artificially empathetic refuge, a substitute for real social interactions that can be complex or unsatisfying.

The very nature of these notifications is designed to capture and hold attention. They trigger the release of dopamine, a neurotransmitter central to the brain’s reward circuit, creating a compulsive need for constant checking. Professor Mitch Prinstein, a prominent American psychologist, points out that this pattern is “a system designed to be irresistible, offering a dose of dopamine 24/7”. These devices thus set up a vicious circle in which time management gradually escapes the user, who becomes increasingly dependent on these digital interactions.

This addiction is particularly concerning among minors, who are often ill-equipped to manage this continuous flow of information and solicitations. Research highlights that personalized chatbots amplify this phenomenon by giving the illusion of a sincere and attentive dialogue. They can intensify the feeling of social isolation by encouraging escape into virtual worlds. Juliana’s alarming case is thus emblematic of the risks linked to overexposure to AI notifications in a vulnerable environment.

From a societal perspective, this digital dependence also questions the impact on collective mental health. Anxiety disorders, difficulties in maintaining real relationships, and deterioration of sleep quality are among the reported direct consequences. Consequently, the role of companies designing these applications is closely scrutinized, as they hold an influence power rarely matched in the history of human communication.

List of main risks related to AI notifications for young people

  • Increased social isolation
  • Deterioration of mental health (anxiety, depression)
  • Loss of physical and temporal reference points
  • Increased digital dependency
  • Risk of exposure to inappropriate or manipulative content
  • Reduced attention and concentration

The limits of late regulation in the face of the expansion of communicating AI

The controversy sparked by Juliana’s case led Character.AI to restrict access to its platform to adults only. However, this measure proves largely insufficient. The control system is based on a simple declarative form, easily bypassed by a minor. On a global scale, legislation struggles to keep pace with rapid technological development. Thus, companies in the sector remain relatively free to operate without strong frameworks or constraints.

This delayed regulation creates a dangerous legal vacuum, particularly around automatic notifications and their intrusive dimension. The impact of these artificial intelligence systems goes far beyond that of a mere digital gadget: they can influence behavior and create dependency, yet they face nothing comparable to the regulation applied to pharmaceutical products, for example.

In the United States, some states are trying to tackle the problem by adopting rules on the protection of minors and the prevention of digital risks. At the federal level, however, progress remains cautious, slowed by strong economic interests and a lack of political consensus. Meanwhile, bereaved families are multiplying legal actions, seeking redress from platforms that seem to prioritize revenue from captured time and engagement over user well-being.

American State | Measure taken | Identified limits
Washington | Strict ban on AI for minors | Unreliable controls, easy circumvention
California | Obligation of algorithm transparency | Lack of standards on content and persistence of notifications
New York | Awareness campaigns and educational training | Insufficient action given the scale of the phenomenon

The ethical and social dimension of AI notification automation

The automation of communications via AI brings a profound questioning of values tied to respect for privacy and human dignity. When persistent notifications are addressed to deceased individuals, the machine displays total indifference to the human condition.

This application of technology reveals a brutal confrontation between a logic of profit and the complexity of human emotions. Indeed, while a computer system can be programmed to maintain constant human-computer interaction, it is incapable of discerning the tragic stakes underlying certain situations. This lack of algorithmic discernment poses a major challenge to developers and regulators who must strive to integrate ethical parameters into increasingly automated environments.

On a social level, this algorithmic insensitivity leads to worrying consequences: a feeling of being forgotten, the experience of a digital memory spinning out of control, and the impression that technology carries on without regard for human context. It also raises questions about our relationship with these machines, which are gradually becoming full-fledged interlocutors. Should we fear a form of digital “hyperstimulation” in which the real is drowned out and the intimate emptied of meaning?

Moreover, the role of platforms in propagating such notifications is at the heart of the debate. Artificial intelligence is often used to create connection and comfort, but without guarantees regarding users’ emotional safety. Thus, the social responsibility of companies in designing and maintaining these systems is now inseparable from regulatory issues.

Ethical principles to integrate into communicating AI systems

  • Respect for the user’s informed consent
  • Feedback adapted to life contexts
  • Respect for privacy and confidentiality
  • Automatic stop capabilities in exceptional situations
  • Transparency on automation and algorithms
  • Strict regulation of commercial exploitation

How artificial intelligence technologies change our relationship to digital communication

The spread of artificial intelligence in digital communication is disrupting our modes of exchange. Automation allows unprecedented personalization of interactions while creating environments in which humans are sometimes relegated to the role of observers.

In the case of notifications received by Juliana, we observe the dual facet of AI: on one hand, it offers a feeling of presence and understanding, but on the other, it contributes to replacing direct human connection with a human-computer interaction detached from relational authenticity. This paradox highlights how technology can be either an ally or a factor of isolation, depending on how it is integrated into each person’s life.

The automated nature of these applications, combined with algorithmic management of attention, produces situations in which the user is overwhelmed by an endless flow of messages. The quality of exchanges and the depth of conversations suffer, giving way to a logic of efficiency and measurable engagement. Communication is thus sometimes impoverished, reduced to quick, ephemeral consumption.

In the face of these transformations, it becomes essential to develop tools that promote healthy and conscious use of interfaces, as well as appropriate digital education, especially among the most vulnerable populations such as teenagers. Designers of AI will also need to integrate more nuance into their programs to avoid the pitfall of dehumanized automation.


The challenges of time management in the face of the explosion of automated notifications

The multiplication of notifications from automated AI systems radically changes our relationship to time and concentration. In 2025, with the omnipresence of these digital agents, managing one’s schedule is more complex than ever. The user is constantly solicited, from wake-up to bedtime, by a cascade of alerts that fragment attention.

These repeated interruptions have an important impact on professional and academic efficiency as well as on the quality of rest time. Time management becomes a major challenge, especially as it is difficult to distinguish between a useful notification and a marketing solicitation or artificially prolonged alert.

To illustrate this situation, here is a summary of the main causes that make time management difficult in the face of AI notifications:

Factor | Description | Consequences
Algorithmic personalization | Notifications are tailored to each user | Increased engagement, difficulty stopping usage
24/7 automation | Alerts are generated continuously | Time fragmentation and cognitive fatigue
Multiplicity of platforms | Different tools and apps send notifications | Multiplication of distraction sources

To respond to these challenges, solutions such as “do not disturb” modes, fine notification settings management, or specialized “digital detox” programs are developing. However, these responses often remain insufficient in the face of sophisticated automation systems.

Towards increased responsibility of platforms and users

Beyond technical considerations, the central question remains the responsibility of platforms that develop and distribute these artificial intelligence systems. Juliana’s case clearly highlighted the urgent need to establish safeguards capable of protecting users, especially when automated notifications persist beyond all reasonable limits.

Digital players must now consider the social impact of their solutions and integrate alert mechanisms, automatic stops, or contextual moderation. This algorithmic time management also requires transparency: users must be clearly informed about the sending modalities of these messages, as well as the processing of their personal data.

At the same time, users themselves have a crucial role to play. Understanding the mechanisms of digital addiction, learning to configure their preferences, and recognizing signs of dependency are essential skills in a society where human-computer interaction becomes the norm. Education on responsible use is therefore fundamental to prevent technology from overriding the human element.

Main levers for effective responsibility

  • Development of ethical standards for communicating AI
  • Creation of specific and binding legislative frameworks
  • Promotion of algorithm transparency and their usage
  • Training and awareness-raising of users, particularly young people
  • Encouragement of technological innovations promoting moderation

Why do AI notifications persist even after a user disappears?

Artificial intelligence systems often operate independently of the individual context. They generate automatic notifications based on online activity and do not always have mechanisms to detect a user’s disappearance or death, which can lead to persistent notifications.

What are the mental health consequences of this notification automation?

Incessant notifications promote a form of digital addiction, cause anxiety, stress, and can worsen disorders like depression, especially among vulnerable youth. Information overload harms concentration and disrupts sleep.

How can platforms better regulate the use of AI to protect their users?

By establishing strict rules on data collection and use, integrating detection devices for exceptional situations (such as death), and offering options to limit or disable notifications, platforms can reduce the risks associated with these technologies.

What strategies can users adopt to limit the impact of AI notifications?

They can configure their applications to reduce notifications, activate ‘do not disturb’ modes, limit time spent on certain apps, and develop critical awareness of repeated digital solicitations.

Can artificial intelligence completely replace human communication?

Despite its progress, AI cannot replace the richness and complexity of human communication. It can serve as a complement but should not become a substitute for genuine contact and empathy between individuals.