At the dawn of 2026, artificial intelligence has become omnipresent in our daily lives, revolutionizing the way we work, learn, and communicate. Yet this spectacular technological advance raises important questions. An alert has sounded from Harvard, notably through the voice of astronomer and professor Avi Loeb, who warns of a worrying phenomenon: the intensive use of artificial intelligence could lead to a gradual decline of human intelligence. This observation draws on recent research revealing the profound impact of these technologies on our cognition, critical thinking, and even our digital identity.
Faced with the omnipresence of chatbots capable of instantly generating texts, action plans, or ideas, many wonder whether artificial intelligence is becoming an intellectual crutch at the expense of individual mental effort. Far from being a mere technological evolution, this growing dependency could have lasting consequences for how we think, learn, and interact socially. The social impact of this digital revolution also raises ethical questions about the tools we now use daily.
- The risks of the intensive use of artificial intelligence on human intelligence
- Cognitive debt: a new threat to critical thinking in the AI era
- Students facing the rise of AI: a challenge for modern education
- “I’m ChatGPT-ing, therefore I am”: the blurred boundary between human identity and artificial intelligence
- The social impact of artificial intelligence: towards a collective dependency?
- Collaborating with artificial intelligence: opportunity or cognitive trap?
- Towards a balanced future between technology and human intelligence
  - Is intensive AI use really dangerous for human intelligence?
  - How does cognitive debt influence the way we think?
  - What solutions ensure AI is a tool and not an intellectual crutch?
  - Does artificial intelligence threaten our digital identity?
  - Can AI amplify our intellectual abilities rather than diminish them?
The risks of the intensive use of artificial intelligence on human intelligence
The massive integration of artificial intelligence into our cognitive activities raises a major alert. Avi Loeb, an astronomer and professor at Harvard University, already observes in some intensive users a form of cognitive atrophy, a phenomenon he calls “cognitive dependency.” This dependency manifests as a tendency to systematically delegate intellectual tasks to machines, progressively reducing mental exercise.
The most fitting analogy is that of muscles: if we stop using them, they weaken. The brain could suffer a similar fate if the natural balance between personal effort and technological assistance is broken. This situation is all the more concerning as it particularly affects younger generations growing up in a digital environment where generative AI is a systematic reflex to quickly answer any question or writing need.
A concrete example is that of students, for whom AI has become an essential aid. When an essay or an assignment can be produced almost instantly by a chatbot, the desire or necessity to think deeply decreases. As a result, reasoning, critical analysis, and personal creativity skills are at risk in the long term if this use is not regulated.
The risk of “unlearning” is not an idle hypothesis: increasingly, users rely on a continuous flow of ready-made solutions, moving away from the complex cognitive processes that shape intelligence. This reality is pushing some educators to rethink their pedagogy in depth, even considering exams without access to AI tools, in order to preserve the integrity of real skills.

Cognitive debt: a new threat to critical thinking in the AI era
The concept of “cognitive debt” is now central in the reflection on the impact of AI. This term describes individuals’ tendency to outsource certain mental operations to external supports, which, when pushed to the extreme, weakens the brain’s intrinsic capacities.
Historically, this externalization mainly concerned memory, with the advent of search engines. However, current artificial intelligence models no longer merely store and transmit data: they directly generate meaning, synthesize, analyze, and even argue. This is a major qualitative leap that profoundly modifies cognitive habits.
Research by Dr. Michael Gerlich published in 2025 reveals a direct link between the frequency of AI use and a noticeable drop in critical thinking performance. This study highlights that when AI becomes the primary source of answers, the mental effort required to evaluate, challenge, and construct personal reflection diminishes. This phenomenon is all the more worrying as it affects essential skills such as the ability to distinguish reliable information, nuance arguments, and create original ideas.
In the table below, we summarize the main identified effects of intensive AI use on human cognition:
| Cognitive Aspect | Impact of Intensive AI Use | Long-term Consequences |
|---|---|---|
| Memory | Decrease in active memorization | Passive reception of knowledge, reliance on external supports |
| Creativity | Reduction in generating original ideas | Uniformity of thought, impoverishment of personal productions |
| Critical Analysis | Weakening of evaluation abilities | Increased vulnerability to fake news and misinformation |
| Intellectual Autonomy | Growing dependence on AI tools | Loss of personal initiative and confidence in one’s abilities |
This table highlights not only a quantitative decline but also a qualitative deterioration of human intelligence, sounding the alarm on the necessity of ethical regulation and adapted education.
Students facing the rise of AI: a challenge for modern education
At school and university, the social impact of generative AI is tangible. According to a Pew Research Center survey, more than half of teenagers report regularly using AI tools, whether to find answers or write their assignments. This normalization radically changes traditional learning methods.
Teachers face a paradox: these technologies open new pedagogical perspectives but also weaken the ability to evaluate the work genuinely produced by a student. Moreover, easy access to pre-made content can discourage slower and more thorough approaches, which are essential to developing analytical skills.
Faced with this observation, some institutions are experimenting with innovative methodologies, notably:
- Organizing “offline” exams, without Internet or AI access.
- Implementing collaborative projects that encourage personal production and critical thinking.
- Using AI as a supervised pedagogical tool, to support without replacing intellectual effort.
- Training students in ethical and responsible use of digital technologies.
These initiatives testify to a progressive but urgent awareness of the need to preserve human cognition while integrating the undeniable benefits of technology. The challenge now is to learn to think with AI without becoming dependent on it.

“I’m ChatGPT-ing, therefore I am”: the blurred boundary between human identity and artificial intelligence
With the exponential development of intelligent assistants, a new issue emerges: the possibility of seeing our digital identity merge with these automated systems. Avi Loeb worries that AI, by assimilating vast datasets, could create digital copies of our style of thinking and communication.
This trend raises a fundamental question about the ethics and the very nature of individuality in a world where artificial intelligence no longer merely assists but also perfectly mimics our behaviors.
Technological scenarios envision autonomous agents managing daily interactions on behalf of a user, whether responding to messages, posting on social media, or even debating online. The line between the true human personality and its algorithmic representation blurs, posing unprecedented challenges to authenticity and social trust.
Loeb himself was a victim of this problem when his face and voice were used to create scientific videos he never produced. This experience perfectly illustrates the risks of manipulation and misinformation linked to the proliferation of deepfakes and other synthetic content.

The social impact of artificial intelligence: towards a collective dependency?
Beyond individual effects, the intensive use of AI has a profound impact on the social fabric. A society where interactions, decisions, and even personal opinions are influenced or dictated by artificial intelligences poses a real democratic challenge.
The risk is twofold: both a homogenization of thoughts through algorithms that favor certain types of content, and a collective dependency on these tools, to the detriment of public debate and intellectual diversity.
Several experts call for increased vigilance, reminding that ethics must be at the heart of technological development. Among the proposals, we can mention:
- Transparency of AI models and their decision-making mechanisms.
- Protection of personal data to avoid instrumentalization of identity.
- Promotion of digital skills that strengthen critical thinking.
- Coordinated global regulation at the international level.
Without these measures, artificial intelligence risks not only eroding individual cognition but also weakening social cohesion and trust in institutions.
Collaborating with artificial intelligence: opportunity or cognitive trap?
One of the major current debates is whether artificial intelligence should be seen as an intellectual partner or as a mental substitute. The analogy of the intellectual exoskeleton is often used to highlight the potential of these tools to amplify human capacities.
When used judiciously, AI stimulates creativity, speeds up fact-checking, and opens new horizons for research and innovation. It can become a true ally for developing complex hypotheses, simulating models, or organizing massive data.
However, the line between assistance and dependency is thin. If individuals stop exercising their own thinking to systematically rely on machines, they risk losing intellectual autonomy and their ability to solve problems without external help.
The pedagogical and social challenge is therefore to learn to collaborate with these technologies while maintaining active independent thinking practice. This involves:
- Encouraging reasoned and critical use of AI.
- Promoting training in augmented cognition, which combines the best of human and machine.
- Implementing hybrid work and learning environments.
- Developing evaluation tools that also measure the ability to think independently.
Towards a balanced future between technology and human intelligence
History teaches us that every technological revolution disrupts our relationship with knowledge. Today’s criticism of artificial intelligence fits into this continuity: the calculator, GPS, and the Internet have each, in their time, challenged our skills and habits.
However, a singularity stands out with generative AI: these machines now participate in idea creation, which can durably modify our very conception of intelligence. It is no longer just facilitating information access but interacting with a digital partner capable of generating intellectual content.
The challenge of this decade will be to learn to find the right balance between technological aid and autonomous thinking capacity. This challenge extends from the educational field to professional and personal spheres, where mastering this balance will define the quality of our relationship to knowledge and society.
Technology does not necessarily make humans weaker; it simply transforms the way we use our brains. Preserving critical thinking, encouraging attention, and cultivating analysis become the keys to navigating this evolving digital universe.
Is intensive AI use really dangerous for human intelligence?
Yes, according to recent studies, excessive use of artificial intelligence can lead to a decrease in critical thinking, memorization, and creativity abilities, which can result in a decline in human intelligence.
How does cognitive debt influence the way we think?
Cognitive debt refers to outsourcing part of intellectual tasks to external tools. This weakens mental exercise and decreases the ability to analyze, synthesize, and critique information independently.
What solutions ensure AI is a tool and not an intellectual crutch?
It is essential to integrate AI in a supervised way, encouraging the development of critical thinking skills, training for thoughtful use of technological tools, and offering learning environments disconnected from AI for certain exams.
Does artificial intelligence threaten our digital identity?
The proliferation of technologies such as deepfakes and autonomous agents raises significant risks for the authenticity of our online identity, with possible manipulations and image usurpations.
Can AI amplify our intellectual abilities rather than diminish them?
Yes, when properly used, AI can be a cognitive exoskeleton that stimulates creativity, accelerates learning, and improves decision-making. The challenge is to find a balance between collaboration and self-reflection.