A researcher leaves OpenAI, denouncing a veil drawn over the truth within the company

Adrien

December 16, 2025


In 2025, an atmosphere of unease has settled behind the scenes at OpenAI, one of the major players in artificial intelligence. Tom Cunningham, a researcher specializing in the economics of AI, has decided to leave this emblematic company, denouncing an abandonment of transparency and a manipulation of the truth that raise fundamental questions about its internal practices. His high-profile departure suggests that behind the polished image of a progressive company working for the common good lies a culture of secrecy and latent conflict over the economic stakes of AI.

This phenomenon is not isolated. Other researchers specializing in safety or public policy have also broken with OpenAI, denouncing a worrying drift in which strategic decisions prevail over scientific rigor and communications shape the story rather than the research. In this climate, it is the uncomfortable truths about AI’s real impact on employment, inequality, and economic stability that are set aside in favor of an exclusively optimistic and consensual discourse.

This report aims to decipher this troubled climate, drawing on internal testimony, investigations by the specialized press, and the shifting philosophy of a company that once pioneered a mission-driven model and has since become a giant economic machine. Far from a simple quarrel among researchers, what is playing out is a genuine conflict between science and strategy, with major implications for society, public debate, and the future regulation of this technology.

Tom Cunningham’s departure reveals a transparency crisis at OpenAI

The departure of Tom Cunningham, a prominent figure in OpenAI’s economic research, draws attention to a deep and often unspoken tension in tech circles. After several years studying the macroeconomic effects of artificial intelligence tools, Cunningham chose a remarkable exit, denouncing a well-kept secret: economic research has become little more than a communication tool, shaped to support the image the company wants to project.

Contrary to what one might expect from a scientific institution, the results and reports it produces tend to overestimate the benefits of AI (value creation, productivity gains, market modernization) while minimizing or hiding the negative effects. Shining a spotlight on the latter, such as potential job destruction or worsening inequality, would be “misaligned” with corporate strategy and likely to create a major conflict of interest.

This situation illustrates the trap in which OpenAI finds itself: the company is both the developer of the technology and the judge of its impacts. This dual role poses a complex ethical and scientific dilemma that invites compromise and self-censorship. Cunningham’s departure symbolizes the growing gap between scientific truth and the official line dictated by management.

Internally, his farewell message circulated quickly among teams and raised a thorny question: can one still speak of independent, objective research when studies are forced to “tell the good story”? The question also touches on company culture and its capacity to accommodate the criticism and controversy that responsible innovation requires.


Signs of biased economic research

Before Cunningham made his decision, several signs had already alarmed observers: internal reports were becoming increasingly homogeneous, unanimously praising the benefits of AI. For instance, a report written under the direction of Aaron Chatterji, head of economic research, recently emphasized spectacular productivity gains achieved thanks to ChatGPT, implying rapid global adoption. Yet the document barely mentioned financial and social risks, or the unequal consequences of new technologies.

A former team collaborator, speaking anonymously, confirms that the research is turning away from its original questions, preferring to conform to the official narrative dictated by marketing strategy. This dismissal of doubt, this deliberate self-censorship of gray areas, distorts what should be rigorous analysis in the sole service of truth.

According to some, this phenomenon stems from a deliberate effort to manage perception rather than mere coincidence. Research ceases to be a space for free exploration and becomes a tool serving the financial and strategic interests of OpenAI, a company that now weighs several hundred billion dollars in the global economy.

An economic and strategic model influencing scientific freedom

The control of narratives around artificial intelligence cannot be understood without grasping OpenAI’s transformation, which is rapidly carrying it away from its original DNA. Founded in 2015 as an open organization committed to sharing knowledge, it has turned into an ultra-commercial company at the forefront of closed technology. Its strategic refocusing now targets a colossal valuation estimated at nearly one trillion dollars.

This formidable metamorphosis places OpenAI in a delicate position: how can it reconcile a public-interest mission with the demands of a brutal financial market? The pressure from investors and from political and media actors is relentless, and it pushes the company to prioritize positive, reassuring communication.

The consequences are multiple:

  • Research orientation: studies are selected and framed to project a favorable, reassuring image.
  • Exclusion of sensitive issues: the possibility that AI causes economic shocks or exacerbates social inequality is significantly downplayed.
  • Limitation of publications: the freedom to publish results that could contradict OpenAI’s commercial trajectory is restricted.

These elements outline a double pressure, scientific self-censorship and managed communication, that feeds a vicious circle and lies at the root of the unease and of departures like Cunningham’s.


Comparing OpenAI’s original values with its current position

Aspect | Original values (2015) | Current position (2025)
Openness and transparency | Open-source code prioritized, academic exchange | Closed models, control over shared information
Mission | Common good and ethics | Maximization of profits and financial capitalization
Research approach | Independent, exploratory | Strategic, oriented toward positive communication
Relationship with regulation | Collaborative | Defensive, protective of economic interests

Do these successive resignations point to a major internal conflict?

The case of Tom Cunningham is just one episode in a broader sequence in which several key researchers have voiced frustration with, or outright rejection of, current practices. William Saunders, a former member of the “Superalignment” team, left over the company’s choice to prioritize the rapid launch of attractive products without sufficiently weighing the associated safety risks.

Steven Adler, another safety researcher, publicly criticized the poor handling of psychological risks related to ChatGPT, pointing out that some users were caught in delusional spirals without appropriate intervention.

Miles Brundage, who led public policy research, criticizes the growing difficulty of publishing analyses on sensitive topics such as ethics and regulation. He explains that the pressure to publish only consensus-friendly results slows a debate that artificial intelligence itself makes necessary.

The convergence of these departures testifies to a deep conflict between the drive for rapid, lucrative innovation and the long-term responsibility that comes with a potentially disruptive technology. These researchers are distancing themselves not from AI itself, but from the mechanisms that now control its narrative and its research.

The risks of a one-sided scientific narrative about AI

The control OpenAI exercises over its own studies is not only a commercial issue but a democratic one. The research this company produces is widely used by public decision-makers, regulators, and journalists to guide policy and to shape the social perception of artificial intelligence.

Weakened transparency and uniform results distort collective understanding of AI’s real effects. The risk is that society will lack the critical information needed to regulate and effectively govern this technology. The absence of dissenting voices within OpenAI weakens the quality of public debate.

To illustrate this phenomenon, one can observe how crucial areas, such as employment disruption, algorithmic bias, or the concentration of economic power, are under-studied or absent from publications, depriving decision-makers of reliable data.

This state of affairs creates a vicious circle: as long as uncomfortable truths remain unspoken, the tendency to promote AI as a panacea grows stronger, legitimizing massive deployment without sufficient safeguards.

Potential consequences of a biased narrative:

  • Misjudgment of socio-economic risks
  • Development of insufficiently rigorous public policies
  • Underestimated growth in inequality
  • Loss of public trust in scientific research
  • Consolidation of private power to the detriment of the general interest

When corporate strategy dictates science: the example of the internal message “Build solutions, not papers”

An internal message relayed soon after Cunningham’s resignation crystallized the discomfort. Jason Kwon, OpenAI’s chief strategy officer, emphasized that the company should not merely publish research on problems but also build commercial solutions.

This approach reveals a profound shift: research ceases to be a critical, independent exercise and becomes a lever serving immediate economic and marketing objectives. This logic rewards results that build a positive image and clear obstacles from the rapid deployment of products.

A researcher confided privately that the phrase could be summed up as “choose your battles, avoid uncomfortable truths.” In his view, when the dissemination of information is dictated by corporate strategy, truth and transparency become variables adjusted to the context and the financial stakes of the moment.


Colossal economic stakes make the denunciation risky but necessary

OpenAI has become an economic giant worth several hundred billion dollars. Its financial stakes are enormous, whether through license sales, strategic partnerships, or a future IPO. In this environment, any report or testimony that could destabilize the model is seen as a direct threat.

Cunningham’s denunciation of this veil over the truth, together with the criticism voiced by other researchers, is therefore a courageous act that highlights the risks of concentrating so much power in a few major players. The problem goes far beyond the company’s internal sphere: it is a global question of how public narratives around major technologies are constructed, and of how mechanisms of oversight and transparency are established or shut down.

This struggle ultimately raises an essential question: to ensure the ethical and responsible development of artificial intelligence, shouldn’t we promote a plurality of actors able to assess its impacts freely, outside any heavy-handed economic and strategic control?

Why did Tom Cunningham leave OpenAI?

He denounced a strategic orientation that favors positive communication to the detriment of independent and transparent economic research.

What are the main risks of biased research at OpenAI?

Underestimation of AI’s negative effects, misperception of socio-economic risks, and weakening of democratic debate.

How does OpenAI justify this orientation?

The company emphasizes the need to build concrete solutions and to ensure the rapid and safe deployment of its technologies.

Which other key figures have left OpenAI for similar reasons?

William Saunders, Steven Adler, and Miles Brundage, notably, for reasons related to safety, research policy, and the management of psychological risks.

What is the importance of transparency in the development of artificial intelligence?

Transparency allows for balanced democratic debate, better regulation, and stronger public trust.