After leaving OpenAI, he launches a $1 billion quest to build a revolutionary AI

Adrien

February 1, 2026


In 2026, artificial intelligence is at a major turning point, driven by the bold vision of Eric Zelikman. A former pillar of OpenAI, he chose a quiet but determined departure, refusing the spotlight and grand announcements in order to pursue an unprecedented ambition. He is not simply seeking to improve AI as we know it, but to reinvent it around a central idea: AI can only reach its full potential if it learns to facilitate human collaboration by better understanding the social and decision-making dynamics that govern collective work.

In a context where industry giants like OpenAI, Google, or Meta rely on increasingly powerful and efficient AI models for individual tasks, Zelikman takes an opposite stance. He raised $480 million in record time without even presenting a public prototype, convincing investors and experts thanks to his humanistic and pragmatic vision. His startup, Humans&, is now aiming for a $1 billion funding round to develop a system capable not only of responding or coding, but of truly “orchestrating” human interactions. This bold quest raises a fundamental question: how can an artificial intelligence truly revolutionize the way we work together without reproducing or amplifying the current difficulties of group work?

A strategic departure from OpenAI to reshuffle the cards of collaborative artificial intelligence

Eric Zelikman’s departure from OpenAI did not take place amid the media turmoil that usually accompanies major changes in tech. He left one of the most coveted AI labs with calculated discretion, at a time when the company dominated the market and was the envy of the entire industry. This decision was not a whim but the consequence of a deep divergence over the very trajectory AI should take. While many focused their efforts on creating solitary AIs capable of extreme cognitive performance, Zelikman sounded the alarm: current AI, brilliant as it may be, struggles to grasp the essence of human collective work.

The observation is simple but its consequences are far-reaching. He sums up his thinking: “AI does not lack technical intelligence; it knows how to code, respond, and analyze. What it still does not know is how to manage the complexity of human interactions, arbitrate conflicts, and advance decision-making over time with multiple actors.” This inability to understand coordination and group dynamics limits its real impact within companies and organizations. The gap partly explains why current AI solutions, despite their technical success, struggle to integrate deeply into everyday professional life.

By leaving OpenAI, Eric Zelikman took a calculated risk, betting on a future where artificial intelligence would no longer be a top-performing individual force, but a catalyst for collaboration. This paradigm shift opens up a markedly different direction for the sector’s future, reconciling cognitive power and social intelligence.

The revolutionary vision of Humans&: an AI that supports human collaboration

Following this departure, Zelikman founded Humans&, an atypical startup whose goal is not to launch a “super AI” capable of doing everything, but to build a system that understands the complexity of human interactions within groups. Rather than an isolated AI, Humans& aims for collective intelligence, a “connective tissue” between machines and humans. This AI is meant not only to produce answers but to serve as a dynamic interface capable of monitoring and supporting collaborative processes over time.

In this spirit, the AI developed by Humans& integrates several key innovations. First, it applies long-term reinforcement learning, where the machine observes, plans, and adapts its interventions according to the evolution of human dynamics. This training mode is fundamental for building an AI that does not settle for a one-off interaction but can guide a project or decision over several weeks.
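The article gives no implementation details, but the idea of long-horizon learning can be pictured with a toy sketch: an agent whose reward (the project’s outcome) arrives only at the end of a multi-step episode, so credit must flow back to each earlier intervention. Everything below, the action names, the simulated environment, the `LongHorizonAgent` class, is hypothetical illustration, not Humans&’s system.

```python
import random

random.seed(0)  # deterministic run for the sketch

# Hypothetical interventions a collaboration AI might choose each week.
ACTIONS = ["summarize_decisions", "flag_conflict", "stay_silent"]

class LongHorizonAgent:
    def __init__(self, epsilon=0.1):
        self.values = {a: 0.0 for a in ACTIONS}  # estimated long-term value
        self.counts = {a: 0 for a in ACTIONS}
        self.epsilon = epsilon

    def act(self):
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.values, key=self.values.get)

    def learn(self, episode, outcome):
        # Credit the final project outcome back to every action taken:
        # a crude form of long-horizon credit assignment.
        for action in episode:
            self.counts[action] += 1
            n = self.counts[action]
            self.values[action] += (outcome - self.values[action]) / n

agent = LongHorizonAgent()
for project in range(200):                    # many simulated projects
    episode = [agent.act() for _ in range(5)]  # five weekly interventions
    # Hypothetical environment: flagging conflicts early improves the outcome.
    outcome = episode.count("flag_conflict") / 5
    agent.learn(episode, outcome)

best = max(agent.values, key=agent.values.get)
```

The point of the sketch is only the shape of the problem: the agent never sees a per-action reward, yet over many projects it still learns which interventions pay off weeks later.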

Secondly, Humans& chose a multi-agent architecture, meaning that several artificial intelligences interact not only among themselves but also constantly with human users. This better reflects the reality of professional environments, where decisions and compromises often arise from complex negotiations among various parties with divergent interests.

Finally, a crucial element is this AI’s persistent memory. Unlike classic models, Humans& allows the machine to remember previous episodes, past agreements, as well as tensions or shifts in mindset within teams. This “living” memory enables the AI to avoid repeating mistakes and to contextualize its advice, creating true continuity in collective work.
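Persistent memory, as described here, can be sketched in a few lines: a store that survives across sessions and lets past decisions and tensions be recalled later. This is purely illustrative; the `TeamMemory` class and its JSON file are assumptions, not Humans&’s implementation.

```python
import json
from datetime import date

# Illustrative sketch only: a minimal persistent team memory that records
# decisions and tensions so later advice can be contextualized.
class TeamMemory:
    def __init__(self, path="team_memory.json"):
        self.path = path
        try:
            with open(path) as f:
                self.events = json.load(f)   # history survives across sessions
        except FileNotFoundError:
            self.events = []

    def record(self, kind, summary):
        self.events.append({"date": date.today().isoformat(),
                            "kind": kind, "summary": summary})
        with open(self.path, "w") as f:
            json.dump(self.events, f)

    def recall(self, kind):
        # Retrieve past episodes of a given kind ("decision", "tension", ...).
        return [e["summary"] for e in self.events if e["kind"] == kind]

memory = TeamMemory()
memory.record("decision", "Adopted weekly design reviews")
memory.record("tension", "Disagreement over API ownership")
past_tensions = memory.recall("tension")
```

Because the history is reloaded on each start, a new session can surface last month’s unresolved tension instead of starting from a blank slate, which is exactly the “functional amnesia” the article says current tools suffer from.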

Such a system inaugurates a new era where artificial intelligence becomes a true ally of teamwork and coordinated decision-making, going beyond simple virtual assistants.

The concrete benefits of a collaboration-oriented AI

The expected impact of Humans& goes beyond the technical framework to directly affect organizational methods in companies. For example:

  • Reduction of unresolved conflicts: AI can identify early sources of tension or blockages within a team, proposing enlightened arbitrations or facilitating communication.
  • Improvement of decision tracking: Thanks to persistent memory, every step of a process is retained, allowing better traceability and accountability.
  • Increase in collective productivity: By optimizing exchanges and avoiding redundant efforts, the team can move more efficiently towards shared goals.

Through these innovations, Humans& does not just automate or assist but redefines collaboration, opening the way to a truly social artificial intelligence.

A record fundraising that illustrates the magnitude of trust in this project

Barely founded, Humans& distinguished itself with an exceptional $480 million fundraising round at an impressive valuation of $4.48 billion. This funding comes from prestigious investors such as Ron Conway of SV Angel, Nvidia, Jeff Bezos, and Alphabet’s GV. The record illustrates both the trust in the project’s relevance and the market’s appetite for an AI that goes beyond mere calculation or text generation.

It is remarkable that this financial enthusiasm emerged before Humans& had revealed any product or prototype. The situation highlights a new trend in the startup ecosystem: investors now bet on ideas, strategic visions, and the quality of teams more than on finished products. They seek to claim a central role in what Zelikman calls the “connective layer” of the digital future.

The presence of Nvidia, a leader in AI-specialized hardware, is no coincidence. It signals that Humans& will require massive computing power and that it is engaged in an intense technological competition to build architectures suited to its ambitions.

Humans& versus the giants: a disruption announced in collaboration tools

Humans& does not aim to compete directly with classic collaborative tools such as Slack, Notion, or Google Docs but to disrupt the way these platforms operate. All these tools rely on a fragmented approach: separate conversations, independent documents, and often disconnected management of real human processes.

Humans&’s strategy goes deeper: redefining collaboration by introducing a layer of social intelligence capable of harmonizing divergences, participating in informal team governance, and tracking the evolution of decisions over time. This ambition poses a major threat to traditional providers and large labs that develop their own AIs without rethinking the very structure of human cooperation.

Anthropic, Google, or OpenAI certainly work on AIs capable of collaborative tasks but remain attached to models originally designed for individual interactions. Humans& takes the opposite stance: starting from social intelligence as a foundation, a bold bet that could disrupt the sector’s paradigms.

Ethical implications and the invisible power of a coordination AI

The promise of an artificial intelligence capable of arbitrating human relationships, memorizing past tensions, and influencing collective decisions raises fundamental questions. Who defines the criteria for what is “good” for the group? Where does assistance stop and manipulation begin? These questions are not only theoretical but essential to the trust users will place in such technology.

Eric Zelikman states that Humans& aims to “augment” humans, not to dispossess them of their power. However, embedding an invisible layer of coordination can quickly become a source of opaque control, where strategic decisions are influenced by an algorithm that no one fully understands. It is a delicate balance between utility and influence, between transparency and behind-the-scenes operation.

The startup will therefore also have to adopt a rigorous approach to technology ethics, guarantee algorithmic accountability, and offer users real control over the AI’s functioning and recommendations. This dual technical and ethical mission is certainly one of the major challenges of this billion-dollar quest.

Some essential ethical issues:

  • Transparency of algorithmic decisions: Users must understand how and why the AI influences certain actions
  • Respect for privacy: Persistent memory raises questions about personal data and confidentiality
  • Limits of influence: Clarity on the boundaries between aiding decisions and taking control
  • Shared responsibility: Clear attribution of human and algorithmic responsibilities in case of error or conflict

The technicality behind Humans&: an AI designed to last and adapt

The core of the project is based not only on a new conceptual approach but on major technical breakthroughs. Long-term reinforcement learning allows the development of models that go beyond static responses to evolve with the users’ environment. This learning mode gives the AI the ability to integrate continuous feedback, adapt its strategies, and correct its actions in real time.

Multi-agent reinforcement learning introduces complex interactions among various digital agents, each potentially representing different aspects or stakeholders of a project. These intersecting interactions simulate the real functioning of a human organization, where divergent interests must find common ground. This complexity is necessary for the AI to understand the negotiations, compromises, and subtle arbitrations that are both the richness and the difficulty of group work.
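To make the idea concrete, here is a deliberately simplified sketch of mediated negotiation: several agents with divergent preferences over one shared decision, and a mediator that searches for the least-dissatisfying compromise. The agent names, preferences, and `mediate` function are hypothetical, and far simpler than actual multi-agent reinforcement learning.

```python
# Illustrative sketch only: three stakeholders with divergent preferences
# over a shared decision (here, a launch week), and a mediator that picks
# the option minimizing total dissatisfaction.

AGENTS = {
    "engineering": 10,   # each agent's preferred launch week
    "marketing": 4,
    "support": 7,
}

def dissatisfaction(agent_pref, option):
    # A crude linear cost: how far the option sits from the preference.
    return abs(agent_pref - option)

def mediate(options):
    # Choose the option with the lowest summed dissatisfaction: a toy
    # stand-in for the negotiated compromises the article describes.
    return min(options, key=lambda o: sum(dissatisfaction(p, o)
                                          for p in AGENTS.values()))

compromise = mediate(range(1, 13))  # candidate weeks 1 through 12
```

Even this toy version shows the structural point: no agent gets its first choice, and the interesting work is in the arbitration rule, not in any single agent’s intelligence.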

Finally, persistent memory, in other words the ability to keep a detailed and exploitable history, prevents the AI from reproducing the functional amnesia that penalizes current tools. It enables continuity and coherence in decisions, even after several weeks or months, a major asset for companies facing complex deadlines and stakes.

Each core technology, its objective, and its key benefit:

  • Long-term reinforcement learning: continuous monitoring and adaptation, for durable support of projects
  • Multi-agent reinforcement learning: interaction between several AIs and humans, for realistic management of negotiations and conflicts
  • Persistent memory: preservation of decision-making history, for continuity and coherence of decisions

A billion dollars to revolutionize human collaboration through AI

While Humans& has already crossed an impressive milestone with nearly half a billion dollars in funding, a new step is approaching: raising a billion dollars to realize its global vision. This extraordinary amount reflects the ambition to build a technological and organizational infrastructure capable of operating on a global scale, bringing together human teams and artificial intelligences in a permanent and effective dialogue.

This sum will serve not only to strengthen technical capabilities, especially in high-performance computing, but also to attract talents from the best labs such as Google, Meta, Anthropic, OpenAI, or DeepMind. The current team, already composed of prestigious figures such as Georges Harik (ex-Google) and Noah Goodman (Stanford), will be expanded to accelerate research and development.

This massive funding also responds to a strategic necessity: anticipating the coming computing war among the world’s largest AI players, where the ability to process complex data in real time will determine the future of the industry. For Zelikman, the challenge is not to have an isolated AI with superhuman intelligence, but an infallible social intelligence in the service of humans.

Perspective on the future of social and collaborative artificial intelligence

As artificial intelligence continues to progress at the rapid pace expected in 2026, the vision carried by Humans& offers a renewed, more human and pragmatic direction. Rather than aiming for a solitary super-intelligence disconnected from realities on the ground, it is about creating an AI integrated into collective life, capable of managing tensions, facilitating decisions, and aligning efforts toward common goals.

This technological evolution is accompanied by a profound cultural shift in how AI is envisioned. The future will not be one where the machine replaces humans, but one where it becomes an intelligent mediator, strengthening collective competence and helping overcome the inherent weaknesses of human interactions.

It remains to be seen whether Humans& will succeed in conquering this ambitious market and establishing a new standard, but the proposed model already marks a clear break in the ongoing revolution. The coordination of people, hitherto the Achilles’ heel of technological projects, now becomes the main issue to address in building the promised revolutionary AI.

Why did Eric Zelikman leave OpenAI?

He left OpenAI because he believed current AI fell short in understanding human dynamics and collaboration, which he sees as crucial to solving real collective problems.

What is the uniqueness of the AI developed by Humans&?

This AI relies on long-term learning, multi-agent reinforcement learning, and persistent memory to sustainably support human collaboration in collective decision-making.

Why did Humans& raise so much money without a product?

Investors bet on the vision of a new form of AI capable of building the key coordination layer between humans and machines, a central strategic position for the future.

What are the main ethical challenges related to this AI?

The questions revolve around transparency, privacy protection, the boundary between assistance and influence, and accountability in case of dispute or error.

How could this project change collaboration in companies?

It could transform work processes by improving conflict management, decision tracking, and collective productivity thanks to an AI that understands and coordinates human interactions.
