As artificial intelligence (AI) becomes increasingly embedded in Human Resources (HR) services, the security of sensitive data has become a crucial issue. In 2026, the Mercor case highlighted the vulnerabilities inherent in training AI models on internal, potentially confidential information. An open supply chain, a multiplicity of external stakeholders, and heavy reliance on independent subcontractors have turned HR data into a major entry point for cyberattacks. The use of open source platforms and sometimes lax management of security protocols expose companies to multiple threats, and they must now rethink their data governance to preserve confidentiality and regulatory compliance.
Faced with this dual challenge of innovation and protection, both IT and HR departments must integrate cybersecurity from the design phase of artificial intelligence projects. Beyond purely technical aspects, this involves managing the entire human chain involved in model training: trainers, annotators, external platforms, and third-party tools. This holistic approach is essential to ensure the protection of sensitive data, guarantee compliance with AI ethics rules, and meet GDPR requirements, while benefiting from the efficiency gains brought by artificial intelligence in talent management and strategic decision-making.
- 1 The Major Risks Related to Training AI Models in Human Resources
- 2 Why Is the Governance of Sensitive Data Crucial in AI Applied to HR?
- 3 Best Practices for Protecting Sensitive Data in HR AI Training
- 4 How AI Transforms Human Resource Management While Increasing Cyberattack Risks
- 5 Regulatory and Ethical Challenges of AI Use in HR Management
- 6 Impacts of the AI Supply Chain on HR Security: The Mercor Case
- 7 Transforming Risk Management in AI for HR
- 8 Balancing AI Performance and Confidentiality Respect: Emerging Technical Solutions
- 9 Training and Raising Awareness of AI Issues in HR to Guarantee Security and Ethics
The Major Risks Related to Training AI Models in Human Resources
The Mercor incident revealed a critical flaw in the artificial intelligence ecosystem applied to Human Resources. Training models requires large volumes of data, often personal in nature, and outsourcing their preparation to poorly controlled providers poses a fundamental information security problem.
The multiplicity of actors involved in the collection, annotation, and validation of data creates a significant exposure surface. For example, poorly informed independent workers may handle internal exchanges, resumes, evaluations, or work histories without knowing confidentiality standards or regulatory requirements. This opacity harms traceability and weakens data governance within companies.
The use of open source tools, such as the LiteLLM project used by Mercor, also exposes infrastructures to technical risks. These software packages are regularly updated by external communities, sometimes without thorough security audits, creating vulnerabilities exploitable by malicious actors. These flaws can compromise not only internal data but also exchanges between humans and AI systems, as demonstrated by the compromise of Slack exchanges during the attack.
Another major threat lies in the nature of the data themselves. The information present in training databases often includes personal addresses, unique identifiers, or even social security numbers. Their exposure not only jeopardizes the individual confidentiality of employees but also entails a significant legal and reputational risk for the company. A leak can affect trust within teams, disrupt the employer brand, and generate sanctions in case of non-compliance with regulatory obligations.
The human dimension adds a further risk factor. Workers involved in training, often hired for precarious, dispersed assignments, are sometimes poorly aware of cybersecurity issues. High turnover, insufficient qualification in data protection, and the absence of clear contracts make it difficult to maintain a rigorous security policy. These human weaknesses compound the technical and organizational vulnerabilities.
Here is a condensed list of the main risks related to AI model training in HR:
- Multiplicity and low control of external stakeholders
- Obsolescence and vulnerabilities of open source tools
- Exposure of sensitive information and critical personal data
- Lack of awareness and training of workers in AI
- Absence of traceability and rigorous governance
- Legal risk related to non-compliance with GDPR and other standards
- Reputational consequences on the employer brand
These risks underline the necessity for HR and IT departments to collaborate closely to anticipate and mitigate these vulnerabilities. Appropriate governance is essential to transform the management of sensitive data into a secured advantage.
Why Is the Governance of Sensitive Data Crucial in AI Applied to HR?
The question of data governance lies at the heart of current debates on AI in Human Resources. The increasing sophistication of models exploiting millions of data points forces companies to adopt systematic and cross-functional approaches.
Governance covers a multitude of aspects, from precise mapping of data flows, defining secure access protocols, to controlling external partners and managing legal risks. However, this responsibility cannot rest solely on IT teams. HR departments must also incorporate this dimension into their strategy.
A fundamental element is to accurately identify the data used for training models to minimize their volume and sensitivity level. This involves respecting the GDPR minimization principle, which requires processing only strictly necessary information. For example, when selecting resumes or evaluations, only essential data should be included and anonymized if possible.
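The minimization principle described above can be sketched in code. The snippet below is a minimal Python illustration under assumed field names (there is no real HR schema here): it drops everything except the fields needed for training and replaces the direct identifier with a keyed pseudonym.

```python
import hashlib
import hmac

# Illustrative field names, not a real HR schema.
REQUIRED_FIELDS = {"skills", "years_experience", "education_level"}
DIRECT_IDENTIFIERS = {"name", "email", "social_security_number", "address"}

# Key for pseudonymization; in production it would live in a secrets
# manager and be rotated, never hard-coded.
PSEUDONYM_KEY = b"rotate-me-regularly"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def minimize_record(record: dict) -> dict:
    """Keep only fields required for training; pseudonymize the identifier."""
    out = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    # A pseudonymous ID still allows deduplication or deletion on request
    # without storing the identity itself in the training set.
    out["subject_id"] = pseudonymize(record["email"])
    return out

resume = {
    "name": "Jane Doe", "email": "jane@example.com",
    "address": "1 Main St", "skills": ["python", "sql"],
    "years_experience": 7, "education_level": "master",
}
clean = minimize_record(resume)
assert DIRECT_IDENTIFIERS.isdisjoint(clean)
```

Keyed hashing is a pseudonymization technique, not full anonymization: under GDPR, pseudonymized data remains personal data, which is why access to the key itself must also be governed.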
Then, establishing clear and demanding contracts with providers is essential. These agreements must include guarantees on security, confidentiality, but also on the working conditions of stakeholders. Thus, any external intervention is carried out within a transparent and controlled framework.
To ensure this governance, several best practices emerge:
- Regular security and compliance audits
- Continuous training of internal and external teams on cybersecurity issues
- Implementation of secure and isolated training environments
- Use of advanced encryption protocols for data transmission and storage
- Adoption of traceability tools that allow real-time tracking of data usage
- Development of strict internal policies combining technical requirements and ethical rules
This last point highlights the importance of AI ethics: it is not only about protecting data but also ensuring fairness, transparency, and accountability of automated decisions. In the HR context, this includes preventing discriminatory biases during automated talent file analysis or guaranteeing human decision-making as the final step.
| Governance Aspect | Objectives | Concrete Examples |
|---|---|---|
| Data Mapping | Identify all sources and flows of sensitive data | Internal dashboards cataloging resume databases, evaluations, and HR histories |
| Provider Control | Ensure compliance with security standards and working conditions | Contract clauses including regular audits and planned training |
| Data Minimization | Reduce the volume and sensitivity of the data used | Anonymization of personal data in training datasets |
| Technical Security | Protect against intrusions and leaks | Use of virtual private networks (VPNs) and data encryption |
| AI Ethics | Ensure transparency and fairness in HR decisions | Regular reports on bias reduction and human recourse in automated decisions |
By taking these measures, organizations can not only comply with their legal obligations but also strengthen the trust of employees and partners in their AI-based digital processes.
Best Practices for Protecting Sensitive Data in HR AI Training
Ensuring data protection during AI model training has become a strategic issue. At a time when regulations are tightening and attacks are increasing, companies must deploy a coordinated set of practices aimed at securing every step of the process.
The first priority remains securing data access: limit the number of stakeholders to a strict minimum, enforce strengthened authentication, and monitor all data movements in real time using advanced monitoring tools. The goal is also to avoid the overexposure caused by dispersing data across multiple platforms.
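The least-privilege idea behind this can be sketched as a minimal role-permission check; the roles, permissions, and in-memory log below are illustrative assumptions, not any real product's access model.

```python
# Minimal least-privilege sketch for HR training data access.
# Roles and permissions are hypothetical examples.
ROLE_PERMISSIONS = {
    "annotator": {"read_anonymized"},
    "data_engineer": {"read_anonymized", "export_dataset"},
    "dpo": {"read_anonymized", "read_identified", "audit"},
}

ACCESS_LOG = []  # in production: an append-only, centralized audit sink

def authorize(user: str, role: str, action: str) -> bool:
    """Allow an action only if the role grants it; log every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    ACCESS_LOG.append({"user": user, "role": role,
                       "action": action, "allowed": allowed})
    return allowed

assert authorize("alice", "annotator", "read_anonymized") is True
assert authorize("bob", "annotator", "read_identified") is False
```

Logging denied attempts as well as granted ones is deliberate: refused accesses are often the earliest signal of a compromised or misused account.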
Moreover, training and awareness for teams, both internal and external, constitute an essential barrier. Deploying a technical solution is no longer sufficient if users are unaware of best practices or of the seriousness of the risks. These programs can include specific modules on GDPR standards, cybersecurity protocols, and AI ethics principles.
Another effective lever is conducting regular attack simulations and penetration tests. These exercises quickly identify weak points in AI-related architectures and processes. This feedback nourishes the continuous improvement loop and strengthens system resilience.
Here is a list of the main best practices to apply:
- Implementation of multi-factor authentication protocols
- Use of compartmentalized environments for training (sandboxing)
- Systematic encryption of data at rest and in transit
- Detailed audit logs to trace every data manipulation
- Rigorous evaluation process for providers and subcontractors
- Implementation of internal policies compliant with GDPR and CNIL recommendations
- Proactive anomaly monitoring enabled by AI itself
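The multi-factor authentication item in the list above usually rests on one-time codes. As a concrete sketch, here is the HOTP algorithm (RFC 4226), which underlies TOTP authenticator apps, implemented with the Python standard library and checked against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890"
secret = b"12345678901234567890"
assert hotp(secret, 0) == "755224"
assert hotp(secret, 1) == "287082"
```

TOTP (RFC 6238), the time-based variant used by most authenticator apps, applies the same function with `counter = unix_time // 30`.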
A notable example is that of a European banking company that adopted a completely isolated training environment, combining homomorphic encryption and human supervision to avoid any exposure of internal data. This advanced technical solution has made it possible to reconcile AI efficiency with strict respect for confidentiality rules.
In summary, the protection of sensitive data in AI training is based on a defensive approach combining technical security, human awareness, and regulatory compliance. It is this global strategy that guarantees the sustainability of AI projects in a secure HR environment.
How AI Transforms Human Resource Management While Increasing Cyberattack Risks
The integration of AI in Human Resource management revolutionizes daily practices: recruitment, performance tracking, workforce planning, and automation of administrative tasks gain efficiency and precision. However, this digital transformation is accompanied by a notable increase in cybersecurity risks.
Publishers of AI-based HR tools develop powerful training models capable of analyzing enormous volumes of internal data, sometimes very sensitive, to anticipate talent needs or objectively evaluate employees. Yet, the concentration of this information on centralized platforms encourages cybercriminals to more frequently target these systems. The Mercor case is a striking example.
Automation of decisions, one of AI’s major contributions, must be accompanied by rigorous risk management. Indeed, an erroneous or biased algorithmic decision can not only cause discrimination but also degrade social climate. Thus, information security no longer concerns only protection against intrusions but also guarantees reliable, ethical, and compliant information.
At the same time, digital transformation requires constant adaptation of human resources themselves. HR and IT teams must now develop skills in cybersecurity and risk management, which affects their working methods, their tools, and their continuous training.
Here is a summary table of AI impacts on HR management and associated risks:
| AI Transformation in HR | Key Benefits | Risks and Challenges |
|---|---|---|
| Recruitment Automation | Time savings, better application analysis | Algorithmic biases, leakage of CV data |
| Talent Workforce Planning | Optimized staffing, anticipation of needs | Exposure of sensitive data, prediction errors |
| Performance Tracking | Increased transparency, improved decision-making | Privacy violations, database security |
| Automation of Administrative Tasks | Error reduction, speed of execution | System error risks, technical vulnerabilities |
In light of these challenges, organizations must imperatively integrate strategic reflection combining cybersecurity, confidentiality, ethics, and GDPR standards. By doing so, AI in HR can become a true lever of excellence while managing associated risks.
Regulatory and Ethical Challenges of AI Use in HR Management
The application of artificial intelligence in Human Resources raises major regulatory challenges, particularly under the lens of GDPR and new European directives. In 2026, CNIL reinforced its recommendations to strictly regulate automated processing of personal data, especially in sensitive contexts such as recruitment, career management, and disciplinary decision-making.
Data collection, processing, and storage must be carried out within a strictly compliant framework, based on solid legal grounds such as explicit consent or the legitimate interest of the employer. Training AI models often complicates this process, as it involves massive and sometimes unclear use of datasets containing sensitive personal data.
An ethical challenge adds to these legal requirements: how to ensure that algorithms do not reproduce, or even amplify, social biases (gender, origin, age) that could turn HR decisions into discrimination? Model transparency and the need for human supervision in critical processes then become essential.
To address these challenges, here are the key axes that HR stakeholders must consider:
- Regular compliance audits of AI models on data protection aspects
- Detailed documentation of processing and explainability of algorithms
- Establishment of ethical committees dedicated to AI use in HR
- Mandatory training of HR teams on ethical and legal AI principles
- Systematic recourse to human supervision before any automated decision
- Strict application of minimization principle and consent framework
Ethics and regulation should not be seen as obstacles but as levers to strengthen trust and legitimacy of AI projects in HR. The balance between technological innovation and respect for individual rights conditions the success and durability of these initiatives.
Impacts of the AI Supply Chain on HR Security: The Mercor Case
The Mercor incident is a textbook case illustrating the complexity and risks of the supply chain in the artificial intelligence sector applied to Human Resources. Mercor, a major player in AI model training, relies on a heterogeneous network of independent contractors, subcontractors, and open source platforms, exposing sensitive data to multiple risks.
The technical flaw related to the open source project LiteLLM allowed malicious actors to access Slack exchanges and information exchanges between humans and AI. These compromises illustrate the absence of rigorous controls over tools and information flows transiting through these partners.
Behind this technical flaw lies an important social issue: the precarious working conditions of stakeholders who participate in model training. These workers, often independent, juggle several missions, without appropriate cybersecurity training or visibility on the purpose of the data handled. This human factor increases the intrinsic vulnerability of the AI supply chain.
The swift reaction of major clients, such as Meta, which suspended its collaboration with Mercor, shows that securing this supply chain is as strategic an issue as protecting industrial secrets. Indeed, the exposure of a single company can have cascading repercussions across the entire sector.
To limit these risks, it is essential that client companies:
- Conduct a rigorous assessment of HR practices and security protocols of partners
- Request implementation of verifiable operational and organizational security measures
- Require mandatory cybersecurity and ethics training for all stakeholders
- Promote clear contracting with specific clauses related to data protection
- Adopt a proactive approach to monitoring and continuous auditing of compliance
This heightened vigilance helps secure the supply chain and thus guarantees the protection of sensitive data entrusted to AI actors in HR.
Transforming Risk Management in AI for HR
In the era of artificial intelligence, risk management in Human Resources can no longer rely on traditional approaches. The Mercor incident reminds all professionals in the sector that AI model training introduces new vulnerabilities that must be identified and controlled.
Within this framework, companies must adopt an integrated approach combining cybersecurity, regulatory compliance, data governance, and ethics. Among key strategies, implementing proactive risk management is imperative. This includes, among others:
- Detailed mapping of risks related to sensitive data and training models
- Definition of action plans and rapid response to incidents
- Implementation of automated monitoring systems integrating AI tools in cybersecurity
- Recourse to strong partnerships between HR, IT, and compliance teams
- Adoption of collaborative approaches with suppliers and subcontractors to secure the chain
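The automated-monitoring item above can be illustrated with a deliberately simple detector. The sketch below flags days whose activity deviates strongly from the baseline on a hypothetical metric (daily dataset downloads); production monitoring stacks use far richer models, but the principle of baseline-plus-deviation is the same.

```python
import statistics

def flag_anomalies(daily_downloads, threshold=2.5):
    """Return indices of days whose volume deviates strongly from the mean.

    A simple z-score detector: illustrative only, with a hypothetical
    metric and threshold.
    """
    mean = statistics.mean(daily_downloads)
    stdev = statistics.pstdev(daily_downloads)
    if stdev == 0:
        return []  # perfectly flat history: nothing to flag
    return [i for i, v in enumerate(daily_downloads)
            if abs(v - mean) / stdev > threshold]

# Nine ordinary days, then a suspicious spike in dataset downloads.
history = [102, 98, 110, 95, 105, 99, 101, 103, 97, 900]
assert flag_anomalies(history) == [9]
```

In practice such an alert would feed the incident-response plans mentioned above rather than block access automatically, since false positives on legitimate bulk operations are common.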
A concrete example concerns a digital services company that developed a dedicated dashboard for AI risk management. This dashboard centralizes alerts related to data management, abnormal behavior on training platforms, and possible contractual breaches. This increased visibility has prevented several intrusion attempts and data leaks.
This transformation in risk management is also an opportunity for HR functions to strengthen their strategic role. By anticipating security and data protection challenges, they actively contribute to the sustainability of AI innovations and the creation of a climate of trust within organizations.
Balancing AI Performance and Confidentiality Respect: Emerging Technical Solutions
The tension between the need to train complex models and the need to protect sensitive data forces companies to explore new technical solutions. Several recent advances in 2026 help reconcile these two requirements, which are often perceived as antagonistic.
Homomorphic encryption is a promising technology that allows calculations directly on encrypted data. This approach limits the exposure of sensitive information during model training. Many large companies are currently exploring this technique to strengthen their security.
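The core property, computing on data without decrypting it, can be shown with a toy example. Below is a miniature Paillier cryptosystem (an additively homomorphic scheme) with tiny primes; this is an illustration of the principle only, not a secure implementation, and real deployments use dedicated libraries and much larger keys.

```python
from math import gcd

# Toy Paillier cryptosystem: tiny primes, illustration only, NOT secure.
p, q = 11, 13
n = p * q                                       # public modulus
n2 = n * n
g = n + 1                                       # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # lambda^-1 mod n

def encrypt(m: int, r: int) -> int:
    """c = g^m * r^n mod n^2, with random r coprime to n."""
    assert 0 <= m < n and gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1)//n."""
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1 = encrypt(5, r=7)
c2 = encrypt(12, r=23)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
assert decrypt((c1 * c2) % n2) == 17
```

Paillier only supports addition on ciphertexts; the fully homomorphic schemes explored for model training are lattice-based and far costlier, which is why this approach is still emerging rather than standard practice.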
Another approach is “federated learning,” which consists of training a shared model from several decentralized data sources, without ever sharing raw data. Each participant performs partial training locally, and only model parameters are transmitted and aggregated. This method considerably reduces the risks of exfiltration.
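The mechanics of this exchange can be sketched in a few lines. The example below uses a toy one-weight-per-feature linear model and a hypothetical two-site setup; only the updated parameters leave each site, never the raw records.

```python
# Minimal federated-averaging sketch: each site trains locally and
# transmits only model parameters; private records never leave the site.

def local_update(weights, records, lr=0.1):
    """One gradient step of least-squares on this site's private data."""
    new = list(weights)
    for features, target in records:
        pred = sum(w * x for w, x in zip(new, features))
        err = pred - target
        for j, x in enumerate(features):
            new[j] -= lr * err * x
    return new          # only these parameters are ever transmitted

def federated_average(updates, sizes):
    """Aggregate site updates, weighted by each site's dataset size."""
    total = sum(sizes)
    return [sum(u[j] * s for u, s in zip(updates, sizes)) / total
            for j in range(len(updates[0]))]

global_w = [0.0, 0.0]
site_a = [([1.0, 0.0], 2.0)]          # private to site A
site_b = [([0.0, 1.0], 4.0)]          # private to site B
u_a = local_update(global_w, site_a)
u_b = local_update(global_w, site_b)
global_w = federated_average([u_a, u_b], sizes=[len(site_a), len(site_b)])
```

Note that transmitted parameters can still leak information about training data in some settings, which is why federated learning is often combined with techniques such as differential privacy or the secure aggregation protocols mentioned in this section.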
The adoption of isolated virtual environments (or sandboxing) and the implementation of strict source code verification processes complete the technical arsenal. In addition, the integration of blockchain traceability solutions is also beginning to emerge to guarantee the integrity and provenance of data used in AI.
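The traceability idea can be illustrated with a minimal append-only hash chain, the core mechanism behind blockchain-style integrity: each entry commits to the previous one, so any later tampering breaks verification. The event fields below are illustrative.

```python
import hashlib
import json

def append_entry(chain, event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain) -> bool:
    """Recompute every link; any altered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"actor": "annotator_17", "action": "download_batch_4"})
append_entry(log, {"actor": "annotator_17", "action": "upload_labels_4"})
assert verify(log)
log[0]["event"]["action"] = "nothing_happened"   # tampering...
assert not verify(log)                           # ...is detected
```

A full blockchain adds distribution and consensus on top of this chaining so that no single party can rewrite the log, which is what makes it attractive for proving the provenance of training data across the supply chain.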
Here is a summary table of the main evolving technologies to protect confidentiality during AI training:
| Technology | Features | Main Advantages |
|---|---|---|
| Homomorphic Encryption | Computation performed directly on encrypted data | Strong confidentiality, at the cost of significant computational overhead |
| Federated Learning | Decentralized model training | Reduction of sensitive data exfiltration risk |
| Sandboxing | Isolated environments for testing and training | Reduction of internal attack and leak risks |
| Blockchain for Traceability | Immutable recording of actions and data | Strengthening transparency and trust |
These advances point the way towards confidentiality-respecting, secure uses of AI. They encourage companies to rethink their technical architectures and training strategies to optimize both performance and data protection.
Training and Raising Awareness of AI Issues in HR to Guarantee Security and Ethics
One of the essential pillars for securing the use of AI in human resources lies in training and raising awareness among the concerned actors. Without a thorough understanding of issues related to sensitive data protection, risk management, and AI ethics, it becomes difficult to implement coherent and effective practices.
Training programs must cover several key dimensions: knowledge of regulations such as GDPR, good cybersecurity practices, ethical challenges, and the specific risks of training models. The goal is to anchor permanent vigilance and a shared sense of responsibility in corporate culture.
It is also essential to offer formats adapted to different profiles: technical sessions for IT and data science teams, and dedicated modules for HR managers so that they grasp the strategic implications.
Finally, continuous awareness can rely on innovative tools, such as serious games, incident simulations, or feedback on real cases. Integrating these elements into daily operations prevents risk trivialization and encourages proactive behavior.
- Training programs adapted to technical and managerial profiles
- Use of concrete case studies from recent incidents (e.g., Mercor)
- Regular simulations of incident management and audits
- Continuous monitoring and updating of knowledge in response to rapid evolution of AI and regulations
- Promotion of a strong ethical culture around data use
Through this approach, companies foster environments where artificial intelligence can develop with confidence, guaranteeing the protection of sensitive data and respect for the founding principles of HR ethics.