
EU AI Act — Recruitment

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. For recruitment and hiring, it introduces binding requirements for any AI system used in employment decisions — covering transparency, human oversight, audit trails, and candidate rights. Following a provisional agreement in May 2026 under the Digital AI Omnibus, the high-risk employment provisions are now expected to be enforceable from December 2027.

High-Risk Classification for Employment AI

The EU AI Act classifies AI systems used in recruitment, selection, and employment decisions as high-risk. This is not a marginal or edge-case designation — it is the same risk tier as AI used in medical devices, critical infrastructure, and law enforcement. The classification reflects the EU legislature's view that employment decisions are consequential enough to warrant binding oversight requirements.

The high-risk classification applies to AI used for: sorting or screening job applications; evaluating candidates in interviews; assessing suitability for promotion or role assignment; and monitoring employee performance. If your hiring process uses AI at any of these points, you are in scope.

What Employers Must Do

Transparency: candidates must be informed that AI is being used in their assessment, what it is assessing, and how the output will inform the hiring decision. Vague disclosures are insufficient — the information must be clear, specific, and provided before the assessment begins.

Human oversight: AI outputs cannot be the final decision-maker. A human decision-maker must review AI-generated scores or recommendations before any candidate selection or rejection decision is made. The human reviewer must have access to the underlying evidence — not just the score — and the ability to override AI recommendations.
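The oversight requirement can be pictured as a decision gate: no candidate outcome without a documented human review, and the reviewer's judgement always takes precedence. A minimal Python sketch, with all names and fields hypothetical rather than any platform's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiAssessment:
    candidate_id: str
    score: float        # AI-generated score
    evidence: str       # transcript excerpts the score is based on

@dataclass
class HumanReview:
    reviewer: str
    approved: bool
    override_score: Optional[float] = None  # reviewer may override the AI

def final_decision(assessment: AiAssessment, review: Optional[HumanReview]) -> float:
    """AI output alone never finalises a decision: a human review is
    required, and any override takes precedence over the AI score."""
    if review is None:
        raise ValueError("no candidate decision without documented human review")
    if review.override_score is not None:
        return review.override_score
    return assessment.score
```

The point of the sketch is structural: the AI score is one input to the gate, never the gate itself, and the reviewer sees the underlying evidence, not just the number.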

Audit Trails and Documentation

High-risk AI systems must maintain detailed records of their operation, including: the data used to train or configure the model; the scoring logic applied to each candidate; the outputs generated; and evidence of human review. These records must be retained for a period sufficient to allow post-decision audit or challenge.
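As an illustration of what one such record might contain, here is a minimal sketch in Python. The field names are hypothetical, not a schema prescribed by the regulation:

```python
import json
from datetime import datetime, timezone

def audit_record(candidate_id, question, response_transcript,
                 ai_score, rubric_version, reviewer):
    """One per-candidate entry in an audit trail: what was asked, what was
    answered, which scoring logic was applied, and who reviewed the result."""
    return {
        "candidate_id": candidate_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "response_transcript": response_transcript,
        "rubric_version": rubric_version,   # the scoring logic applied
        "ai_score": ai_score,               # the output generated
        "human_reviewer": reviewer,         # evidence of human review
    }

record = audit_record("c-123", "Describe a project you led.",
                      "(full transcript text)", 4.2, "v3.1", "hm-42")
print(json.dumps(record, indent=2))
```

Records like this, retained per decision, are what make a post-hoc audit or candidate challenge answerable with evidence rather than reconstruction.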

Employers must also conduct a conformity assessment demonstrating that the system meets EU AI Act requirements, and the system must be registered in the EU's database of high-risk AI systems. For most employers, this means working with an AI platform that takes on provider obligations and can supply the necessary technical documentation.

What the EU AI Act Does Not Ban

A common misconception is that the EU AI Act prohibits the use of AI in hiring. It does not. It prohibits the use of AI in hiring without appropriate transparency, oversight, and documentation. The regulation is designed to enable trustworthy AI use, not to prevent it.

Specifically prohibited, however, are: AI systems that use emotion recognition in employment contexts (analysing facial expressions, vocal tone, or physiological signals to infer emotional state); and AI systems that deploy subliminal techniques beyond a person's awareness, or exploit vulnerabilities, to materially distort a candidate's behaviour. These are absolute prohibitions: such systems cannot be made lawful through the oversight and transparency measures that apply to high-risk systems generally.

Timeline and Preparation

The original enforcement date for high-risk AI provisions was August 2026. In May 2026, the Council of the EU and European Parliament reached a provisional political agreement under the Digital AI Omnibus to extend this deadline to December 2, 2027. This agreement still requires formal adoption to become law — but given the proximity of the original deadline and the breadth of political support, passage is widely expected.

The extension provides additional preparation time, but employers should not treat it as a reason to pause compliance work. The requirements themselves have not changed — only the deadline. Organisations that begin auditing their AI tools, reviewing candidate disclosures, and engaging vendors on compliance readiness now will be well-positioned regardless of whether the December 2027 date is formalised as expected.

How Palantrix is built for EU AI Act compliance

Palantrix was designed from the ground up with the EU AI Act's requirements in mind. Every candidate is notified before their interview that AI will score their responses. Every scoring decision is based on transcript content — not facial expressions, vocal affect, or emotion recognition. Every score is reviewable and overridable by a human hiring manager. The full audit trail — questions asked, responses given, scores generated, human reviews recorded — is retained and accessible. Data is hosted on EU/Irish AWS infrastructure. Palantrix is not retrofitting compliance onto an existing system; it is the foundation the platform was built on.

See how AI Video Interviews work

Frequently Asked Questions

1. When does the EU AI Act apply to recruitment?

The original deadline was August 2, 2026. Following a provisional political agreement in May 2026 under the Digital AI Omnibus, the deadline for high-risk AI provisions — including employment and recruitment AI — has been extended to December 2, 2027, pending formal adoption. Employers should continue compliance preparations regardless: the requirements have not changed, only the timeline.

2. Does the EU AI Act apply to small companies?

Yes. The EU AI Act applies to any organisation deploying AI systems in scope — there is no SME exemption for high-risk applications. The compliance obligations fall on both the AI system provider and the employer deploying the system. Smaller organisations benefit from the fact that their AI vendor should take on a significant portion of the technical documentation and conformity assessment obligations.

3. What are candidates' rights under the EU AI Act?

Candidates have the right to be informed that AI is assessing them, to understand the basis of AI-generated outputs, and to request human review of any AI-based decision that affects them. Employers must be able to explain how a candidate was scored and provide access to the evidence behind that score. Decisions cannot be made solely on the basis of AI output without human review.

4. Is facial expression analysis in video interviews prohibited?

Yes, in employment contexts. The EU AI Act explicitly prohibits emotion recognition AI in workplace settings, with narrow exceptions for medical or safety purposes. Any video interview platform that analyses facial expressions, vocal tone, or physiological signals to infer emotional state or personality is non-compliant. Transcript-based analysis — evaluating the content of what candidates say — is permitted, provided the other high-risk requirements are met.

5. What happens if an employer is non-compliant?

Penalties under the EU AI Act are significant: up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited practices such as workplace emotion recognition, and up to €15 million or 3% of turnover for breaches of the high-risk obligations. Enforcement is the responsibility of national market surveillance authorities in each member state. Ireland's enforcement body has not yet been designated at time of writing, but enforcement is expected to be taken seriously given the Irish government's broader approach to digital regulation.
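The "whichever is higher" cap is a simple maximum of the fixed amount and the turnover percentage. A sketch with illustrative figures only, not a legal calculation:

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """'Whichever is higher' penalty ceiling: the fixed cap or a
    percentage of global annual turnover, whichever is larger."""
    return max(cap_eur, pct * turnover_eur)

# Illustration: a €35m cap / 7% tier applied to a €2bn global turnover.
# 7% of €2bn (€140m) exceeds the €35m floor, so the ceiling is €140m.
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```

For smaller firms the fixed amount dominates: the same tier applied to €100m of turnover yields a €35m ceiling, since 7% of turnover is only €7m.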