
Right to Explanation (AI Hiring)

The right to explanation in AI hiring refers to a candidate's legal entitlement to understand how an AI system assessed them and how that assessment influenced a hiring decision. This right is grounded in both the GDPR's provisions on automated decision-making and the EU AI Act's transparency requirements for high-risk AI in employment contexts.

The Legal Foundations

Two pieces of EU legislation establish the right to explanation in AI hiring. GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produces legal or similarly significant effects, including employment decisions, and the right to request meaningful information about the logic involved. These provisions have applied since May 2018.

The EU AI Act, whose obligations for high-risk AI systems apply from August 2026, reinforces and extends this right specifically for high-risk AI in employment. It requires that AI systems used in recruitment provide explainable outputs, that human review be available for every AI-influenced decision, and that the system's technical documentation be maintained in a form that allows regulatory audit.

What Employers Must Be Able to Explain

If a candidate asks how the AI assessed them, an employer must be able to provide: a description of what the AI was measuring and why those criteria are relevant to the role; the score or output the AI produced for that candidate; the input data — typically a transcript or assessment responses — on which the score was based; and a description of how the score influenced the hiring decision (whether it was a threshold, a ranking, or an input to a human review).
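The four components an employer must be able to provide could be captured in a simple record. A minimal sketch, assuming a hypothetical schema (none of these field names come from the regulations themselves):

```python
from dataclasses import dataclass


@dataclass
class AssessmentExplanation:
    """Hypothetical record of the components a meaningful explanation needs.

    Illustrative only: the GDPR and EU AI Act mandate the information,
    not any particular data structure.
    """
    criteria: dict[str, str]   # what the AI measured -> why it is role-relevant
    score: float               # the output the AI produced for this candidate
    input_data: str            # e.g. transcript or assessment responses
    decision_use: str          # threshold, ranking, or input to a human review

    def is_complete(self) -> bool:
        # An explanation missing any component is not "meaningful".
        return bool(self.criteria and self.input_data and self.decision_use)
```

A compliance checklist built this way makes it easy to spot, before a request ever arrives, whether any component of the explanation is missing.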

Explaining that 'an AI assessed you' without being able to describe what it assessed or how is not sufficient. The explanation must be meaningful — specific enough that the candidate can understand the basis of the assessment and, where they believe it is inaccurate, identify the specific point they wish to contest.

Human Review as Part of the Right

The right to explanation is paired with the right to human review. Under both GDPR and the EU AI Act, candidates can request that a human decision-maker review any AI-generated assessment that influenced an employment decision. This human reviewer must have genuine authority to override the AI output — a rubber-stamping process does not satisfy the requirement.

In practice, this means hiring processes must be designed so that human review is a real, accessible step — not a nominal safeguard that exists on paper but is never invoked. Employers should document how human review requests are handled, and retain records of reviews conducted.
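The documentation obligation above could be met with a simple audit entry per review request. A minimal sketch under assumed field names (nothing here is a prescribed format):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class HumanReviewRecord:
    """Hypothetical audit entry for one human-review request."""
    candidate_id: str
    requested_at: datetime
    reviewer_id: str
    original_score: float
    revised_score: float   # equals original_score if the outcome was upheld
    rationale: str         # reviewer's reasoning, in plain language

    @property
    def overridden(self) -> bool:
        # A record of genuine override authority being exercised (or not).
        return self.revised_score != self.original_score


# Example: a reviewer revises a score after finding a transcript error.
record = HumanReviewRecord(
    candidate_id="c-1042",
    requested_at=datetime.now(timezone.utc),
    reviewer_id="hm-07",
    original_score=6.0,
    revised_score=7.5,
    rationale="Transcript misattributed a response to the candidate.",
)
```

Retaining a rationale for every review, including those that uphold the original score, is what distinguishes a genuine review process from rubber-stamping.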

Practical Implementation

Explainability is a design requirement, not an afterthought. AI hiring systems must be built so that scores can be decomposed into their contributing factors and explained in plain language. Systems that produce a single undifferentiated score from an opaque model cannot meet this requirement. The practical test: can you tell a candidate, in non-technical language, exactly what drove their score?
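What "decomposable into contributing factors" can look like in code, sketched with an assumed weighted-average scoring model (real scoring systems will differ):

```python
def explain_score(trait_scores: dict[str, float],
                  weights: dict[str, float]) -> tuple[float, list[str]]:
    """Compute an overall score as a weighted average of per-trait scores
    and return each trait's contribution in plain language.

    Illustrative only: the point is that every factor in the overall
    score can be surfaced, not that scoring must work this way.
    """
    total_weight = sum(weights[t] for t in trait_scores)
    overall = sum(trait_scores[t] * weights[t] for t in trait_scores) / total_weight
    explanation = [
        f"{trait}: scored {score:.1f}, contributing "
        f"{score * weights[trait] / total_weight:.1f} points to the overall {overall:.1f}"
        for trait, score in trait_scores.items()
    ]
    return overall, explanation
```

A system built this way can answer the practical test directly: each line of the explanation names a factor, its score, and its weight in the outcome. A single opaque score offers no such decomposition.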

Candidate-facing disclosure should happen before assessment begins — not buried in a general privacy notice. Candidates should know that AI will assess their responses, what traits or competencies the AI is evaluating, and what their right to human review entails. Early, transparent disclosure reduces the risk of candidates experiencing AI assessment as a black-box process over which they had no warning or recourse.

How Palantrix supports the right to explanation

Palantrix is built for explainability. Every Trait Alignment Score is decomposed into individual trait scores, each grounded in specific passages of the candidate's interview transcript. Hiring managers can see exactly which traits were scored, how highly, and with reference to what evidence. Candidates can access their own interview data — transcripts and scores — through the Palantrix candidate portal. When a candidate requests human review of their score, the hiring manager has the full evidential record available to conduct that review. The entire decision chain is documented and auditable for EU AI Act compliance.


Frequently Asked Questions

1. Does every candidate have the right to explanation?

Under GDPR Article 22, the right applies to decisions based solely on automated processing. Where a human decision-maker reviews and can override AI outputs — as required by the EU AI Act — the decision is not purely automated, and the strict Article 22 right may not apply. However, the EU AI Act's broader transparency requirements mean that candidates assessed by high-risk AI systems have a right to information about the AI system and to human review, regardless of whether the Article 22 threshold is met.

2. What does a meaningful explanation look like in practice?

A meaningful explanation describes: what the AI assessed (which traits or competencies); how those criteria were relevant to the role; what the AI found in the candidate's responses (ideally with reference to specific transcript passages); what score or ranking was produced; and how that score was used in the hiring decision. Generic descriptions ('the AI evaluated your communication skills') without reference to the candidate's specific responses are not meaningful.

3. Can a candidate challenge an AI-generated score?

Yes. A candidate who believes their score is inaccurate — for example, because a transcript misrepresented what they said, or because a criterion was applied incorrectly — can request human review of the decision and raise a specific objection. Employers should have a process for handling these requests, and the human reviewer must have genuine authority to revise the outcome.

4. Does the right to explanation apply to all AI hiring tools?

The EU AI Act's high-risk classification covers AI used in recruitment, selection, and employment decisions. This includes CV screening algorithms, AI-scored video interviews, AI candidate ranking systems, and automated psychometric interpretation tools. If an AI tool is used in any stage of an employment decision, the transparency and explainability requirements apply.

5. What happens if an employer cannot explain their AI assessment?

Inability to explain an AI-generated employment decision is both a GDPR compliance failure and, once the EU AI Act's high-risk obligations apply from August 2026, an EU AI Act compliance failure. Candidates can lodge complaints with their national data protection authority. Employers face potential fines and reputational consequences. More practically, AI hiring tools that cannot produce explainable outputs should not be used in employment decision processes in the EU.