
How do I conduct evidence-based interviews that comply with AI regulations?

Direct Answer

To conduct evidence-based interviews that comply with EU AI Act requirements, use structured, competency-based questions tied to a documented role framework; score responses against pre-defined criteria; ensure any AI analysis is limited to response content rather than facial expressions or tone; maintain a full audit trail; and provide candidates with the right to request human review of any AI-generated assessment.

Evidence-based and AI-compliant are the same discipline

Evidence-based interviewing — structured questions, pre-defined scoring criteria, documented decisions — is not just good hiring practice. It is also the mechanism through which EU AI Act compliance is achieved. The Act's requirements for transparency, explainability, and human oversight are met naturally by a process that already generates documented, criteria-referenced evidence for every decision.

Conversely, AI hiring tools applied to unstructured processes create compliance risk: if there is no pre-defined framework, AI scores cannot be meaningfully explained, and the audit trail is thin.

The compliance requirements in practice

Inform candidates before their interview that AI will be used to score their responses, what it is assessing, and how the output will influence the hiring decision. This disclosure must be specific — not a reference to a general AI policy buried in terms and conditions.

Use transcript-based AI analysis only. The EU AI Act explicitly prohibits emotion recognition in employment contexts — AI systems that analyse facial expressions, vocal tone, or physiological signals to infer emotional state or personality. Ensure your platform analyses the content of responses, not physical characteristics.
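One way to make the transcript-only constraint hard to violate is to type the scoring interface so it can only receive response text. This is an illustrative sketch, not Palantrix's actual API; the names and the keyword-count scoring are placeholders:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TranscriptResponse:
    """Only the textual content of a candidate's answer.

    Deliberately excludes video frames, audio features, and prosodic
    signals, so emotion recognition is impossible by construction.
    """
    question_id: str
    text: str

def score_response(response: TranscriptResponse,
                   criteria: dict[str, str]) -> dict[str, int]:
    """Placeholder scorer: rate each pre-defined criterion 0-5
    using only the response text (here, a naive keyword count)."""
    return {criterion: min(5, response.text.lower().count(keyword))
            for criterion, keyword in criteria.items()}
```

Because `score_response` cannot accept anything except a `TranscriptResponse`, a code review or type check is enough to show that facial or vocal data never enters the scoring path.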

Audit trail and human oversight

Every AI-generated score must be reviewable by a human decision-maker with genuine authority to override it. Retain the full scoring record — questions asked, responses given, scores generated, and any human review or override — for a retention period you can justify under GDPR's storage-limitation principle (commonly six months for unsuccessful candidates, longer where local law requires it).
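The fields that an audit record needs follow directly from the paragraph above. A minimal sketch of such a record, with hypothetical names (not Palantrix's data model), might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ScoringAuditRecord:
    """One auditable scoring event: question, response, AI scores,
    and any human review or override."""
    question: str
    response_text: str
    ai_scores: dict[str, int]                        # trait -> score
    reviewed_by: Optional[str] = None                # human reviewer, if any
    override_scores: Optional[dict[str, int]] = None # set only on override
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def final_scores(self) -> dict[str, int]:
        """A human override, when present, supersedes the AI score."""
        if self.override_scores is not None:
            return self.override_scores
        return self.ai_scores
```

Keeping the AI scores and the override as separate fields, rather than mutating one value, is what preserves the audit trail: both the machine's output and the human decision remain visible.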

Candidates must be able to request explanation of how they were scored and human review of any AI-generated assessment. Design your process so these requests can be fulfilled promptly and specifically — explaining exactly which traits were scored, how, and on what evidence.
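Fulfilling an explanation request "promptly and specifically" is straightforward if the scoring record already links each trait to its score and evidence. A hypothetical rendering helper (names are illustrative) could be as simple as:

```python
def explain_scores(trait_results: dict[str, tuple[int, str]]) -> str:
    """Render a candidate-facing explanation: one line per trait,
    showing the score and the transcript evidence behind it."""
    lines = [f'- {trait}: scored {score}/5, based on: "{evidence}"'
             for trait, (score, evidence) in trait_results.items()]
    return "How you were assessed:\n" + "\n".join(lines)
```

The point of the sketch is the data dependency: an explanation like this can only be generated if evidence was captured against pre-defined traits at scoring time, which is why unstructured processes cannot retrofit explainability.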

How Palantrix handles compliance by design

Palantrix's compliance architecture is built in, not bolted on. Candidate disclosure happens automatically before the interview begins. Scoring is entirely transcript-based. Every score is decomposed by trait with supporting evidence. Human review is always available. The full record is retained and accessible. Employers using Palantrix do not need to build compliance infrastructure separately — it is the default operating model.

See how AI Video Interviews work

Frequently Asked Questions

1. When does EU AI Act compliance apply to hiring?

The high-risk AI provisions covering employment and recruitment AI are expected to be enforceable from December 2027, following a provisional agreement in May 2026 to extend the original August 2026 deadline. This agreement is pending formal adoption. Preparation should continue regardless — the requirements have not changed, only the timeline.

2. Does the EU AI Act apply to manual interviews?

The EU AI Act specifically governs AI systems. Manual interviews without AI assistance are not in scope. However, where AI is used at any point in the evaluation process — scoring video responses, ranking CVs, or generating assessments — those AI components are subject to the Act's high-risk employment provisions.

3. What is the penalty for non-compliance?

Up to €35 million or 7% of global annual turnover for the most serious violations (prohibited practices, such as emotion recognition in the workplace); up to €15 million or 3% for most other non-compliance. Enforcement is by national market surveillance authorities in each EU member state.