
How Adverse Impact Is Defined
Adverse impact is a statistical concept: a hiring procedure has adverse impact on a protected group if members of that group are selected at substantially lower rates than the highest-selected group. The most widely used threshold — established in US guidelines but broadly adopted as an informal benchmark — is the four-fifths rule: if the selection rate for a protected group is less than 80% of the rate for the highest-selected group, adverse impact is indicated.
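The four-fifths comparison can be sketched in a few lines. The group names and counts below are invented purely for illustration:

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, applicants); returns group -> rate."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths / 80% rule)."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {g: rate / highest < threshold for g, rate in rates.items()}

# Hypothetical numbers: group_a selected 48 of 120 (0.40),
# group_b selected 30 of 100 (0.30); 0.30 / 0.40 = 0.75 < 0.80,
# so group_b is flagged for scrutiny.
flags = four_fifths_check({"group_a": (48, 120), "group_b": (30, 100)})
```

A flag under this rule is an indicator warranting scrutiny, not a finding of unlawful discrimination in itself.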
In the EU context, the relevant legal framework is employment discrimination law — both domestic (such as Ireland's Employment Equality Acts) and EU Directives — which prohibits indirect discrimination: a provision, criterion, or practice that is neutral on its face but places persons of a protected characteristic at a particular disadvantage, unless objectively justified by a legitimate aim and the means are proportionate.
Why It Occurs in Assessment
Adverse impact can arise in hiring assessments through several mechanisms. A selection criterion that is not genuinely job-relevant may correlate with demographic characteristics — educational pedigree, for example, is associated with socioeconomic background. An assessment tool that performs less accurately for certain groups — such as AI transcription with higher error rates for non-native speakers — can produce systematically lower scores for those groups. Interview questions that rely on familiarity with a specific professional or cultural context can disadvantage candidates without that background.
Importantly, adverse impact is not the same as discriminatory intent. A procedure can have adverse impact — and may be legally challengeable — even when it was designed with no intention to disadvantage any group. The test is whether the outcome is disparate, not whether the intent was discriminatory.
Monitoring and Auditing
Identifying adverse impact requires data. Employers must track selection outcomes by protected characteristic across each stage of the hiring process: application-to-screen, screen-to-shortlist, shortlist-to-offer. Without stage-level data segmented by group, adverse impact is invisible — it can only be detected once the data is collected and analysed.
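A minimal sketch of the stage-level tracking described above, assuming each candidate record carries a group label and the furthest stage reached (the stage names and records are hypothetical):

```python
STAGES = ["screen", "shortlist", "offer"]  # illustrative funnel stages

def stage_pass_rates(candidates):
    """candidates: list of (group, furthest_stage), where furthest_stage is
    -1 (applied only) or an index into STAGES marking the last stage passed.
    Returns {stage: {group: pass_rate}}: the share of each group entering a
    stage that progresses through it."""
    groups = {g for g, _ in candidates}
    rates = {}
    for i, stage in enumerate(STAGES):
        rates[stage] = {}
        for g in groups:
            # Entrants to stage i passed stage i-1 (or simply applied, for i=0)
            entered = sum(1 for grp, s in candidates if grp == g and s >= i - 1)
            passed = sum(1 for grp, s in candidates if grp == g and s >= i)
            rates[stage][g] = passed / entered if entered else 0.0
    return rates
```

Computing rates per stage, rather than end-to-end only, localises where a disparity arises: a funnel can look balanced overall while one stage screens a group out disproportionately.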
For AI-assisted hiring, the EU AI Act requires that providers of high-risk AI systems conduct testing for group-level disparities before deployment and at regular intervals thereafter. Employers deploying AI hiring tools should request this data from vendors and should conduct their own outcome audits using their actual hiring population.
Adverse Impact and AI Hiring
AI hiring systems can both reduce and amplify adverse impact depending on how they are designed. A model trained on structured, role-relevant competency data — evaluated consistently across all candidates — removes some of the sources of inconsistency that produce adverse impact in human-led processes. A model trained on historical data that reflects past discriminatory patterns, or that uses demographic proxies as predictors, amplifies those patterns at scale.
The EU AI Act's requirements for high-risk AI in employment — including data governance requirements, monitoring obligations, and the need to demonstrate the system does not lead to discrimination — are specifically designed to address this risk. Explainability requirements serve double duty: they allow candidates to challenge inaccurate assessments, and they allow auditors to examine whether scoring criteria are producing disparate outcomes.
How Palantrix approaches adverse impact
Palantrix's scoring is based entirely on transcript content — what candidates say, not who they are. The Team DNA Profile is derived from the traits and behaviours of your high-performing team, not from demographic characteristics. AI scoring is applied consistently to every candidate's responses against the same criteria. The platform's audit trail — full transcripts, individual trait scores, overall rankings — supports the ongoing monitoring required to detect and address disparate impact if it emerges. Employers are encouraged to review their own outcome data regularly and use the Palantrix records as the data source for that analysis.
Frequently Asked Questions
Is adverse impact the same as discrimination?
Not precisely. Discrimination is the broader legal concept — treating someone less favourably because of a protected characteristic. Adverse impact is specifically indirect discrimination: a facially neutral practice that produces disparate outcomes. A hiring practice that has adverse impact on a protected group may be lawful if the employer can demonstrate it is objectively justified by a legitimate aim and proportionate. Adverse impact without justification is unlawful indirect discrimination.
What is the four-fifths rule?
The four-fifths (or 80%) rule is an informal benchmark used to flag potential adverse impact: if the selection rate for a protected group is less than 80% of the selection rate for the highest-selected group, adverse impact is indicated and the procedure warrants scrutiny. It is not a legal standard in the EU — EU employment law uses a broader 'particular disadvantage' threshold — but it remains a useful operational guideline for internal auditing.
Which protected characteristics are most relevant to hiring adverse impact?
EU employment discrimination law protects six characteristics in employment: sex, racial or ethnic origin, religion or belief, disability, age, and sexual orientation. Gender and age are the most frequently litigated in hiring contexts. For AI hiring systems, race/ethnicity and national origin are particularly relevant because AI systems trained on text can encode linguistic or cultural patterns that produce disparate outcomes across these groups.
How often should hiring processes be audited for adverse impact?
At least annually for organisations with sufficient hiring volume to generate statistically meaningful data. For high-volume roles or processes using AI-assisted screening, more frequent monitoring — quarterly or at the conclusion of each significant hiring campaign — allows earlier detection of emerging disparities. Without regular monitoring, adverse impact can go undetected until it accumulates into a legally significant pattern.
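One common way to judge whether an observed gap is statistically meaningful — the source does not prescribe a method, so a standard two-proportion z-test is used here purely as an illustration, with invented counts:

```python
import math

def two_proportion_z(sel_a, n_a, sel_b, n_b):
    """Two-sided two-proportion z-test: is the difference between two
    groups' selection rates larger than sampling noise would explain?
    Returns (z, p_value)."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical audit: 48/120 vs 30/100 selected. The gap fails the
# four-fifths rule, but with samples this small the p-value exceeds 0.05,
# which is why low-volume processes need longer observation windows.
z, p = two_proportion_z(48, 120, 30, 100)
```

This is also why the annual-versus-quarterly cadence depends on hiring volume: small samples can neither confirm nor rule out a disparity with confidence.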
Does removing names and photos from CVs eliminate adverse impact?
Anonymised shortlisting removes one source of disparate impact — name-based discrimination, which is documented in experimental research across multiple European countries. It does not eliminate adverse impact from the assessment criteria themselves. An anonymous CV still contains educational background, employment history, and other signals that may correlate with protected characteristics. Anonymisation is a useful partial measure, not a complete solution.
