From bias to balance: how AI is rewriting the rules of diversity in hiring
  • 03 September 2025 11:00

Artificial Intelligence (AI) is transforming how companies hire. Algorithms now scan résumés, rank candidates, and even analyse facial expressions in video interviews. For employers under pressure to recruit efficiently, AI promises speed, consistency, and reduced administrative cost. Yet while the technology can streamline hiring, evidence increasingly shows that it can also replicate — and even amplify — discrimination, deepening barriers for vulnerable groups already struggling to access decent work.


Faster, Smarter, Fairer?

AI’s efficiency advantages are undeniable. Automated résumé parsers and job-matching algorithms allow recruiters to process thousands of applicants in minutes. According to the OECD (2022), such systems can improve productivity and enhance the matching of skills to job requirements when applied responsibly. For smaller organisations, this automation can mean faster hiring cycles and more data-informed decisions.

“AI can widen access to opportunity, but it can just as easily scale exclusion if left unchecked.”

However, as the now-famous Amazon case demonstrates, AI can also reproduce bias. The company’s experimental recruiting engine, trained on ten years of résumés from a male-dominated tech workforce, learned to downgrade applications that included the word “women” or references to female-coded activities (Dastin, 2018). Amazon eventually scrapped the system — a cautionary tale that hiring algorithms learn the inequalities embedded in historical data.


When the Camera Judges You

Other technologies raise more subtle risks. Video interview platforms once marketed facial- and voice-analysis features that promised to assess “personality fit.” Yet investigations revealed that such tools could penalise candidates with atypical speech patterns, cultural expressions, or visible disabilities (Maurer, 2021; Keane, 2021). Critics warned that AI was effectively scoring candidates on traits unrelated to job performance, from tone of voice to eye contact, introducing new forms of discrimination disguised as data-driven objectivity.

People with neurological or speech conditions, migrants with accents, or those simply uncomfortable on camera could all be unfairly filtered out.


Invisible Walls in the Digital Job Market

These failures matter because they reinforce structural exclusion. Groups already underrepresented in the workforce — women, ethnic minorities, older workers, migrants, and persons with disabilities — are the most likely to be misclassified by biased algorithms.

As Chen, Xu, and Zhang (2023) note in Nature Human Behaviour, models trained on small or skewed datasets are least reliable for minority subgroups, leading to “disparate impact” in hiring outcomes. For those affected, the cost is personal: lost income, eroded confidence, and fewer pathways to stable employment.
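This subgroup unreliability is straightforward to check: evaluate a screening model's error rate separately for each group rather than in aggregate. The short sketch below illustrates the calculation; the group labels and outcomes are synthetic, invented here for illustration, and are not drawn from the study.

    # Hypothetical (predicted, actual) screening outcomes per subgroup.
    # Synthetic data for illustration only.
    results = {
        "majority_group": [(1, 1), (1, 1), (0, 0), (1, 1), (0, 0), (1, 0)],
        "minority_group": [(1, 0), (0, 1), (0, 0)],
    }

    for group, pairs in results.items():
        # Count predictions that disagree with the actual outcome.
        errors = sum(pred != actual for pred, actual in pairs)
        print(f"{group}: error rate {errors / len(pairs):.2f} over {len(pairs)} applicants")

In this toy example the aggregate error rate (3 of 9, about 0.33) looks tolerable, yet two of three minority-group applicants are misclassified: exactly the kind of disparity an aggregate metric hides.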


Europe’s Regulatory Wake-Up Call

Policymakers are beginning to respond. The European Union Artificial Intelligence Act (European Commission, 2024) classifies recruitment and worker-management systems as “high-risk”, requiring transparency, risk assessment, and human oversight. Under these new rules, organisations deploying AI in hiring must document their training data, explain decision logic, and ensure candidates can appeal automated outcomes.
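In practice, that means keeping a structured record behind each deployed system. The sketch below shows one way such documentation might be organised as a model-card-style record; the field names are illustrative assumptions, not an official schema or template from the Act.

    from dataclasses import dataclass, field

    # Illustrative documentation record for a hiring system; field names
    # are hypothetical, not an official template from the AI Act.
    @dataclass
    class HiringSystemRecord:
        system_name: str
        training_data_summary: str    # provenance and known skews of the training data
        decision_logic_summary: str   # plain-language account of how scores are produced
        oversight_contact: str        # named role able to review and override outcomes
        appeal_channel: str           # how candidates contest an automated rejection
        known_limitations: list[str] = field(default_factory=list)

    record = HiringSystemRecord(
        system_name="resume-ranker-v2",
        training_data_summary="2018-2024 applications; gender skew in technical roles",
        decision_logic_summary="Gradient-boosted ranking over skills and experience",
        oversight_contact="Recruitment operations lead",
        appeal_channel="hiring-appeals@example.org",
        known_limitations=["Underrates candidates with career breaks"],
    )
    print(record.system_name, "is documented for audit")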

This regulatory shift signals a clear message: efficiency cannot come at the expense of fairness.


What Responsible AI in Hiring Looks Like

Researchers and advocacy groups recommend concrete steps for employers:

• Audit for bias: Conduct independent assessments to detect disparate impact on protected groups (a minimal sketch of such a check follows this list).

• Demand transparency: Require vendors to provide full documentation of datasets and model design.

• Keep humans in the loop: Automated rejections should always be reviewable and appealable.

• Avoid “black box” features: Reject facial analysis or emotion detection unless scientifically validated and ethically justified.
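To make the audit recommendation concrete, here is a minimal sketch of the four-fifths (80%) rule, a common first-pass heuristic for disparate impact in selection outcomes. The outcome data and group labels are hypothetical; a real audit would run on an organisation's own applicant-tracking records, with an independent reviewer.

    from collections import defaultdict

    # Hypothetical screening outcomes: (demographic group, passed screening?).
    outcomes = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    # Tally passes and totals for each group.
    counts = defaultdict(lambda: [0, 0])
    for group, passed in outcomes:
        counts[group][0] += int(passed)
        counts[group][1] += 1

    # Compare each group's selection rate to the highest-rate group.
    rates = {g: p / t for g, (p, t) in counts.items()}
    benchmark = max(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / benchmark
        flag = "review for disparate impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")

A ratio below 0.8 does not prove discrimination, and a ratio above it does not prove fairness; the rule is only a trigger for deeper statistical review.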

As the OECD (2022) stresses, governance — not blind trust — is what turns AI from risk to opportunity.

“The success of AI in recruitment will not be measured by how quickly it fills vacancies, but by who gets the chance to be seen.”

AI can help identify overlooked talent and reduce human error, but without accountability it risks magnifying inequality. The challenge for the next decade is not whether companies will use AI in hiring — but whether they will use it responsibly.


An article by Raina Melissinou, Senior Project Manager at KEAN



References (APA style)



