Inclusion4EU

Co-Design for Inclusion in Software Development Design

Algorithmic Bias in Automated Recruitment Systems

Who does this case study involve?

Job applicants from minority ethnic backgrounds and women applying for technology roles

The case

In recent years, many large organisations have adopted automated recruitment systems that use artificial intelligence (AI) to screen CVs and rank candidates. These systems are often promoted as efficient and objective alternatives to human recruiters. However, concerns have emerged that such tools may reinforce existing social inequalities rather than reduce them.


A notable case involved a large multinational technology company that developed an AI-based recruitment tool trained on ten years of its own historical hiring data. During internal testing, the company discovered that the system consistently ranked CVs from male applicants higher than those from female applicants. This occurred because the historical data reflected a workforce that was predominantly male, particularly in technical roles. As a result, the AI system learned to associate successful candidates with male-coded language, experiences, and educational backgrounds (Dastin, 2018).
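The mechanism described above can be illustrated with a small, entirely synthetic sketch. Nothing here reproduces the company's actual system; it only shows how, when past hires skew male, a naive text scorer assigns a negative weight to a token that acts as a gender proxy (such as "women's" in "women's chess club captain").

```python
import math

# Toy, synthetic history only: each entry is (tokens in CV, hired?).
# Past hires skew male, so the proxy token "women's" co-occurs mostly
# with historical rejections.
history = [
    ({"software", "chess"}, 1),
    ({"software", "captain"}, 1),
    ({"software"}, 1),
    ({"women's", "software", "chess"}, 0),
    ({"women's", "software"}, 0),
    ({"software"}, 0),
]

def token_log_odds(history, token):
    """Smoothed log-odds of a token among hired vs rejected CVs.

    A naive scorer trained on this history treats a negative value
    as evidence against the candidate, even though the token says
    nothing about qualification.
    """
    hired = [toks for toks, label in history if label == 1]
    rejected = [toks for toks, label in history if label == 0]
    p_hired = (sum(token in t for t in hired) + 1) / (len(hired) + 2)
    p_rejected = (sum(token in t for t in rejected) + 1) / (len(rejected) + 2)
    return math.log(p_hired / p_rejected)

print(token_log_odds(history, "women's"))   # negative: learned proxy bias
print(token_log_odds(history, "software"))  # ~0: genuinely uninformative
```

The point of the sketch is that no feature labelled "gender" is needed: the bias enters through correlated tokens in the training data.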


Similarly, researchers examining commercially available recruitment tools found that candidates from minority ethnic backgrounds were more likely to be filtered out at early stages of automated screening. Factors such as gaps in employment, non-Western educational institutions, or linguistic differences in CV writing were often interpreted negatively by the algorithms, even when candidates were well qualified (Raghavan et al., 2020).

Findings

This case highlights how AI systems can unintentionally discriminate when they are trained on biased or unrepresentative data. Rather than being neutral, automated recruitment tools may amplify existing inequalities in the labour market. The findings suggest that exclusion can occur not because of malicious intent, but because design decisions fail to account for social context and diversity.


To reduce exclusion, researchers recommend greater transparency in how recruitment algorithms operate, regular auditing of training data and screening outcomes, and the inclusion of diverse stakeholders in the design and evaluation process. Importantly, automated tools should support, rather than replace, human judgement, particularly in high-stakes decisions such as employment.
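One common form such an audit takes is comparing selection rates across groups. The sketch below is a minimal illustration, assuming the organisation logs each candidate's group label and whether they passed the automated screen; the data, group labels, and function names are all illustrative. A ratio below 0.8 fails the "four-fifths" rule of thumb often used as a first check for adverse impact.

```python
from collections import Counter

def selection_rates(outcomes):
    """Fraction of candidates advanced per group.

    `outcomes` is a list of (group, advanced) pairs, where `advanced`
    is True if the candidate passed the automated screen.
    """
    totals, passed = Counter(), Counter()
    for group, advanced in outcomes:
        totals[group] += 1
        if advanced:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values below 0.8 fail the common four-fifths rule of thumb,
    a first screen for disparate impact (not a legal determination).
    """
    return min(rates.values()) / max(rates.values())

# Illustrative log: 50% of group A advanced vs 25% of group B.
log = ([("A", True)] * 50 + [("A", False)] * 50
       + [("B", True)] * 25 + [("B", False)] * 75)
rates = selection_rates(log)
print(rates)                        # {'A': 0.5, 'B': 0.25}
print(adverse_impact_ratio(rates))  # 0.5 -> fails the four-fifths check
```

An audit like this only flags a disparity; interpreting and acting on it still requires the human judgement and diverse stakeholder input the recommendations call for.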

References

Dastin, J. (2018) "Amazon scraps secret AI recruiting tool that showed bias against women", Reuters.
Raghavan, M., Barocas, S., Kleinberg, J. and Levy, K. (2020) "Mitigating bias in algorithmic hiring", Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, pp. 469–481.
