inclusion4eu@gmail.com

Inclusion4EU

Co-Design for Inclusion in Software Development Design

Facial Recognition Technology in Public Spaces

Who does this case study involve?

Members of ethnic minority communities in urban environments

The case

Facial recognition technology is increasingly used in public spaces such as airports, shopping centres, and city streets, often justified as a tool for improving security and efficiency. These systems analyse facial features captured by cameras and compare them against large databases to identify individuals. While promoted as neutral and objective, evidence suggests that these technologies perform unevenly across different demographic groups.
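In schematic terms, the identification step described above can be sketched as follows. A system reduces each face image to a numeric "embedding" vector and compares it against a gallery of enrolled embeddings. Every name, vector, and the threshold below is invented for illustration; real systems use learned deep-network embeddings over far larger databases.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two embedding vectors (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    # Return the best-matching enrolled identity, or None if no
    # candidate clears the similarity threshold.
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy gallery of enrolled identities (synthetic data).
database = {
    "person_a": [0.1, 0.9, 0.3],
    "person_b": [0.8, 0.2, 0.5],
}

print(identify([0.12, 0.88, 0.31], database))  # close to person_a's embedding
print(identify([1.0, 0.0, 0.0], database))     # no enrolled match clears the threshold
```

The threshold is the critical design choice: set it too low and the system produces false matches; as the case notes, the rate of such errors can differ sharply between demographic groups.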


Multiple independent studies have shown that facial recognition systems exhibit significantly higher error rates for people from ethnic minority backgrounds, particularly women with darker skin tones. In some cities where facial recognition has been trialled by law enforcement, individuals from minority groups were disproportionately flagged as potential suspects. These false positives led to increased surveillance, questioning, and emotional distress for those affected. Researchers have linked these issues to unbalanced training datasets that contain far fewer images of non-white faces.
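A disparity of this kind can be surfaced with a simple disaggregated audit. The sketch below computes the false positive rate separately for each demographic group from a labelled evaluation set; all group names and figures here are synthetic, chosen only to illustrate the method.

```python
from collections import defaultdict

def false_positive_rates(records):
    # records: iterable of (group, actually_enrolled, flagged_as_match).
    # FPR per group = wrongly flagged people / people not on the watchlist.
    negatives = defaultdict(int)   # people not enrolled, per group
    false_pos = defaultdict(int)   # of those, how many were wrongly flagged
    for group, enrolled, flagged in records:
        if not enrolled:
            negatives[group] += 1
            if flagged:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Synthetic evaluation set: 100 non-enrolled people per group.
records = (
      [("group_a", False, False)] * 95
    + [("group_a", False, True)]  * 5
    + [("group_b", False, False)] * 80
    + [("group_b", False, True)]  * 20
)

rates = false_positive_rates(records)
print(rates)  # group_b is wrongly flagged four times as often as group_a
```

Testing of this kind, run across every demographic group before deployment, is exactly the kind of safeguard the findings below call for.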

Findings

This case illustrates how technological systems can reproduce and amplify existing social inequalities when diversity is not adequately considered during design and deployment. The findings emphasise the need for representative datasets, rigorous testing across demographic groups, and clear legal and ethical safeguards. Without these measures, facial recognition technologies risk reinforcing racial profiling and social exclusion in public spaces.

References

Buolamwini, J. and Gebru, T. (2018) “Gender shades: Intersectional accuracy disparities in commercial gender classification”, Proceedings of Machine Learning Research, 81, pp. 1–15.
