Voice Assistants and Accent Bias
- Exclusion
Who does this case study involve?
Users with strong regional or non-native accents
The case
Voice assistants are now commonly used in smartphones, smart speakers, and vehicles to perform tasks such as searching for information, setting reminders, and controlling devices. These systems rely on automatic speech recognition technologies trained on large datasets of recorded speech. However, many users report that voice assistants struggle to accurately recognise accents that differ from those most commonly represented in training data.
Research has shown that speakers with strong regional or non-native accents experience higher error rates and more frequent misinterpretations; Koenecke et al. (2020), for example, found that five major commercial speech recognition systems made roughly twice as many errors for Black speakers as for white speakers, a gap attributed largely to dialect differences. Users often report having to slow down, simplify their speech, or adopt a more “standard” accent to be understood. Over time, this leads to frustration and discourages continued use of the technology.
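The error rates reported in this research are usually measured as word error rate (WER): the proportion of words in a reference transcript that the recogniser gets wrong. As a rough illustration of the metric only (not of any particular assistant's internals), the following Python sketch computes WER with a standard word-level edit distance; the example sentences are invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # word dropped by the recogniser
                          d[i][j - 1] + 1,        # word inserted by the recogniser
                          d[i - 1][j - 1] + cost) # word substituted
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# The same spoken request, recognised well for one speaker and poorly for another.
print(word_error_rate("set a reminder for seven", "set a reminder for seven"))    # 0.0
print(word_error_rate("set a reminder for seven", "set the remainder of seven"))  # 0.6

In the second example more than half of the words are wrong, which in practice means the assistant is unlikely to carry out the request at all.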
Findings
This case demonstrates how linguistic diversity can be overlooked in technology design. Accent bias in voice assistants creates exclusion by privileging certain speech patterns while marginalising others. Addressing this issue requires more diverse speech datasets, testing practices that measure accuracy separately for different accent groups, and recognition that language variation is normal rather than an error to be corrected.
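One concrete form of the inclusive testing described above is disaggregated evaluation: scoring the recogniser separately for each accent group rather than reporting a single average. The sketch below assumes the third-party jiwer package (pip install jiwer) for computing WER, and the evaluation set, accent labels, and transcripts are entirely hypothetical.

from collections import defaultdict
from jiwer import wer  # pip install jiwer

# Hypothetical evaluation set: (accent group, reference transcript, ASR output).
samples = [
    ("us_general", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("us_general", "play the next song",         "play the next song"),
    ("scottish",   "turn on the kitchen lights", "turn on the kitten lights"),
    ("scottish",   "play the next song",         "play the text song"),
    ("non_native", "set an alarm for six",       "set an alarm for fix"),
]

# Group references and hypotheses by accent so WER can be reported per group.
refs, hyps = defaultdict(list), defaultdict(list)
for group, reference, hypothesis in samples:
    refs[group].append(reference)
    hyps[group].append(hypothesis)

for group in refs:
    print(f"{group:12s} WER = {wer(refs[group], hyps[group]):.2f}")

Reporting per-group figures like these makes disparities visible before a system is released, whereas a single aggregate WER can hide them.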
References
Koenecke, A. et al. (2020) “Racial disparities in automated speech recognition”, Proceedings of the National Academy of Sciences, 117(14), pp. 7684–7689.
