
Is your voice assistant biased?

We explore the social implications of this artificial intelligence technology, and what it says about how our tech can do better

Voice assistants have become a regular part of daily life for many people, providing a seamless, hands-free way to carry out everyday tasks, from accessing vital resources to researching questions of interest throughout the day. The technology can be particularly useful for people with disabilities, such as visual or mobility impairments, as a way to access digital resources.

However, there have long been concerns that biased design in voice assistants can perpetuate social inequality. Here, we explore these concerns, considering the role of digital technology in either helping or harming society, and looking at the ongoing effort to make technology’s impact more positive.

Gender bias in voice assistants

You may have noticed that the most popular voice assistants are given female names and female-sounding voices, even though male- and female-sounding voices are equally intelligible and equally capable of delivering information.

A UNESCO report comments: “it sends the signal that women are obliging, docile, and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’. The assistant holds no power of agency beyond what the commander asks of it…In many communities, this reinforces commonly held gender biases that women are subservient and tolerant of poor treatment.”

Gender bias has also manifested in what voice assistants are scripted to say when a user exhibits gendered abuse and harassment towards it. UNESCO comments, “Siri’s ‘female’ obsequiousness – and the servility expressed by so many other digital assistants projected as young women – provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education.”

The report explains how gender-responsive education could help reset gendered views of technology and ensure equality for women and girls.

Language and racial bias in voice assistants

According to USwitch research, Alexa and other smart speaker devices struggled to understand over 23% of accents in the UK, with Welsh accents being the least understood.

Other research has found room for improvement in all voice assistants’ speech recognition abilities when asked about commonly dispensed medications. Comprehension was lower for users with a “foreign” accent using Siri, while Google Assistant came out on top for understanding diverse accents.

Bilingual users have also found difficulty with the technology, with the Washington Post revealing that Chinese and Spanish accents were the hardest for Alexa and Google Home to understand – despite these being the top two most spoken languages globally by number of native speakers.

Automated speech recognition systems used by virtual assistants (as well as other technologies such as closed captioning and hands-free computing) also show large racial disparities, according to research from the US. In particular, the research indicated that the systems were confused by some characteristics of the African American Vernacular English dialect, likely because too little audio data from black speakers was used when training the models.
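
To see how such a disparity is measured, researchers typically compare word error rate (WER), the share of words a system transcribes incorrectly, across speaker groups. Below is a minimal, self-contained sketch in Python of that kind of per-group audit; the speaker groups and transcripts are invented for illustration.

    from collections import defaultdict

    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate: word-level edit distance over reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edits needed to turn the first i reference words
        # into the first j hypothesis words
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                dp[i][j] = min(
                    dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution
                    dp[i - 1][j] + 1,  # deletion
                    dp[i][j - 1] + 1,  # insertion
                )
        return dp[len(ref)][len(hyp)] / max(len(ref), 1)

    # Hypothetical audit data: (speaker group, reference, ASR transcript)
    samples = [
        ("group_a", "set an alarm for seven", "set an alarm for seven"),
        ("group_a", "play the news briefing", "play the newest briefing"),
        ("group_b", "set an alarm for seven", "set a alarm for heaven"),
        ("group_b", "play the news briefing", "lay the new briefing"),
    ]

    rates = defaultdict(list)
    for group, ref, hyp in samples:
        rates[group].append(wer(ref, hyp))

    for group, scores in sorted(rates.items()):
        print(f"{group}: mean WER = {sum(scores) / len(scores):.2f}")

A large, persistent gap between the group means is the kind of signal the US research describes, and closing it generally requires retraining the models with more representative audio data.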

Further research found that black participants were more negatively affected than white participants by voice assistants’ errors when they occurred, and were impacted in ways similar to experiencing racial microaggressions.

The research says that given these findings, “one vital implication for the design of voice assistants is the importance of addressing or reversing any harm caused by errors in speech recognition, particularly from users from marginalised groups”.

What will happen to voice assistants in the future?

The tech industry is aware of this bias, as Eric Schmidt, the former CEO and executive chairman of Google, made clear in 2019: “We know the data has bias in it. You don’t need to yell that as a new fact. Humans have bias in them, our systems have bias in them. The question is: What do we do about it?”

Well, improvements have begun. To tackle gender bias, companies have changed the ways that voice assistants respond to sexist and abusive inputs, with Siri, for example, changing one of its responses from “I’d blush if I could” to “I won’t respond to that”. As Leah Fessler points out, however, such responses could be more assertive and informative still, for example: “That sounds like sexual harassment. Sexual harassment is not acceptable under any circumstances and is often rooted in sexism”, followed by resources to help users understand the issue.
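
To illustrate the mechanics of such a change, here is a minimal sketch of a scripted response policy that routes flagged input to the kind of assertive, informative reply Fessler suggests. The keyword matcher stands in for a real abuse classifier, and the trigger phrases, resource link, and function names are all invented rather than drawn from any vendor’s code.

    # Assertive reply along the lines Fessler suggests, plus a pointer
    # to further resources (the URL is a hypothetical placeholder).
    ASSERTIVE_REPLY = (
        "That sounds like sexual harassment. Sexual harassment is not "
        "acceptable under any circumstances and is often rooted in sexism."
    )
    RESOURCES = "You can learn more at example.org/harassment-resources."

    # Placeholder triggers; a production system would use a trained
    # abuse classifier, not a keyword list.
    FLAGGED_PHRASES = ("insulting phrase one", "insulting phrase two")

    def is_abusive(utterance: str) -> bool:
        text = utterance.lower()
        return any(phrase in text for phrase in FLAGGED_PHRASES)

    def respond(utterance: str) -> str:
        if is_abusive(utterance):
            # Name the behaviour and signpost help instead of deflecting
            # with an evasion like "I'd blush if I could".
            return f"{ASSERTIVE_REPLY} {RESOURCES}"
        return "OK."  # normal request handling would go here

    print(respond("insulting phrase one directed at the assistant"))

The design choice here is the one the article describes: the assistant names the behaviour and signposts help, rather than deflecting or playing along.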

New ethical guidelines have also been developed to help tech companies reduce social harm with their products. Josie Young’s Feminist Chatbot Design Process, for example, helps teams identify issues to do with bias in their chatbot design.

The consulting firm Accenture and a group of linguists, sound designers, and computer science researchers have each developed “non-binary” or “genderless” voices to combat stereotypes that equate the usually female-sounding voices with secretaries or assistants. Meanwhile, Apple now allows users to change Siri’s voice and no longer pre-selects a feminine voice by default.
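
The Siri change reflects a simple design pattern: present the voice options side by side and require an explicit choice, rather than shipping a gendered default. A minimal sketch of that set-up flow, with invented voice names, might look like this.

    # Voices listed without gender labels and without a pre-selected default.
    VOICES = ["Voice 1", "Voice 2", "Voice 3"]

    def choose_voice(selection=None):
        """Return the user's chosen voice; deliberately no default."""
        if selection is None:
            raise ValueError("No default voice: a choice must be made during set-up.")
        return VOICES[selection]

    print(choose_voice(1))  # runs only once the user has picked a voice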

In the future, the voices used could become more imaginative to avoid these challenges entirely, says Mark West, a project officer at UNESCO: “We saw examples of AI assistants that were projected as cartoons – talking animals, for example…Makers of AI assistants would do well to lean into the non-human identity of their creations, rather than trying to give them a human veneer.”

For more structural change, much of the research on the subject repeats the idea of involving more diverse groups of people in the development of these types of products. This is an issue of digital inclusion: designing a digital world that is more equitable for all. Girls Who Code is one charity working to address this by reaching girls around the world to close the gender gap in new entry-level tech jobs by 2030.

West, who was also the lead author of the UNESCO report ‘I’d blush if I could’, also calls for more transparency surrounding these companies’ systems and AI engines. “If a company claims to have an unbiased system, that’s great: prove it, show us what is under the hood, explain how it works and how it learns.”

Why does bias in voice assistants matter?

For many, it’s easy not to question digital technology and to imagine it as a neutral, amoral tool. However, cases such as this demonstrate how the ways humans use digital technology can impact the health of our society and the charitable causes we care about.

This speaks to a range of digital ethical issues: from the ways our use of tech is currently harming the planet, to the social risks of artificial intelligence, to the importance of carefully stewarding personal data. The impact of digital depends entirely on how humans choose to create, use, and adapt technologies: no technology is, in itself, “good” or “evil”, and it is up to us to change our world through our use of technologies.

Finally, the case shows an appetite for change, and opportunities to get involved in shaping how our world works by paying attention to the role of digital tech in our lives and work. For more information, check out how charities are tackling inclusion, wellbeing, and safety online.

