The gendering of AI – and why it matters

Digital technologies are all too often seen as being neutral and value-free, possessing a power of their own to transform the world.  However, even a brief reflection indicates that this taken-for-granted assumption is fundamentally flawed.  Technologies are created by people with very specific interests, who craft them for particular purposes, more often than not to generate profit.  These technologies therefore carry within them the biases and prejudices of the people who create them.

This is as true of Artificial Intelligence (AI) as it is of other digital technologies, such as mobile devices and robots.  Gender, with all of its diversity, is one of the most important categories through which most people seek to understand the world, and we frequently assign gender categories to non-human objects such as technologies.  This is evident even in the languages that we use, especially in the context of technology.  It should not therefore be surprising that AI is gendered.  Yet, until recently, few people appreciated the implications of this.

The AI and machine learning underlying an increasing number of decision-making processes, from recruitment to medical diagnostics, from surveillance technologies to e-commerce, are indeed gendered, and will therefore reproduce existing gender biases in society unless specific actions are taken to counter them.  Three issues seem to be of particular importance here:

  • AI is generally used to manipulate very large data sets.  If these data sets are themselves a manifestation of gender bias, then the conclusions reached through the algorithms will also be biased (a minimal illustration follows this list).
  • Most professionals working in the AI field are male: the World Economic Forum’s 2018 Global Gender Gap Report found that only 22% of AI professionals globally are women.  The algorithms themselves are therefore being shaped primarily from a male perspective, overlooking the potential contributions that women could make to their design.
  • AI, rather than being neutral, is serving to reproduce, and indeed accelerate, existing gender biases and stereotypes.  This is typified by the use of female voices in digital assistants such as Alexa and Siri, which often carry negative or subservient associations with women.  A recent report by UNESCO for EQUALS, for example, emphasises that those in the field need to work together to “prevent digital assistant technologies from perpetuating existing gender biases and creating new forms of gender inequality”.
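
To make the first of these points concrete, the following is a minimal, purely hypothetical Python sketch: the data, the biased past_decision rule, and the naive frequency-based “model” are all invented for illustration, and stand in for the far larger data sets and learning systems used in practice.

```python
# Hypothetical illustration: a model learned from biased historical data
# reproduces that bias.  All data and rules here are invented.
import random

random.seed(0)

# Synthetic historical records: skills are identically distributed, but past
# decisions applied a higher bar to women (the bias lives in the labels).
def past_decision(gender, skill):
    threshold = 0.5 if gender == "M" else 0.7  # biased historical bar
    return int(skill > threshold)

records = [{"gender": g, "skill": random.random()}
           for g in ("M", "F") for _ in range(5000)]
for r in records:
    r["hired"] = past_decision(r["gender"], r["skill"])

# A naive "model": the observed hire rate for each (gender, skill band),
# which is exactly what a frequency-based learner would extract from the data.
def hire_rate(gender, lo, hi):
    group = [r for r in records
             if r["gender"] == gender and lo <= r["skill"] < hi]
    return sum(r["hired"] for r in group) / len(group)

# Equally skilled applicants (skill between 0.5 and 0.7) receive very
# different predictions, because the model has learned the historical bias.
print("Predicted hire rate, men  :", round(hire_rate("M", 0.5, 0.7), 2))  # ~1.0
print("Predicted hire rate, women:", round(hire_rate("F", 0.5, 0.7), 2))  # ~0.0
```

Nothing in the learning step mentions gender as an explicit rule; the disparity emerges entirely from the biased labels in the training data, which is precisely why the data, and not just the algorithm, must be addressed.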

These issues highlight the growing importance of binary biases in AI.  However, they also have ramifications for AI’s intersection with the more nuanced and diverse understandings of gender among those who identify as LGBTIQ.  In 2017, for example, HRC and GLAAD criticised a study claiming to show that deep neural networks could correctly differentiate between gay and straight men 81% of the time, and women 74% of the time, on the grounds that it could put gay people at risk and that it made overly broad assumptions about gender and sexuality.

The panel session “Diversity by Design: mitigating gender bias in AI” at this year’s ITU Telecom World in Budapest (11 September, 14.00-15.15) is designed specifically to address these complex issues.  As moderator, I will be encouraging the distinguished panel of speakers, drawn from industry, academia and civil society, not only to tease out these challenging issues in more depth, but also to suggest how we can design AI with diversity in mind.  This is of critical importance if we are collectively to prevent AI from increasing inequalities at all scales, and to ensure that in the future it more broadly represents the rich diversity of humanity.

About the Author

Tim Unwin

Professor Tim Unwin CMG is UNESCO Chair in ICT4D at Royal Holloway, University of London
