Diversity by design: mitigating gender bias in AI

Digital World 2019 Daily Highlights Day 3

As the presence – and influence – of AI systems continues to grow throughout society, how can we ensure that we do not increase digital gender inequality? Can we design with diversity, and can we remove bias?

These were the questions explored by a diverse expert panel and an engaged audience, moderated by Tim Unwin, Emeritus Professor of Geography and Chairholder of the UNESCO Chair in ICT4D at Royal Holloway, University of London. And the first surprising answer was that many in the room were not initially aware that gender bias in AI existed.

Bringing home the extent of the issue for the first time, despite AI’s increasing ubiquity across all sectors and countries, Allana Abdullah, CEO of Bahaso, cited the example of Japan, where robots are replacing women in many jobs, from secretaries to sex workers. Her own team at Bahaso may be diverse, but its ICT team included only one woman out of thirty – not because women are not capable, but because they did not apply; perhaps, Unwin suggested, because job adverts are off-putting and gendered in their language, framing and imagery.

Mastercard Chief Privacy Officer Caroline Louveaux explained how the company had invested heavily in AI to secure its trusted status by modelling and depicting scenarios, but was very aware of its responsibilities in ensuring AI was used for good. “We need to make sure our AI tools don’t discriminate on any basis, gender, race, religion and so on, to ensure fairness of decisions,” she said, noting that discrimination of this nature could deprive people of key benefits, lines of credit and access to the online economy.

Responding to the pervasive gendering of voice assistants and AI conversation, which plays on the traditional trope of women in service roles, Coral Manton, Researcher and Developer at Women Reclaiming AI, spoke of the need to provide different voices and personalities. Even if women themselves often feel more comfortable with other women in caring or service roles, there is no need to keep adding more bias to these systems – we can change.

Women need to feel more comfortable with AI, she continued, pointing out that depictions and images of AI are often “blue, masculine, mythical, godlike,” creating a barrier to participation. Transparent, cooperative and collaborative teams are the way forward.

One step on that journey to transparency is developing AI governance models from the top and ensuring the right policies and frameworks are in place, said Louveaux. The process of ethics by design and AI impact assessment focuses on avoiding discrimination and doing no harm.

“Within digital transformation, we recognize that emerging tech should be an equalizer, not something that differentiates even more,” said Martin Koyabe, Head of Technical Support and Consultancy at the Commonwealth Telecommunications Organisation (CTO). It is the data fuelling AI that brings the bias, so empowering women throughout the tech industry, and specifically within the AI sector, can help ensure that bias in data is reduced or eliminated.

Abdullah agreed: “Behind AI are the people working on it, so from the bottom up we have to start with women in research and strategy, implementing and building it, and being CTOs.” Women have to be present and represented throughout the process, not just at the top, to bring equality.

Education has a central role to play, all the panelists agreed: bringing girls and women into tech from a young age, and showing what AI is in detail and in concrete applications, not just in the abstract concepts of machine learning. Women need to genuinely inhabit positions in tech, providing role models for others to see and admire. Capacity building is key to making a difference – both reaching out to, training and recruiting women, and building awareness of AI’s capabilities for both good and bad.

Transparency and accountability for the data behind AI are critical to reducing bias, but very difficult to govern or enforce. There is a lengthy inventory of different biases which can be identified, documented and used to define parameters, clean data and make sure models function as intended.
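As a rough illustration of what auditing data for one such documented bias might look like, the minimal Python sketch below compares positive-outcome rates across genders in a toy table. The column names ("gender", "approved") and the data are purely hypothetical, and the check is just one simple measure, not any panelist’s method.

```python
# A minimal sketch, assuming a hypothetical tabular dataset with
# "gender" and "approved" columns; it shows one simple, documented
# bias check - comparing selection rates across groups.
import pandas as pd

def selection_rate_gap(df: pd.DataFrame,
                       group_col: str = "gender",
                       outcome_col: str = "approved") -> float:
    """Print the positive-outcome rate per group and return the largest
    gap between groups, one common signal that the data may carry bias
    worth documenting and investigating before a model is trained."""
    rates = df.groupby(group_col)[outcome_col].mean()
    gap = rates.max() - rates.min()
    print(rates)
    print(f"Largest selection-rate gap between groups: {gap:.1%}")
    return gap

if __name__ == "__main__":
    # Toy, invented data standing in for something like a credit-decision dataset.
    data = pd.DataFrame({
        "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
        "approved": [0,    1,   0,   0,   1,   1,   0,   1],
    })
    selection_rate_gap(data)
```

In a real audit, a gap like this would be one of many recorded measures used to decide whether data needs rebalancing or cleaning before training.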

AI reflects society, argued Manton, so how can we mitigate the bad things that happen in society? AI will provide many solutions, but we must avoid seeing it as godlike, as unquestionable. “Technology is seen as truth, and AI even more so. Whenever we build a data asset, we have to interrogate it, to look very critically, to have transparency.”

Abdullah called for the ITU to take the initiative in promoting AI technology and awareness equally throughout the world, preventing gender bias, inequality and abuse, and promoting AI for good on a massive scale.

Governments need to take the initiative in discussing AI issues, stressed Koyabe, putting in place AI frameworks relevant to the specific local context. It should be on the national agenda, irrespective of the differing stages of AI deployment in different countries, and should be brought to the level of international bodies such as the ITU to give it more traction in the decision-making process.

“If we want AI to deliver its true promise and mitigate gender inequality, no one can do it alone,” said Louveaux. “There is no magic solution – we have to learn from each other, share practices and think together about how to develop the environment and mitigate bias and gender inequality.”


About the Author

Digital World

Accelerating ICT innovation to improve lives faster. The global event for SMEs, corporates and governments.
