
The importance – and the challenges – of cybersecurity

There is no doubt that the near-instant ability to access or process data is completely changing the way we work, consume, organise and socialise. More and more of the global population are becoming connected to the internet, giving them access to goods, services and information on demand. Whilst the growth of connected users has been rapid, corresponding efforts to understand the data and security landscape have been slower. In addition, while easier access to data, goods and services is good for users and customers, it also makes it easier for those who want to steal data, disrupt services or commit other types of criminal activity online. The more innovation and technology are enabled, the more risks are posed to customers, businesses and organisations.

One of the ways to tackle this is at the national and international level. The European Union has been active in recent years with two initiatives aimed at making its citizens more secure. The General Data Protection Regulation (GDPR) aims to ensure that personal data is given sufficient protection, with significant financial penalties for organisations failing to comply. One of the benefits to consumers of this regulation is that it has raised awareness that their personal data belongs to them, and that they have certain rights over how that data is stored and processed. It has also introduced a compliance burden on companies and organisations. This means that regulation must be designed to balance data protection against the implementation and operating burdens it places on companies and organisations.

The Directive on security of network and information systems (NIS Directive) is the first piece of EU-wide legislation on cybersecurity. It provides legal measures to boost the overall level of cybersecurity in the EU and targets critical national infrastructure in two areas: Operators of Essential Services established within the EU, and Digital Service Providers that offer services to people within the EU. It is successfully raising the profile of what constitutes critical national infrastructure, and is both compelling and assisting those affected to improve security as society becomes more connected.

Another option is to target selected industries and demonstrate the benefits of solving these issues through a common approach; as long as those benefits can be shown to outweigh the challenges, securing buy-in from companies and organisations becomes much simpler. The usual challenges of different appetites, working cultures and even time zones can be overcome with a clear strategy. More difficult challenges include:

  • Building relationships between organisations that may well be in direct commercial competition with each other.
  • Understanding and absorbing different regional regulations.
  • Demonstrating a return on time and cost.
  • Avoiding breaches of regulation, such as competition (anti-trust) rules.

However, for organisations that collaborate to build models that address and implement regulations and standards, the benefits can include:

  • A shared cost burden.
  • Reduced friction between organisations that use similar business and operating models – this is particularly important in international supply chains, or where several companies are involved in delivering a single product.
  • Strength in numbers, whereby organisations and companies can communicate challenges to regulators with a single, unified voice.
  • Identifying common risks and addressing them in a uniform manner.

During the panel session on “Regulating the future: safe, inclusive, connected” at ITU Telecom World 2019 in Budapest this September, I will address these challenges and opportunities in more detail and talk through some of the ways to effectively enable industries to solve these issues. I look forward to a lively and interesting discussion!


The gendering of AI – and why it matters

Digital technologies are all too often seen as neutral and value-free, with a power of their own to transform the world. However, even a brief reflection shows that this taken-for-granted assumption is fundamentally flawed. Technologies are created by people with very specific interests, who construct or craft them for particular purposes, more often than not to generate profit. These technologies therefore carry within them the biases and prejudices of the people who create them.

This is as true of Artificial Intelligence (AI) as it is of other digital technologies, such as mobile devices and robots. Gender, with all of its diversity, is one of the most important categories through which most people seek to understand the world, and we frequently assign gender categories to non-human objects such as technologies. This is evident even in the languages that we use, especially in the context of technology. It should not therefore be surprising that AI is gendered. Yet, until recently, few people appreciated the implications of this.

The AI and machine learning underlying an increasing number of decision-making processes, from recruitment to medical diagnostics, from surveillance technologies to e-commerce, are indeed gendered, and will therefore reproduce existing gender biases in society unless specific action is taken to counter them. Three issues seem to be of particular importance here:

  • AI is generally used to manipulate very large data sets. If these data sets are themselves a manifestation of gender bias, then the conclusions reached through the algorithms will also be biased (a minimal illustration follows this list).
  • Most professionals working in the AI field are male; the World Economic Forum’s 2018 Global Gender Gap Report reports that only 22% of AI professionals globally are women. The algorithms themselves are therefore being shaped primarily from a male perspective, overlooking the potential contributions that women could make to their design.
  • AI, rather than being neutral, is serving to reproduce, and indeed accelerate, existing gender biases and stereotypes. This is typified by the use of female voices in digital assistants such as Alexa and Siri, which often carry negative or subservient associations with women. A recent report by UNESCO for EQUALS, for example, emphasises that those in the field need to work together to “prevent digital assistant technologies from perpetuating existing gender biases and creating new forms of gender inequality”.
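
To make the first of these points concrete, the sketch below is a minimal, hypothetical illustration in Python (using numpy and scikit-learn on entirely synthetic data; every name and number is an assumption, not a reference to any real data set) of how a model trained on biased historical decisions reproduces that bias even for equally skilled people.

```python
# Minimal sketch: a model trained on biased historical outcomes reproduces the bias.
# Synthetic, illustrative data only; numpy and scikit-learn are assumed to be installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical historical hiring records: gender (0 = men, 1 = women) and a
# skill score drawn from the same distribution for both groups.
gender = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Biased historical decisions: men were favoured regardless of skill.
hired = (skill + 0.8 * (gender == 0) + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Train on the biased labels, then score two equally skilled candidates.
model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)
candidates = np.array([[0, 0.0],   # man, average skill
                       [1, 0.0]])  # woman, average skill
probs = model.predict_proba(candidates)[:, 1]
print(f"Predicted hire probability: man {probs[0]:.2f}, woman {probs[1]:.2f}")
```

Because the historical labels favoured one group, the trained model gives a lower score to an identically skilled candidate from the other group; the disparity comes from the data the model learned from, not from any explicit intent in the algorithm.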

These issues highlight the growing importance of binary biases in AI. However, it must also be recognised that they have ramifications beyond the binary, intersecting with the nuanced and diverse understandings of gender associated with those who identify as LGBTIQ. In 2017, for example, the Human Rights Campaign (HRC) and GLAAD criticised a study claiming to show that deep neural networks could correctly differentiate between gay and straight men 81% of the time, and women 74% of the time, on the grounds that it could put gay people at risk and made overly broad assumptions about gender and sexuality.

The panel session on “Diversity by Design: mitigating gender bias in AI” at this year’s ITU Telecom World in Budapest (11 September, 14.00-15.15) is designed specifically to address these complex issues. As moderator, I will be encouraging the distinguished panel of speakers, drawn from industry, academia and civil society, not only to tease out these challenging issues in more depth, but also to suggest how we can design AI with diversity in mind. This is of critical importance if we are collectively to prevent AI from increasing inequalities at all scales, and to ensure that in the future it more broadly represents the rich diversity of humanity.