Human at heart: privacy, transparency and accountability in AI

Digital World 2021 Highlights, November

Opening this highly interesting session on the impact of Artificial Intelligence (AI) on humanity, its tremendous potential in meeting major global challenges such as health or climate change, and the fundamental issue of trust, moderator David Kirkpatrick, Founder & Editor-in-Chief of Techonomy Media, asked the panel of experts for a definition. Is AI really just software that can learn or make its own decisions whilst it operates – and why is this technology so different from other types of software?

Defining AI

Contrary to widespread and sometimes frankly frightening media misrepresentations, AI is not a threatening superhuman robot, stated Iveta Lohovska, Principal Data Scientist, HPE. Instead, at heart, “AI is nothing more than complex linear algebra”. Its power is unleashed by combining this linear algebra with extremely powerful computing and enormous data sets fed into complex algorithms. Many people do not understand the technology, which creates fear and mistrust, but the real concerns are privacy and security, not the technology itself.
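A minimal sketch (not from the session; all names and values are invented for illustration) of what this description means in practice: stripped of the surrounding infrastructure, a neural network’s prediction is essentially a chain of matrix multiplications, that is, linear algebra applied to data at scale.

```python
# Illustrative only: a tiny two-layer "neural network" forward pass.
# The apparent intelligence is just matrix multiplication plus a simple
# non-linearity, applied here to randomly invented weights and data.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 4))      # one input example with 4 features
W1 = rng.normal(size=(4, 8))     # "learned" weights, layer 1
W2 = rng.normal(size=(8, 1))     # "learned" weights, layer 2

hidden = np.maximum(0, x @ W1)   # matrix multiply + ReLU non-linearity
output = hidden @ W2             # another matrix multiply
print(output)                    # the model's prediction
```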

Dalith Steiger, Co-Founder and Managing Partner, SwissCognitive, strongly agreed that AI is about how technology can support humankind by accelerating capacity – not about robots. Cognitive technology may be a better way to describe it, she added, avoiding the suggestion, carried by the term “artificial intelligence”, that the technology intentionally mimics the human brain. It is important to understand the principle of the algorithms, but also to see that AI is essentially extremely sophisticated statistics. All statistical decisions depend on accurate and unbiased data. Any bias in this system comes from humans, who can undo that bias once aware of it, as “the algorithm just puts up a mirror.”
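A minimal, hypothetical sketch of the “mirror” point, assuming scikit-learn is available; the data, feature names and injected bias are invented purely for illustration. A model fitted to labels that encode a human bias faithfully reproduces that bias in its learned weights.

```python
# Hypothetical data: historical decisions that penalized group 1 at equal skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)    # a sensitive attribute (0 or 1)
skill = rng.normal(size=n)            # the feature that should matter

# Biased labels: group 1 needed much higher skill to be approved.
label = ((skill + np.where(group == 1, -1.0, 0.0)) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)
print(model.coef_)   # the weight on `group` mirrors the human bias in the data
```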

“We humans are now in a new age,” she added, and we have an opportunity to do more things differently. Every new technology has its downsides, so we need to focus on the positive, take responsibility for the risks and design to avoid known pitfalls. Cognitive technology forces us to think and challenge ourselves, with our core human competency of thinking in an emotional and inclusive way, and “the algorithms are there to support us where human beings are weak.”

For Wojciech Samek, Head of the Department of Artificial Intelligence and the Explainable AI Group at Fraunhofer Heinrich Hertz Institute, what makes AI so special is its big promise in scientific applications. By learning from the complex relationships in genetic, neural or protein expression data, for example, we have a tremendous opportunity to understand the physical and medical mechanisms hidden in the data. Explanation methods provide insights into why AI predicts what it does, allowing us to understand the system and the limitations of current AI models, to debug where necessary and refine solutions.
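A minimal sketch of the kind of insight explanation methods aim for, using the simplest possible case of a linear model rather than any specific method from Samek’s group; the feature names and numbers are hypothetical.

```python
# Illustrative only: for a linear model, each feature's contribution to a
# prediction is weight * input value, which shows why the model scored
# this particular example the way it did.
import numpy as np

features = np.array(["gene_A", "gene_B", "gene_C"])   # hypothetical inputs
weights = np.array([2.0, -0.5, 0.1])                  # a "trained" linear model
x = np.array([1.2, 3.0, 0.4])                         # one example to explain

contributions = weights * x          # per-feature attribution
prediction = contributions.sum()     # the model's output for this example

for name, c in zip(features, contributions):
    print(f"{name}: {c:+.2f}")
print(f"prediction: {prediction:+.2f}")
```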

AI helps to fill the gaps in systems, projects and programmes that were too hard for other technology and which then fell back onto humans, said Zee Kin Yeong, Assistant Chief Executive (Data Innovation and Protection), IMDA (Singapore) and Deputy Commissioner, Personal Data Protection Commission. Bringing in enough data means we can create models on a probabilistic basis to make suggestions and bridge the gap. “It is not a panacea, but has a lot of potential to enable us to overcome what we previously knew were the limits of technology,” he said, mentioning how well optical character recognition now performs following its switch to data-driven models.

“There were walls and dead ends which technology previously could not reach, and which AI allows us to reach now,” he continued, calling for as many companies as possible to understand this and know how to use AI, so that we can all benefit from economic progress.

Use cases

Cognitive technologies can amplify human capacity and improve aspects of life at a macro level, stated Kirkpatrick, but what concrete examples do we have of AI enabling things that can really impact lives?

In health care, AI can save lives and support humans by providing second medical opinions on diagnoses and screening, explained Steiger, as well as reducing costs in the healthcare industry. Developing technology with humans means merging the two for better outcomes, but we must take responsibility for how we design and develop, learning through iterative processes and incorporating ethical aspects.

Lohovska agreed that AI has a critical role to play in tackling the UN’s Sustainable Development Goals, particularly on health through the use of precision medicine. This includes remote sensing, collecting anonymized medical data and imaging to enrich data sets and create a narrative at micro or macro level that can make a huge difference. Using AI in this way is where we should focus our energy, rather than on marketing, she pointed out. Much like the internet, AI technologies can be used for good or for bad, but as long as, on average, the positive outweighs the negative, we are on the right track.

A prime example of the success of AI in healthcare, panellists agreed, was the swift development of COVID-19 vaccines. Building on the breakthroughs and experiences of earlier programmes such as those for Ebola, together with advances in genomics and pharmaceuticals, AI was used to design a COVID-19 vaccine within two weeks, with the rest of the development time spent on trials and approvals. Progress here is exponential, not linear. “The combination of different technologies,” highlighted Kirkpatrick, “is where we get the most societal power for progress; there are so many areas where technology is advancing at the same time, and these technologies work in tandem.”

The power of combined technologies is also evident in precision agriculture, where smart sensors using AI and IoT are deployed in remote locations from which no information is otherwise available. The data provided on the evolving situation on the ground – the impact of climate change or water scarcity, for example – enables resources to be adjusted and distributed very specifically. Precision agriculture is already happening across much of Asia and sub-Saharan Africa, explained Lohovska, deepening our understanding of how crops respond to changing environmental factors. We have sufficient proof of the benefits of the technology from initial use cases and “we now need to scale this to a level where humanity in general can benefit, not just small groups,” she added.

The massive set of climate problems we are facing as a planet can also be better addressed with AI tools. “AI is an enabler of technologies on a macro level,” said Samek, creating new business insights and making life easier. Climate research is very complex, and AI can help us understand the processes involved, provide better decision-making tools, and manage and analyze major data sets.

Alongside new business models and augmenting human capacity, Steiger stressed the need to discuss the opportunities opened up for disabled people by technological advancement. For the first time, people with disabilities can join the workforce and be fully integrated into society – and we must take this important advancement into account when discussing the threat to jobs AI might pose in other areas.

Trust issues

Yeong explained that the commonsensical elements of consumer trust – based on experience with a product, service or company, meeting customer expectations, the reputation of the company and its engagement with its customers – also apply to trust in AI products and services. Once trust is established, consumers will buy and use AI products, enabling companies to invest further in development of products or implementation in operations and processes to deliver better services, in a virtuous circle.

Ensuring good governance structures, with the right decisions made at the right level within companies, is therefore critical to maintaining company reputations and establishing trustworthy AI. To work correctly, the AI model then requires good quality data which comes from the right sources, is sufficiently representative, and is monitored in its development. The end user is then given a trusted, rich and fulfilling experience, with reasonable rather than radical recommendations. Communication with the customer is also important, providing the right amount of information at the right time. Transparency and explainability are high-level concepts: consumer trust comes down to providing sufficient information to convince a user to buy the product or service; providing enough information to explain how the product works or how recommendations are made; and having the opportunity, where a decision affects the user, to understand why it was made and to challenge it if unhappy.

The technological, legal and social aspects of an AI product or service must all be considered to reach unified agreement on trustworthy AI, said Lohovska. This involves building definitions of trustworthiness in different governments, societies and communities – and then building ethical principles around them. As a global technological player, HPE takes into account the principles of privacy and security; a human focus, in terms of compliance with law enforcement and keeping the individual in the loop; inclusivity; responsibility and robustness; and embracing good biases whilst minimizing bad biases, as sometimes embedding biases in the data set or algorithm will produce the best outcome. It is helpful to apply lessons learnt in building software to the complexity of AI systems rather than reinventing everything.

Cause for concern?

“AI gives us a fundamentally new set of capabilities for manipulating data, with ramifications in every realm,” said Kirkpatrick. Perhaps, however, the gap between ordinary citizens and those who create and apply AI technology is too large, particularly given how control of, and access to, giant data sets is concentrated in the hands of a very limited number of major global companies.

For Lohovska, this is not a major concern. There are software patches to improve biases or cyber security, but not for ignorance – here, the only solution is education, she stressed. We need to be fully informed on the topic, on which threats are real and which are hyped by the media, and better understand the issues of data privacy and data security in AI being addressed by governments and corporate initiatives. The scale of change may be frightening, so big corporates, governments and civic communities should make people aware of the measures and techniques that can be used within AI, balancing the need to regulate for privacy and security against allowing for innovation and growth.

AI offers many new services to help us and make life easier, agreed Samek. International initiatives to create certificates of trustworthiness are important to build the trust that AI models are working as expected in sensitive applications such as healthcare. We can also reuse or repurpose procedures and concepts established in other fields, such as drug design, to create trust in technology and demonstrate its reliability. These factors are important in AI, yet “people should not worry about it, but should seize the opportunity offered.”

It is a foregone conclusion that our lives will be affected in positive ways by AI, said Yeong. Our dependence on AI will grow as it becomes more and more convenient, and we must be aware of how this very convenience can restrict our options. If we rely on social media for our news, for example, we are limiting ourselves and our exposure to the world. We need to understand and correct our behavior or change our habits as necessary. Learning to live with AI as a tool means being able to edit the recommendations AI provides or reconfigure AI tools to change future recommendations. Knowledge of how AI works and how we can best use it is important to avoid becoming overly reliant on it – it should serve us as the end users.

The panel agreed on the need for active discussions on giving agency in AI systems to ordinary citizens, providing them with more knowledge and control. Creating awareness of cognitive technologies and concrete use cases, and explaining their use in simple language, will bring more people on board.

This applies as much to policy makers using AI in government as to businesses implementing it in their projects or consumers making use of it in their daily lives. There is no substitute for the learning effect of first-hand personal experience of AI, its benefits and limitations, added Yeong.

Ways forward

“AI will be more and more a basic infrastructure like electricity, across all industries, commodities and technologies,” said Steiger, pointing out that it will increasingly be used in combination with cybersecurity, blockchain and other developments.

From a macro perspective, it is absolutely crucial for as many companies as possible to understand and know how to use AI to enable us to benefit from the economic progress it promises, said Yeong. Tackling the obstacles of consumer fear, uncertainty and lack of trust in AI means investing in developing technologies, skilled engineers and project managers to understand its strategic importance – and communicate it through “trustworthy AI and public awareness programmes to demystify AI.”

At governmental level, there are increasing numbers of national strategies to reap the benefits of AI, with international organizations providing guidance and support to policy makers on where to invest in research, education and civil society. AI technologies are seen as critical for delivering public services within the digital economy, but governments must drive awareness to ensure AI is more widely implemented throughout the private sector.

“Today AI is a huge collection of narrow models trained to do specific things on different data sets,” said Yeong, and we need to see how we can bring this together and reach as many companies as possible to promote the use of AI in the economy. Establishing consumer trust in AI is imperative.

“The more we know, the better we can be supported by AI,” said Steiger, calling for maximum openness and data sharing, balanced against privacy concerns, for the wider good of society. Only then can data be inclusive, diverse and unbiased – and AI ethical.

Large tech companies such as HPE must be involved in discussions on the complexity and ethics of AI, as the solutions and products they develop have a huge impact on so many individuals around the world, added Lohovska.

Closing thoughts

It is very important for us all to be more aware of the degree to which these new cognitive technology systems are affecting our lives, given their power and potential, stated Kirkpatrick.

For Samek, “what is important to make progress in the field of AI is collaboration,” establishing transparency by providing code, open-source and model initiatives in research as well as open data, and speeding progress by reusing models and data for different purposes.

At a human and societal level, we should focus on building “data-native communities who are numerate and can understand the concepts,” said Lohovska, to challenge technological organizations with different perspectives. At the individual user level, we need to look more closely at the terms and conditions of products and services using AI to understand what we are agreeing to, and the trade-off between our data and the services we are using.

Addressing the concern that an AI gap may grow between developed and developing nations along the lines of the digital divide, Yeong pointed out that volume of data is what is important for AI development, so the key is to get the technology into as many hands as possible. As so many AI models are open source, given sufficient data sets it should be possible to create a start-up culture in developing countries, with support and training assistance provided by developed countries.

A final thought from Steiger closed the session: “We do have the emotional intelligence of the human being and the rise of AI, so we are talking about AI and the human being together. We have to shift from technologically literate people to people-literate technology.”

About the Author

Digital World

Accelerating ICT innovation to improve lives faster. The global event for SMEs, corporates and governments.
