AI risks must be mitigated to realise benefits of the technology

19th May 2023 By: Schalk Burger - Creamer Media Senior Deputy Editor

Artificial intelligence (AI), as part of the growing use of technology that is reshaping society and how people work and live, is set to become more pervasive and infuse many systems with intelligence to improve their performance.

However, AI systems are trained on data generated by humans and, in many of their applications, reflect the biases contained in those datasets, said Council for Scientific and Industrial Research Data Intensive Research Initiative of South Africa (DIRISA) director Dr Anwar Vahed.

Further, AI can also be used for unethical and illegal purposes, such as producing convincing phishing emails to perpetrate fraud or generating targeted disinformation to sway public opinion.

Therefore, the risks of AI need to be minimised, mitigated and balanced against the benefits that its use would provide, said information and communications technology company EOH Group chief commercial officer Ziaad Suleman during a National Science and Technology Forum (NSTF) discussion on AI on May 18.

The responsible use of AI systems, and the development of AI products designed to be used responsibly, is therefore important, he said.

"This means we have to think about how we use AI and the different sources of data and balance the benefits and risks. We must also collaborate on ways to control the technology," he added.

AI can provide, and is providing in many circumstances, tremendous benefits across multiple industries and sectors, he emphasised.

There is a generational shift taking place in AI, as well as a generational shift in the use of technology, he added.

"The variety, volume and velocity of data is changing, and AI can help to digest and make large datasets understandable. Being able to understand large datasets is important to support decision making processes, but the final decisions must be made by people," Suleman noted.

Society and businesses will need some level of AI to navigate this data-intensive world, but AI must be applied in a manner that is useful and beneficial. The world is dependent on technology to create more efficiencies in people's lives and business processes, he added.

"AI will bring substantial disruption and will impact on all industries and our whole system of production. Therefore, we have to manage its governance to ensure its outputs are useful for the intended applications.

"AI-supported systems can dramatically improve or change the way people and industries approach problems. For example, when used as part of an Internet of Things sensors water infrastructure monitoring system, AI can enable leak detection or predict failures based on changes in the sensor data. An engineer at his desk can then proactively schedule maintenance while ensuring that disruption is minimised," Suleman illustrated.

There are many applications of AI that can benefit human beings, but we must ensure humans work closely with machine systems to drive better outcomes, he said.

This collaborative approach will eventually enhance automation, productivity, supply chains and economic growth across multiple industries and sectors.

"Research by global organisations show that companies achieve greater growth as a result of the suitable use of technology. This is not only owing to changes in economies, but also because people are able to drive better outcomes through existing processes in a shorter time," he highlighted.

Crucially, the impact of AI requires that companies and countries ensure that people are reskilled and upskilled to be in a position to use AI and other new technology to do their work and live their lives. Reskilling and upskilling will be important for any economy, Suleman added.

"If we do not have the right governance and rules about how we develop and use AI and algorithms, then they will make the wrong interpretation of the data and deliver the wrong output and move us in the wrong direction.

"People and AI need to work together responsibly. The risks of the applications of AI is how we use the outputs, and that remains the responsibility of people," he said.

There are significant issues concerning the ethics and ethical use of AI, and these are being considered by many organisations worldwide, such as the United Nations Educational, Scientific and Cultural Organisation, which have made recommendations, Vahed said.

"There are also privacy issues, and other ethical issues such as using AI for predictive policing and social grading. There are some of the major issues that are already happening and need to be addressed and considered in terms of the developments taking place in AI," he said.

"However, AI is a technology, and its application is the important aspect," he emphasised.

Meanwhile, there is a movement in AI research to create systems whose outputs are more understandable to humans, can be verified post hoc, and whose decision processes can be understood. This is termed explainable AI, said University of Johannesburg Academy of Computer Science and Software Engineering department head Professor Duncan Coulter.

Decisions taken by AI systems will eventually be challenged, and how those decisions were arrived at should be clear. This can be done through decision trees and symbolic regression to show how the system arrived at the answer, he advised.
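The appeal of a decision tree is that every prediction comes with the rule path that produced it. The following sketch illustrates the idea with a toy, hand-written tree; the loan-style features, thresholds and outcome labels are hypothetical, not from any real system:

```python
# A toy decision tree whose every prediction carries the decision path
# that produced it, the kind of traceability explainable AI aims for.
# Features and thresholds are invented for illustration.

def predict_with_explanation(income, debt_ratio):
    """Classify an application and return the decision path taken."""
    path = []
    if income >= 30000:
        path.append("income >= 30000")
        if debt_ratio < 0.4:
            path.append("debt_ratio < 0.4")
            return "approve", path
        path.append("debt_ratio >= 0.4")
        return "review", path
    path.append("income < 30000")
    return "decline", path

decision, reasons = predict_with_explanation(income=45000, debt_ratio=0.25)
print(decision, "because", " and ".join(reasons))
# → approve because income >= 30000 and debt_ratio < 0.4
```

If such a decision is challenged, the recorded path can be inspected and contested rule by rule, which is much harder to do with the weights of a deep neural network.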

AI is a broad term, and some machine learning and neural network techniques are more explainable than others. Their complexity can make their processes difficult to understand and explain.

"One example of unintended bias is when a large company used an AI system to evaluate the [curricula vitae] of people applying to work for it, and the system began discarding the CVs of women because it had been trained on older data in which men predominated as employees," Coulter illustrated.

The bias embedded in the dataset became entrenched in the system and was hidden in its weight matrices, which made it difficult to identify and explain.
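The mechanism behind such hidden bias can be sketched with a deliberately simple scorer that weights CV keywords by how often they appeared among past hires. With a male-dominated hiring history, keywords associated with women end up with lower learned weights; the keywords and data below are invented for illustration:

```python
# Toy illustration of how bias hides in learned weights: keyword
# frequencies from historical hires act as the model's weights.
# The CV keywords and hiring history are invented for illustration.
from collections import Counter

past_hires = [
    ["engineering", "cricket"],
    ["engineering", "cricket"],
    ["engineering", "rugby"],
    ["engineering", "women's"],  # e.g. "women's chess club captain"
]

weights = Counter()
for cv in past_hires:
    weights.update(cv)  # frequency in past hires becomes the weight

def score(cv_keywords):
    """Sum the learned weights of a candidate's keywords."""
    return sum(weights[word] for word in cv_keywords)

print(score(["engineering", "cricket"]))  # 4 + 2 = 6
print(score(["engineering", "women's"]))  # 4 + 1 = 5: penalised by history
```

In a real system the same effect is spread across thousands of learned parameters rather than a visible frequency table, which is precisely why it is hard to identify and explain after the fact.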

AI researchers and developers are devising various ways of making different AI decision processes more explainable, to avoid unintended results and produce useful outputs.