University of Pretoria Department of Philosophy academic Professor Emma Ruttkamp-Bloem has been elected as the chairperson of the United Nations Educational, Scientific and Cultural Organisation (Unesco) Ad Hoc Expert Group (AHEG) on Artificial Intelligence (AI), which will formulate a recommendation for the first global standard-setting instrument on the ethics of AI.
The group comprises 24 renowned specialists with multidisciplinary and pluralistic expertise on the ethics of AI, and has been appointed by Unesco for a 24-month period. The AHEG will formulate a recommendation following the decision of Unesco’s General Conference in November 2019. Its work began in March.
Ruttkamp-Bloem is also a member of Unesco’s World Commission on the Ethics of Scientific Knowledge and Technology (Comest) and the African Union High Level Panel on Emerging Technologies. Her work on these platforms focuses, among other things, on developing AI for the growth and benefit of humanity.
She explains that the absence of a global instrument for the ethics of AI is not so much because it is uncharted territory.
“It is more the nature of AI as a disruptive technology, the complexity of its impact on core sectors such as civil society, the future of work, security and surveillance, the financial sector, and education, and the difference in AI readiness of countries across the globe, that has contributed to the difficulty around formulating a global instrument,” she says.
While she is not yet at liberty to say specifically what issues the AHEG is considering, she cautions that the general issues facing the development of AI are complex and include real threats, ranging from transgressions of the right to privacy to the security risks posed by the possible deployment of lethal autonomous weapon systems. Other threats relate to concerns around bias, transparency and accountability in the context of automated decision-making systems.
“Fairness usually refers to structural bias present in data, which is sometimes inadvertently, and sometimes deliberately, exacerbated by machine learning processes. Think here of gender-, race-, ethnicity- or age-related bias, as examples. Transparency refers to making evident the processes of the system and links closely to issues of explainability.
“A simplified way to think about it is that transparency relates to understanding how machine learning systems are designed, developed and deployed, while explainability relates to understanding the outcomes of these systems,” she notes.
Further, accountability relates to ascribing ultimate human responsibility for the outcomes produced by machine learning systems, and to the auditability and traceability of the workings of such systems, as well as to ethical questions such as whether blanket disclaimers should be allowed for this kind of technology. Much is already being done across a wide array of platforms to mitigate these threats and challenges.
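The structural bias Ruttkamp-Bloem describes can be made concrete with a small, self-contained sketch (not drawn from the article): comparing the rate of positive outcomes an automated decision-making system produces for different groups. The group labels and decision data below are entirely hypothetical, and the "demographic parity gap" is just one of many fairness measures used in practice.

```python
def selection_rates(decisions, groups):
    """Return the fraction of positive (1) decisions per group."""
    counts = {}
    for decision, group in zip(decisions, groups):
        seen, positive = counts.get(group, (0, 0))
        counts[group] = (seen + 1, positive + decision)
    return {g: pos / n for g, (n, pos) in counts.items()}

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical outcomes from an automated decision system for two groups.
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

A gap of 0.5 here means group A receives positive decisions at a rate 50 percentage points higher than group B — exactly the kind of disparity that auditing and traceability requirements are meant to surface.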
Ruttkamp-Bloem says her vision is for the group to contribute to a global instrument that will ensure humans remain at the centre of interactions with AI technologies. She also stresses that humans should take up this opportunity with integrity and responsibility and not just take it for granted.
“The challenge is to become the best humans we can be very fast.
“Above all, AI technologies should enhance human flourishing and peace and harmony and protect human rights. I see the most important objective of the work of the bureau as continuously striving to find the most efficient ways in which to include and reflect the expertise and contribution of every member of the AHEG throughout this process we have embarked on, in order to ensure the content of the recommendation is as rich and balanced as possible,” she says.