Microsoft taking principles-based, collaborative approach to responsible AI

The Microsoft logo outside its head office. Photo by Microsoft

4th April 2024

By: Schalk Burger

Creamer Media Senior Deputy Editor


Information technology multinational Microsoft has set out principles for the responsible use of artificial intelligence (AI) and has developed the second version of its responsible AI standards that govern how the company approaches AI.

This is because AI can have beneficial or harmful impacts, raising ethical questions comparable to those surrounding the splitting of the atom, said Microsoft South Africa principal corporate counsel and attorney Theo Watson.

"[The world and companies like Microsoft] need to be responsible in how AI is developed, deployed, accessed and used. Securing the future that we want can only happen if we build something of value that is trustworthy," he said during an April 3 media briefing.

To ensure the responsible development and use of AI, Microsoft is taking a multidisciplinary and multistakeholder approach within its corporate operations and with its customers, researchers, academia, industry stakeholders and governments.

Building an ecosystem around responsible AI requires that all layers of organisations help to drive the standards and responsible AI use. It must be driven from the top by executives and leadership, as well as from the bottom by developers and customers, said Watson.

At Microsoft, the executive leadership is responsible for developing a responsible AI framework and standards. This responsibility has been devolved to the Office of Responsible AI, under which sit the research, policy development and engineering teams, he pointed out.

"In terms of engineering, for example, it is not just what tools we use to develop responsible AI, but it is about ensuring that we bake responsible AI into every AI product and service so that we have an end-to-end responsible AI approach," he said.

The standards, of which the Office of Responsible AI has developed a second iteration, operationalise how Microsoft approaches AI development and deployment.

They are based on the principles for responsible AI that Microsoft has adopted, namely fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability.

"Most of these principles are self-evident, but they also go to the heart of some of the challenges, such as the ability to access and use AI systems, in terms of inclusiveness.

"However, the principles of transparency and accountability are the main overarching principles. Being transparent on how we are developing and using AI and being accountable for AI products and services is incredibly important," Watson said.

Microsoft CEO Satya Nadella had previously said that "businesses and users are going to use technology only if they can trust it".

The standards operationalise the principles. For example, when building an AI product or service, Microsoft teams must set the goals for the project and determine how AI can be used to meet them. The teams must also determine whether the AI systems need to be auditable, such as for use in highly regulated industries, and how datasets are reviewed to ensure they are unbiased and inclusive, Watson explained.

"It all has to do with implementation and oversight [of responsible AI principles and standards]," he emphasised.

Additionally, neither the principles nor Microsoft's approach to them is set in stone; the company continuously tests the principles and critically reviews the standards, Watson added.

"We want to broadly share with our customers and ensure stakeholders have the ability to access our AI. In this regard, we have set out several goals in terms of accessibility, as well as on enabling AI innovation and fostering competition," he added.

In addition to the access commitments, Microsoft has also made AI customer commitments, under which it broadly shares its expertise with stakeholders. It also has an AI assurance programme in place, which focuses particularly on highly regulated sectors.

"We are bringing our knowledge to bear in the ecosystem and are supporting our customers with our broad resources across the globe," he said.

Further, Microsoft has advanced an array of AI partnerships, not only with users but also across the building blocks of the technology, including the suppliers that make chips for AI and the data centres that provide access to AI, as well as with customers, communities and countries.

"The goal of AI use is to improve the world around us, and we must be proactive and constructive in addressing concerns," he advised.

In November last year, 28 countries and the European Union signed the Bletchley Declaration on AI safety, underscoring the importance governments place on ensuring that AI is developed, built, deployed and used in a way that is responsible and safe, Watson said.

To secure and use AI in a responsible way, the world must describe where it wants to go. Therefore, there is a need for a global conversation, without which there will not be enough observers to identify the issues that must be dealt with when it comes to AI, he said.

The goal is to reach a global consensus on AI. A global approach to AI should be modelled on the approaches used to manage scarce resources, such as water, and to manage environmental imperatives, Watson recommended.

"The new AI economy will be built on trust. We want AI to be used to optimise what we as people are doing. An important question, therefore, is not only what AI can do for us, but also what it should do," he said.

The company is committed and determined to deploy AI in a safe and responsible way. The concept of guardrails is important in this regard, and Microsoft has benefitted from conversations with industries, academia and civil society, said Microsoft Africa government affairs director Akua Gyekye.

"These conversations are part of the blueprint we are using to ensure the technology we are creating stays safe," she said.

For example, some AI systems require safety brakes, such as those used for sensitive operations or for critical infrastructure. Part of these conversations is to define the class of high-risk systems, such as energy, water, emergency response and healthcare, among others, into which safety brakes must be built by developers and monitored by operators to ensure that the systems remain under human control, she added.

Similarly, the conversations with governments, industry, stakeholders and citizenry focus on how to promote transparency, said Gyekye.

"The public has access to information on what we do, as do those we work with. This is part and parcel of how we can build AI systems in ways that benefit everyone. Additionally, we have to think about why we want to use AI systems and must deploy them in partnership to tackle big challenges, such as social and development challenges."

Specifically, the United Nations Development Programme partnered with Nigeria to convene an AI development reference group to ensure AI development in Africa is multistakeholder and multidisciplinary, and that all disciplines form part of the conversations to help shape country AI policies, she said.

The African Union also convened experts to support the continent and its countries to develop AI policies and regulations that ensure that AI use is responsible, safe and beneficial, she added.

"AI can help Africa to seize various opportunities and grow economies, and thereby benefit from the new era in technology. While the potential economic benefits of capturing global AI markets are significant, the main conversations are about using AI to transform how governments engage with citizens and how we can use AI to address problems we are facing.

"Embedding responsible AI principles and standards into how we build and deploy systems is important to ensure that we maximise the benefits from the technology and limit the potential harms" Gyekye emphasised.

Edited by Chanel de Bruyn
Creamer Media Senior Deputy Editor Online
