The five most common myths of responsible AI

Responsible AI is critical in building trust in what is quickly becoming an everyday tool

Gulf Business

Responsibility is the central tenet of a functioning society. The UAE was a world leader in Covid response because the government ensured we all wore masks, stayed a safe distance from one another, worked from home where possible, and got vaccinated when vaccines became available. A shared sense of responsibility got us through one of the darkest times in living memory, so it is appropriate that we examine the technologies we use daily through the lens of responsibility.

Artificial intelligence, in particular, has gathered steam across the region. PwC projects that AI will add $320bn to Middle East GDP by 2030, with countries’ annual growth in AI spending reaching between 20 per cent and 34 per cent. The UAE is predicted to lead its regional peers in the proportion of GDP (around one seventh) accounted for by AI. Other research suggests that, by 2035, AI in the banking, financial services and insurance (BFSI) sector alone will add $37bn to UAE GDP. The country’s leadership should come as no surprise, given its pioneering move to appoint the world’s first minister of state for artificial intelligence back in 2017.

By now you have probably heard the phrase ‘responsible AI’, which refers to the practice of designing and implementing AI systems so that they enhance societal impact and minimise prejudicial and negative outcomes. Responsible AI is critical in building trust in what is quickly becoming an everyday tool, so problems of data bias and harmful outcomes need to be addressed properly. And they need to be addressed now, before they become so ingrained that change is prohibitively expensive. Responsible AI does not yet have a formalised rulebook; instead, different companies have published their own whitepapers on what it means to them. Given this somewhat informal status, it is inevitable that myths have sprung up around responsible AI. Here are five such myths.

Responsible AI is purely about ethics
The reality is more pragmatic than a code of moral conduct alone, because many negative outcomes in AI stem from the best of intentions. Responsible AI must encompass both intentions and accountability. At the testing stage, implemented models should behave as they were designed to behave. While this may seem an obvious requirement of any system test, testing an AI solution must also examine the data used for the test, not just the behaviour of the algorithms.

Data must come from compliant and unbiased sources, and testing should include multiple parties from a range of disciplines. Each of these parties, even the ones who had no part in the creation of the system, should be able to understand the results. The process should also include transparent oversight of the collection and use of data, especially as it relates to the overall result, which can have real-life consequences for someone applying for a loan or insurance, for example.
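To make this concrete, consider a loan-approval model. A minimal, illustrative check of its test results, sketched here in Python with hypothetical column names such as ‘applicant_group’ and ‘approved’, might compare approval rates across groups and flag large gaps for review:

import pandas as pd

# Hypothetical test results for a loan-approval model; column names are illustrative.
results = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B"],
    "approved":        [1,   1,   0,   1,   0,   0],
})

# Approval rate per group in the test set.
rates = results.groupby("applicant_group")["approved"].mean()
print(rates)

# A simple disparate-impact ratio: the lowest group rate divided by the highest.
ratio = rates.min() / rates.max()
print(f"Disparate-impact ratio: {ratio:.2f}")  # values well below 1.0 warrant a closer look

A check like this does not prove a system is fair, but it gives every party at the table, technical or not, a result they can understand and question.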

Responsible AI can be delivered by tools alone
Tools can go part of the way to delivering responsible AI. But even MLOps — which does for machine learning lifecycle management what DevOps does for more conventional coding — requires teams to work together and build a new culture, with new governance standards and new workflows. These amended processes will be designed to accommodate different risk levels associated with factors such as end users, possible outcomes, and business impact. Tools can only help with these approaches once the approaches themselves have been designed.

Incompetence and malice are the sole causes of issues
Some of the largest technology companies on the planet have suffered public scandals due to the misapplication of AI principles, and post-mortems showed no evidence of either sabotage or serious errors in design. Instead, a lack of intentionality and accountability along the project lifecycle led to unforeseen consequences. Behind many of the news headlines are simple cases of biased or unrepresentative data. If it can happen to the big players, it can happen to anyone.

AI regulation will have no impact
The UAE and other GCC nations have introduced many regulations in the past, particularly around data residency. The UAE National Strategy for Artificial Intelligence mentions responsible AI prominently. And businesses here that deal with the European Union must already comply with the EU’s General Data Protection Regulation (GDPR), adopted in 2016, which touches on many of the elements of responsible AI.

As we can see, regulation has already had an impact. To benefit from AI, regional organisations must balance their own transformation journeys with emerging standards, as it is all but guaranteed that governments will move to formalise them soon.

Responsible AI is for technologists to solve
Given the scale of adoption, this notion will soon be impractical. For many adopters, it may already be so. Domain experts are needed to understand the underlying business problem and the nature of the data. Data scientists and other technical staff understand the tools and the solutions that have been built. And decision makers understand wider business goals and can make judgment calls on regulatory and other matters. Each of these groups is needed precisely because none of them understands all aspects of an AI project on its own.

Ends and means
When implementing responsible AI, we need to assess risk and build trust. Everyone across the company needs to play a part in the culture change that is to follow. Infrastructure, software, people and more must unite in a seamless hybrid that sees all the angles and adds value inside and outside the organisation.

Sid Bhatia is the regional vice president, Middle East and Turkey at Dataiku
