The path to responsible AI

Artificial intelligence (AI) solves real-world problems. We know this. We have seen it. Last year, we saw droves of regional businesses move to the cloud and, once there, realise that scalable, affordable smart technologies were within reach. Proofs of concept quickly followed, as did several success stories. And then, as AI grew in popularity, a concept that had largely been the subject of conversation among tech experts began to go mainstream. That concept is “responsible AI”.

Hundreds of billions of dollars in commercial AI revenue are expected to flow to the Middle East by 2030, contributing heavily to double-digit GDP growth, with the UAE reaping the most benefits, followed by Saudi Arabia. GCC nations have been pioneers in artificial intelligence, and the UAE has been a trendsetter among them. The country was the first in the world to appoint a minister of state for AI, and in October 2017 it launched the UAE Strategy for Artificial Intelligence 2031. By the nation's sixtieth anniversary celebrations in 2031, it aims to solve real-world problems, including the elimination of the federal government's 250 million annual paper transactions.

The UAE, like many countries, grasps the potential of smart technologies to supercharge economies and solve environmental and social issues. The government's AI strategy may focus on quantifiable waste, such as the 190 million people-hours lost each year and the one billion unnecessary kilometres travelled for the sake of physical transactions. But stakeholders and public messaging also regularly refer to responsible AI.

Align intent with consequences
Responsible AI is a common framework that focuses organisations on the wider implications of their technology experiments. Methodologies and best practices in responsible AI seek to align intent with consequences and ensure that developers of AI solutions never lose sight of their impact beyond the enterprise. Broadly speaking, this requires collaboration among stakeholders of different backgrounds at every step in the development process, from design to deployment.

Just as the successful leverage of AI technology requires a culture change, so does the delivery of responsible AI, which is why it is more beneficial from a business standpoint to integrate responsible AI from the start. Companies should begin by examining their values and responsibilities. A list of dos and don'ts, communicated clearly to all employees, will help to govern everything that comes after.

Making AI work for a business requires that staff at all levels are aware of, for example, what data needs to be collected and how. While employees are being trained in these processes, they can also be introduced to the ethical and legal components of the technologies and all the possible spillovers from their use. Every action they take should be viewed through the lens of the responsibility framework, so that they are aware of the legal and moral implications of what they do on behalf of the organisation.

The need for transparency
Responsible AI systems must be secure, but they must also be transparent. Non-technical people must be given the means to interrogate a result from an AI system, be it an automated action, a recommendation or an alert. Good governance in the development of such systems will define a set of deliverables at each step to ensure that products remain transparent. Performance and uptime are only part of the equation. Readily auditable platforms should record every data access, capturing not just the timestamp but who accessed the data and why.
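
As a rough illustration, the Python sketch below shows the kind of access record such a platform might keep. The schema and the record_access helper are hypothetical, invented for this example rather than drawn from any particular product:

    import sqlite3
    from datetime import datetime, timezone

    # Illustrative audit schema: every read of a governed dataset is logged
    # with who accessed it and their stated reason, not just a timestamp.
    conn = sqlite3.connect("audit.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS data_access_log (
            accessed_at TEXT NOT NULL,  -- ISO-8601 UTC timestamp
            user_id     TEXT NOT NULL,  -- who accessed the data
            dataset     TEXT NOT NULL,  -- which dataset was read
            reason      TEXT NOT NULL   -- stated purpose for the access
        )
    """)

    def record_access(user_id: str, dataset: str, reason: str) -> None:
        """Append one auditable access record (hypothetical helper)."""
        conn.execute(
            "INSERT INTO data_access_log VALUES (?, ?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), user_id, dataset, reason),
        )
        conn.commit()

    record_access("analyst_042", "hiring_history", "quarterly bias review")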

Decision makers and developers must be empowered with the correct tools and best-practice training to deliver technically sound and audit-ready AI systems. Integrating the elements of responsible AI requires taking astute action at every point in the development pipeline. Constant communication between stakeholders will also be necessary to flag any potential issues so all relevant parties can assess them against the responsibility framework.

This is where the correct choice of stakeholders comes into play. Responsible AI comes more naturally to organisations that are prepared to include potential users of the end system in the development process, or at least people who are representative of those users. Neglecting to do this in the past has led to some public AI failures that diminished confidence in the technology's ability to fulfil certain use cases.

The bias in data
For example, bias that arises from historical data can lead AI systems to churn out unhelpful results. If, as has happened, an algorithm screens résumés and returns more male candidates than female, the fact that the algorithm accurately analysed the history of hiring practices does nothing to improve the value of the result.
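
A simple first check is to compare selection rates across groups. The Python sketch below, which uses hypothetical screening data invented for illustration, computes the ratio of the lowest to the highest rate; under the widely cited "four-fifths rule", a ratio below 0.8 is a red flag:

    import pandas as pd

    # Hypothetical screening outcomes (1 = shortlisted); values are invented.
    outcomes = pd.DataFrame({
        "gender":      ["male"] * 50 + ["female"] * 50,
        "shortlisted": [1] * 30 + [0] * 20 + [1] * 12 + [0] * 38,
    })

    # Selection rate per group, and the ratio of the lowest to the highest.
    rates = outcomes.groupby("gender")["shortlisted"].mean()
    print(rates)
    print(f"Disparate impact ratio: {rates.min() / rates.max():.2f}")  # 0.40 here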

Responsible AI makes allowance for prejudice in data and amends algorithms and analytics models accordingly. A committee that includes experts on historical prejudice, or people who have experienced it, makes for a stronger decision-making team. Techniques such as exploratory data analysis (EDA), a visualisation-led approach that helps identify underlying structures and biases in data, can also greatly improve the quality of AI products.
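
To illustrate what EDA might look like in this context, the following sketch, again on invented data, plots group representation alongside historical hire rates; a stark gap in either chart is an early warning that a model trained on this history will inherit its bias:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical historical hiring records; values are illustrative only.
    df = pd.DataFrame({
        "gender": ["male"] * 70 + ["female"] * 30,
        "hired":  [1] * 40 + [0] * 30 + [1] * 8 + [0] * 22,
    })

    # Two quick views: how each group is represented in the data, and how
    # often each group was hired in the past.
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    df["gender"].value_counts().plot.bar(ax=ax1, title="Records per group")
    df.groupby("gender")["hired"].mean().plot.bar(ax=ax2, title="Historical hire rate")
    plt.tight_layout()
    plt.show()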

A common error in the implementation of AI has been siloed development, where different teams work on different projects with different priorities. While this is counterproductive to any AI programme, it is particularly detrimental to the delivery of responsible AI. Common data sets are a prerequisite of ethical development because, as we have seen, the data itself can be the source of negative outcomes. Uniform, enterprise-wide commitments to transparency, data integrity and other goals are necessary to produce ethical products. Holistic frameworks will guide everyone because they are designed to apply to all facets of the business, having been formulated by a wide range of stakeholders.

Responsible AI is accountable AI. It is ethical, grounded in its potential human impact, and lays bare its inner workings. Get the right team in place, with technical, domain and legal specialists who pay attention to data quality and listen to wider audiences, and the results will be benchmarks for excellence.

Sid Bhatia is the regional director – Middle East, Dataiku
