Ethical AI ― How to avoid a digital dystopia

While artificial intelligence remains an integral part of the future's blueprint, pitfalls must be avoided on the road to a smart society



Artificial intelligence (AI) has become so mainstream that it is difficult to believe it was once the pipe dream of backroom techies.

Worldwide, it is supporting cancer research, predicting crop shortages and improving business productivity.

Middle East governments and businesses are taking AI seriously. In the GCC, AI bots are being used to supercharge customer service in the utilities and financial services sectors. And PwC expects AI to contribute $320bn to Middle East economies by 2030, accounting for sizeable shares of national GDP as countries pursue smart societies.

In the UAE, AI is projected to contribute 13.6 per cent of GDP by 2030. In Saudi Arabia that figure is 12.4 per cent, and in Egypt it is 7.7 per cent.

But challenges remain. If AI arrives as an unwelcome invasion rather than a welcome addition, it is hard to see how it will bring universal benefits.

Retrospective responsibility
Where does AI accountability begin and end? Many in the regional technology sector remain concerned that public outcry may become so intense that it leads to regulations penalising companies for actions taken before the laws were in place.

This retrospective responsibility could be costly. We certainly want to curtail the activities of companies that are cavalier about civic responsibility, but others may respond to the prospect of future regulation in detrimental ways: they may do nothing for fear of future laws, or rush headlong into ill-considered policy shake-ups, later citing those actions in their defence.

Overpromising
Many firms, eager to be seen as global leaders in AI ethics, make unrealistic claims that ethics and innovation can live in perfect harmony. While we can hope such utopian conditions are possible, we need open, honest debate on the important issues rather than getting bogged down in brand preservation.

Machines act on data alone and are ill-equipped to make ethical judgements. Many of the region's governments, such as those of Abu Dhabi and Saudi Arabia, have already begun to introduce digital workflows into their judicial and litigation frameworks, but have thankfully learned from AI fiascos elsewhere and avoided the more 'advanced' systems.

Society before technology
Smart Dubai, the Dubai government agency responsible for delivering the emirate’s “smart society” vision, went so far as to publish formal guidelines on AI ethics. The document codifies the goals ― fair, accountable, transparent and understandable ― that many public-spirited technology companies espouse.

However, to deliver on these laudable pillars, organisations need to recognise that AI itself cannot and will not help us. The number-crunching, pattern-discerning nature of machine learning will dredge up things that offend, discourage or demoralise. We, as humans, are responsible for repairing the underlying problems. If data is prejudiced, the fault lies with the data and the people who produced it, not with the AI that discovered it.

Human rights
As we form frameworks on how we deal with intelligent machines as a society, we need to remember that, as yet, we are not one society. Each nation state and government will have to grapple with its own principles and establish its own ethics benchmarks.

Governments must also consider how broader public policies will affect AI adoption. A prime example is job displacement by automation, particularly among older workers. In 2018, the consultancy Oliver Wyman developed metrics to quantify the threat of automation to the senior portion of the workforce. It found not only that automation posed a higher-than-average threat to older workers in GCC nations, but also that the workforces of those nations were aging faster than those of their global peers.

This remains a critical issue for governments alone to address.

Next steps
There remains reason to be encouraged. With the right public-policy frameworks, we can expect the Fourth Industrial Revolution to be a net creator of jobs, as its predecessors were.

However, governments must take action on important issues before they get blown out of proportion.

Governments must ensure that AI is part of school curricula. Meanwhile, the technology sector should engage with mainstream media to arm the public with the knowledge needed to put the requisite safeguards in place. It is also imperative to create an environment in which businesses can fail without fear.

Regulators that police AI ethics must be equally fearless, and must reflect the diversity of the communities they protect.

Get it right and the “smart society” you end up developing will be a great place to live in; get it wrong, and you will end up in a digital dystopia.


Assaad El Saadi is the regional director, Middle East, Pure Storage