What responsible AI means outside of big tech

For many, the main concern is not whether AI will become sentient; rather, it is being able to understand the advice and decisions AI models make

Gulf Business

In November 2022, OpenAI released ChatGPT for public use, followed by GPT-4 on March 14, 2023, after the original model's explosive popularity.

On March 29, more than 1,100 tech leaders, including Elon Musk and Steve Wozniak, signed an open letter asking all AI labs to pause the training of systems more powerful than GPT-4 for six months.

Shortly after, Italy became the first country to ban the use of ChatGPT, and the European Union and China announced their own plans to regulate AI. The debate over ethical AI, and the fear that humanity could be wiped out by an unknowable intelligence of our own creation, was reignited.

When we think of responsible AI, what comes to mind first is how it affects tech companies: how AI development will be regulated, and what new AI products will emerge as a result. Now that smart machines are becoming ubiquitous across the economy, the debate is extending to how AI affects those outside of tech.

For many in an industry like manufacturing, for instance, the main concerns are not about whether AI will be sentient. Rather, they are about understanding the advice and decisions AI models make, and about detecting malware, as organisations increasingly integrate and rely on these systems.

Real-world uses of AI in sectors outside tech

The ideal outcome for AI is a better world. Take manufacturing or utilities: AI can free up precious time and resources by automating workloads, supporting better business decisions, and streamlining operations. Predictive maintenance is one example of where AI can plug in as a tool to simplify field service operations, by identifying machinery maintenance needs before the service team is deployed. Businesses reclaim the time previously spent diagnosing an issue and travelling back and forth to the site, and can spend it on more important tasks or simply resolve the fault faster.
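To make this concrete, here is a minimal, hypothetical sketch of the idea (not IFS's product): an anomaly detector trained on healthy sensor readings flags machines worth inspecting before a technician is dispatched. The sensor fields and values below are illustrative assumptions.

```python
# A minimal predictive-maintenance sketch: flag machines whose sensor
# readings look anomalous so a service visit can be scheduled early.
# Sensor fields and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical historical readings: [vibration_mm_s, temperature_c]
normal_readings = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(500, 2))

# Fit an anomaly detector on readings gathered during healthy operation.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_readings)

# Score today's readings; a label of -1 marks a machine worth inspecting.
todays_readings = np.array([[2.1, 61.0],   # looks healthy
                            [4.8, 78.0]])  # drifting out of range
for machine_id, label in enumerate(detector.predict(todays_readings)):
    if label == -1:
        print(f"Machine {machine_id}: schedule maintenance before dispatch")
```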

What is explainable AI, and why is it key to successful AI deployment?

When it comes to responsible AI, there are two important aspects to consider. The first is practical: is the model making the right decisions for the right reasons? Explainable AI is hugely important to understanding why a model makes the decisions it does, and, when it makes a wrong decision, why it went down that path. Often this becomes a cycle in which machine learning feeds the AI and the AI then produces more data for the machine learning model; faulty reasoning pollutes the output, resulting in unusable data and untrustworthy decision-making.
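As a simple illustration of what "explainable" can mean in practice, consider a small decision tree whose rules can be read directly, so a wrong decision can be traced to the exact rule that caused it. The feature names and data below are hypothetical, and real deployments use richer attribution techniques.

```python
# A minimal sketch of one simple form of explainability: a model whose
# decision rules are directly inspectable. Features and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["vibration", "temperature", "runtime_hours"]
X = [[2.0, 60, 100], [4.5, 80, 900], [2.2, 62, 150], [4.9, 85, 1100]]
y = [0, 1, 0, 1]  # 0 = no service needed, 1 = service needed

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned rules print as plain if/else thresholds, and the importance
# scores show which inputs actually drove the decisions.
print(export_text(model, feature_names=features))
print(dict(zip(features, model.feature_importances_)))
```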

On the other side of the coin, the ethical aspect centres on the cybersecurity concerns surrounding AI. Ransomware presents a significant problem for any AI system: aside from delivering malware that shuts a business down, what if it is used for more insidious, discreet purposes?

If malware corrupts the data in an AI system, for example, warping the algorithm, the consequences can be disastrous: damaged products and a damaged company reputation.
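One basic defence against this kind of silent data corruption is integrity checking: before any retraining run, compare each dataset file against a known-good hash manifest. The sketch below is an illustrative assumption about how that might look, not a specific product's mechanism.

```python
# A minimal sketch of guarding training data against silent tampering:
# compare each file's SHA-256 digest to a known-good manifest before any
# retraining run. Paths and the manifest format are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: str, manifest_file: str) -> bool:
    """Return True only if every dataset file matches its recorded hash."""
    manifest = json.loads(Path(manifest_file).read_text())
    for name, expected in manifest.items():
        if sha256_of(Path(data_dir) / name) != expected:
            print(f"Tampering suspected in {name}; aborting retraining")
            return False
    return True
```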

Why the biggest threat from AI is malware

The more autonomous and intelligent AI systems become, the greater the risk that a malicious entity infiltrates and corrupts them without shutting them down entirely, making the intrusion less likely to be detected and fixed in a timely manner. A lack of human oversight gives malware, whose entire goal is to propagate an attack and spread quickly through an IT system, more opportunity to slip by unnoticed.

Cybersecurity, and especially zero-trust and isolation principles, therefore becomes critical to the safe and responsible use of AI: from making sure software produces the right level of proofs and audit trails to separating the duties and permission sets for each task or user. In this way, practical and ethical AI go hand in hand towards creating responsible AI, which can then be used as intended to drive business decision-making.
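As a rough sketch of what separation of duties can look like in an AI pipeline, a deny-by-default permission table can restrict each role to its own task while every request is written to an audit log. The roles and actions below are hypothetical, not any particular product's model.

```python
# A minimal zero-trust-flavoured sketch: each role holds only the
# permissions its task needs, and every action attempt is audited.
# Role names and actions are hypothetical.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("audit")

PERMISSIONS = {
    "data_engineer": {"ingest_data"},
    "ml_engineer": {"train_model"},
    "operator": {"run_inference"},
}

def perform(user: str, role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it (deny by default)."""
    allowed = action in PERMISSIONS.get(role, set())
    audit_log.info("user=%s role=%s action=%s allowed=%s",
                   user, role, action, allowed)
    return allowed

perform("alice", "ml_engineer", "train_model")  # allowed
perform("alice", "ml_engineer", "ingest_data")  # denied: duties are separated
```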

Of course, the question remains: how do we ensure the AI we are developing is both ethical and practical? ChatGPT has proven more efficient and capable with each iteration, while gaining in popularity.

While fear of the unknown will always be present, and for valid reasons, it is highly unlikely that people will stop building new AI tools, just as we continue to explore space and the deep sea. The task is instead to make sure we understand how AI works, make it work for us, and protect it against attacks from malicious actors.

Kevin Miller is the CTO at IFS

