How to spot AI 'snake oil' in cybersecurity

Organisations often overlook vital questions about AI despite their understanding and use of predictive analytics and other advanced technologies

by Taj El-khayat
September 4, 2023

The UAE Strategy for Artificial Intelligence will celebrate its fifth anniversary in October. The government saw in artificial intelligence (AI) the potential to solve real-world problems that occupy the minds of world leaders everywhere. It could be a boon to traffic safety, healthcare research and education. The government's reasoning continued: in a world where carbon-pumping activity threatens economic progress, AI could help; in a region where water shortages loom large, AI could help; and in a nation that plans its next space mission while others merely dream, AI could help. So, in October 2017, the UAE appointed the world's first-ever minister of state for AI.

Artificial intelligence is but one example of the UAE's standing as a model for human endeavour. Another is cybersecurity, where the Telecommunications and Digital Government Regulatory Authority's National Cybersecurity Strategy was instrumental in earning the country worldwide recognition as a leader in the field. In 2020, the UAE placed fifth out of 194 nations in the International Telecommunication Union's Global Cybersecurity Index.

It is only natural that the UAE's successes in cybersecurity be brought to bear on its successes in AI. Yet across the globe, as organisations get to grips with the power and potential of predictive analytics, machine learning, cognitive reasoning and other technologies, they often fail to ask the right questions about AI. When applying AI to the defence of our digital estates, it seems only prudent to ask the same questions we would ask a human security professional when deciding whether to hire them. What can you do?
Where have you worked in the past? Can you give some examples of where you have faced a problem and prevailed? How will you add value to our organisation?

Artificial intelligence's black-box nature calls for such curiosity, and yet too much trust is often placed in advertised functionality that may be little more than a container for complex branching heuristics – not true AI. At the other end of the trust spectrum sit the large language models (LLMs) like ChatGPT and the new-and-improved Bing, some of which have made headlines for sharing dark thoughts and disorderly ambitions.

Unlocking the potential of AI

To unlock the potential of AI, we must first unlock trust, and AI's public image could do with being more fact-based. When used in threat detection, AI is subject to due diligence like any other technology. The AI algorithms used in modern security systems are motivated by a simple fact: heuristic solutions can flag anomalies from dawn 'til dusk yet still fail to incorporate actionable context.

So, when looking to procure the right AI-powered cybersecurity, CISOs and their teams must first establish what type of technology is involved. Does it supplement its alerts with contextual information from inside and outside the organisation? Is it capable of understanding the model and workflows of the business it protects, to the extent that it can identify false positives and not contribute to alert fatigue? Where will it be deployed and where will it operate? What are the credentials of the solution's design team? Are they data scientists? Security researchers? Psychologists? And what are the vendor's or systems integrator's support commitments?

All these questions will dovetail into a clear view of the technology and what it can do for the organisation. There is no such thing as an AI panacea. It cannot see everything; nor can it do everything.
Amid the region's mass cloud migration of 2020 and 2021, we saw a parallel mass expansion of the attack surface. Unpoliced endpoints and shadow IT, coupled with unknowable third-party networks and shadow clouds, put enormous pressure on the cybersecurity function at a time when it struggled to discover, recruit and retain talent. SOCs undoubtedly lost some existing talent as a result of that pressure, making matters worse.

This dismal environment was ripe for overpromised solutions. Much as Covid-19 vaccines were embraced enthusiastically as the end to a dreadful problem, AI-based cybersecurity offered to put a different pandemic to rest. But while Covid-19 vaccines clearly worked wonders, the cyber-medicine did not. Digital snake oil is no substitute for experience, agility and patience – the willingness to iterate, using true AI. Over time, true AI improves, while the snake-oil alternatives are exposed for what they are.

The AI black box puts most people off trying to understand it. But think about how you buy other useful items. You do not need to understand the internal-combustion engine to ask sensible questions about cars, and doing so can reveal what is truly under the hood. The same goes for AI. Ask and understand. Then reap value.

Taj El-khayat is the area VP – South EMEA at Vectra AI