Insights: Tackling GenAI's impact on enterprise security

Ad-hoc adoption of GenAI and LLMs by end users will compromise the enterprise attack surface and data privacy, and cybersecurity administrators will need to initiate proactive measures linked to its use

Generative Artificial Intelligence, or GenAI, covers the AI techniques that learn from data artefacts and use that learning to generate brand-new, unique artefacts that resemble, but do not repeat, the original data.

According to this definition from Gartner, GenAI can serve benign or nefarious purposes. It can produce novel content including text, images, video, audio, structures, computer code, synthetic data, workflows and models of physical objects.

Driven by the need to maintain a competitive first-mover advantage and to capture the potential gains from AI, enterprises are trying to develop their own GenAI applications around use cases specific to their business models.

GenAI and Large Language Models (LLMs), defined by Gartner as a specialised type of artificial intelligence trained on vast amounts of text to understand existing content and generate original content, are being adopted by business teams as they experiment with new use cases and generate results.

The release of ChatGPT and other large language models is the most visible sign of GenAI's capability, and it has captured the imagination of businesses at large. Soaring, sometimes unrealistic, expectations, coupled with rapid early adoption and dabbling in early use cases, have triggered warning lights for cybersecurity administrators.

Use of GenAI: Potential opportunities and threats

Mandated from the top, business and IT decision-makers are exploring how best to use GenAI and large language models to boost productivity and automation and to improve operational processes across the enterprise. While the long-term vision for GenAI is to automate processes and reduce human intervention, it is the intermediate stages of development and adoption that have cybersecurity administrators raising red flags.

While some of these experiments are sanctioned and known, many continue ad hoc, under the enterprise radar, as shadow IT, creating new attack surfaces, violating data compliance guidelines, and compromising sensitive data and enterprise intellectual property.

It is becoming imperative for security administrators to securely manage how their enterprise consumes GenAI and to proactively address its impact on the cybersecurity framework.

The other side of GenAI is how threat actors will leverage it in their attacks on enterprises. More sophisticated attacks using unknown threat vectors, built around life-like impersonation and phishing at large scale and speed, will require significant changes to enterprise cybersecurity practices and policies.

According to Gartner, these life-like impersonations built with GenAI will compel cybersecurity administrators, in 2025 and beyond, to lower their thresholds for detecting non-standard network behaviour, which in turn will generate more false alerts. Managing the increased incidence of false alerts will require human intervention to build new predictive models.
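
The trade-off Gartner describes can be illustrated with a minimal, purely hypothetical sketch of a threshold-based anomaly detector; the scores, thresholds and event counts below are invented for illustration and are not taken from Gartner or from any vendor's product. Lowering the alerting threshold catches more of the anomalous events, but also flags more benign activity as suspicious.

```python
# Illustrative sketch only: a hypothetical threshold-based anomaly detector,
# showing why lowering the alert threshold raises the false-alert count.
import random

random.seed(7)

# Hypothetical anomaly scores (0 = clearly normal, 1 = clearly anomalous).
benign_scores = [random.gauss(0.30, 0.10) for _ in range(1000)]   # normal traffic
malicious_scores = [random.gauss(0.75, 0.10) for _ in range(50)]  # impersonation attempts

def alert_counts(threshold):
    """Count true detections and false alerts at a given alerting threshold."""
    true_detections = sum(s >= threshold for s in malicious_scores)
    false_alerts = sum(s >= threshold for s in benign_scores)
    return true_detections, false_alerts

for threshold in (0.7, 0.6, 0.5):
    detected, false_alerts = alert_counts(threshold)
    print(f"threshold={threshold:.1f}: detected {detected}/50 attacks, "
          f"{false_alerts} false alerts out of 1000 benign events")
```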

By 2027, Gartner forecasts, GenAI will have reduced these elevated false-positive rates by 30 per cent in the areas of application security testing and threat detection.

Security steps that will help your enterprise

Here are some of the steps cybersecurity administrators should take to become more proactive about the use of GenAI and its impact on the enterprise.

  • Build GenAI use cases for cybersecurity applications inside the enterprise, for example around chat assistants.
  • Partner with early adopters inside the enterprise and other affected departments, such as legal, risk and compliance, to create policies and guidelines for end users.
  • Formulate training programmes for active users of GenAI inside the enterprise on how to address privacy and copyright concerns.
  • Apply frameworks that protect trust and security when building GenAI applications in-house.
  • Follow trust and security frameworks when adopting new GenAI applications from ISVs and other software suppliers.
  • Review enterprise cybersecurity measures to protect against threat actors using GenAI with unknown attack vectors.

The benefits of GenAI for cybersecurity administrators are now available in tools and solutions from Microsoft, Google, SentinelOne, Cisco and CrowdStrike. By using these solutions through the services of trusted partners, cybersecurity administrators can gain additional productivity, increased skills, better communication, and markedly improved outcomes for the enterprise.

The writer is the managing director at Cloud Box Technologies.

 
