Scared of ML or AI? They’re probably the technologies that will improve your security

By Joe Baguley, VP & CTO EMEA, VMware.

  • Tuesday, 21st December 2021 | Posted by Phil Alsop

How and why we experience fear is a complex subject - and one better suited to a psychology article than a technology one. But businesses are just as susceptible to fear as the rest of us, particularly fear of the unknown. Nowhere is this more acutely demonstrated than in the enterprise adoption of Machine Learning (ML) and Artificial Intelligence (AI). According to the latest VMware annual Global Security Insights research, more than half of businesses (56%) cite security concerns as holding back the adoption of ML/AI-based applications to improve services.

This is possibly the commercial equivalent of being scared of the dark. It is no less irrational and unfounded, because in reality it is ML and AI that will solve the security challenges organisations face today - if, that is, businesses are prepared to embrace them as enablers rather than shun them with scepticism. The hype over ML and AI is not without merit.

The sharp edge of AI

The use of ML and AI is proving to be a double-edged sword. While this can be said of most new technologies, both sides of this particular blade are far sharper because neither is universally understood, yet. Although the widespread use of these technologies is still in its infancy, questions remain open about the pace of progress, and enterprise adoption hovers by many reports around 10%, the potential is enormous. McKinsey Global Institute research suggests that by 2030, AI could deliver an additional $13 trillion per year in global economic output.

A quick look around almost any organisation will yield examples of ML in deployment. Most of us interact with it in some form or another on a daily basis - targeted Facebook adverts being a good example. But it is also being used across supply chains, to improve customer relationships and in medical diagnostics. ML is one of the most common technologies in development for business purposes today, used primarily to process large amounts of data quickly, "learning" over time and getting better at what it does the more often it does it.

But it is a different story altogether when it comes to security, because of the perception that AI/ML-based applications act as an open window to would-be attackers. It is true that more solutions mean greater risk, but the reality is that defences are only as good as the last attack or the latest technique used in one. An attacker's typical objective is to find "blind spots": places to hide while gaining context and visibility on the inside. The longer attackers stay hidden, the longer they have to formulate an attack, and this dwell time lets a bad actor pick the perfect moment to strike. Then starts a game of cat and mouse: you know something is in there, but you need to find out where. This situation should be fostering the use of more AI/ML-based security applications, not fewer. A solution that provides context, visibility and behaviour analytics can learn what normal behaviour looks like and uncover the bad - as the sketch below illustrates. So, what's holding businesses back?
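
As a purely illustrative sketch of that idea - the feature names are invented, and a stock isolation forest stands in for commercial behaviour analytics - a model can be taught what "normal" session behaviour looks like and asked to score anything that deviates from it:

```python
# Hypothetical behaviour-analytics sketch: learn a baseline of normal
# session behaviour, then flag sessions that deviate from it.
# Feature names and distributions are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline of "normal" sessions: [outbound_mb, hosts_contacted, hour_of_day]
normal_sessions = np.column_stack([
    rng.normal(5, 2, 500),    # modest outbound traffic (MB)
    rng.poisson(3, 500),      # a handful of internal hosts touched
    rng.normal(11, 3, 500),   # activity clustered around working hours
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A session quietly contacting 40 hosts at 3 a.m. - exactly the kind of
# "blind spot" behaviour a hidden attacker relies on during dwell time.
suspect = np.array([[4.8, 40.0, 3.0]])
print(detector.predict(suspect))            # -1 means flagged as anomalous
print(detector.decision_function(suspect))  # lower score = more anomalous
```

In a real deployment the features would come from endpoint and network telemetry at far greater scale, but the principle is the same: the model learns normal behaviour so that the abnormal stands out.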

Bound by fragility

The good news is that businesses don't have their heads buried in the sand. They know IT security is not working and needs major improvement. The issue is complexity. According to the Global Security Insights research findings, well over half of businesses (57%) believe there is too much complexity in the security solutions market to prompt a change in security policy. In other words, 'paralysis by analysis'.

Big businesses have reached a stage where security solutions are so complex they don't know where to begin. Those solutions have been built up over time, added to, fixed and patched to the extent that they no longer resemble what they are required to do. It is like an F1 car whose engine is held together by glue, string and a prayer. This is precisely why security needs to be intrinsic - built in, not bolted on. With a solid foundation, businesses have a licence to innovate and explore emerging technologies - like ML and AI - without fear. Yet security systems today are so fragile that organisations dare not make additional changes or add more layers of complexity; they are bound by this fragility.

The nightmare of modern security

It is a predicament that needs action, not inaction, and there has never been a better or more cohesive argument for simplicity. When it comes to unravelling the huge, nightmarish mess of modern security, ML and AI are the knights in shining armour.

By their very nature, these technologies are ideally placed to get in and understand the nitty-gritty. Indeed, the larger and more complex things are, the better. As a result, the use of ML/AI-based security applications brings huge advantages when it comes to threat modelling, response and prediction. Perhaps it is no surprise, then, that our report also found that 62% of businesses would like to use more ML/AI apps to improve security and services. Realising this vision, though, depends on ensuring teams feel confident in the apps and in the decisions their algorithms make. They need unwavering trust in the technology.

Explainability, explained

Because AI is, by its nature, autonomous, it implies a lack of control. And for businesses to sit back comfortably and let something uncontrolled go forth and run security requires more than just blind faith. It requires what is commonly referred to as 'explainability': being able to explain, or prove, how an AI has come to a decision.

This is hard, because most AIs effectively code themselves: through machine learning they are 'taught' how to make decisions using datasets, rather than being explicitly programmed. This means we may never know exactly how they arrive at those decisions. Explainable AI can be described as providing 'chains of reasoning' that prove why the decisions a system makes are correct - the sketch below shows the idea in miniature. The need for explainable AI is going to become more and more pressing, as fears regarding the technology - rooted in a broader lack of understanding of how AI works and what its limits are - remain high. If developers themselves don't know why and how an AI is 'thinking', it creates a slippery slope as algorithms continue to grow more complex. Offering more insight into how AI is used and how it makes decisions will give businesses more confidence in AI products, particularly when it comes to security.
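
To make that concrete in miniature - with invented feature names, and a shallow decision tree standing in for far more sophisticated systems - here is a model whose every decision reduces to a readable chain of threshold checks:

```python
# A toy 'chain of reasoning': a small, interpretable model whose decision
# logic can be printed step by step. Data and feature names are synthetic
# stand-ins, not any vendor's actual security model.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
feature_names = ["failed_logins", "outbound_mb", "new_processes", "odd_hours"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# Every prediction now reduces to explicit, human-auditable comparisons -
# one crude form of explainability.
print(export_text(tree, feature_names=feature_names))
```

Production explainable-AI techniques go well beyond shallow trees, but the goal is the same: a human-auditable account of why the system decided what it did.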


Innovation or invention

This is where businesses need to get innovative, creating a self-reinforcing cycle of security improvement - something that almost two thirds (63%) of businesses confirmed in our report. And here, there is a long way to go.

Organisational ML/AI policy should be as universal as payroll or HR, but it is not, and that needs to be addressed. It means vendors putting much greater weight behind making the technology understandable, and organisations addressing culture, attitude and understanding with the sole objective of building trust in the technology. Only then will the concerns around ML/AI-based security applications recede.