“AI systems in cybersecurity will free up an enormous amount of time for tech employees.”
“AI systems can help by categorizing attacks based on threat level.”
“AI technology allows us to detect unknown and unseen threats.”
These are just some of the claims reported in the media over the past year about the impact of Artificial Intelligence (AI) in cybersecurity practices. But, of course, what they should really say is that ‘Machine Learning’ (ML) can do this – not AI.
Put simply, true AI would involve machines carrying out tasks without any pre-programming or training – and that does not yet exist. ML, in comparison, relies on training computers, using algorithms, to find patterns in vast amounts of data and to identify potentially malicious samples based on rules and information it already holds. ML is nothing new; it has been present in cybersecurity since the 1990s.
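To make the distinction concrete, here is a deliberately tiny sketch of what "training on labelled data to find patterns" means in practice. The features, numbers and labels are all hypothetical – real products use far richer data – but the shape of the idea is the same: learn from labelled examples, then label new ones.

```python
# Toy sketch (not any vendor's real system): classify a file as benign or
# malicious by comparing it to the average of previously labelled samples.
from math import dist

# Hypothetical training data: (file entropy, no. of suspicious API imports)
labelled_samples = {
    "benign":    [(3.1, 0), (4.0, 1), (3.5, 0)],
    "malicious": [(7.2, 9), (6.8, 7), (7.9, 11)],
}

def centroid(points):
    """Average feature vector of one class."""
    return tuple(sum(axis) / len(axis) for axis in zip(*points))

centroids = {label: centroid(pts) for label, pts in labelled_samples.items()}

def classify(sample):
    """Label a new sample by whichever class centroid it sits closest to."""
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

print(classify((7.0, 8)))   # a high-entropy, import-heavy file -> "malicious"
```

The model here knows nothing beyond the labelled examples it was given – which is exactly why, as the rest of this article argues, the quality of those labels matters so much.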
Miscommunication leads to misunderstanding
The problem is that in all the media hype and marketing materials from next-generation vendors, the terms ‘AI’ and ‘ML’ are often used interchangeably, and this confuses IT decision makers. In fact, our latest research revealed that just 53% of IT decision makers said their company fully understands the differences between the terms AI and ML. Even more worryingly, IT decision makers believe the claims: three in four (75%) consider AI to be ‘the silver bullet’ to solving their cybersecurity challenges.
The truth is that the claims around ‘AI’ are simply misleading; it should not be heralded as the shiny saviour of the cybersecurity industry. ML, however, is an important and powerful tool in the fight against cybercrime – especially given its ability to improve malware scanning: it helps detect potential threats and flag them to IT teams, who can then mitigate them proactively and far more quickly.
But even when done properly, ML does have its limitations and businesses need to be aware of them. For example:
1. You need to hold its hand
To use ML you need a lot of inputs – and every one must be correctly labelled. At ESET, we’ve spent three decades gathering, classifying and choosing data to train our ML system!
What’s more, even when an algorithm has been fed a large quantity of data, there is still no guarantee that it will correctly identify every new sample it encounters. Human verification is still needed; without it, even one incorrect input can snowball and undermine the solution to the point of complete failure.
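The snowball effect is easy to demonstrate with a toy model. In this illustrative sketch (hypothetical numbers throughout), a detector learns a simple cut-off on file entropy as the midpoint between the average benign and average malicious sample – and a single mislabelled training file shifts that cut-off enough to flip the verdict on a borderline sample.

```python
# Toy illustration (hypothetical numbers): one mislabelled training sample
# moves a learned decision threshold enough to flip a borderline verdict.

def threshold(benign, malicious):
    """Learn a 1-D cut-off as the midpoint between the two class means."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(benign) + mean(malicious)) / 2

benign, malicious = [3.1, 4.0, 3.5], [7.2, 6.8, 7.9]
clean = threshold(benign, malicious)            # cut-off ~5.42

# An analyst mislabels one malicious file (entropy 7.2) as benign:
bad = threshold(benign + [7.2], [6.8, 7.9])     # cut-off shifts to ~5.90

sample = 5.6                                    # borderline file
print(sample > clean)   # True  -> flagged as malicious
print(sample > bad)     # False -> now waved through
```

One bad label out of six training samples was enough to let the borderline file slip past – at the scale of millions of samples, unchecked labelling errors compound in exactly this way.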
2. It will always have its flaws
The truth is that even a flawless machine will not always be able to decide whether a future, unknown input would lead to unwanted behaviour. If a next-gen vendor claims its machine learning algorithm can label every sample prior to running it and decide whether it is clean or malicious, then it would have to preventively block a huge amount of undecidable items – flooding company IT departments with false positives.
Of course, not every false positive will bring down your business’ IT infrastructure. But each one disrupts business continuity, and at scale that disruption can end up just as damaging as the threats themselves. ML systems, therefore, once again need human help when they encounter something they haven’t seen before.
3. It can’t outsmart a hacker
Sadly, no matter how smart a machine learning algorithm is, it has a narrow focus and, as we discussed, learns from a specific data set and rules.
The simple fact is that, by contrast, attackers don’t play by any rules. What’s worse, they are able to change the entire playing field without warning. A hacker can learn context and benefit from inspiration, which no machine and no algorithm can predict – no matter how sophisticated they might be.
Beyond the hype
The ever-changing nature of today’s threat landscape makes it impossible to create a universal solution, based solely on ML, to solve all cybersecurity woes. With a purely ML-based cybersecurity solution, it only takes one successful attack from malicious actors to open up your company’s endpoints to a whole army of cyber-threats.
This is why ML needs to be implemented alongside other protective layers and skilled people, to ensure your company’s cybersecurity strategy is robust enough for the fight.
Over-hyped claims about AI in cybersecurity are quite simply muddling the message about what ML can actually do. It’s important that your business understands ML’s limitations, so that you can work out how to properly secure your organisation.
Want to find out more? Read our whitepaper ‘Is all the AI hype putting business at risk?’ today.