How the Fast Evolution of Stealthy Malware Requires a Rethink of Security
Stealth – the art of remaining hidden – has been a force of nature since before the dawn of mankind. Long before we were standing upright on the savannah, nature had already figured out that one great way of staying alive was to remain silent, hidden out of sight and with the wind in your face as you watch your prey. As in nature, the art of remaining hidden continues to evolve for the cybercriminal.
I am sure that a technical audience is familiar with Moore’s Law and its role in the evolution of technology. But for the world of security, we would perhaps do better to draw parallels from Darwin’s On the Origin of Species – chiefly the lesson of ‘survival of the fittest.’
Hunters have known for many millennia that if you expose yourself to your prey, then you will probably go hungry. It is no different in the world of data and network security. Long gone (at least in terms of development) are threats like the Lovebug, Melissa, Code Red and SQL Slammer worms of 1999-2003, where the malware was so detrimental to the networks it ran on that you could easily spot, identify and then remove it. Once defences were in place, it became relatively easy to stop the next attack, and so on.
Security professionals everywhere learned these lessons and security improved. But as in nature, the game of hunter versus hunted never ends, so it wasn’t long before the cybercriminal learned how to keep quiet. Of course, at first that meant not trying to attack every IP address on the internet – or every email address and server.
Soon malware became stealthy. Then it got really stealthy, as we saw the adoption of obfuscation and of moving code snippets around so that the standard defence models for finding malware (file-based scanners, for the most part) wouldn’t identify the attack code; then came encryption, so the industry shifted to detecting the commonly used packers instead. Sexy new terms like steganography came along, disappeared, and then reappeared.
In addition to changing threats, the continuously evolving model of ‘computing’ has played an equal part in how to become ‘stealthy’. Widely used system tools, common development frameworks, libraries, administrator applications and services – even the way operating systems work – all present new opportunities for the cybercriminal to exploit.
Using these common tools – invoking them, asking them to perform a given task, scripting something to run a module, launching other functions, services or code – system administrators can perform almost any task they need, without having to write an executable. So can cybercriminals, who no longer have to deploy anything or copy code onto the local drive or across the network, thus avoiding detection by the security solutions commonly deployed today.
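To make that concrete, here is a deliberately benign sketch of the ‘living off the land’ idea: a few lines of script that accomplish a task purely by invoking tools that already ship with the operating system, so no new binary ever touches the disk. The specific tools called here are illustrative choices, not drawn from any particular attack.

```python
# A benign sketch of "living off the land": performing a task purely by
# invoking tools that already ship with the operating system, so no new
# executable ever touches the disk. The tools called here are illustrative.
import subprocess

def run_builtin(cmd):
    """Invoke a pre-installed system tool and return its text output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

# An administrator (or an attacker) can chain built-in tools like these to
# inventory a host without deploying any code of their own.
current_user = run_builtin(["whoami"])
note = run_builtin(["echo", "no new binary was written to disk"])
print(current_user.strip(), "-", note.strip())
```

A file-based scanner has nothing new to scan here: every binary involved is legitimate and pre-installed, which is exactly why this technique is so hard for traditional defences to see.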
So, without a threat being deployed, how on earth do you stop the cybercriminal from using the same tools that we need to administer, maintain and, yes, even improve security?
This isn’t the only approach; in 2014, threats started to break cover that were clearly using advanced, clever techniques to remain in memory only, never touching the drives, and to leverage (and sometimes hide within) legitimate processes. Threats like Kovter, first seen using ‘fileless’ techniques in 2015, hinted at what was to come – threats that were almost undetectable by standard endpoint protection.
Clearly, the days of the current security model are numbered – endpoint technology has moved from the frontline of defence to, at best, the last resort. Businesses will be forced to adopt new techniques to identify threats – but if you are not looking for bad code, what are you going to look for?
It’s obvious, really – we cannot tell by looking at, or even inspecting, people whether they are, or will become, criminals – but you know someone IS a criminal based on their behaviour. Just as in the real, physical world of crime, with data we need to start looking for the signs of crime: ensure that we can determine what constitutes a crime, understand what actions were taken during the crime, and then look at who and what caused it.
I tell customers to look at ransomware, for example – is it normal for a human to enumerate hundreds of shares across a network? Is it typical behaviour to overwrite thousands of local, let alone remote, files across the network? Of course not. It’s pretty obvious that it isn’t a human doing this – and the same is true for many data breaches, too. Is it acceptable for a database to send that volume of traffic to a single internal IP address, let alone out of the network? No. So don’t be a victim.
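That human-versus-automation distinction can be expressed as a very simple behavioural heuristic. The sketch below is only an illustration – the event format, process names and threshold are assumptions, not a real product rule – but it shows the shift in question: no file scanning at all, just asking whether a process is modifying files faster than a human plausibly could.

```python
# A minimal behavioural heuristic for ransomware-like activity: flag any
# process whose per-minute file-modification rate exceeds what a human
# could plausibly produce. Threshold and event format are assumptions.
from collections import defaultdict

WRITE_RATE_THRESHOLD = 100  # file modifications per minute deemed inhuman

def flag_suspicious(events):
    """events: iterable of (process_name, minute, path) file-write records.
    Returns the set of processes whose write rate exceeds the threshold."""
    counts = defaultdict(int)
    for proc, minute, _path in events:
        counts[(proc, minute)] += 1
    return {proc for (proc, _m), n in counts.items() if n > WRITE_RATE_THRESHOLD}

# A word processor saving a handful of documents looks normal; a process
# rewriting thousands of files on network shares in one minute does not.
events = [("winword.exe", 0, f"doc{i}.docx") for i in range(3)]
events += [("evil.exe", 0, f"share/file{i}.xls") for i in range(5000)]
print(flag_suspicious(events))  # only the inhumanly fast writer is flagged
```

Note that nothing here inspects the attacking code itself – the signal is purely what the process does, which is the whole point of behaviour-based detection.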
With advanced malware living in memory, inside hardware, or on the network, it is only through behavioural analytics – gathered from potentially multiple, different technologies – combined with machine learning and predictive, advanced threat-detection technologies, that organisations will be able to detect and remediate before real damage occurs.
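As a hedged illustration of that baselining idea, the sketch below learns what ‘normal’ outbound database traffic looks like and flags observations that sit far outside it. The numbers and the single z-score test are purely illustrative – a real deployment would draw on far richer features and models.

```python
# A sketch of behavioural baselining: learn what "normal" traffic volume
# looks like from history, then flag observations far outside it.
# The figures and the simple z-score test are illustrative assumptions.
import statistics

def build_baseline(history_mb):
    """Summarise historical hourly volumes (MB) as a mean and std deviation."""
    return statistics.mean(history_mb), statistics.stdev(history_mb)

def is_anomalous(observed_mb, baseline, threshold=3.0):
    """Flag an observation more than `threshold` deviations from the mean."""
    mean, stdev = baseline
    return abs(observed_mb - mean) > threshold * stdev

baseline = build_baseline([40, 55, 48, 52, 45, 50, 47])  # typical hours, MB
print(is_anomalous(49, baseline))   # ordinary volume -> False
print(is_anomalous(900, baseline))  # sudden bulk transfer -> True
```

The value of combining data from multiple technologies is that each feeds a baseline like this one, so a threat that looks quiet to any single sensor can still stand out in the correlated picture.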
This is where the industry is heading, and this is why I get excited about Juniper Networks’ security, because we have already started this journey. We are taking information from thousands of data points, correlating activity from both Juniper and third-party solutions, adding context (based on previous experience) plus intelligence and knowledge from other organisations, and providing our customers with the wisdom they need to protect their business.