As technology continues to advance and become increasingly data-centric, many business processes that were once manual and arduous are now so fully automated that they have become transparent and assumed. This emergence of fluid, algorithmic machine-to-machine interaction is fundamentally changing the IT landscape, forcing business and IT to collaborate more closely, and is becoming a primary source of competitive advantage (Gartner predicts that by 2018, over half of large organizations globally will compete using advanced analytics and proprietary algorithms).
The lure of achieving even the slightest, crucial competitive edge has propelled certain industries, such as high-frequency electronic trading, into leading the charge toward this algorithmic, machine-to-machine future. Corvil has been on this journey from the outset alongside the world’s leading investment banks, playing a critical role in helping them accurately monitor and measure their complex automated transactions at scale. As you’d guess, we’ve learned more than a thing or two along the way about how to perform such monitoring effectively and efficiently, which gives us a unique vantage point from which to help organisations across a broader range of industries as they become increasingly algorithmic.
So is this brave new algorithmic world free of cyber risk?
Despite rapid technological advances and increased automation of defenses, cyber threats will undoubtedly continue to be a challenge. Why? Here are the main reasons I propose, along with my thoughts on the expected traits of cyber attacks in an M2M world:
- Cyber criminals follow the money - Wherever business is being transacted and data with a monetary value flows, you can be sure there are cyber criminals not too far away (assessing how they can tap in to get a slice, or two, of the action). Data is the new gold, silver, and platinum and its value in the new ecosystem (including to attackers) cannot be overstated.
- Entire attacks will occur in microseconds, or less - This will be achieved through end-to-end attack automation, and it will quickly expose log data alone as insufficient for visibility into such attacks, due to its lack of detail, its dependence on machine generation, and its rear-view-mirror perspective (vs. true real time).
- Automation is not exclusively for the good guys - Cyber criminals have a habit of adopting the technologies and techniques utilized by legitimate business (and in most cases more rapidly, since they have fewer constraints), and are no stranger to leveraging automation to conduct their illicit activities in as efficient and profitable a manner as possible (whether it be exploit kits as a service, Shodan vulnerability scanning en masse, or server-side polymorphism manifesting itself as part of ransomware). Automation can also have the unfortunate consequence of lowering the bar for who can try their hand at being a profitable cyber criminal.
- Perimeter-based checkpoints are less relevant - Modern environments are more dynamic in nature, e.g. interacting workloads distributed across hybrid-cloud infrastructure, and a plethora of smart devices and IoT-enabled machines communicating over open 5G networks (by 2020, there will be 50 billion connected ‘things’, each of which is a potential backdoor). This means there is no single set of reliably defined entry/exit points for traffic inspection.
So how to begin tackling such algorithmic cyber-attack tactics?
To effectively detect and mitigate such attacks, the ability to pervasively monitor, understand and analyse business transactions will be critical. Here are five capabilities that I believe will be essential for any solution providing such monitoring:
- Live visibility [with nanosecond-level resolution] - Far from trivial to achieve reliably and at scale, this is something that cannot be added as an afterthought; without it, the horse may have bolted and the damage been done before you realise it. In fact, the nirvana state is complementary live and retrospective visibility and threat detection, since it is also common to find out about an attack weeks after it first gained a foothold on your network.
- Seamless interoperability - The solution should come with robust, value-rich, point-to-point integrations with the likes of threat intel, upstream security analytics, host-based detection tools and identity management systems, to enable automated M2M detection, triage and investigative workflows.
- Community powered - No single organisation will have all the answers (threat indicators), and the threat landscape evolves and morphs each hour. Solutions that provide open API access and facilitate inter-organisation collaboration and knowledge sharing will be best positioned to provide true value.
- Entity-centric attack tracking - An attacker will hop from machine to machine as they conduct their coordinated activity, so viewing attack activity solely from a src/dst IP or host standpoint will be inadequate. The ability to easily pivot data exploration to focus on users, applications and files, as well as IP/host, will be critical.
- Machine learning based anomaly detection - As cyber attackers seek to obscure their tracks within the data, monitoring solutions will require the smarts necessary to consider a large range of variables and determine whether specific entities are at risk, or are indeed the source of risk. Practically speaking, this will involve the ability to baseline normal behaviour on a per-entity basis, in order to help locate suspicious outliers.
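To make that last capability concrete, here is a minimal sketch of per-entity baselining: each entity is compared only against its own history, so the same absolute value can be normal for one entity and anomalous for another. The entity names, numbers and z-score threshold are purely illustrative, not drawn from any real deployment or product.

```python
from statistics import mean, stdev

# Hypothetical per-entity activity history (e.g. requests per minute).
# Entity names and values are illustrative only.
baseline = {
    "svc-account-7": [120, 118, 125, 130, 122, 119, 127],
    "app-server-3": [900, 950, 870, 910, 940, 905, 925],
}

def is_outlier(entity, observation, history, z_threshold=3.0):
    """Flag an observation that deviates from this entity's own baseline."""
    samples = history[entity]
    if len(samples) < 2:
        return False  # not enough history to establish a baseline yet
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > z_threshold

# The same value (900) is a glaring spike for one entity and perfectly
# normal for the other, because each is judged against its own baseline.
print(is_outlier("svc-account-7", 900, baseline))  # True
print(is_outlier("app-server-3", 900, baseline))   # False
```

A real system would of course use richer features, streaming updates and more robust statistics, but the core idea, normality defined per entity rather than globally, is the same.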
Perhaps the most startling point in all of this is that this brave new algorithmic world is already here. If you want to position yourself and your organisation for success, and perhaps even run your security operations more proactively, make sure you choose a strategic partner with the expertise, experience and foresight to enable you to do it right.