Security service provider SIEM strategies have a fatal flaw

A recently released commercial from ID protection company LifeLock depicts a group of robbers rushing into a bank – the patrons of the bank panic, and there's a moment when everyone looks at the security guard, who casually responds, "Oh, I'm not a security guard, I'm just a security monitor." He then comically informs everyone that there's a robbery in progress. The point is clear: to protect your identity and finances, you need to do more than just monitor your credit.

The reliance on log and event data to combat threats has been a foundational piece of cybersecurity efforts ever since intrusion detection, firewalls and other systems started generating massive amounts of log data. Enterprises of all sizes send logs from every type of device, system and application to a SIEM (security information and event management) system – a central repository for trend analysis, event correlation, log retention and automated reporting for compliance.

Such tools do provide useful and actionable information. The glaring problem, though, is how these systems are pitched by managed service providers (MSPs), managed security service providers (MSSPs) and even multi-tenant datacenter (MTDC) providers – all seemingly fixated on SIEM as the be-all and end-all solution. SIEMs were designed and optimized to capture security events, and most fail to provide contextual analytics. At best, they tell us about events that have already happened on our networks – in other words, they act as security monitors, not security guards.

So should we simply abandon SIEM? Absolutely not. However, we need to broaden our current security conversations to include things like least privileged access, network segmentation and scale. Organizations should be working to lock down a network, even while being concerned about monitoring it.

The 451 Take

Say the words 'defense in depth' or 'network segmentation' in tight-knit security circles, and you'll witness a collective eye roll (and often hear the derisory remark 'expense in depth'). It's not that the above concepts have no value, but they've taken a beating from years of middling tactics, to the point where 'threat prevention' has left a bad aftertaste. Even so, detection and monitoring are not a sufficient substitute for solid security hygiene. While perhaps not as fun to talk about as 'threat detection' or 'machine learning,' the basics still matter, and the hard work is still needed to secure our networks. In all the breaches in recent memory, one can point to basics that were either overlooked, or allowed to persist for some (likely well-meaning) reason. That said, talk to most MSPs, and especially MSSPs, and sooner or later you'll likely be pulled into a discussion of SIEM – because frankly, that's the hot-ticket item. Let's not forget, though, that SIEM is merely one piece in a very large puzzle.

Years ago, the conversation around security centered on firewalls. As long as you had a good firewall, the logic went, the network was probably protected pretty well (this wasn't true, by the way). Then things like remote work enabled by high-speed internet everywhere, BYOD and public clouds happened, and slowly what was known as the perimeter melted away.

At the same time, another technique inherited from personal computing's early days – antivirus – was getting beaten up by attacker tactics that had become far more sophisticated than many, if not most, defenses. Despite adversaries learning to perfect advanced levels of global distribution for many common attacks, as well as coordinated techniques for penetrating a specific target (in the case of the dedicated adversary), many organizations (and security vendors) were seemingly stuck in the 1990s.

This led to a new focus on detection to counter the widespread assumption that every organization had been penetrated by security threats, whether they knew it or not. This, in turn, gave rise to entirely new segments of analytics dedicated to finding and containing threats, equipping preventive technologies with better attack recognition, and arming response teams with the means to seek out and discover threats that might otherwise remain hidden.

At the center of activity for many enterprises, SIEM remains a focus of operations. Despite its shortcomings, SIEM continues to represent the nexus of technology monitoring, analytics and operational data correlated with threat intelligence that security teams rely on to respond to issues as they appear.

Today, an entire industry has sprung up around SIEM platforms, with many MSSPs currently doing essentially two things: SIEM and associated incident response. Those tools and services do have merit, and can add tremendous value to existing security programs. But if all we're doing is effectively putting a 'security monitor' into corporate networks, then sitting back and taking note (albeit with pretty reports and helpful analytics) when intruders come marching in – we've lost the battle for our networks already.

By the time the SIEM tool alerts you to a problem, it's already too late. So how should we think of these tools? Securing your network is like a puzzle. Each of these tools is a piece – in fact, think of them as the edge pieces, which is where constructing a puzzle usually starts. Why? Because edge pieces are the easiest to identify and make sense of.

Then comes the hard work of putting together the rest of the puzzle. Long-term success requires expecting attackers to make it into the network. The next logical question is how to minimize the damage they can cause, and slow their progression through the LAN. If you've heard the phrase 'defense in depth,' that's exactly the approach. But instead of being a product, it's a design strategy, as well as a development strategy and a management strategy.

Ideally, it should not only parallel the sequence of reconnaissance, penetration, lateral movement and asset compromise often identified with modern attacks, it should also disrupt that sequence at every reasonable opportunity. The fact remains that tools can only get us so far in securing our networks. At some point, someone needs to do the hard work of seeing to the details that keep the ship tight.

Unfortunately, there are no tools for that, and service providers will find it difficult, if not impossible, to do it for us. Without a thorough understanding of the networks, the workers and roles, and all the applications involved, providers are simply left to guess, or follow very basic best practices, which should be just a starting point.

How do we start then? While it's true that a significant overhaul of datacenter security has occurred through cloud technologies, more conventional environments (service providers and enterprises alike) should still consider three particular values: the concept of least privileged access, network segmentation, and scale.

When it comes to least privileged access, the basic steps of restricting groups of people to their respective resources have probably already been taken. The most important (and difficult) part is restricting the admin accounts. From an attacker's point of view, those are what they're after. It's far too common for users to have local admin rights, and for admin accounts to be global. Fixing all this will be inconvenient, but we need to allow ourselves (and especially our admins) to be inconvenienced for the sake of security.
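The two problems called out above – users holding local admin rights, and admin accounts with global scope – are straightforward to check for if you have an account inventory. Below is a minimal sketch in Python; the inventory format, account names and field names are hypothetical, purely for illustration.

```python
# Minimal sketch: audit a (hypothetical) account inventory for
# least-privilege violations -- ordinary users with local admin
# rights, and admin accounts scoped globally instead of narrowly.

ACCOUNTS = [  # assumed inventory format, for illustration only
    {"name": "jsmith",    "role": "user",  "local_admin": True,  "scope": "workstation-42"},
    {"name": "ops-admin", "role": "admin", "local_admin": True,  "scope": "global"},
    {"name": "dbadmin",   "role": "admin", "local_admin": True,  "scope": "db-segment"},
    {"name": "mjones",    "role": "user",  "local_admin": False, "scope": "workstation-17"},
]

def least_privilege_violations(accounts):
    """Return (account, issue) pairs that break the two rules above."""
    findings = []
    for acct in accounts:
        # Rule 1: ordinary users should not hold local admin rights.
        if acct["role"] == "user" and acct["local_admin"]:
            findings.append((acct["name"], "user has local admin rights"))
        # Rule 2: admin accounts should be scoped, never global.
        if acct["role"] == "admin" and acct["scope"] == "global":
            findings.append((acct["name"], "admin account is global"))
    return findings

for name, issue in least_privilege_violations(ACCOUNTS):
    print(f"{name}: {issue}")
```

The point of the sketch is the shape of the check, not the tooling: whatever directory or endpoint-management system holds the real data, the audit reduces to enumerating accounts against a small set of explicit rules.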

A further complication is that vendors design applications that require elevated privileges to run properly, and determining how to mitigate and constrain those privileges takes time and effort. It may also require investment in new software, which is obviously not a simple solution.

For an application with exposures that are too numerous or too serious, the recommendation to replace it may very well be a non-starter, given the investment the business may have made in that asset. Fortunately, there are technologies specifically designed to implement more granular management of administrative access.

That's not to say a truly high-risk application should stand if the cost of remediating its weaknesses exceeds its business value. Sometimes the tough call has to be made. At a minimum, the company should work with the vendor to see if they can properly fix the issue in a reasonable amount of time. Alternatively, privilege management technology may be able to close the gap where the application itself cannot.

The second concept we mention – network segmentation – has been a known defense tactic for years, but remarkably, is still too seldom deployed. In an unsegmented or flat network, everyone and everything (think IoT) can access everything. While this may be convenient and easy to administer from an IT standpoint, it is a nightmare from a security perspective.

In this scenario – because most security tools are externally focused and not looking at what's going on inside – once an attacker has gained access to the network, they can easily move anywhere within it, and eventually gain access to the targeted data.

Network segmentation is basically dividing or splitting a network into several separate or isolated sub-networks or segments, limiting and regulating communication throughout the network. The security benefits of this are quite impactful. For example, in a properly segmented network, a device that becomes compromised in one segment does not automatically give the attacker access to devices and data in another segment – significantly limiting the threat exposure.
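The "limiting and regulating communication" part of segmentation can be made concrete with a small sketch. The Python below models segments as subnets and cross-segment traffic as an explicit allow-list, with everything else denied by default; the subnet layout, segment names and policy are invented for illustration, not drawn from any real deployment.

```python
# Minimal sketch of segment-aware access control. The subnet layout
# and the allow-list policy below are illustrative assumptions.
import ipaddress

SEGMENTS = {
    "pos":  ipaddress.ip_network("10.10.0.0/24"),  # point-of-sale systems
    "hvac": ipaddress.ip_network("10.20.0.0/24"),  # building systems
    "corp": ipaddress.ip_network("10.30.0.0/24"),  # office workstations
}

# Explicit allow-list of (source-segment, destination-segment) pairs;
# anything not listed is denied, so hvac can never reach pos.
ALLOWED = {("corp", "pos"), ("corp", "hvac")}

def segment_of(ip):
    """Map an IP address to its segment name, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def is_allowed(src_ip, dst_ip):
    src, dst = segment_of(src_ip), segment_of(dst_ip)
    if src is None or dst is None:
        return False  # unknown hosts are denied by default
    return src == dst or (src, dst) in ALLOWED

print(is_allowed("10.30.0.5", "10.10.0.9"))  # corp -> pos: True
print(is_allowed("10.20.0.3", "10.10.0.9"))  # hvac -> pos: False
```

In practice this policy lives in firewalls, VLAN ACLs or microsegmentation tooling rather than application code, but the design choice is the same: a default-deny posture with explicitly enumerated cross-segment paths.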

Segmentation can help keep malicious insiders, external adversaries or other unauthorized parties away from your data assets and intellectual property. The problem is, however, that properly segmenting a network can take a tremendous amount of time, and those efforts are generally thrown out the window as soon as troubleshooting is needed. If this is the case, are there systems in place to ensure that changes made to the network are reverted once the problem is identified and resolved? And are those changes made automatically, without a human having to intervene?

Recall the 2013 breach of Target stores. Attackers gained access to a third-party HVAC vendor's login credentials and eventually to Target's point-of-sale (POS) devices, where they deployed malware and gained access to customers' data, including debit and credit card information.

So why would an HVAC system be connected in any way to the core of the company's business, the POS systems? If network segmentation had been properly deployed, and those credentials limited to a segment that only contained systems the HVAC vendor needed, the breach might have been avoided, or at least greatly contained. Resignations, months or years of bad press, legal proceedings, fines, and lost sales could all have been avoided. Again, this approach will take discipline, and likely inconvenience admin staff. But it's fair to assume that if things are made easier for administrators, they may also be easier for intruders.

The final part of the equation – scale – really must be designed in at the foundation to be truly effective. Imagine a healthcare provider using an app that requires admin privileges to run. The admin team figures out how to minimize the damage that the account could do across the network – but now that change must be made, by hand, to 2,000 machines across the network.

This simply isn't practical or maintainable – meaning there was a failure in assessing the impact of risk and the cost of mitigation. Should the provider simply find a new application that doesn't have such a vulnerable footprint? Replacing the application may not be economically viable for a host of reasons, not the least being an organization's reliance on the app's capabilities.

When risk remediation is an option, doing tasks by hand also means there likely isn't a good way to ensure those settings stay put (how often do we see helpdesk staff disable security settings and features during troubleshooting, and then never put them back?) or are consistently deployed to new machines.

For security to be maintainable and scalable, it needs to be automated and continually enforced. The future of the datacenter embraces these values to a degree unprecedented in IT. The further we get from the state of the art in the datacenter, however, the more likely we'll continue seeing these ongoing problems. Whether automation at this level comes through scripting and scheduled jobs, leveraging Microsoft's Group Policy, or from another third-party tool, the approach will need to be evaluated by the various internal teams.
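Whatever the tooling, "automated and continually enforced" reduces to an idempotent check-and-remediate loop: compare each machine against a desired state, revert any drift, and run the loop on a schedule. The Python below sketches that pattern; the fleet, hostnames and setting names are hypothetical, standing in for whatever a real config-management agent or Group Policy refresh would manage.

```python
# Minimal sketch of "make a change once, then continually verify and
# enforce it" -- an idempotent check-and-remediate loop over a fleet.
# The machine list and the settings themselves are hypothetical.

DESIRED = {"firewall_enabled": True, "guest_account": False}

fleet = {
    "ws-001": {"firewall_enabled": True,  "guest_account": False},
    "ws-002": {"firewall_enabled": False, "guest_account": False},  # drifted
    "ws-003": {"firewall_enabled": True,  "guest_account": True},   # drifted
}

def enforce(machines, desired):
    """Detect drift from the desired state and revert it, returning
    the (machine, setting) pairs that were remediated."""
    remediated = []
    for host, settings in machines.items():
        for key, value in desired.items():
            if settings.get(key) != value:
                settings[key] = value  # revert the drift
                remediated.append((host, key))
    return remediated

# Run on a schedule (cron, an agent check-in, a Group Policy refresh);
# each pass is idempotent, so a clean fleet produces no changes.
drift = enforce(fleet, DESIRED)
print(drift)
```

Because each pass converges the fleet toward the same desired state, the troubleshooting scenario described earlier – a setting disabled and never restored – is corrected automatically on the next cycle, without a human having to remember.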

Regardless of the tools or methods, the end result should be that admins can make a change once, have that change deployed to the entire network, and then have it continually verified and enforced throughout its lifecycle. Until the endpoint and the distribution network are reinvented as significantly as the datacenter has been in the wake of the cloud and DevOps revolutions, these are the challenges organizations must embrace to protect what is often the initial target of opportunity for attackers of every stripe.

Dan Thompson

Senior Analyst

Aaron Sherrill

Senior Analyst

Scott Crawford

Research Director
