It’s the time of year when cybersecurity companies publish sets of predictions about the cyber threat and security landscapes in the coming year. For 2023, we think that regulations will increasingly mandate offensive security testing across more sectors.
I’ve got another personal prediction: as the use of penetration testing and other types of offensive security testing grows, the number of organizations asking why their endpoint detection and response systems don’t always flag and alert on these tests will also grow. Understanding the reasons behind this will help security professionals get the most out of offensive security testing and their monitoring and detection solution.
We could forgive people for thinking that offensive security activities are dubbed “adversary simulation” because they just simulate what most adversaries do when they carry out attacks: breach the system from outside by compromising a single endpoint and then move internally through the network. Actually, that’s just one type of offensive security test. Offensive security testing offers an entire spectrum of tests to detect security gaps. These tests often stress control frameworks to their extremes, using techniques that are far more advanced than those typically used by most threat actors.
To understand this spectrum, think of the company as a walled city. Security teams can test whether there’s an exploitable security weakness in the organization’s defenses: is there a hole in the wall? They can test whether attacks exploiting a known weakness are stopped by the organization’s defenses: are there guards stationed by the hole? And they can test whether successful attacks are detected: did the watchmen see the enemy coming in through the hole?
Often, tests take entry through the hole in the wall as a given: the attacker has already entered the system. These are internal pen tests, also known as “assumed breach” pen tests. They skip the early stages of an attack and start from the premise that the attacker has already made it past the external perimeter. In these cases, endpoint compromises are out of scope because the endpoints are already assumed to be compromised.
And often, these internal pen tests physically introduce a special host into the organization’s premises and connect it to the network for the testing exercise: the tester’s laptop, or something like a Raspberry Pi. The tester then uses network-based attacks to find credentials for access before hunting for further security weaknesses. This activity typically goes undetected by the customer’s endpoint monitoring systems.
That’s a failure on the part of the monitoring system, isn’t it? In fact, it’s almost certainly not.
Monitoring systems like endpoint detection and response (EDR) and extended detection and response (XDR) do what they say on the label: they monitor endpoints (and, in the case of XDR, the network, servers, and cloud). They are intentionally designed to detect compromised hosts or compromised credentials within an already instrumented environment, using software agents or sensors that conduct the monitoring and collect data (or telemetry) into a database or data lake. Points in the system that are not instrumented with agents or sensors will not collect data, and so will never raise alerts on suspicious activity.
The specially introduced device used for an assumed breach pen test likely doesn’t have an endpoint agent installed and isn’t set up to feed endpoint telemetry to the organization’s monitoring systems. No endpoint agent on the unmonitored box? No endpoint data, and no alerts.
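To make that concrete, here is a minimal sketch in Python of how an EDR-style pipeline behaves when a host isn’t enrolled. The host names, events, and the enrolled_agents set are all hypothetical, invented purely for illustration:

```python
# Minimal sketch (hypothetical host names and events): an EDR-style
# pipeline can only evaluate telemetry from hosts that run an agent.

# Hosts with an endpoint agent installed and reporting in.
enrolled_agents = {"laptop-042", "dc-01", "web-srv-03"}

# Telemetry only exists for enrolled hosts; an unmanaged device such as a
# pen tester's Raspberry Pi never produces an event in the first place.
telemetry = [
    {"host": "laptop-042", "event": "credential_dump_attempt"},
    {"host": "web-srv-03", "event": "normal_login"},
    # "pi-pentest-box" sends nothing: no agent, no telemetry.
]

SUSPICIOUS_EVENTS = {"credential_dump_attempt", "lateral_movement"}

def raise_alerts(events):
    """Flag suspicious events, but only for hosts that are actually monitored."""
    for event in events:
        if event["host"] in enrolled_agents and event["event"] in SUSPICIOUS_EVENTS:
            print(f"ALERT: {event['event']} on {event['host']}")

raise_alerts(telemetry)
# Prints: ALERT: credential_dump_attempt on laptop-042
# The unmanaged test device contributed no data, so there is nothing to alert on.
```

Whatever the product, the principle is the same: the pipeline can only judge data it actually receives.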
Conversely, an external pen test that compromises an existing, instrumented host is exactly the kind of activity security teams should expect to detect. Of course, threat actors may also stumble onto a non-monitored device as their starting point. That is why network telemetry gives security teams a better chance of detecting both that activity and internal pen test activity.
So, what can security pros take away from this to improve their organization's security posture?
I’m not saying that security teams should not set out to test the ability of their EDR or XDR platform to detect intrusion activity. That’s a worthwhile exercise, and the platform absolutely should detect external pen tests. But testing a suite of detection tools with a test that isn’t specifically designed for that purpose is not the way to do it.
Security teams running pen tests first need to understand the test’s purpose and scope. Second, they should ensure that all elements of the network (not just endpoints) are instrumented with agents or sensors to detect malicious activity. There should be no gaps.
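One practical, low-tech way to find such gaps is to reconcile the asset inventory against the hosts actually enrolled in the EDR or XDR console. The sketch below assumes two hypothetical exports, asset_inventory.txt and edr_enrolled_hosts.txt, each listing one hostname per line:

```python
# Rough sketch: find instrumentation gaps by comparing the asset inventory
# with the hosts enrolled in the EDR/XDR console. Both file names are
# hypothetical exports, one hostname per line.

def load_hostnames(path):
    """Read one hostname per line, ignoring blank lines and comments."""
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

inventory = load_hostnames("asset_inventory.txt")      # everything known to be on the network
monitored = load_hostnames("edr_enrolled_hosts.txt")   # hosts reporting agent telemetry

gaps = inventory - monitored
if gaps:
    print(f"{len(gaps)} inventoried hosts have no endpoint agent:")
    for host in sorted(gaps):
        print(f"  - {host}")
else:
    print("All inventoried hosts are instrumented.")
```

Anything that never appears in the inventory at all -- an unmanaged device plugged in by a tester or an attacker -- won’t show up here either, which is why network-level telemetry matters as well.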
Finally, consider patching and prevention to be at least as important as detection, if not more so. To return to the hole in the wall, security pros should focus on fixing the hole before the attackers sneak in. That makes vulnerability assessments to find what needs patching one of the most important types of offensive testing a security team can do -- as long as the team then patches what it finds.
Jane Adams, information security research consultant, Secureworks