7 Steps for an APT Detection Playbook using ATT&CK™
August 11, 2017
For a small segment of industries earlier in this decade, advanced persistent threats (APTs) were a growing concern because of their ability to go undetected and to penetrate even those enterprises with perfect patching. Then, in 2013, Mandiant published a report on a China-based threat actor that they dubbed APT1, and the "APT problem" was writ large. MITRE cyber researchers decided early on to adopt an "assume breach" mentality, expecting that this would produce a better chance at APT detection. They began by researching data sources and analytic processes for detecting APTs through the use of endpoint telemetry data. This research was the beginning of an experiment that led to the development of ATT&CK™ (Adversarial Tactics, Techniques, and Common Knowledge), released in 2015.
Traditionally, defensive efforts have focused on the perimeter, building taller walls to keep the bad guys out, but the industry has come to realize that this strategy alone is insufficient. Although anti-virus products can defend endpoints, even these were found wanting in stopping APTs. Recognizing the need to advance the state of the art in defending the endpoints themselves, MITRE embarked on an experiment on a live network at one of our facilities to detect "adversaries."
The experiment was iterative, and involved designating a team to emulate adversaries (as described in the ATT&CK model) and another team to use analytics to detect intrusions and the scope of an intruder’s actions across the network. The iterative nature of the experiment gave rise to a methodology to improve the capabilities for adversary emulation and behavioral detection analytics. We believe that this methodology will be useful to any organization wishing to try their own internal "cyber game" or to create a playbook for hunting threats on their network.
7 Steps for an APT Defensive Playbook
- Identify Behaviors: Using ATT&CK for Enterprise, determine which techniques or behaviors are priorities for detection. There are several factors that drive this decision, including the prevalence of the technique (in the model), its potential impact to the network/endpoint, availability of data in the enterprise to detect the technique, and efficacy of any potential analytics built for the environment to detect the technique.
- Acquire Data: For each identified technique/behavior, determine what kind of data is required to detect it, and whether that data is already available or will require buying a new tool or product to collect. It is entirely possible that an organization has a trove of data and only needs to engineer a way to collect and process it. In other cases, the technique may be so advanced that it will require a custom-made tool to capture the relevant data.
- Develop Analytics: Raw data has little value without a means to process it effectively. An analyst can only process so much information in a day, so flooding them with data will accomplish little, no matter how good that data is. Analytics need to be written so that they focus the analyst's attention on particularly notable events, giving investigations sufficient focus. Use MITRE's Cyber Analytics Repository to find analytics that have already been developed.
- Develop Adversary Emulation Scenario: Using ATT&CK for Enterprise, develop an adversary-representative plan to test the effectiveness of the sensors and analytics. The scenario is built from known tactics, techniques, and procedures (TTPs) used by adversary groups known to target a particular network. Because the analytics are, in part, defined by the TTPs they cover, the scenario should incorporate those TTPs so the analytics can be properly evaluated.
- Emulate Threat: Once the plan is finalized, engage an adversary emulation team to run it and to determine how the operational defenders fare. A live MITRE network was used to execute the plan, which made for additional challenges, but also added considerable value to the analytic development process. Limit scope at first, and expand as practice improves.
- Investigate Attack: After the adversary emulation team conducts their side of the evaluation, it's the Blue Team's turn to recreate the timeline of the emulation team's activity. In an ideal world, all the activity would be found and the defenders could declare victory, but the Blue Team often finds that the emulation team used techniques they had not seen before. Although analytics can provide an increasingly accurate capture of what happened, deep-diving into the data is inevitable for a Blue Team that wants the complete picture.
- Evaluate Performance: This is the most important step of all, and often the most overlooked. All personnel involved need to meet and discuss lessons learned as a result of the exercise. It is not particularly helpful to the Blue Team if the emulation team says, "We owned your network, try again next time." All parties need to discuss what worked, what didn't, and what the emulation team was able to do that the Blue Team couldn't detect or had difficulty with due to false positives. Topics discussed in this step will drive the next steps taken in refining the security posture, not just for the next version of the exercise, but for the enterprise's own operational network defense.
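To make the "Develop Analytics" step concrete, here is a minimal sketch of a behavioral detection analytic. The event schema (dictionaries with `process`, `parent`, and `command_line` keys) is hypothetical, invented for illustration; a real deployment would read endpoint telemetry such as process-creation logs, and a production analytic would be tuned against the false-positive concerns raised in the "Evaluate Performance" step.

```python
# Hypothetical sketch of a behavioral analytic: flag command interpreters
# spawned by Office applications, a behavior associated with phishing
# documents that launch ATT&CK command-line interpreter techniques.
# The event format here is assumed, not taken from any specific product.

INTERPRETERS = {"cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe"}
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}

def flag_suspicious_children(events):
    """Return the subset of process-creation events where an Office
    application spawned a command interpreter."""
    hits = []
    for event in events:
        if (event["process"].lower() in INTERPRETERS
                and event["parent"].lower() in OFFICE_PARENTS):
            hits.append(event)
    return hits

if __name__ == "__main__":
    # Two sample events: one suspicious, one benign.
    sample = [
        {"process": "cmd.exe", "parent": "WINWORD.EXE",
         "command_line": "cmd.exe /c whoami"},
        {"process": "chrome.exe", "parent": "explorer.exe",
         "command_line": "chrome.exe"},
    ]
    for hit in flag_suspicious_children(sample):
        print("ALERT:", hit["command_line"])
```

The design choice here mirrors the playbook's intent: rather than forwarding every process event to an analyst, the analytic encodes one specific behavior so that only notable events surface for investigation.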
It is through this iterative approach of intrusion detection verification and refinement that real, measurable improvement can be achieved. By using the 7 Steps for an APT Defensive Playbook, an organization will begin to identify and close their own adversary detection security gaps by discovering and creating effective analytics for the threats they face.
For the story behind MITRE's "experiment" and the development of ATT&CK-based analytics, see the paper "Finding Cyber Threats with ATT&CK-Based Analytics."