The Philosophy of ATT&CK

July 24, 2018
We started ATT&CK almost five years ago as a way to categorize common adversary behavior for adversary emulation and intrusion detection research. The project has grown quite a bit since then, both in content and in the process for maintaining that content, to ensure it remains a useful resource for the community. We’ve released an ATT&CK Philosophy whitepaper to serve as the authoritative resource describing the design and philosophy behind ATT&CK because it is important to us that we remain transparent about how we make those decisions. Openness about our processes is especially critical as we move toward a more open model of managing ATT&CK, as mentioned earlier this year in John Wunder’s post about what's next for ATT&CK.
The whitepaper details the history behind ATT&CK as well as specifics like how the elements of ATT&CK break down and what various terms mean. We often get questions about what data we use, where it comes from, and how we make decisions (beyond the updates we push), so I’m happy we cover those aspects in the paper as well. We intend this to be a living document, so as we change, we plan to update it accordingly.
Not all (potential) Threats are Equal
ATT&CK was developed with a driving use case in mind—better detection of post-compromise cyber adversary behavior. We released a paper describing much of that work last year, and it’s important to keep in mind this original goal to understand the process we’ve developed for updating ATT&CK. Though our process is nuanced, one core concept drives it: you can prioritize how you defend by focusing on documented threat behavior.
Persistent threats (sometimes referred to as advanced persistent threats or APTs) are named as such because they have an objective to accomplish, and they will attempt to reach that objective multiple times and in many different ways when they encounter interruptions to their operations. Persistent threats can take many forms, from nation-state sponsored activity interested in espionage or theft of intellectual property, to financially motivated criminals infiltrating financial institutions. Actors across this spectrum may use very similar techniques to one another.
The space of possible techniques that adversaries can use is huge. You can look at what’s happened in the past, what’s happening now, and what could happen based on academic research, vulnerability catalogues, or just whatever idea someone has on a given day. It’s a daunting array of things to worry about for any organization, and it’s a difficult task to deal with the complexity of technology while balancing the impact of security decisions against usability. Scaling that down and focusing on empirically documented threat activity happening in the wild is a useful way to prioritize what to tackle first, and that serves as the core influence driving the types of information within ATT&CK.
We have used several different sources of threat information to document ATT&CK techniques, including:
- Threat intelligence reports
- Conference presentations
- Social media
- Open source code repositories
- Malware samples
We’re open about where technique and threat information comes from as long as it’s a reliable source and improves the community's collective understanding of techniques they may face. The ATT&CK team has many years of collective experience across threat intelligence, network defense, and red teaming, and we regularly apply that experience in selecting and vetting information.
What About Red Teams and New Technique Research?
Unfortunately, publicly available threat intel often isn’t enough to get a full picture of what adversaries are doing. What’s seen and reported is based on what data is available at the time, who saw it, and what the sensitivity of the information is – which leads to cases where information about adversary behavior is not publicly available. This presents a challenge for us because we want to keep ATT&CK up to date on the latest tactics, techniques, and procedures (TTPs) that defenders should know about. Given this reality, we use several methods to try to approximate what adversaries are doing in the wild.
Underreported Events – Not everything that happens is reported. Information restrictions and data sensitivities are a real limitation. If a credible source says a technique is in use in the wild and there is enough technical information available on how it could be used, then that’s likely enough evidence without having a publicly available report to reference.
Red Teams – Red teams are often driven by objectives similar to those of persistent threats—get in undetected and accomplish a goal. Sometimes the techniques that red teams use match what’s happening in the wild, and sometimes they don’t. If several red teams successfully use certain types of techniques that many of their customers are vulnerable to, then it’s likely a real-world adversary will end up using them too at some point.
New Technique Research – There is a lot of great work being published that brings to light new methods of subverting security measures. Sometimes these techniques are picked up by real-world threats quickly¹, and other times they are newly discovered when looking across old data with new details², so it's important that we consider newly discovered techniques with some level of discretion based on the likelihood that they will be (or have been) used.
Technique Decision Factors
Sometimes it makes sense to add new techniques when we get a contribution or find new reporting, but other times we might re-scope or add details to an existing, but related, technique to include the new information. We consider several factors when including new information to determine where and how it fits into the model and knowledge base:
- Objective: What is the technique accomplishing? Similar techniques may be performed the same way to accomplish different tactics. Likewise, different techniques may accomplish the same tactic in different ways.
- Actions: How is the technique performed? Is the "trigger" different between techniques that distinguishes them even though the result may be the same or similar?
- Use: Who is using it? Do multiple groups use the technique? If so, how is the use different or similar?
- Requirements: What are the components needed to use a technique, and what is affected by use of it? (For example, files, locations, registry changes, API calls, permissions, etc.) What is the overlap of components between techniques? Are they distinct or similar?
- Detection: What needs to be instrumented to detect use of the technique? This is related to requirements and actions, but could differ across techniques that are related.
- Mitigations: What mitigation options are available for the technique? Are they similar to or different from other techniques that are either performed in the same way or have the same result?
Most of the time, none of these questions alone will determine how we incorporate new information, but considering these factors helps us think through the best way to add information to ATT&CK in a useful way.
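To make the decision factors above concrete, here is a hypothetical sketch of how they might map onto structured fields in a knowledge-base entry. This is not the actual ATT&CK schema; the class and field names are illustrative assumptions, though the sample values (Scheduled Task spanning Execution, Persistence, and Privilege Escalation, with APT29 among its reported users) reflect how ATT&CK documented that technique at the time.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical structure (NOT the real ATT&CK schema): one way the
# decision factors could be recorded as fields on a technique entry.
@dataclass
class TechniqueEntry:
    name: str
    tactics: List[str]       # Objective: which tactics it can accomplish
    procedure: str           # Actions: how the technique is performed
    groups_using: List[str]  # Use: groups reported using the technique
    requirements: List[str]  # Requirements: components needed or affected
    data_sources: List[str]  # Detection: what must be instrumented
    mitigations: List[str]   # Mitigations: available countermeasures

entry = TechniqueEntry(
    name="Scheduled Task",
    tactics=["Execution", "Persistence", "Privilege Escalation"],
    procedure="Creates a task via schtasks.exe or the Task Scheduler API",
    groups_using=["APT29"],
    requirements=["Administrator rights", "Task Scheduler service"],
    data_sources=["Process monitoring", "Windows event logs"],
    mitigations=["Restrict task creation to privileged accounts"],
)
```

Note how a single technique can serve multiple tactics (the Objective factor) while its requirements and data sources stay the same, which is why no single factor alone decides how new information is incorporated.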
Help Us Continue
Curating the content in ATT&CK can be more of an art than a science at times, and we hope this blog post and the paper will help you understand our thinking. Yes, there are exceptions to the process, which we try to document as we go and use to inform how we think about future content. We apply a process for assessing the relevance of new information, which means we sometimes choose not to incorporate certain pieces of information – we need to stay focused on the core principles that make ATT&CK useful. We’ve received many contributions over the years that didn’t meet the criteria we set out for ATT&CK, even though many had good and well-reasoned justifications for why they should be included. Regardless of whether we include contributions, we appreciate the community’s continued willingness to help. The process has evolved over the years and will continue evolving to meet the needs of those who use and value ATT&CK.
Maintaining ATT&CK is also a lot of work. Just Enterprise alone is up to 219 techniques and 69 groups with an eye-popping 968 references. We look to the community to provide valuable input, both in content and process, to help keep it up to date and relevant. If you have information that you think should be included, please reach out to us at email@example.com or on Twitter @MITREattack.
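Counts like those come naturally out of ATT&CK's machine-readable form: the knowledge base is also published as STIX JSON (for example, in the mitre/cti GitHub repository), where techniques appear as `attack-pattern` objects and groups as `intrusion-set` objects. The sketch below tallies a tiny made-up stand-in bundle; the object types are real STIX/ATT&CK conventions, but the sample contents are invented for illustration.

```python
import json
from collections import Counter

# A tiny made-up STIX-style bundle standing in for the real ATT&CK file.
sample_bundle = json.loads("""
{
  "type": "bundle",
  "objects": [
    {"type": "attack-pattern", "name": "Scheduled Task",
     "external_references": [{"source_name": "Example Report A"}]},
    {"type": "attack-pattern", "name": "Spearphishing Attachment",
     "external_references": [{"source_name": "Example Report B"}]},
    {"type": "intrusion-set", "name": "Example Group",
     "external_references": [{"source_name": "Example Report A"}]}
  ]
}
""")

# Tally objects by STIX type: attack-pattern = technique, intrusion-set = group.
counts = Counter(obj["type"] for obj in sample_bundle["objects"])

# Distinct references across all objects, deduplicated by source name.
references = {
    ref["source_name"]
    for obj in sample_bundle["objects"]
    for ref in obj.get("external_references", [])
}

print(counts["attack-pattern"], "techniques")  # 2 techniques
print(counts["intrusion-set"], "groups")       # 1 group
print(len(references), "distinct references")  # 2 distinct references
```

Run against the full Enterprise bundle instead of this sample, the same tallies would produce the technique, group, and reference counts mentioned above.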
We’re also continuing this series to talk about some additional applications of ATT&CK and future enhancements. Stay tuned!