MITRE ATT&CK Based SOC Assessments

Overview of ATT&CK-based SOC Assessments

All notes here are collected from MITRE's training on Cybrary

  • Hands on

    • Red team

    • Time consuming

    • More accurate

    • Specific to the analytics tested

  • Hands off

    • Less time consuming

    • Less accurate

    • Good for analytics / engineering

    • Done through observation

Assessment Components and Flow:

Frame Assessment --> Rubric --> Analyze Components --> Interview Staff --> Compile Results --> Propose Changes

Framing a SOC Assessment

Component: Frame Assessment

  • Understand when a SOC should and should not run an assessment

  • Does SOC want an ATT&CK based assessment

  • One assessment isn't a long-term plan

  • Most assessment methodologies provide an estimated snapshot in time

  • Following up after an assessment will improve the security posture

  • The assessment paints broad strokes to give an indication of where the gaps generally lie

  • Set the right expectations.

Messaging

  • Staff who feel threatened may not comply with the process, may worry about how results are used, or may misrepresent / exaggerate; leadership may overreact to results

  • Make sure you position yourself as an ally to the SOC, not as an assessor or auditor.

Fit

Not a good fit for organizations that:

  • Aren't committed to follow-up

  • Don't have a SOC or SOC-like function

  • Don't have visibility into key data sources to detect techniques

  • Want a turn-key, no maintenance ATT&CK solution

  • Already have a good understanding of ATT&CK in their environment

Fit for organizations looking to improve and start branching into threat-informed defense.

Tips

  1. Use a phrase other than "assessment", e.g. "ATT&CK-based SOC study"

  2. Make sure leadership understands the point of the assessment, and that it aligns with their goals

  3. Use it as a stepping stone of improvement, not to gauge performance

  4. SOC staff are not being evaluated; rather, the SOC's policies, procedures, tooling, etc. are

  5. Prepare to follow-up after running an assessment

Scoping an Assessment

  • Type: Enterprise, ICS, Mobile

  • Within Enterprise: Win, Linux, Cloud, Mac, PRE, Network

  • Do they have the technology present in the environment? - Linux, Win, Mac, etc.

  • Are they supposed to be defending against the technology?

  • Can they potentially see it?

  • Do they want to assess it?

Working with the PRE Platform

  • Resource Development is usually outside of SOC scope

  • Reconnaissance potentially in scope depending on how it's executed

Questions to ask

  1. Is the technology present? - Win, Linux, etc.

  2. Should the SOC defend against it?

  3. Can the SOC defend against it?

  4. Does the SOC want to be assessed on it?

Be careful working with PRE - leave it out if unsure

Analysis of SOC Components with the ATT&CK Framework

  • Primary objective: Be able to map common SOC methodologies to ATT&CK

  • Secondary objectives:

    • Understand how to set + select a coverage scheme for an assessment

    • Know how to map logging strategies to ATT&CK

    • Know how to identify techniques a given analytic might detect

    • Be able to quickly analyze tools to understand coverage

Set Coverage Rubric

Component: Set Rubric

  • Be able to select coverage scheme for a given assessment

  • Know the difference between tech / sub-tech coverage

Choosing what to map to

  • Identify the thing you care about

  • Analyze it

  • Map it to ATT&CK

What is Coverage

  • Means:

    • What we're measuring

    • What range that measurement can take

  • Simple Detection:

    • What we're measuring: "Can we detect it?"

    • What values it can take: "Yes" or "No"

  • Detection and Mitigation:

    • What we're measuring: "Are we confident we can detect or mitigate it?"

    • What values it can take: "No", "Partially", "Mostly", "Yes"

  • Hybrid:

    • What we're measuring: "Will execution of this technique cause problems?"

    • What values it can take: numeric, e.g. 1-100
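
A minimal sketch (in Python, with illustrative scheme names and values) of how a coverage rubric can be written down as data, pairing what is measured with the values a measurement may take:

# Hypothetical rubric definitions; scheme names and values are illustrative only.
COVERAGE_SCHEMES = {
    "simple_detection": {
        "question": "Can we detect it?",
        "values": ["Yes", "No"],
    },
    "detection_and_mitigation": {
        "question": "Are we confident we can detect or mitigate it?",
        "values": ["No", "Partially", "Mostly", "Yes"],
    },
    "hybrid": {
        "question": "Will execution of this technique cause problems?",
        "values": range(1, 101),  # numeric score, e.g. 1-100
    },
}

def is_valid_score(scheme: str, score) -> bool:
    """Check that a recorded score is allowed by the chosen rubric."""
    return score in COVERAGE_SCHEMES[scheme]["values"]

print(is_valid_score("detection_and_mitigation", "Partially"))  # True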

Tips for Defining Coverage

  • When first starting out, keep it simple:

    • Can you detect it or not?

    • Have you deployed a mitigation or not?

  • Don't worry about pinpoint accuracy:

    • Assessments are supposed to paint broad strokes

    • Overly complex coverage charts can hurt more than help

  • Your metric should be defined by:

    • Your and the SOC's maturity

    • The overall goals of the assessment

Abstraction hierarchy

Tactic (most abstract) -> Technique -> Sub-technique (most specific)

Tips and Gotchas

  • Be as specific as possible when mapping to ATT&CK:

    • It's better to map to sub-techniques over tactics

    • Don't worry if pinpoint accuracy isn't possible

  • Be careful with inferred coverage:

    • Sub-technique coverage does not imply primary coverage

    • Primary coverage might not imply sub-technique coverage

  • Sometimes there isn't a good ATT&CK match, and that's okay!

Coverage Inference

  • If a SOC says that they cover a technique with high confidence, then you can infer that they cover the sub-techniques

  • If a SOC says that they cover all sub-techniques under a technique with high confidence, you CAN'T infer that they cover the actual technique

  • If a SOC says that they cover a technique with SOME confidence, then it is not possible to infer that any sub-techniques are covered with the same confidence

Coverage inference up/down the hierarchy is almost always dependent on context and user preference.
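
These rules can be captured in a small helper. This is a hedged sketch in Python, assuming ATT&CK-style IDs where a sub-technique shares its parent's prefix (e.g. T1059.001 under T1059) and a high / some / low confidence scale; per the note above, treat the roll-down rule as a preference, not a law:

# Sketch of the inference rules; technique IDs and labels are illustrative.
def parent_of(technique_id: str) -> str:
    """Return the parent technique ID of a sub-technique (T1059.001 -> T1059)."""
    return technique_id.split(".")[0]

def infer_subtechnique_coverage(coverage: dict, sub_id: str) -> str:
    """Roll high-confidence parent coverage down to an unmapped sub-technique.
    "Some" confidence on the parent tells us nothing about the sub-technique,
    and sub-technique coverage is never rolled up to the parent."""
    if sub_id in coverage:                        # an explicit mapping always wins
        return coverage[sub_id]
    if coverage.get(parent_of(sub_id)) == "high":
        return "high"
    return "low"                                  # default: no evidence of coverage

coverage = {"T1059": "high", "T1027": "some"}     # example assessment results
print(infer_subtechnique_coverage(coverage, "T1059.001"))  # -> high
print(infer_subtechnique_coverage(coverage, "T1027.002"))  # -> low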

Coverage Scheme

  • High Confidence (Green), Some Confidence (Yellow), Low Confidence (White)

  • Easy to present on the matrix

  • Easy to score

  • Understandable at all levels

  • Presents a good message

  • Confidence of detection is a good score - leaves ambiguity but still useful

Working with Data Sources

Component: Analyze Components --> Data Sources

  • Understand what ATT&CK data sources are

  • Know how to quickly identify relevant data sources

  • Be able to map informal logging strategies to ATT&CK data sources

  • ATT&CK features 58 unique data sources

  • On average, each data source maps to ~26 techniques

  • The most "useful" data sources:

    • Process monitoring (286)

    • Process command-line parameters (182)

    • File Monitoring (162)

Why should we care?

  • Most SOCs tap into data sources in some way

    • Logging

    • Detection tools

    • Custom signatures and analytics

  • If we can map what the SOC does with data sources to ATT&CK, then we can infer some kind of coverage

Analyzing Data Sources: Example

My SOC has amazing detection! We are:

  • Running AV on all endpoints

    • ATT&CK = Data Sources -> Anti-Virus

  • Leveraging quantum supremacy for predictive blockchain analytics

    • Not relevant to ATT&CK Data Sources

  • Capturing packets to and from all endpoints

    • ATT&CK = Data Sources -> Packet Capture

  • Forwarding all app logs into SIEM

    • ATT&CK = Data Sources -> Application Logs

  • Proactively patching all 0-days with next-gen AI

    • Not relevant to ATT&CK Data Sources

Now you can map the identified data sources onto the ATT&CK matrix to produce a heat map of the coverage you might expect from them.
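
A minimal sketch of turning this kind of mapping into a candidate heat map. The data-source-to-technique table below is a tiny made-up stand-in; in practice the mapping would be pulled from the ATT&CK knowledge base:

# Illustrative only: real mappings come from the ATT&CK content itself.
DATA_SOURCE_TO_TECHNIQUES = {
    "Anti-virus": ["T1204", "T1566"],
    "Packet capture": ["T1071", "T1048"],
    "Application logs": ["T1190", "T1078"],
}

soc_data_sources = ["Anti-virus", "Packet capture", "Application logs"]

# Any technique observable through at least one collected data source becomes a
# "potential visibility" cell on the heat map; actual detection still depends on
# how the data is used (see the gotchas below).
potential_visibility = {
    technique
    for source in soc_data_sources
    for technique in DATA_SOURCE_TO_TECHNIQUES.get(source, [])
}
print(sorted(potential_visibility))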

Gotchas and Tips

  1. Deploying versus Collecting versus Using

    • Are you deploying anti-virus, or using it as a data source?

    • Even if you're collecting something, you're not always using it

  2. Be as specific with data sources as possible

    • When possible, it's better to note specific log sources than broad categories (Vendor A's logs vs App logs)

  3. It's not just the TYPE of the data but WHERE it's collected

  4. Looking at a data source doesn't mean you'll see techniques

    • Often with data sources we like to ask "can you see it or not?"

    • Coverage will ultimately depend on how you're using the data source

  5. Techniques and sub-techniques don't always have the same data!

    • Sub-techniques often have data sources that don't apply to the primary technique

Resources for More In-Depth Analysis

Analyzing data source strategies can provide insight into SOC coverage

Analyzing Analytics

Component: Analyze Components --> Analytics

  • Understand why analytics are important with regards to ATT&CK

  • Know how to analyze analytics to identify ATT&CK coverage

How data sources are used

  • Analytics are detection rules designed to detect specific behavior:

    • Signatures, by contrast, tend to home in on specific indicators

    • Assume custom or SOC-controlled detection rules (for our context)

  • Functionally: ingest data source, apply filter(s), create alert:

processes = search Process:Create
reg = filter processes where (exe == "reg.exe" and parent_exe == "cmd.exe")
cmd = filter processes where (exe == "cmd.exe" and parent_exe != "explorer.exe")
reg_and_cmd = join (reg,cmd) where (reg.ppid == cmd.pid and reg.hostname == cmd.hostname)
output reg_and_cmd
  1. Start with all processes

  2. Filter 1: specific processes that match a condition

  3. Filter 2: specific processes that match a condition

  4. Filter 3: conditional intersection of output of steps 2 and 3
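
For readers more comfortable with code than CAR-style pseudocode, the same analytic can be sketched in Python over a list of process-creation events. The field names (exe, parent_exe, pid, ppid, hostname) are assumed for illustration, not tied to any particular product's schema:

def reg_from_suspicious_cmd(events):
    """Sketch of the analytic above, applied to process-creation events (dicts)."""
    # Filter 1: reg.exe launched by cmd.exe
    reg = [e for e in events
           if e["exe"] == "reg.exe" and e["parent_exe"] == "cmd.exe"]
    # Filter 2: cmd.exe not launched by explorer.exe (i.e. not an interactive user shell)
    cmd = [e for e in events
           if e["exe"] == "cmd.exe" and e["parent_exe"] != "explorer.exe"]
    # Filter 3: join the two sets where the cmd.exe process is the parent of reg.exe
    return [(r, c) for r in reg for c in cmd
            if r["ppid"] == c["pid"] and r["hostname"] == c["hostname"]]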

Assessments and Analytics

  • Many SOCs are already running some kind of analytics or signatures:

    • Filling in coverage gaps missed by tools

    • Honing specific detections to account for noise

    • Calling analysts to look at something more closely

  • From an assessment perspective:

    • Each analytic can potentially detect ATT&CK techniques

    • If we map analytics to techniques, we can create a coverage map

How to Analyze Analytics

  1. Find the data source the analytic is keying off of

  2. Try to determine (in words) what each filter is doing

  3. Map identifiers in the filter to ATT&CK:

    1. Look up strings / numbers in ATT&CK or consult 3rd party sources

    2. Look at the metadata for clues

    3. Ignore stuff if you can't find a hit

  4. For each technique identified by a filter, try to gauge coverage:

    1. Read the ATT&CK page to gauge fidelity

    2. Basic: this analytic is likely / unlikely to see the technique

    3. Useful but simple: this analytic provides high / medium / low coverage

    4. More advanced: this analytic provides (e.g.) 66/100 detection coverage

Helpful resources for mapping:

  • ATT&CK technique pages and CAR (the Cyber Analytics Repository)

  • The ATT&CK data source map

  • Google with site:attack.mitre.org

  • Google with "att&ck"

Summary and Takeaways

  • Analytics are detection rules designed to identify behaviors

  • Many of your existing analytics may already map to ATT&CK

  • To analyze them, follow a process (see "How to Analyze Analytics" above)

  • Record any mappings for analytics you analyze

  • Keep in mind: If you're doing this for the first time, you might not have any good ATT&CK analytics

Breaking down Tools

Component: Analyze Components --> Tools

  • SOCs rely on tools for their primary sources of detection

    • Passively detecting adversaries

    • Active threat hunts

    • Logging to a SIEM platform

  • Regardless of SOC maturity, knowing tool coverage is important

    • Understanding where you currently stand

    • Acquiring tools based on need

  • Analyzing tools can be hard

    • Core functionality is hard to evaluate objectively

    • Marketing material is not always the same as what's deployed

    • Often treated as black boxes where looking under the hood is off-limits

Critical Questions for Analyzing Tools

  1. Where does the tool run?

    • Endpoints

    • Network appliance: perimeter, east-west traffic

    • Email gateway

  2. How does the tool detect things? This helps with estimating fidelity.

    • Static indicators

    • Dynamic, behavior-based detection

    • Monitor for adversary artifacts

  3. What data sources is the tool monitoring?

    • What's the tool looking at when it's running?

Example Tool Breakdown

  • Sits on Endpoints (Does well on all tactics aside from Initial Access, Exfil, C2)

  • Uses mostly static detection (Bad sign - likely not running advanced analytics)

  • Monitors:

    • Windows Registry (good data source, but with static detection it only catches techniques that always modify the same registry values)

    • File system (Given it uses static detection, likely no ATT&CK relevance)

    • Outlook inbox (Unusual - but could potentially pick up some initial access vectors e.g. spearphishing)

    • Outbound network connections (Could be good, but likely just looking at ports and known-bad URLs / IPs. Might pick up a few C2 techniques)

Conclusion

  • Not strong around ATT&CK TTPs

  • Some potential detection of techniques that modify the registry

  • Unlikely, but might pick up some email related techniques

  • Might pick up some C2 techniques

  • General feel: could detect a handful of techniques, but a sufficiently motivated adversary can likely avoid it

Tips for Analyzing Tools

  • Don't worry about pinpoint accuracy when assessing tools

    • If you just paint coverage at the tactical level, that's a win too!

  • Read the vendor docs

    • Often filled with marketing speak - but sometimes useful

    • Alternatively: reach out directly to the vendor

  • Ask the SOC how they use the tool

  • Look for already-available analysis of the tool

    • Leverage ATT&CK Evaluations results whenever possible

Synthesizing SOC Assessments

  • Be able to put together the pieces to form a full ATT&CK-based SOC assessment

  • Secondary:

    • Be able to prepare, conduct, and interpret results from SOC interviews

    • Know how to choose a heatmap style + type to deliver results

    • Be able to aggregate heatmaps and interview results together

    • Understand the importance + types of recommendations for assessments

Interviewing Staff

Component: Interview Staff

  • Understand why interviews are important

  • Be able to prepare and conduct interviews with SOC staff

  • Know how to process findings after an interview

Intro

  • Not everything is captured in documentation, manuals, or dumps of configs

  • It is also important to understand the SOC's environment:

    • How is the SOC organized?

    • What is the business model?

    • What processes are in place?

    • What are the technical or operational constraints to be aware of?

The Interview Process

  1. Identify: Identify the type of interview to conduct

  2. Prepare: Prepare the interview

  3. Conduct: Conduct the interview

  4. Process: Process interview findings

Identify: Types of Interviews

  • Context / Framing the Engagement:

    • Who: CISO, SOC Manager, Project Lead

    • What: Background, Goals, Organizational Context, Priorities

    • Outcome: Set Expectations, Define Focus, Identify Deliverables

  • Technical Interviews:

    • Who: Tech Staff, Shift Leads, SMEs

    • What: Network and IT Architecture, Threat Landscape, Operational Processes and Procedures

    • Outcomes: Tool usage, technology deployment, threats, detection details, operational constraints, pain points

Preparing the Interview: Who to talk to?

  • SOCs come in different shapes and sizes:

    • Smaller SOCs tend to have staff who wear multiple hats

    • Larger SOCs tend to have staff with a clear focus

  • Depending on the focus of the engagement, consider interviewing:

    • SOC Manager

    • Tier-one Monitoring (possibly Shift Leads)

    • Tier-two Analysts / SMEs (Malware, Forensics, etc.)

    • CTI Analysts

    • Red Teamers and Cyber Threat Hunters

    • SIEM Administrators and Engineers

  • Consider groups outside of the SOC as well:

    • Network Firewall team

    • Desktop Support

    • IT/Server Administration

    • Cloud Services Management

  • If some functions are outsourced, you may want to interview representatives from those service providers

Preparing the Interview: Covering Logistics

  • Who does the interviewing?

    • One lead

    • One note taker

    • Other SMEs as needed

    • But, don't gang up

  • In person or remote?

    • Good to have at least one in-person

    • If all remote, try to use your webcam

  • How long?

    • Depends! People get tired, and SOCs are busy

    • Usually: 45-90 minutes per team, with breaks between

Preparing the Interview: Setting it Up

  • Coordinating and scheduling

    • Who is the customer POC who can work the schedule?

    • May want to get admin support from your side for a large engagement

  • Read-aheads for interviewers and interviewees

    • Good to do an initial data request if possible

    • If available, obtain a copy of CONOPs, org chart, role descriptions, etc.

    • Send along interview questions beforehand for interviewees to digest

  • Prep yourself beforehand

    • Build a list of questions you want to ask, but explore topics as they emerge

  • If in person: may want to have one or two days available and a dedicated conference room for drop-ins

Conducting the Interview: Structure

  • Should you interview one or multiple teams/staff?

    • Pro: can learn from inter-team discussions

    • Con: one strong team / personality can dominate

  • Things to keep in mind:

    • Bosses in the room may stifle free conversation

    • May have disagreements between contractors or contractors + staff

    • Make sure to present yourself as an ally!

Conducting the Interview: What to Expect

  • Organizational Attributes:

    • Some organizations may be more compliance-oriented vs threat-oriented

    • Larger organizations with more discrete teams may be siloed or disconnected

    • Ops may have a limited view of the greater organization, IT infra, etc.

    • Smaller org SOC staff may wear multiple hats, and be resource-constrained

  • Technology:

    • There may be inconsistent deployments of sensors

    • There may be incomplete collection of data from different sources

    • There may be different monitoring requirements for different business units or parts of the architecture

    • Not all components may be fully integrated or monitored by the SOC

    • The org may be transitioning to new products or solutions

Conducting the Interview: Bias and Perspective

  • Biases

    • We all have them, interviewers and interviewees

    • Try to understand the perspectives of the teams (and yourselves)

  • Neutrality

    • Try to remain neutral, particularly when the interviewees have differences of opinions amongst themselves

    • Don't ask leading questions (e.g. "don't you think Sysmon has richer detail than Windows events?") or promote favored solutions

Conducting the Interview: Starting Off

  • If available, perhaps have the main POC at the org (CISO, SOC Manager, Project Manager, etc.) perform intros

    • If not - do the best you can, and remember you're on their side!

  • Capture the attendees and their roles

  • Share with the interviewees your understanding of their function or the technology in question, and explain that you want to:

    • Confirm what you've gleaned so far

    • Identify the gaps and strengths that they know of

    • Drill down into deeper topics

Conducting the Interview: Asking Questions

  • Work off a list of prepared questions when you can

    • May need to reframe questions for that group's context

    • Caveat: Different organizations (or even different teams within one) may use different terms

      • May be evident when people "talk past each other"

      • Try to capture "local" jargon and terminology

    • May skip some questions if they are less relevant to the particular team you are interviewing

  • When there is disagreement or ambiguity, probe further to identify the source or understand the competing viewpoints

Conducting the Interview: Sample Questions

  • Describe a recent incident

    • How was that handled end-to-end?

    • What is your team's role in this event?

    • What first triggered response?

    • What were follow-on activities?

  • Describe how new analytics are rolled out

    • i.e. how they're developed, tested, documented, etc

  • What are the "pain points" you encounter in process X?

    • What would you like to see automated?

  • Be Direct: ask specifically about ATT&CK techniques and tactics

    • What events or analytics would lead to detecting Data Exfil?

Post-Interview Hot Wash: What to Produce

  • The assessor team should write up and compare notes

    • Do this as soon as possible while they are fresh in your mind

    • Notes might be cleaned up and captured formally as an appendix or source material for narrative in a report

  • Capture conflicting reports or opinions between different teams, or between regular staff and management

    • Try to understand each side's perspective

    • Pursue follow-up questions to help get to ground truth

  • Specifically identify ATT&CK-related tidbits

    • Any strengths? Any gaps?

    • Useful when compiling final report

Recall the blind men and the elephant problem: each team sees only part of the overall picture

Post-Interview Hot Wash: Typical Outputs

  • End-to-end process descriptions

  • Perceived strengths and gaps in capabilities

    • E.g. ATT&CK-specific tactics/techniques; general categories of cyber ops

  • Operational constraints

    • E.g. It is very difficult to change configuration X in our env

  • Priorities for threats, TTPs, or technical focus

  • Follow-on data requests

    • E.g. if additional sensors or data sources are identified

  • Additional insights into the Enterprise architecture and practices

    • E.g. connectivity, outsourcing, etc.

  • Which tools are really used vs shelfware

Communicating with ATT&CK

Component: Compile Results

  • Understand the types of heatmaps you can present

  • Know the tradeoffs with different heatmap styles

  • Be able to choose the right type and style of heatmap for a given context

Key Heatmap Ingredients

1. Scope

  • Including the right tactics

  • Displaying - or hiding - sub-techniques

2. Measurement Abstraction

  • Categorical: what are the buckets?

    • High Confidence, Some Confidence, Low Confidence, No Confidence of Detection, Static Detection Possible

  • Quantitative: what do scores measure?

3. Color Scheme

  • Choosing the right colors to convey your message

  • Pick a Good Scoring Scheme for your Heatmap

    • Too many categories can be confusing for readers

      • Easy to get lost in labels

      • Remember: your goal is to paint broad strokes

  • Settle on something that conveys the right information at the right layer

    • Removing just one category and modifying colors has significant communication impacts

Regardless of ATT&CK use case, have a good scoring scheme

  • Define categories that are relevant to your domain

  • Avoid mixing category types (confident + likelihood is too busy)

  • Know your audience: leadership wants the big picture, staff need details

  • Choose good color schemes (gradient, discrete, etc.)

  • Metrics can be great, but always have good justification(s) for your numbers

  • Use red as a color only as needed to bring attention to specific areas that should be focused on - glaring gaps

  • A softer color palette is more inviting

    • Conveys the same message, but easier to digest

    • Positions the results as less antagonistic: these are areas of improvement, not failure

    • Be cautious of using red
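
A hedged sketch of exporting such a scheme as an ATT&CK Navigator-style layer with a softer palette. The field names (techniqueID, color, comment) follow the Navigator layer format, but confirm the required fields against the Navigator version you target; the hex colors and technique IDs are only examples:

import json

PALETTE = {
    "high": "#8ec843",  # soft green  - high confidence of detection
    "some": "#ffe766",  # soft yellow - some confidence of detection
    "low":  "#ffffff",  # white       - low / no confidence (avoid harsh red)
}

assessment = {"T1059": "high", "T1566": "some", "T1003": "low"}  # example results

layer = {
    "name": "SOC assessment - detection confidence",
    "domain": "enterprise-attack",
    "techniques": [
        {"techniqueID": tid, "color": PALETTE[conf], "comment": f"{conf} confidence"}
        for tid, conf in assessment.items()
    ],
}
print(json.dumps(layer, indent=2))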

Being Realistic: Heatmaps are not Axiomatic

  • Coverage heatmaps are great

    • Easy to understand; tangible and straightforward; useful to staff

BUT

  1. Coverage doesn't always align with attack execution in practice

    • Techniques execution and corresponding detection can vary

    • Per-technique detection isn't always the right level of abstraction

  2. Coverage is not static; what's green today can change tomorrow

    • Attacker TTPs and defender practices rotate

    • Don't neglect what you already cover today

  3. Remember: ATT&CK heatmaps are almost always approximate

    • If you're doing this as a third-party, make sure the SOC knows this

    • If you're doing this in-house, make sure colleagues understand

Using sub-techniques in Heatmaps

Including sub-techniques is useful but hard to visualize

  • Visualize a subset of mapped sub-techniques

  • Know your audience

    • No sub-techniques for leadership - use abstract info (detections + mitigations together only at tactic level)

    • Include sub-techniques for engineers / details info

  • Keep sub-techniques in appendices or another form

Compiling a Final Heatmap

Component: Compile Results

Creating a Coverage Chart

  1. Create heatmaps denoting what each analytic and tool will detect

    • Focus primarily on these two; less so on data sources

  2. Aggregate results from step 1: create a "combined" heatmap

    • Always choose the "highest" score if aggregating tools and analytics

  3. Augment results from policies, procedures, and interviews:

    • Policies and procedures will discuss how specific tools are used

    • Interview results can describe how tools are used

    • Interview results can provide evidence of gaps in coverage

Formula

Tool Coverage + Analytic Coverage + Interview Positives - Interview Negatives

Tool1 Chart + Tool2 Chart = Tool Aggregated Chart

Tool Agg Chart + Analytics Chart = Tool,Analytics Agg Chart

Tool,Analytics Agg + Interview:

  • The red team might say that the SOC never detects something that was marked high-confidence in the tool or analytics map - thus downgrade it

  • The engineering team might say that they do cover a tactic - thus it can be upgraded
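
A minimal sketch of the formula, assuming an ordered three-level confidence scale and per-technique scores from each source (technique IDs and levels are illustrative):

# Ordered confidence scale: a higher index means more confidence.
SCALE = ["low", "some", "high"]

def aggregate(*heatmaps):
    """Aggregation step: combine tool/analytic heatmaps, keeping the highest score."""
    combined = {}
    for heatmap in heatmaps:
        for tid, level in heatmap.items():
            best = combined.get(tid, "low")
            combined[tid] = max(best, level, key=SCALE.index)
    return combined

def apply_interviews(combined, upgrades=(), downgrades=()):
    """Interview step: positives raise a score one level, negatives lower it one level."""
    for tid in upgrades:
        idx = SCALE.index(combined.get(tid, "low"))
        combined[tid] = SCALE[min(idx + 1, len(SCALE) - 1)]
    for tid in downgrades:
        idx = SCALE.index(combined.get(tid, "low"))
        combined[tid] = SCALE[max(idx - 1, 0)]
    return combined

tools = aggregate({"T1059": "some"}, {"T1059": "high", "T1003": "some"})
final = apply_interviews(aggregate(tools, {"T1027": "some"}),
                         upgrades=["T1003"], downgrades=["T1059"])
print(final)  # {'T1059': 'some', 'T1003': 'high', 'T1027': 'some'}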

Heatmap: Summary and Takeaways

  • To aggregate, you should have:

    • An analytic heatmap showing analytic coverage

    • A heatmap for each relevant tool (behavior-based, signature-based)

    • An understanding of strengths / weaknesses from interview

  • Final coverage can be summarized in a formula:

    • Aggregation step: Tool Coverage + Analytic Coverage + Interview Positives

    • Subtraction step: Initial Aggregate - Interview Negatives

  • When there are disagreements:

    • Choose higher coverage during aggregation

    • Choose lower coverage during subtraction

Aggregation of Partially Covered Techniques

  • It Depends!

    • Detection is not probabilistic

    • Aggregating coverage can be very lossy

  • Generally speaking:

    • Upgrade if sources use different/complementary detection methods

    • Leave as-is if using the same or similar methods

  • Look at your rubric

    • Quantitative approaches have more leeway

    • Qualitative approaches need more evidence

  • If all else fails, err to the side of caution

Proposing Recommendations

Component: Propose Changes

  • Understand the basic types of recommendations and their importance

  • Be able to deliver a set of prioritized techniques for the SOC to focus on

  • Be able to propose recommendations to improve coverage

  • Understand how assessments fit into a larger ATT&CK and SOC ecosystem

Never ATT&CK and run! Always give recommendations after an assessment, while the SOC's details are fresh in your mind

Typical Recommendation Categories

  • Technique Prioritization

    • Relevant + Important Techniques

    • Implementation Roadmaps

  • Process Refinement

    • Team Cohesion

    • Business Processes

  • Coverage Improvement

    • Analytics + Data Sources

    • Mitigations

    • Tool Acquisition + Usage

  • Follow-up Engagements

    • Red/Purple Teaming

    • Other assessments

Technique Prioritization

In addition to the heatmap, provide a small set of techniques for the SOC to focus on.

  • Makes interpreting the heatmap more tractable

  • Grounds the results in something clear and tangible

  • Provides a starting spot for other recommendations:

    • Adding in new analytics

    • Atomic testing / purple teaming / TTX

Highlight prioritized techniques in a different color

Tips for Technique Prioritization

  1. A small list of techniques is great for short-term wins

  2. Pick a strategy for recommending techniques based on coverage

    • If ATT&CK is not yet integrated: focus only on techniques with low coverage

    • If ATT&CK is integrated: focus on techniques with low or some coverage

  3. Focus on techniques that are immediately relevant

    • Are they used by relevant threat actors?

    • Are they popular or frequently occurring?

    • Are they easy to execute and do they enable more techniques?

    • Are the necessary logs readily accessible?

  4. Provide the SOC with your methodology + pointers so they can build their own lists for longer-term prioritization

Example

  • Concerned with APT32, OilRig, Turla, and APT28, where coverage is low

  • Have a SIEM platform ingesting API Monitoring, Authentication Logs, Anti-Virus, SSL/TLS Inspection, Windows Error Reporting

  • What techniques should they focus on?

Process

  1. Overlay Threat Actor Techniques

    1. Color each technique based on how many of the threat groups (one, two, three, or four) use it

  2. Overlay Detectable Techniques

    1. See which data sources are available, and compare that to the heatmap

  3. Select Techniques with Low Confidence

    1. Prioritize based on low confidence
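
A hedged sketch of the three-step process above. The threat-group and data-source mappings here are tiny placeholders; real ones would come from the ATT&CK group pages (or a threat library) and the data-source analysis done earlier:

# Illustrative placeholders; real mappings come from ATT&CK CTI content.
GROUP_TECHNIQUES = {
    "APT32":  {"T1059", "T1566", "T1071"},
    "OilRig": {"T1059", "T1078"},
    "Turla":  {"T1071", "T1003"},
    "APT28":  {"T1566", "T1003"},
}
DETECTABLE = {"T1059", "T1566", "T1003"}  # observable with the ingested data sources
COVERAGE = {"T1059": "some", "T1566": "low", "T1003": "low", "T1071": "high"}

# Step 1: overlay threat-actor techniques, weighted by how many groups use them.
group_count = {}
for techniques in GROUP_TECHNIQUES.values():
    for tid in techniques:
        group_count[tid] = group_count.get(tid, 0) + 1

# Steps 2-3: keep techniques that are detectable with the available data sources but
# currently sit at low confidence, ranked by how many relevant groups use them.
priorities = sorted(
    (tid for tid in group_count if tid in DETECTABLE and COVERAGE.get(tid) == "low"),
    key=lambda tid: group_count[tid],
    reverse=True,
)
print(priorities)  # e.g. ['T1566', 'T1003'] - relevant, detectable, currently a gap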

Technique Prioritization: Summary and Takeaways

  • Always follow up an assessment with recommendations

  • Prioritization plans make daunting coverage charts easier to digest

  • When crafting a prioritization plan, focus on techniques that are:

    • Relevant, based on a measure of CTI

    • Defensible, based on your understanding of the current defenses

    • Gaps, based on your assessment

Improving Coverage

  1. Adding Analytics

    • Increase coverage for specific techniques

    • Good for SOCs looking to complement the existing tooling

    • Requires staff, logging, and search tooling

  2. Adding new tools

    • Better coverage off-the-shelf

    • Good for SOCs still using primarily "cyber hygiene" tools

    • Can add significant coverage initially, but adoption is usually slow

  3. Ingesting more data sources

    • Increase visibility into raw data

    • Good for SOCs looking to expand their analytic program

    • Requires existing analytic processes + data ingestion pipeline

  4. Implementing mitigations

    • Bypass detection and prevent execution

    • Good for SOCs with strong control of their endpoints and devices

    • Can be challenging to verify + keep up to date

Tips for Data Source Recommendations

  1. Identify actionable data sources - ones that are easy to ingest

  2. Focus on data sources that offer useful coverage improvements

  3. Consider recommending data collection rollout strategies

  4. Link data source recommendations to the SOC's tooling + analytics

Tips for Tooling Recommendations

  1. Weigh tradeoffs between free + open-source tools and commercial ones

  2. Try to focus on tool types as opposed to specific tools themselves

  3. Focus on tools that help increase coverage the most but also fit within budget

  4. When you can - include analysis of tools the SOC is currently looking at

Process Improvement

  • Do teams communicate well?

  • How do they onboard analytics?

  • Do they have standard documentation?

  • How do they onboard new staff?

  • Do they have leadership support?

  • How do they acquire new tools?

  • How do they track threats?

  • Are assets accurately tracked?

  • How consistently are things deployed?

  • Do they have good cyber hygiene?

Additional Engagements

Cycle through the following:

  • Assess Defensive Coverage

    • Assessments and Engineering

    • Adversary Emulation Exercises

  • Identify High Priority Gaps

    • Threat Intel

    • Public Reporting

    • Relevant threat models

  • Tune or Acquire New Defenses

    • Writing detections

    • New tooling

    • Public resources
