
Security Metrics Reference

This page was built in collaboration with Jared Pfost, our guest on episode 146 of The Cyber Risk Management Podcast.

Listen to the podcast episode: https://cr-map.com/podcast/146/

View Jared’s LinkedIn profile: https://www.linkedin.com/in/jaredpfost/

Objective: Enable a Security or IT leader to improve security investment decisions through measurement.

Business Problem: Manage risk to an acceptable level with limited resources.

  1. Measure what matters most:
    1. Build measurements for fundamental controls and for areas where investment decisions are needed.
    2. Leverage a risk assessment (as Cyber Risk Opportunities does) to prioritize investment decision areas, then use metrics to measure progress.
  2. Are we measuring controls or risk? Yes!
    1. Each control should be mapped to one or more risks.
    2. I prefer to define high-level Loss Scenario statements that resonate with business leaders,
    3. e.g. loss of intellectual property, service disruption, or reputational impact from a customer data breach.
    4. Each Loss Scenario has a “kill chain” where threats may exploit vulnerabilities.
    5. Metrics measure the effectiveness of controls to manage security posture or degree of vulnerability.
  3. Construct actionable metrics:
    1. Each metric should have a short-term and a long-term target value.
    2. Long term targets represent acceptable risk.
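As a minimal sketch of what an actionable metric record could look like (all field and class names here are hypothetical, not from an existing tool), the long-term target encodes the acceptable-risk level agreed with business leaders:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """A security metric with short- and long-term target values."""
    name: str
    current: float            # latest measured value (percent)
    short_term_target: float  # e.g. next-quarter goal (percent)
    long_term_target: float   # represents acceptable risk (percent)

    def drives_decisions(self) -> bool:
        # A metric keeps driving investment decisions until it
        # reaches the acceptable-risk (long-term) level.
        return self.current < self.long_term_target

# Illustrative values only
patching = Metric("% device vulns mitigated within policy timeframe",
                  current=72.0, short_term_target=85.0, long_term_target=95.0)
```

Once `current` meets the long-term target, the metric stops driving decisions and can move to an operational scorecard, as discussed below.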
  4. Who defines acceptable risk aka metric targets?
    1. Business leaders – with Security facilitating the discussion from start to finish.
    2. More on this as we share metric examples.
  5. Key metrics in my experience span IT service domains:
    1. Application development
    2. Device management
    3. Cloud configuration
    4. Identity & Access Management (IAM)
    5. Detection & Response
    6. Governance, Risk, and Compliance (GRC)
    7. Data
  6. Application
    1. % of high-impact applications meeting secure development requirements (DevSecOps). Example of a control coverage metric.
    2. % of applications in CMDB vs. enumeration by Security, i.e. attack surface management.
    3. % of application vulns mitigated within the policy timeframe. Example of a performance metric.
    4. % of applications with no critical vulnerabilities in production. Example of an outcome metric.
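The coverage and outcome metrics above can be sketched against a hypothetical application inventory (every field name and value here is illustrative):

```python
def pct(numerator: int, denominator: int) -> float:
    """Percentage helper; returns 0.0 when the denominator is empty."""
    return round(100.0 * numerator / denominator, 1) if denominator else 0.0

# Hypothetical inventory rows
apps = [
    {"name": "billing", "high_impact": True,  "devsecops": True,  "critical_vulns": 0},
    {"name": "portal",  "high_impact": True,  "devsecops": False, "critical_vulns": 2},
    {"name": "wiki",    "high_impact": False, "devsecops": False, "critical_vulns": 0},
]

high_impact = [a for a in apps if a["high_impact"]]

# Coverage metric: % of high-impact apps meeting secure development requirements
coverage = pct(sum(a["devsecops"] for a in high_impact), len(high_impact))

# Outcome metric: % of apps with no critical vulnerabilities in production
outcome = pct(sum(a["critical_vulns"] == 0 for a in apps), len(apps))
```

Note that coverage is scoped to high-impact applications while the outcome metric spans the whole inventory; scoping decisions like this should be explicit in each metric's definition.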
  7. Types of metrics:
    1. As your security program matures, metrics types will move from coverage to outcomes.
    2. Once a control is mature and no longer drives an investment decision, move its coverage metric to an operational scorecard.
    3. At that point it is no longer a Key Risk Indicator (KRI).
    4. The ability to drive decisions is what sets a KRI apart from an ordinary metric!
  8. Device
    1. % devices in CMDB vs. enumeration (by Security)
    2. % of devices meeting configuration standards. Separate device classes where control owners differ, e.g. end-user devices vs. infrastructure.
    3. % device vulns mitigated within policy timeframe.
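The CMDB-vs-enumeration comparison can be sketched as a set operation over two hypothetical inventories (device names are made up); the gap list is what makes the metric actionable:

```python
# Hypothetical inventories: CMDB records vs. what Security's scanning enumerated
cmdb_devices = {"lap-001", "lap-002", "srv-010", "srv-011"}
enumerated   = {"lap-001", "lap-002", "lap-003", "srv-010", "srv-011"}

# Coverage: share of devices Security can see that the CMDB also knows about
coverage_pct = 100.0 * len(cmdb_devices & enumerated) / len(enumerated)

# Devices Security found that are missing from the CMDB -- the follow-up list
unmanaged = sorted(enumerated - cmdb_devices)
```

The same pattern applies to the application CMDB metric above; only the asset class changes.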
  9. Cloud
    1. % of assets meeting configuration standards
    2. % of cloud vulns mitigated within policy timeframe
  10. IAM
    1. % of users meeting authentication requirements, e.g. MFA.
    2. % of assets meeting authentication requirements, e.g. MFA and PAM.
    3. % of service accounts meeting standard, e.g. no interactive login and periodic credential rotation.
    4. % of admin accounts reviewed per policy.
    5. % of accounts disabled per policy, e.g. within 12 hours of employee termination.
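The termination metric boils down to comparing two timestamps per account against the policy SLA. A minimal sketch, assuming hypothetical HR and IAM event data:

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=12)  # policy: disable accounts within 12 hours of termination

# Hypothetical (termination time, account-disable time) pairs
events = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 0)),  # 6h: within SLA
    (datetime(2024, 5, 2, 9, 0), datetime(2024, 5, 3, 10, 0)),  # 25h: SLA miss
]

within_sla = sum((disabled - terminated) <= SLA
                 for terminated, disabled in events)
pct_within_policy = 100.0 * within_sla / len(events)
```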
  11. Detection & Response
    1. % of users passing phishing simulations. Increase phishing sophistication as you mature and as the threat landscape evolves, e.g. AI-enabled threat actors.
    2. % of assets meeting monitoring standard (focus on cloud, devices, identities as needed).
      1. Bonus metric for mature shops: % of pen test or red team activities detected by blue team.
    3. % of incidents detected within the target timeline. Add incident severity as this matures.
    4. % of incidents contained within the target timeline.
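Adding severity to the detection metric means each incident is scored against its own target timeline. A sketch with hypothetical incident records and illustrative policy targets:

```python
# Hypothetical incident records: severity and hours until detection
incidents = [
    {"severity": "high", "detect_hours": 1.5},
    {"severity": "high", "detect_hours": 30.0},
    {"severity": "low",  "detect_hours": 60.0},
]

# Target detection timelines per severity (illustrative policy values, in hours)
targets = {"high": 4.0, "low": 72.0}

hits = sum(i["detect_hours"] <= targets[i["severity"]] for i in incidents)
pct_detected_in_target = round(100.0 * hits / len(incidents), 1)
```

The containment metric follows the same shape, substituting time-to-contain for time-to-detect.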
  12. GRC
    1. % of compliance controls with automated reporting (to maintain health continuously vs. just during audits)
    2. % of risk registry items actively managed i.e. with current treatment decisions by the business.
  13. Data
    1. % of backups meeting resiliency requirements, i.e. malware-resistant
    2. % of critical business functions with tested continuity plans
    3. % of Data Loss Prevention issues mitigated within policy timeframe