brencronin

Cybersecurity Metrics

Updated: Nov 10

Metrics are the lifeblood of business decision-making; executives rely on them to steer their organizations toward success. This is especially true for senior leaders who lack the time to dig into the cost-to-value details of cybersecurity initiatives. However, while most employees appreciate metrics, it's important to recognize the pitfalls that surround them and how elusive good metrics can be.

These are the questions that loom large when it comes to metrics:

  • Is it a life-or-death metric (i.e., is the metric feared)?

  • Do you truly understand the underlying significance of the metric being tracked?

  • How straightforward is it to accurately capture and present the intended metric?

Is it a life-or-death metric (i.e., is the metric feared)?


To emphasize the significance of this question, let's examine a frequent but often overlooked scenario in call centers. A common tactic at call center service desks, especially when their performance is assessed on metrics like average ticket open/closure time, is categorizing tickets as 'deferred customer time,' even when resolving the issue does not depend on any action from the customer. Another tactic is prematurely closing tickets, only to open new ones when the customer calls back to complain.


The reality is that resolution takes a certain amount of time, but in these scenarios the tickets never accurately reflect it. Call centers under immense pressure to meet specific SLA metrics end up manipulating the system. I've witnessed this scenario unfold numerous times, and I can almost guarantee that when metrics are pushed too hard, such manipulation will occur, resulting in poor customer satisfaction, the opposite of what call center metrics are meant to achieve.
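As a minimal sketch with hypothetical ticket data, the gap between the actual and the reported average is easy to see once hours are reclassified as 'deferred customer time':

```python
# Hypothetical ticket data: (total_hours_open, hours_logged_as_deferred_customer_time)
tickets = [
    (40, 30),
    (12, 0),
    (72, 60),
]

# What the customer actually experienced.
actual_avg = sum(total for total, _ in tickets) / len(tickets)

# What the SLA report shows once "deferred customer time" is excluded.
reported_avg = sum(total - deferred for total, deferred in tickets) / len(tickets)

print(f"Actual average time open:   {actual_avg:.1f} hours")    # 41.3
print(f"Reported average time open: {reported_avg:.1f} hours")  # 11.3
```

The SLA looks healthy in the report even though customers waited more than three times as long.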



Interestingly, economists and social scientists have studied this phenomenon and coined two principles:

Goodhart's Law - When a measure becomes a target, it loses its effectiveness as a measure.

For instance, in a call center where the performance metric is the number of calls processed, focusing on that metric can increase call volume at the cost of customer satisfaction, creating a perverse incentive known as the Cobra Effect. The term originated in colonial India, where the British government offered a bounty for dead cobras, inadvertently encouraging people to breed cobras for the reward.


Campbell's Law - Often associated with 'metrics obsession' and 'teaching to the test,' this law posits that the more a metric is used in social decision-making, the more likely it is to be manipulated. It is important to understand that all metrics have inherent limitations in describing the full complexity of the world.


Do you truly understand the underlying significance of the metric being tracked?


Understanding metrics can be confusing at first, especially the acronyms. Some common metrics terminology includes:


KPI = Key Performance Indicator

KRI = Key Risk Indicator (to add to the confusion, some businesses also use KRI for "Key Results Indicator")

KCI = Key Controls Indicator


Too many metrics get lumped into the cybersecurity 'Key Performance Indicator' (KPI) bucket, including metrics that are really Key Risk Indicators (KRIs) and Key Control Indicators (KCIs). A cybersecurity program can also have several sub-metric types, each serving a distinct performance-measurement purpose (a short sketch follows the list below):

  • Implementation metrics

  • Efficiency metrics

  • Impact metrics (e.g., assessing dangers averted)

  • Proxy metrics (e.g., measuring intangibles)
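As a rough illustration (the metric names below are hypothetical examples, not a prescribed set), tagging each metric with its indicator type and sub-metric type keeps KPIs, KRIs, and KCIs from being lumped together:

```python
# Hypothetical examples of tagging metrics by indicator type and sub-metric type.
metrics = [
    {"name": "Percent of endpoints patched within SLA",
     "indicator": "KPI", "sub_type": "efficiency"},
    {"name": "Critical vulnerabilities open past their remediation deadline",
     "indicator": "KRI", "sub_type": "impact"},
    {"name": "Percent of privileged accounts with MFA enforced",
     "indicator": "KCI", "sub_type": "implementation"},
    {"name": "Phishing-simulation report rate",
     "indicator": "KPI", "sub_type": "proxy"},
]

# Group by indicator type so each category can be reported separately.
by_indicator = {}
for m in metrics:
    by_indicator.setdefault(m["indicator"], []).append(m["name"])

for indicator, names in sorted(by_indicator.items()):
    print(indicator, "->", names)
```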

Metrics should always be aligned with the overarching business goals they aim to measure, and they should serve as guides for informed decision-making.



One powerful approach to crafting effective metrics is known as the GQMR (Goal-Question-Metric-Refinement) method. Here's an illustrative example, with a short sketch after the list:

  • Goal: Achieve a reduction in a specific parameter.

  • Question: To what extent does this parameter exist in the current context?

  • Metric: Quantify this parameter as a numerical value or percentage present within the domain.

  • Refinement: Does the metric have the potential to mislead or misrepresent the true state of affairs?
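As a minimal sketch of the method (the goal, system list, and refinement threshold below are hypothetical), suppose the goal is to reduce the number of unpatched systems:

```python
# Hypothetical GQMR walk-through.
# Goal: reduce unpatched systems.
# Question: what fraction of systems are currently unpatched?
systems = {"web01": True, "web02": False, "db01": False, "mail01": True}  # True = fully patched

# Metric: percentage of systems that are unpatched.
unpatched_pct = 100 * sum(not patched for patched in systems.values()) / len(systems)
print(f"Metric: {unpatched_pct:.0f}% of systems unpatched")

# Refinement: could the metric mislead? With a small denominator the percentage
# swings wildly, and it says nothing about how critical the unpatched systems are.
if len(systems) < 30:
    print("Refinement note: small sample; report the raw count alongside the percentage.")
```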

It is also useful to view metrics through the lens of comparisons. Common comparisons include evaluating current conditions against desired future conditions and comparing past conditions to the present. These comparisons shed light on the effectiveness of previous decisions and actions.
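As a small sketch (the values are hypothetical), both comparisons reduce to simple deltas that can be tracked each reporting period:

```python
# Hypothetical patch-coverage figures (% of systems fully patched).
target_coverage = 95.0        # desired future condition
last_quarter_coverage = 78.0  # past condition
current_coverage = 86.0       # present condition

gap_to_target = target_coverage - current_coverage
change_since_last_quarter = current_coverage - last_quarter_coverage

print(f"Gap to target: {gap_to_target:.1f} points")                           # 9.0
print(f"Change since last quarter: {change_since_last_quarter:+.1f} points")  # +8.0
```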


How straightforward is it to accurately capture and present the intended metric?

The effectiveness of metrics depends in part on how well they are reported. Collecting data for metrics can be time-consuming, and it's worth checking whether the time spent collecting data for a metric exceeds the time invested in achieving the cybersecurity goal the metric represents. One of the first steps is deciding how the data will be collected and organized for presentation. To structure and present metrics effectively, consider aligning them with a cybersecurity program framework such as the CIS Controls, or categorizing metrics by major cybersecurity domains.
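As one way to organize that reporting (a minimal sketch; the control names reference CIS Controls v8, and the metric examples are hypothetical), each metric can be keyed to the control it supports so reports roll up by control rather than as a flat list:

```python
# Hypothetical metrics grouped by the CIS Controls (v8) they support.
metrics_by_cis_control = {
    "CIS Control 1 - Inventory and Control of Enterprise Assets": [
        "Percent of discovered devices present in the asset inventory",
    ],
    "CIS Control 7 - Continuous Vulnerability Management": [
        "Median days to remediate critical vulnerabilities",
    ],
    "CIS Control 17 - Incident Response Management": [
        "Mean time to detect and mean time to respond",
    ],
}

for control, metric_names in metrics_by_cis_control.items():
    print(control)
    for name in metric_names:
        print("  -", name)
```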



