KPI Chaos to Cash Flow: Rethinking Security Metrics for Real Success

One of the primary ways to change how Application Security is viewed is to change the metrics that show it is working. There’s an old saying, “You can’t manage what you can’t measure.” This has led to a focus on finding the right data to measure. Freeman Dyson noted, “Observations not only disturb what is to be measured, they produce it.”

Many recommended Application Security Key Performance Indicators (KPIs) sabotage a successful program and product delivery. These metrics either overwhelm the product team with findings or promote poor software engineering practices. It’s imperative to choose metrics wisely.

The “Whack-a-Mole” Metrics
These KPIs are akin to playing a game of whack-a-mole: every time you think you’ve fixed a problem, another one pops up. This happens because the end goal of the Application Security program is simply less risk to the organization; the overall quality of the product isn’t really a focus. At this level, the typical indicators are (a sketch of how they are typically calculated follows the list):

  1. Vulnerability Counts: The total number of vulnerabilities discovered. Think of this as your mole count in the game. The more vulnerabilities that are uncovered, the better the program. After all, it means fewer potential vulnerabilities have made it to production. Right?
  2. Remediation Rate: The rate at which vulnerabilities are remediated or mitigated. The focus here is making sure that the vulnerabilities are getting cleared rapidly. Faster remediation time means more security awareness. Right?
  3. Vulnerability Types Tested: The variety of vulnerabilities your testing tools and processes can detect. A diverse toolkit ensures you’re not missing any sneaky moles. This is usually applied to legacy applications: as developers address certain classes of vulnerabilities, more classes are added to the testing so that developers aren’t overwhelmed. The more vulnerability types tested, the fewer security issues reach production, right?
  4. Number of Products Onboarded: It’s always good to find the moles in the most important applications, right? The goal of the application security program is not just one product, though; a company wants all of its in-house software to be monitored. More products onboarded means better security, right?
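
To make the arithmetic behind these indicators concrete, here is a minimal sketch of how they are often rolled up. The Finding record, its fields, and the sample data are hypothetical placeholders for whatever a scanner or vulnerability-management platform actually exports.

```python
# Minimal sketch of the "whack-a-mole" metrics. All data is hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Finding:
    product: str             # application the finding belongs to
    cwe: str                 # vulnerability class, e.g. "CWE-89"
    opened: date             # when the finding was reported
    closed: Optional[date]   # when it was remediated, if at all

findings = [
    Finding("billing-api", "CWE-89", date(2024, 1, 3), date(2024, 1, 20)),
    Finding("billing-api", "CWE-79", date(2024, 2, 1), None),
    Finding("web-portal",  "CWE-79", date(2024, 2, 5), date(2024, 3, 1)),
]

vulnerability_count = len(findings)                        # 1. vulnerability counts
remediated = [f for f in findings if f.closed is not None]
remediation_rate = len(remediated) / vulnerability_count   # 2. remediation rate
mean_days_to_close = sum((f.closed - f.opened).days for f in remediated) / len(remediated)
vulnerability_classes = {f.cwe for f in findings}          # 3. classes seen (stand-in for types tested)
products_onboarded = {f.product for f in findings}         # 4. products onboarded

print(f"Vulnerabilities found: {vulnerability_count}")
print(f"Remediation rate:      {remediation_rate:.0%}")
print(f"Mean days to close:    {mean_days_to_close:.1f}")
print(f"Vulnerability classes: {len(vulnerability_classes)}")
print(f"Products onboarded:    {len(products_onboarded)}")
```

Numbers like these are easy to produce, which is exactly what makes them so tempting.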

These four metrics all have one focus: vulnerabilities. On the surface, that seems like a good thing. The challenge with these metrics is the complexity around them; there are too many variables at play. A single class of vulnerability can require an architecture change that takes significantly longer than any other finding, and vulnerability counts can lead to similar vulnerabilities being reported in a manner that hides their underlying cause.

The focus on vulnerabilities draws attention to the belief that application security slows down product development. It reinforces the false assumption that application security is a “cost” rather than an enabler.

The “Sherlock Holmes” Metrics

These KPIs try to evaluate the success of an application security program by using correlation rather than causation. Instead of focusing on the vulnerabilities directly, these metrics focus on the states the vulnerabilities are in as they move through the lifecycle (a rough sketch of the calculations follows the list).

  1. Percentage of Code Under Test: The proportion of your codebase that is covered by security tests. More coverage means fewer hiding places for vulnerabilities.
  2. Exemption Rates: The rate at which a product team requests exemptions for vulnerabilities. The more frequently a product team requests exemptions, the greater the likelihood that a vulnerability can leak through to production. Exemptions exist for findings that may not be reachable or exploitable. They can also exist when a team is not willing to put in the work to address the issue at hand, for either technical or resource reasons.
  3. Pass-Through Rates: The number of new vulnerabilities that escape detection during the development and testing phases and make it into production. The belief is that success can be measured by how few of the tested vulnerability classes leak through. Of course, every vulnerability of a class that is not being tested will leak through, and that throws off all of the metrics.
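
As an illustration, here is a hedged sketch of how these three ratios are commonly computed. Every input value is invented; in practice the numbers would come from coverage reports, the exemption workflow, and production incident records.

```python
# Hypothetical inputs for the "Sherlock Holmes" metrics.

# 1. Percentage of code under test
lines_covered_by_security_tests = 42_000
total_lines_of_code = 68_000
percent_code_under_test = lines_covered_by_security_tests / total_lines_of_code

# 2. Exemption rate: exemptions requested relative to findings raised
exemptions_requested = 14
findings_raised = 120
exemption_rate = exemptions_requested / findings_raised

# 3. Pass-through rate: vulnerabilities that escaped to production
found_in_production = 5
found_before_release = 95
pass_through_rate = found_in_production / (found_in_production + found_before_release)

print(f"Code under security test: {percent_code_under_test:.0%}")  # 62%
print(f"Exemption rate:           {exemption_rate:.0%}")           # 12%
print(f"Pass-through rate:        {pass_through_rate:.0%}")        # 5%
```

Notice that every denominator is defined by what the tooling already tests, which is exactly the dependence described next.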

While these three metrics appear to shift the focus away from vulnerabilities and toward the software development lifecycle and product team performance, they are ultimately still very dependent on the same underlying statistics. Each is heavily influenced by the number and types of tests, and each ultimately positions the application security team as either an enabler or a police force, neither of which results in a truly better product.

The “Security Investment” Metrics
These are the hardest to get. They require thoughtfulness and a willingness to track real costs, and they require transparency into a program beyond surface-level findings. These are the KPIs that dig deep into the effectiveness and collaboration capabilities of the software security team (a rough costing sketch follows the list).

  1. Total Cost of Production (TCP): The overall cost of managing and maintaining a product. This metric tells an organization what it really spends on a product, including ancillary items such as the cost of making sales (including responding to RFAs), responding to security incidents, maintenance, and more.
  2. Downtime Cost: The cost of the time your application is down. Factored in are the support costs for bringing the system back up as well as for addressing the issue that brought about the downtime.
  3. Cost to Address New Vulnerabilities: The expenses involved in fixing zero-day vulnerabilities. This includes both immediate response costs and long-term fixes to prevent future occurrences. One of the challenges with mean time to remediation is that it ignores the total cost. For many organizations the mean time to remediate the Log4Shell finding was low in calendar days because they found they were not vulnerable; the real cost was far larger given the number of people involved in making that determination.
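
As an illustration only, here is a rough sketch of how these costs might be tallied. Every figure is an invented placeholder; real numbers would come from finance, incident post-mortems, and time tracking rather than from security tooling.

```python
# All figures below are hypothetical placeholders.

loaded_cost_per_person_hour = 120  # assumed fully loaded hourly rate

# 1. Total Cost of Production (TCP): everything spent to keep the product viable
tcp = sum([
    1_200_000,  # engineering and maintenance
    150_000,    # supporting sales (e.g., responding to RFAs)
    90_000,     # responding to security incidents
    60_000,     # compliance and audit support
])

# 2. Downtime cost: lost revenue plus the people working the outage
hours_down = 6
revenue_per_hour = 4_000
responders = 8
downtime_cost = hours_down * (revenue_per_hour + responders * loaded_cost_per_person_hour)

# 3. Cost to address a new vulnerability: count the triage effort even when the
#    answer turns out to be "not affected" -- the Log4Shell lesson above
people_triaging = 25
hours_each = 10
remediation_engineering = 30_000
new_vuln_cost = people_triaging * hours_each * loaded_cost_per_person_hour + remediation_engineering

print(f"Total Cost of Production: ${tcp:,}")            # $1,500,000
print(f"Downtime cost:            ${downtime_cost:,}")  # $29,760
print(f"New-vulnerability cost:   ${new_vuln_cost:,}")  # $60,000
```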

The most obvious question from this point forward is, “Well, how does one track these and show that the application security program added value?” That is a valid question. After all, there is no way to run two timelines: one cannot see what happens to the same workload both with an application security program in place and without one.

The answer lies in the fact that software security is about ensuring that the software development life cycle follows sound engineering practices. Nearly all of the software vulnerabilities that leak into a production release are the result of gaps in the software development process.

It’s time that software security recognizes that a more secure application doesn’t require more money to build; it saves money to maintain.
