Time for a New Year’s Resolution: Your Software Security KPI—A Perfectly Secure Product
In “The One Thing,” Gary Keller and Jay Papasan discuss the nature of long-term goals. The goal itself is not the purpose; instead, the purpose is to change behavior and to know how to act appropriately in the moment. While they advocate for SMART goals (specific, measurable, achievable, relevant, and time-bound), they argue that the odds of achievement should be very low, almost impossible.
What Would That Goal Look Like for Software Security?
Goal: Any security vulnerability found in any environment, by any scan or vendor, will be remediated within 60 days.
How to Measure: Track findings as they are recorded and verify that no finding persists beyond 60 days.
Achievable: Arguably.
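As a sketch of what that measurement could look like: export findings from whatever tracker you use and flag anything that has outlived the SLA. The record fields and finding IDs below are hypothetical stand-ins for your tracker’s export format; the point is that the check is a few lines of code, not a quarterly spreadsheet exercise.

```python
from dataclasses import dataclass
from datetime import date, timedelta

SLA = timedelta(days=60)

@dataclass
class Finding:
    finding_id: str               # hypothetical fields; map them to your tracker
    opened: date
    closed: date | None = None    # None means the finding is still open

def sla_breaches(findings: list[Finding], today: date) -> list[Finding]:
    """Return every finding that persisted (or is persisting) past 60 days."""
    return [f for f in findings if ((f.closed or today) - f.opened) > SLA]

findings = [
    Finding("SCA-2041", opened=date(2025, 1, 2), closed=date(2025, 2, 10)),
    Finding("SAST-1142", opened=date(2024, 11, 1)),  # still open: a breach
]
for f in sla_breaches(findings, today=date(2025, 3, 1)):
    print(f"SLA breach: {f.finding_id}")
```

Run something like this daily in the pipeline and the goal stops being a slogan and becomes an alarm.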
Systems Over Goals – Scott Adams
Another way to frame this: habits are what Scott Adams calls systems. Adams frequently challenges the focus on goals, noting, “Goal-oriented people exist in a state of continuous pre-success failure at best, and permanent failure at worst if things never work out. Systems people succeed every time they apply their systems, in the sense that they did what they intended to do” (Forbes, https://www.forbes.com/sites/bryancollinseurope/2018/06/28/scott-adams-goals/). Adams emphasizes the importance of systems throughout his work.
Software engineers inherently understand systems—they work with them daily. Yet they rarely step back to examine the system they’re operating within, often finding themselves in a perpetual state of firefighting. As a result, technical debt, including discovered vulnerabilities, accumulates rapidly.
Let’s explore the habits that will build a more robust software product development process.
Habit 1: Stop Tracking Vulnerabilities
This goal isn’t accomplished by simply stopping vulnerability tracking. While that would technically work, it wouldn’t achieve the real objective of remediating security vulnerabilities. We might as well stop building software altogether if that were the approach.
Instead, the idea is to stop treating vulnerabilities themselves as a valuable metric. They aren’t. Tracking vulnerabilities often leads to arguing about them—their “exploitability” and “reachability”—rather than fixing the underlying problems.
The current end goal of most security programs is reducing risk through vulnerability remediation, while the overall quality of the development process remains secondary. This creates a system that practically guarantees failure. Instead of investigating where the software engineering process failed and produced the vulnerability, teams debate “Does this vulnerability expose the organization to risk in the short, medium, or long term?” Such arguments rarely yield meaningful results.
While knowing about vulnerabilities remains important, the key metric should be how many vulnerabilities have been converted into test cases or static analysis rules, and how many make it through to production. The latter number should trend toward zero.
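What does converting a finding into a test case look like? Here is a minimal sketch, assuming a path-traversal finding against an upload handler; the function, payloads, and finding number are invented for illustration, and the test runs under pytest.

```python
import pytest
from pathlib import PurePosixPath

UPLOAD_ROOT = PurePosixPath("/srv/uploads")

def resolve_upload(name: str) -> PurePosixPath:
    """Patched version of the code behind the finding: reject any name
    that would escape the upload root."""
    requested = PurePosixPath(name)
    if requested.is_absolute() or ".." in requested.parts:
        raise ValueError("path traversal attempt")
    return UPLOAD_ROOT / name

# The original finding, preserved as a regression test instead of a ticket.
@pytest.mark.parametrize("payload", ["../../etc/passwd", "/etc/passwd", "a/../../b"])
def test_finding_1234_path_traversal(payload):
    with pytest.raises(ValueError):
        resolve_upload(payload)
```

Once the finding lives in the suite, it can never silently regress: the build fails first.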
Habit 2: Leverage Runtime Protections
Ah, so if we implement firewalls (network or web application), API gateways, RASP (Runtime Application Self-Protection), or ADR (Application Detection and Response) tools, are we compliant with the goal? No.
These tools provide an essential layer of defense that can be rapidly deployed and updated to mitigate immediate threats—even those that aren’t or can’t be detected through other means. They also provide valuable insight into user behavior patterns and anomaly detection.
Additionally, it’s crucial to standardize logging, traceability, and observability. All new code should implement these practices, and teams should begin retrofitting existing code to incorporate them.
However you define your success criteria, make logging, traceability, and observability mandatory components. Will this increase development time? Initially, yes—as all additional work does. But engineering teams will become more efficient at implementation, support will become easier, and these improvements will ultimately pay for themselves. In fact, all these steps will prove cost-effective over time.
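What “standardized” means will differ by organization, but the shape is consistent: machine-parseable log lines that carry a trace identifier across a request’s whole path. Here is a minimal sketch using only the Python standard library; the logger name, fields, and payload are placeholders, and in a real service the trace ID would arrive via a propagation header rather than being generated in the handler.

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so logs are machine-parseable."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "msg": record.getMessage(),
            "trace_id": getattr(record, "trace_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

def handle_request(payload: dict) -> None:
    trace_id = uuid.uuid4().hex   # in practice, read from the incoming request
    log.info("request received", extra={"trace_id": trace_id})
    # ... business logic ...
    log.info("request completed", extra={"trace_id": trace_id})

handle_request({"order": 42})
```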
You might be thinking: “There’s no way an organization can complete all that code change work in a year.” You might be right, but that’s goal thinking. System thinking doesn’t focus on the timeline—it recognizes that this work needs to be done and emphasizes building the right habits and making appropriate decisions in the moment.
The key metric to track here is the lifespan of runtime protection rules. These should be deactivated when the underlying fix is implemented, with their duration trending toward our 60-day goal.
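Tracking that can be as simple as diffing the active-rule list against activation dates. A sketch, assuming your WAF or RASP console can export rule IDs with the dates they were enabled (the rule names here are invented):

```python
from datetime import date, timedelta

MAX_RULE_AGE = timedelta(days=60)

# Hypothetical export from the WAF/RASP console: rule id -> activation date.
active_rules = {
    "block-sqli-orders-api": date(2025, 1, 5),
    "block-xxe-upload": date(2024, 12, 1),
}

def stale_rules(rules: dict[str, date], today: date) -> list[str]:
    """Rules still active past the target lifespan: the underlying fix is overdue."""
    return [rule_id for rule_id, activated in rules.items()
            if today - activated > MAX_RULE_AGE]

print(stale_rules(active_rules, today=date(2025, 3, 1)))  # -> ['block-xxe-upload']
```

A rule that never gets deactivated is a vulnerability that never got fixed.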
Habit 3: Keep Third-Party Libraries Up to Date
Most modern software consists primarily of third-party libraries. While a sound update process is crucial to software product development, most organizations lack one. Many teams operate on an “if it ain’t broke, don’t fix it” mentality. The irony is that these same product teams often discourage their own customers from remaining on older versions of their software. While some may accept customer version lag, others force upgrades; every software developer understands that supporting multiple versions increases complexity, so engineers naturally want to minimize the number of supported versions.
Let’s be good stewards and help our third-party dependencies by upgrading when possible. This upgrade process must be controlled: libraries should be incorporated using specific versions, verified against known good releases, and reviewed for potential malicious code where feasible. All libraries should be pulled from an internal source, not directly from the Internet.
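In the Python ecosystem, for instance, pip already supports exact pins verified by hashes (its --require-hashes mode) and an internal mirror via --index-url. Below is a small sketch of a build gate that fails when a pip-style requirements file contains anything unpinned; the regex is a deliberate simplification, not a full requirements parser.

```python
import re
import sys

# Matches lines like "requests==2.32.3" (optionally with extras or hashes).
PINNED = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._\[\],-]*==\S+")

def unpinned_lines(path: str) -> list[str]:
    problems = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("#", "--")):
                continue  # skip comments and options such as --index-url
            if not PINNED.match(line):
                problems.append(line)
    return problems

if __name__ == "__main__":
    bad = unpinned_lines(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
    if bad:
        print("Unpinned dependencies:", *bad, sep="\n  ")
        sys.exit(1)
```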
Setting up this process isn’t easy—not because of the library management itself, but because of verifying application stability. This requires ensuring appropriate test cases exist and confirming that upgrades don’t introduce breaking changes.
This is the critical factor that will determine whether a year of striving for perfect security succeeds or fails. A product team unwilling to implement this process will never manage third-party libraries effectively and, consequently, will never achieve true security. The fundamental reason is that proper library management requires dedicated time to create comprehensive test coverage. If a product team doesn’t trust its test cases, it cannot achieve adequate security.
Evaluating the security of a system requires being curious about the system.
Habit 4: Practice New Mindsets
First, if the product team truly prioritizes security and quality, delivery speed becomes a secondary concern. When speed remains a pressing issue, ask leadership about their software’s total cost of ownership and potential savings from a more measured pace. While many will argue that new features generate more revenue than potential savings, the key question becomes: Where is the tipping point? How do we balance speed with sound engineering practices? Getting product leadership to ease off the accelerator slightly and ask more nuanced questions of their team can create significant positive change. However, this assumes product ownership values these aspects of software development. If they focus solely on the bottom line, the required effort increases dramatically.
Second, organizations must foster a mindset of curiosity. Most vulnerabilities are uncovered through asking questions: “What happens if I do X?” “What if I am able to do Y here?” Effective curiosity requires deep understanding of what’s being built and how it works. Hack-a-thons offer one approach—and these need not be after-hours events or short bursts of activity. The goal is to bring together developers, architects, business stakeholders, and others to collaboratively improve both software products and business functions. To drive participation, consider offering monetary rewards for solutions that deliver the most business value.
Third, we need to simplify our approach to data classification. The current complexity around what should or shouldn’t be encrypted has become unwieldy. Focus on three core principles: Least Privilege, Role-Based Access Control (RBAC), and encrypting all data not essential for system function. This means encrypting all user data by default. While this might raise questions about search and reporting capabilities, solutions exist—though that’s a topic for another discussion.
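To make “encrypt by default” concrete, here is the shape of it as a sketch using the open-source cryptography package’s Fernet recipe. The user record is hypothetical, and in production the key would come from a KMS or secrets manager, never from code.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production: fetched from a KMS, not generated here
fernet = Fernet(key)

# Hypothetical user record: encrypt everything not needed for system function.
user = {
    "id": 42,                             # needed for joins/lookups: left in clear
    "email": b"ada@example.com",          # user data: encrypted by default
    "notes": b"prefers invoices as PDF",  # user data: encrypted by default
}

encrypted = {
    "id": user["id"],
    "email": fernet.encrypt(user["email"]),
    "notes": fernet.encrypt(user["notes"]),
}

# Decrypt only at the point of use, behind Least Privilege and RBAC checks.
assert fernet.decrypt(encrypted["email"]) == user["email"]
```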
Habit 5: Improve Automated Tests
Smart fuzzing, which forms the basis of DAST (Dynamic Application Security Testing), uses intelligently chosen, semi-random inputs to trigger specific types of vulnerabilities based on assumptions about the underlying technology.
Vulnerability assessments and penetration tests follow similar principles. Red teams typically combine smart fuzzing with behavioral testing techniques to analyze how the application responds.
Both approaches can be transformed into more effective automated tests. This presents an ideal opportunity for partnership between development and security teams. While software engineering teams may not possess the specialized security expertise, regular collaboration with security teams can help build a more comprehensive and aggressive integration testing suite.
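One concrete form this partnership can take is property-based testing, which bakes fuzzing-style input generation directly into the ordinary test suite. A sketch using the open-source Hypothesis library, run under pytest; the parser is an invented stand-in for whatever input-handling code a red team flagged.

```python
# Requires the third-party "hypothesis" package (pip install hypothesis).
from hypothesis import given, strategies as st

def parse_quantity(raw: str) -> int:
    """Hypothetical input handler: any input must either parse cleanly
    or raise ValueError; nothing else is acceptable."""
    value = int(raw.strip())
    if value < 0 or value > 10_000:
        raise ValueError("quantity out of range")
    return value

@given(st.text())  # Hypothesis generates thousands of adversarial strings
def test_parse_quantity_never_fails_unexpectedly(raw):
    try:
        parse_quantity(raw)
    except ValueError:
        pass  # the one failure mode the contract allows
```

Each new security finding suggests a new property; over time, the suite encodes the red team’s instincts.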
Habit 6: Challenge Technology Changes
Few organizations truly need a diverse technology stack. While this doesn’t mandate a single-stack approach, the common argument that certain technologies excel at specific tasks, though valid, often doesn’t justify the complexity of maintaining multiple stacks.
Product development teams frequently experiment with new technologies to stay current with trends or to claim cutting-edge status; witness the current AI craze. The real challenge lies in maintaining technological currency while making strategic changes only when truly beneficial. However, that’s a topic that deserves its own discussion.
Habit 7: Get Back to Testing For Vulnerabilities
If you thought you’d escape traditional security testing, think again. All the standard testing methods—static, interactive, dynamic, ADR, RASP, and SCA scanning—remain essential. However, they must be implemented systematically.
Static analysis and SCA (Software Composition Analysis) should occur at multiple stages: during development, at code check-in, during builds, and in regular automated scans.
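As one illustration of running these checks systematically, the same gate can run locally, at check-in, and in the build from a single entry point. Here is a sketch wiring together two real Python-ecosystem tools, bandit (static analysis) and pip-audit (SCA); the src path is an assumption about project layout, and both tools must be installed.

```python
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src", "-q"],  # static analysis over the source tree
    ["pip-audit"],                  # known-vulnerability scan of installed deps
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())  # nonzero exit fails the commit hook or the build stage
```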
DAST (Dynamic Application Security Testing) belongs in a production-like staging environment and, depending on organizational maturity, in production itself.
Once an organization reaches sufficient maturity, implementing a Bug Bounty Program becomes the next logical step. These programs help identify more subtle vulnerabilities that automated testing might miss. As with other vulnerability discoveries, these findings should be converted into automated tests to prevent regression.
Are You Up For the Challenge?
The goal of remediating every security vulnerability within 60 days might seem impossible—and perhaps it is. But remember, the goal itself isn’t the point. The point is creating systems and habits that transform how our organizations approach security.
These seven habits—moving beyond vulnerability tracking, leveraging runtime protections, maintaining third-party libraries, practicing new mindsets, improving automated tests, challenging technology changes, and returning to systematic vulnerability testing—form a system. This system doesn’t guarantee perfect security, but it creates an environment where security becomes intrinsic to the development process rather than an afterthought.
Success isn’t measured by achieving perfect security—it’s measured by how consistently we apply these habits and how they shape our daily decisions. When a team embraces these practices as their standard operating procedure, they’re succeeding at the system level, regardless of whether every vulnerability gets fixed within 60 days.
The question isn’t “Can we achieve perfect security?” The question is “Are we building and following systems that make our software inherently more secure?” That’s a goal worth pursuing, one habit at a time.