Vsevolod Shabad, CISSP, CCSP, shares his experience of how and why organizations can have impeccable audit results, yet continue to suffer avoidable incidents, delivery friction and a steady accumulation of workarounds.
Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.
During more than 25 years in technology leadership, including roles as CISO and CIO across banking, mining, energy and telecommunications, I’ve repeatedly seen organizations face this challenge. Most security assurance metrics confirm that controls are in place, policies are approved and compliance targets are met. Yet I’ve watched teams in regulated environments quietly bypass controls they considered obstructive while remaining technically compliant. The metrics looked healthy, but the underlying security posture was not. This isn’t a failure of awareness or of discipline; it’s a failure of what “assurance” chooses to measure. Here’s how I’ve addressed the issue.
The Problem Assurance Consistently Misses
Security controls are constraints: they restrict choice, introduce friction and slow down legitimate work. What matters is not whether controls constrain behavior, but whether those constraints are experienced as legitimate or obstructive. When a control is perceived as arbitrary or disconnected from real risk, bypassing it becomes rational professional behavior. I’ve repeatedly observed engineers routing around friction to keep delivery moving, and managers tolerating deviations to meet operational commitments. None of this requires malicious intent, and none of it shows up in audit reports.
“Purpose” as a Design Attribute
Discussions on the issue often drift into the realms of culture or training. That is rarely sufficient, because controls are frequently designed without making their purpose explicit at the point where friction is felt. A useful design insight is that Viktor Frankl’s observation holds true in cybersecurity: people can endure significant constraints when the purpose behind them is clear. Controls that clearly signal the failure they are intended to prevent are tolerated differently from those that do not.
Nonetheless, purpose is no substitute for usability. I’ve implemented controls with a crystal-clear rationale that were still bypassed because they required disproportionate effort. Purpose can legitimize friction, but it cannot compensate for poor design.
How I Made It Work in Practice
Early in my career as an IT leader, at a regulated payment organization, I inherited a control that required up to six weeks of approvals to provision a new test environment, even though sufficient infrastructure capacity was available. The control was designed to manage risk and cost, but its purpose was opaque to delivery teams operating under tight timelines.
Teams responded by reusing shared environments, delaying testing, or performing validation outside approved platforms. Formal compliance remained intact, while exception requests and informal workarounds steadily increased. In other words, the control existed on paper, but its operational legitimacy eroded.
After I introduced tiered approval paths based on environment criticality and automated provisioning for low-risk requests, exception requests dropped by 65% within three months. Test cycle times improved without compromising oversight.
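To make the mechanism concrete, here is a minimal sketch of what tier-based routing for provisioning requests could look like. The tier names, request fields and approval paths below are illustrative assumptions, not the actual policy I implemented; the point is simply that approval effort scales with risk rather than being uniform.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    """Hypothetical environment criticality tiers."""
    LOW = "low"        # ephemeral dev/test, no production data
    MEDIUM = "medium"  # shared integration environments
    HIGH = "high"      # environments touching regulated or production-like data


@dataclass
class ProvisioningRequest:
    requester: str
    environment_name: str
    tier: Tier
    uses_masked_data_only: bool


def route_request(req: ProvisioningRequest) -> str:
    """Return an approval path based on the request's criticality tier.

    Low-risk requests are auto-approved and provisioned immediately;
    higher tiers keep human oversight proportional to the risk.
    """
    if req.tier is Tier.LOW and req.uses_masked_data_only:
        return "auto-approve: provision immediately, log for periodic review"
    if req.tier is Tier.MEDIUM:
        return "single approval: platform owner sign-off within 2 business days"
    return "full review: security and cost approval before provisioning"


if __name__ == "__main__":
    demo = ProvisioningRequest("dev-team-a", "feature-x-test", Tier.LOW, True)
    print(route_request(demo))  # -> auto-approve: provision immediately, ...
```

The design idea reflected in the auto-approval branch is that oversight effort stays proportional to environment criticality, which is what allowed friction to fall without compromising oversight.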
Later, while leading a cybersecurity team in a large bank, I saw the same pattern from the other side, including the same pressure to normalize shadow workarounds.
These experiences shaped my understanding of what I now call velocity-based assurance.
Velocity as an Assurance Signal
The concept of velocity offers a way of observing what static assessments miss: not because those assessments are poorly executed, but because control-centric assurance models have inherent limits in what they can reliably observe about real-world behavior. By velocity I don’t mean operational speed, but the velocity of alignment between control intent and actual behavior: how quickly an organization adopts a control without sustained enforcement, stops generating workarounds, and converts exceptions into design improvements rather than precedents.
Note that high velocity does not imply weak controls. Some of the strongest controls I’ve implemented exhibited high velocity precisely because (a) their purpose was clear and (b) their design aligned with real operational needs.
Measuring Velocity in Practice
Velocity does not require new tooling; I infer it from existing operational data:
- Exception Velocity: I track the trend of exception requests using existing ticketing systems. When I rolled out a control requiring mandatory code review approval from security architects, it generated 18 exception requests per week; within two months, this dropped to 4 without additional enforcement, indicating positive velocity. A short sketch of this calculation follows the list.
- Workaround Half-Life: I detect this through retrospective incident reviews and patterns in operational logs such as undocumented configuration changes, off-hours deployments and repeated use of emergency access. Short half-lives indicate misalignment between control design and real work.
- Enforcement Load: This measures the ongoing effort required to maintain compliance: manual approvals, escalations and reminders. High enforcement load is a direct contributor to burnout within security teams.
- Time-to-Legitimacy: This is the time it takes for a control to be followed without prompting and defended by its users. Demonstrating that a control is followed naturally is the strongest possible argument I can make to an external auditor.
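As a concrete illustration, here is a short sketch of how exception velocity could be derived from a ticketing-system export. The function names and the toy data are my own assumptions; the signal to watch is the trend, not the absolute counts.

```python
from collections import Counter
from datetime import date, timedelta


def weekly_exception_counts(request_dates: list[date]) -> list[int]:
    """Bucket exception-request dates into consecutive weekly counts."""
    if not request_dates:
        return []
    start = min(request_dates)
    weeks = Counter((d - start).days // 7 for d in request_dates)
    return [weeks.get(i, 0) for i in range(max(weeks) + 1)]


def exception_velocity(counts: list[int]) -> float:
    """Average week-over-week change in exception requests.

    A negative value means requests are declining without extra
    enforcement, i.e. positive alignment velocity in the sense used above.
    """
    if len(counts) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(counts, counts[1:])]
    return sum(deltas) / len(deltas)


if __name__ == "__main__":
    # Toy export: a few exception-request dates bucketed into weeks.
    start = date(2024, 1, 1)
    raw = [start + timedelta(days=d) for d in (0, 1, 2, 8, 9, 16)]
    print(weekly_exception_counts(raw))  # [3, 2, 1]

    # Toy trend: a control that generated 18 exceptions/week at rollout
    # and tapered to 4/week over two months.
    counts = [18, 15, 12, 10, 8, 6, 5, 4]
    print(exception_velocity(counts))  # -2.0 requests/week => improving
```

A sustained negative average (fewer exception requests each week without extra enforcement) is the pattern described above; a flat or rising trend suggests the control’s design, rather than its enforcement, needs attention.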
Static Versus Velocity-Based Assurance
In practice, this shifts the entire assurance model. I summarize the difference between static and velocity-based assurance as follows:
| Feature | Static Assurance (Traditional) | Velocity-Based Assurance |
| --- | --- | --- |
| Primary Question | Does the control exist? | Is the control accepted? |
| Success Metric | Clean audit report | Declining enforcement load |
| View of Friction | Necessary cost of business | Signal of design failure |
| Role of Exceptions | Compliance deviation | Feedback for redesign |
| Long-Term Outcome | Compliance theatre | Sustainable security |
This distinction is not about improving governance mechanics, but about recognizing where governance signals stop being reliable indicators of real-world control effectiveness.
Designing Controls Without Coercion
This perspective has changed how I approach assurance today. For critical controls, I ask four questions:
- Can those constrained by the control explain which failure it prevents?
- Is the cost of compliance lower than the cost of workarounds?
- Are exceptions treated as signals for redesign?
- Does enforcement effort decrease over time?
If the answers are consistently negative, maturity scores offer comfort rather than control.
A Starting Point for Practitioners
Security controls rarely fail loudly. They fail quietly, through compliance “theatre”: controls that exist and are audited while being bypassed in practice.
Purpose determines whether controls are experienced as safeguards or dead weight. Velocity reveals which outcome is unfolding. When assurance measures only the presence of controls, it creates confidence without reducing exposure – a gap that no increase in control maturity alone can fully close.
So, here is my advice to other practitioners inspired to start reexamining their own security controls:
- Identify the three controls that generate the highest number of exceptions or workarounds in your environment. Observe their exception velocity and enforcement load over a quarter.
- Where friction remains high, treat that friction as a design problem rather than a compliance failure, and address it.
Your metrics will still look healthy, and this time your underlying security posture will actually improve.
Vsevolod (Sam) Shabad, CISSP, CCSP, has over 25 years of experience in cybersecurity across banking, telecommunications and regulated critical infrastructure. He has held executive, management and technical roles, with responsibility for security control design, assurance and risk management in complex organizations. His cybersecurity work focuses on control effectiveness, assurance failure modes and the operational legitimacy of security controls.
