What if we could strengthen our defenses today by predicting future scenarios? Vu Van Than, CISSP, SSCP, CC, explains his idea for combining future-back thinking with threat modeling, to mitigate the most critical threats by working backward from potential future risks.

Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.

According to the OWASP Threat Modeling Manifesto (2021), threat modeling is the practice of analyzing representations of a system to identify and prioritize potential threats, and to assess the effectiveness of mitigations in reducing or eliminating those threats. It’s a structured process that supports informed decision-making around security and privacy risks. And, according to Adam Shostack in Threat Modeling: Designing for Security, understanding attacker motivations and potential attack paths is essential for effective threat mitigation.

However, one key weakness I noticed while working with conventional threat modeling is that nearly all existing frameworks are grounded in past patterns and known techniques. The underlying assumption is that historical attack behaviors are sufficient to predict and model future threats. Yet the most dangerous threats – those that challenge our assumptions at a cognitive level – often do not appear on those maps.

Challenging Perceptions

This led me to the idea that effective defense must model not only what attackers have done, but also test what we ourselves believe. What are we unconsciously treating as safe because we’ve never seen it exploited? What assumptions might be shaping our visibility without us realizing it?

In this sense, future-back threat modeling is less about diagrams and more about confronting the epistemic fragility of our strategic posture.

The core of the future-back approach is something I learned during a leadership course in 2020. This course emphasized the importance of strategic foresight and planning by:

  • Beginning with the end in mind
  • Working backwards from the envisioned future to the present
  • Moving step-by-step towards that vision

This mindset became foundational to how I now design threat modeling: not just to analyze the past, but to reimagine the future and trace it back to today’s actionable steps.

How I’ve Applied a Future-Back Approach

Preparing to Eliminate Ambiguity

I conducted a cybersecurity landscape analysis, reviewed global threat reports (ENISA, IBM, etc.) and clarified our business priorities in collaboration with stakeholders. Then I initiated a comprehensive risk assessment, including penetration tests, threat intelligence reviews and system audits.

While I can’t share the specific findings due to their sensitive nature, the process helped surface previously underappreciated exposures, particularly in areas where legacy systems intersected with business-critical operations. These insights helped prioritize mitigation efforts that aligned more tightly with both threat intelligence and executive concerns.

Understanding the Enemy and Ourselves

Through cyber threat intelligence channels, I analyzed indicators of compromise (IoCs) along with tactics, techniques and procedures (TTPs), then conducted a gap analysis using NIST 800-53 and MITRE ATT&CK. I categorized threat perspectives as:

  • Attacker-centric (who): Focusing on motives and threat actor profiles
  • Asset-centric (what, why): Identifying high-value targets like customer data
  • Architecture-centric (how): Understanding vulnerabilities and attack paths

This multi-angle view highlighted the need for better alignment between business-critical assets and our detection architecture. It also revealed that certain high-value attack paths had not been regularly tested in past red team exercises. That realization shaped the next phase of modeling and was critical in setting the stage for what we would later uncover using honeypots.
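
As a simplified illustration of this kind of gap analysis – not the actual tooling or findings – the sketch below compares a set of ATT&CK technique IDs surfaced through threat intelligence against those exercised in past red team tests, to highlight untested attack paths. The technique selection is hypothetical.

    # Hypothetical ATT&CK coverage gap check; the technique IDs are examples,
    # not findings from the exercise described in this article.
    observed_via_cti = {
        "T1110": "Brute Force",
        "T1595": "Active Scanning",
        "T1078": "Valid Accounts",
        "T1567": "Exfiltration Over Web Service",
    }
    exercised_by_red_team = {"T1110", "T1595"}

    untested = {tid: name for tid, name in observed_via_cti.items()
                if tid not in exercised_by_red_team}

    for tid, name in sorted(untested.items()):
        print(f"Gap: {tid} ({name}) seen in threat intel but never red-teamed")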

Honeypots as Instruments of Epistemic Testing

In one of our internal threat research exercises, we deployed a network of high-interaction honeypots across selected cloud environments. While the majority of activity captured aligned with well-documented techniques such as brute-force attempts and scanning patterns, we also encountered behaviors that – at the time – didn’t align with any existing MITRE ATT&CK techniques.

What stood out were repeated interaction patterns involving unexpected protocol sequences and the targeting of ephemeral subdomains – behaviors that felt too deliberate to dismiss as random noise, yet could not be confidently mapped to known tactics or procedures.
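
To give a concrete, simplified picture of how such unmapped behavior might be separated from well-documented noise, here is a hedged sketch; the field names and thresholds are invented for illustration rather than drawn from our production logic.

    # Hypothetical honeypot session triage (fields and thresholds are assumptions).
    sessions = [
        {"id": 1, "failed_logins": 240, "ports_touched": 3,   "odd_protocol_seq": False},
        {"id": 2, "failed_logins": 0,   "ports_touched": 900, "odd_protocol_seq": False},
        {"id": 3, "failed_logins": 2,   "ports_touched": 5,   "odd_protocol_seq": True},
    ]

    def classify(s):
        if s["failed_logins"] > 50:
            return "known: brute force (ATT&CK T1110)"
        if s["ports_touched"] > 100:
            return "known: scanning (ATT&CK T1595)"
        if s["odd_protocol_seq"]:
            return "unmapped: deliberate-looking, no ATT&CK match -> manual review"
        return "background noise"

    for s in sessions:
        print(s["id"], classify(s))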

At around the same time I attended a regional cybersecurity conference, at which a digital forensics and incident response (DFIR) team presented findings on a novel post-exploitation technique they had observed in the wild. This technique was later incorporated into the MITRE ATT&CK framework. The parallels between their findings and some of the anomalous activity we had recorded through honeypots raised a critical question: had we, perhaps unknowingly, witnessed early traces of the same technique before it was formally recognized?

This possibility prompted deep internal reflection. Were there other behaviors we had dismissed simply because they lacked labels or precedent? What else might we be missing – not because it’s invisible but because we’ve never thought to look?

We reframed our detection mindset accordingly. Instead of asking “Is this malicious?”, we began asking “Does this challenge what we assume to be secure?”

That shift led us to revise SIEM heuristics, audit our detection thresholds, and re-examine architectural trust boundaries. The honeypots didn’t give us answers; they gave us better questions. In cybersecurity, the quality of the question often determines the clarity of the defense.
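
As a toy example of that reframing – with invented names and data – a detection check can ask not only whether an event matches known-bad indicators, but also whether it contradicts something we explicitly assume to be secure, such as a list of approved subdomains.

    # Illustrative only: the assumption list, IPs and hostnames are invented.
    known_bad_ips = {"203.0.113.7"}                               # classic blocklist thinking
    approved_subdomains = {"app.example.com", "api.example.com"}  # an explicit assumption

    def review(event):
        if event["src_ip"] in known_bad_ips:
            return "alert: matches a known-bad indicator"
        if event["dst_host"] not in approved_subdomains:
            # Not provably malicious, but it contradicts something we assumed was safe.
            return "review: challenges an assumption (unapproved subdomain targeted)"
        return "ok"

    print(review({"src_ip": "198.51.100.9", "dst_host": "tmp-3f9a.example.com"}))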

To attract and record attack attempts, I deployed high-interaction honeypots. These weren’t just detection tools, but instruments to test the validity of our assumptions about how attacks manifest. They helped us uncover transitional attacker behaviors – early-stage variants that don’t align with any known TTPs. This lack of alignment surfaced potential blind spots, i.e., behaviors we might have unconsciously assumed to be safe, simply because they had not yet been formally observed or categorized.

For example, while we confirmed that brute-force login attempts and sequential port scans aligned with known models, we also observed anomalies that could not be explained by any ATT&CK technique. This highlighted the value of dynamic hypothesis testing, in which the question shifts from “Did this match a known attack?” to “Does this challenge our current assumptions?”

In this way, honeypots became more than bait; they became tools for offering glimpses into what tomorrow’s attacker might look like, even before the industry defines them formally.

Bridging Threat Modeling with Executive Risk Perception

A third realization came not from logs or network traffic but from sitting at the same table as stakeholders and board members. It became clear that, in environments of uncertainty, the ultimate decision about “what’s riskiest” is not made by the CISO, but by the executive.

Our role as security leaders is not to control risk, but to present its contours in a way that decision-makers can truly see. This requires understanding the operational and psychological environment in which executives make decisions: what pressures they face, what consequences they fear, and what makes risk feel “real.”

To make future risks intelligible and actionable, I connected:

  • Assumptions revealed through future-back threat modeling
  • Observed behaviors from honeypots and deception technologies
  • Historical and present data (from intel, scans, incidents)

When aligned, these elements transform a vague “possibility” into a plausible future scenario, a grounded challenge to current assumptions. This helps executive leaders perceive not just what is possible, but what is plausible enough to act on.
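
Purely as a hedged sketch, with invented fields, one way to make that alignment tangible is to record each candidate scenario together with the assumption it challenges, the honeypot observation that supports it and the historical data that grounds it, and to brief leadership only on scenarios backed by all three.

    # Hypothetical scenario record; the contents are illustrative, not real findings.
    scenario = {
        "assumption_challenged": "Ephemeral subdomains are too short-lived to be targeted",
        "honeypot_observation": "Repeated, deliberate probes of short-lived subdomains",
        "historical_support": "Threat intel noting early abuse of similar naming patterns",
    }

    def plausible_enough_to_brief(s):
        # Escalate only when assumption, observation and history all line up.
        keys = ("assumption_challenged", "honeypot_observation", "historical_support")
        return all(s.get(k) for k in keys)

    if plausible_enough_to_brief(scenario):
        print("Brief leadership: a plausible future scenario, grounded in evidence")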

Security, in this view, may serve as a strategic lens that makes the future more visible – and, perhaps, less dangerous.

Organizational Response and Post-Modeling Action

Identifying a plausible future risk is not the end of the process, but the beginning of a new decision pathway.

After threat modeling, I worked with leadership to follow our existing risk management framework and align outcomes with it. We focused on three potential responses:

  • Mitigate: Review defenses, develop detection playbooks, and adjust the risk matrix
  • Accept: Document the rationale for accepting the risk and ensure executive ownership
  • Transfer or Avoid: Engage third parties or adjust business processes

What matters is not just recognizing risk but embedding future insight into current systems of accountability and governance. This closes the loop:
Assumption → Hypothesis → Validation → Executive Perception → Operational Response
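
A minimal sketch of how such an outcome might be recorded so the loop stays closed – the structure and field names below are assumptions, not the organization’s actual register format.

    from dataclasses import dataclass
    from enum import Enum

    class Response(Enum):
        MITIGATE = "mitigate"
        ACCEPT = "accept"
        TRANSFER_OR_AVOID = "transfer_or_avoid"

    @dataclass
    class FutureBackFinding:
        assumption: str       # what we believed was safe
        hypothesis: str       # how that belief might fail
        validation: str       # evidence from honeypots, intel or audits
        executive_owner: str  # who perceives and owns the risk
        response: Response    # the operational decision

    # Illustrative entry only; the contents are invented.
    entry = FutureBackFinding(
        assumption="The legacy system is unreachable from the internet",
        hypothesis="A misconfigured proxy exposes it indirectly",
        validation="Honeypot traffic and audit findings suggest indirect exposure",
        executive_owner="CIO",
        response=Response.MITIGATE,
    )
    print(f"{entry.response.value}: owned by {entry.executive_owner}")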

The Nature of the Unpredictable

At its core, cybersecurity is not merely a technical challenge, but a continuous negotiation with the unpredictable and the unknown unknowns. What I’ve attempted here is not a universal solution, but a framework of thinking that seeks to bring the future into focus, test our assumptions and embed strategic reflection into the heart of defense.

That said, I recognize this approach is difficult. It depends heavily on context: the organization’s maturity, its leadership culture and the willingness of decision-makers to engage with risk, not just as a number but as a narrative.

Thus, it is not a methodology to be copied. It is a practice to be lived, adapted and grown into.

A Note on Limitations

This reflection is based on observations in a specific organizational and operational context. The effectiveness of this approach may vary significantly depending on an organization’s maturity, culture and threat landscape. Further validation across industries and scales would be valuable.

I hope that relating my experience, though context-specific, can spark discussions or adaptations that fit your own environment. In cybersecurity, none of us have all the answers, but we might have a few good questions.

Vu Van Than, CISSP, SSCP, CC, has 10 years of experience in cybersecurity, risk management, and compliance. He has led security strategy and teams in both technical and management roles. His work focuses on threat modeling, strategy development, and network defense.
