Not getting what he needed from conventional threat modeling, Vu Van Than, CISSP, SSCP, CC, tried something different, leveraging honeypots to model and test assumptions about adversary behavior before incidents occurred.
Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.
Threat modeling is supposed to help me reason about adversaries, attack paths and controls. But, in practice, what I get out of a threat model is only as good as the assumptions I ‘smuggle’ into it – i.e., assumptions I introduce without noticing. One such smuggled assumption is the quiet belief that “this is safe enough because we have not seen it exploited”.
That’s why I introduced and used Future-Back Threat Modeling: I start from a plausible future state, then work backward to the present. My aim is not to predict the future, but to expose what I’m currently taking for granted and to pressure-test those beliefs before an attacker does. That creates a practical problem: how do I test assumptions about adversary behavior early – before they become incidents?
Traditional monitoring tells me what is happening. It doesn’t always tell me whether my future-facing assumptions are right. This is why I want to reframe the concept of honeypots, which are commonly described as “decoy systems designed to lure and trap” malicious actors. Used well, they help defenders avoid passively awaiting an attack and instead actively gather intelligence that strengthens security posture.
My definition in this article is narrower and more operational. Within a future-back cycle, I can treat honeypots as strategic sensors: instruments that generate evidence to confirm, refine, or contradict the assumptions embedded in my threat model. So, this isn’t another “what is a honeypot” article; I’m going to explain how I’ve changed the purpose and success criteria of honeypots by using them to test future-back threat-model assumptions.
Why the Traditional Honeypot Isn’t Enough for Future-Back Success
Many adversaries begin by scanning external subnets and probing public-facing hosts, trying to exploit the weakest machine. The traditional honeypot ‘tricks’ the adversary into engaging, reports the event to central monitoring, and can help detect attack attempts early – frequently associated with the first steps of an intrusion lifecycle (e.g., the early stage of a kill chain). Practically, that means artifacts: source IPs, attack types, sometimes even captured binaries and hashes for later analysis.
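Those artifacts are typically captured as structured event records. The sketch below is my own illustration of a minimal record shape for the artifacts described above (timestamp, source IP, attack type, payload hash); the field names are hypothetical, not a standard honeypot log format.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical minimal honeypot event record. Field names are
# illustrative of the artifacts described above, not a standard schema.
def make_event(source_ip, attack_type, payload=b""):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source_ip": source_ip,
        "attack_type": attack_type,
        # Hash any captured binary so it can be matched and analyzed later.
        "payload_sha256": hashlib.sha256(payload).hexdigest() if payload else None,
    }

event = make_event("198.51.100.7", "ssh-bruteforce", b"\x7fELF")
print(json.dumps(event, indent=2))
```

Records like this feed central monitoring; on their own, though, they remain descriptive telemetry rather than evidence for or against an assumption.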
There’s also a second traditional theme: placement. Honeypots need to be put in the right place. A public-facing honeypot can collect external attack attempts; internal honeypots can help surface lateral movement. I choose to extend this into layers – network, application and endpoint – each positioned to capture different slices of attacker behavior.
But, in practice, the limits of traditional honeypots show up very quickly. We assume that an attacker will stay long enough to be interesting, while most real-world activity today is short, automated and opportunistic. Keeping a honeypot believable, inside and out, takes real effort, which doesn’t always translate into better decisions.
I learned this the hard way. In environments where I deployed honeypots, our internal red team did not take long to become suspicious. After a period of probing, they were able to recognize the systems as deceptive and even infer which honeypot engine was in use. Once that line was crossed, engagement dropped off almost immediately. What remained was noise, not insight.
The problem was not that the honeypot technology failed, but that its value collapsed the moment detectability stopped being theoretical and became operational. At that point, the familiar questions resurface: how many honeypots are enough and what does “coverage” actually mean? When the output is treated as threat intelligence rather than evidence against explicit assumptions, it is easy to collect more data while understanding less.
Such framing is operationally useful. However, it tends to default to a detection-centric question: “What is happening to me right now?” The outputs are descriptive. They tell me what I saw. Future-Back Threat Modeling demands a different question: “What does what I saw do to my assumptions about the future state?” The difference is not wordplay; it determines whether my organization is merely collecting more telemetry or improving the quality of its strategic security decisions.
What I Mean by “Assumption Testing”
My use of the word assumption has a specific meaning: a belief about how an adversary is likely to behave or which attack paths they are likely to take. It quietly shapes what I choose to defend, what I bother to measure and where I focus my attention. For example, I might assume that attackers will prioritize exposed Remote Desktop Protocol (RDP) services, leading me to concentrate defensive effort and monitoring there. I’ve found that many such “beliefs” are never written down or tested, yet they still drive real security decisions.
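One lightweight way to stop a belief being invisible is to write it down as a testable statement alongside the evidence that would support or undermine it. The schema below is my own illustration (the field names are hypothetical), using the RDP example above:

```python
from dataclasses import dataclass

# Hypothetical schema for recording an assumption so that honeypot
# output can later be evaluated against it. Field names are illustrative.
@dataclass
class Assumption:
    statement: str               # the belief being tested
    expected_evidence: str       # what honeypots should see if it holds
    disconfirming_evidence: str  # what would call it into question
    status: str = "untested"     # untested / supported / challenged

rdp_assumption = Assumption(
    statement="Attackers will prioritize exposed RDP services",
    expected_evidence="RDP decoy engaged earlier and more often than SSH/HTTP decoys",
    disconfirming_evidence="RDP decoy quiet while SSH/HTTP decoys draw structured probing",
)
```

The point is not the tooling but the discipline: once the assumption is explicit, a honeypot’s output has something concrete to confirm or contradict.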
A detection-centric honeypot does not necessarily challenge these beliefs. If I treat its output as a standalone observation (“We saw X”), my underlying model often remains intact. In future-back work, I ask a different question: does what I am seeing support the assumption that shaped my threat model, or does it call that assumption into question?
This is a line I draw deliberately. Without it, calling a honeypot a strategic sensor would merely be re-labeling an old idea. A honeypot earns that label only when it is built to test a specific threat-model assumption and when its output is evaluated against that assumption rather than consumed as an isolated indicator.
Once I make that shift, my definition of success changes. A honeypot that generates large volumes of interaction can still be of low strategic value if it leaves my assumptions untouched. Conversely, a honeypot that attracts very little activity can be strategically important if that absence forces me to question a belief I had been relying on.
For clarity: I’m not suggesting that my rethinking of honeypots somehow predicts the future or replaces other forms of threat intelligence. My claim here is narrower and deliberately testable: that, used this way, honeypots reduce uncertainty by providing targeted evidence about whether future-oriented assumptions still hold, or whether they need to be revised.
Practically, this reverses my usual workflow. I start by making the assumption explicit, then let that assumption guide how a honeypot is designed, where it is placed and how its output is interpreted. Only after that do questions of coverage or realism come into play.
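The reversed workflow can be sketched as a small evaluation step: honeypot output is counted only as evidence for or against a named assumption, and low engagement is itself a finding. The thresholds and return values below are placeholders of my own, not part of any honeypot product’s API:

```python
# Minimal sketch of assumption-driven evaluation. The assumption comes
# first; honeypot events are interpreted only against it. The threshold
# and verdict strings are illustrative placeholders.
def evaluate(supporting_events: int, contradicting_events: int,
             min_events: int = 10) -> str:
    """Return a verdict on an assumption given counted honeypot events."""
    total = supporting_events + contradicting_events
    if total < min_events:
        # Very little engagement is itself informative: the belief may
        # rest on an attack path adversaries are not actually taking.
        return "insufficient engagement - question the assumption"
    if contradicting_events > supporting_events:
        return "challenged"
    return "supported"
```

For example, `evaluate(50, 5)` would support the assumption, while `evaluate(3, 20)` would challenge it; a near-silent decoy would prompt a rethink rather than a shrug.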
From Early Signals to Security Decisions
In practice, deploying an external honeypot didn’t leave us short of signals, but overwhelmed by them: in a short period, millions of log entries blended opportunistic scans, automated exploitation attempts and more structured probing activity. My challenge was not collection but discrimination: deciding which signals mattered early and which simply reflected the background noise of the internet.
Routine requests and exploit-oriented probes appeared interleaved with automated scans consistent with Nmap Scripting Engine behavior. All of it was technically plausible; none of it was immediately decisive, and none of it was sufficient on its own to confirm (or refute) a future-facing assumption about adversary intent.
Our difficulty was not uncertainty about whether the activity was suspicious, but about what it meant in relation to the future state we were modeling. Each pattern could be interpreted as routine scanning, broad automation, or the earliest stages of deliberate reconnaissance; without additional context, there was no credible way to decide which interpretation deserved attention first.
In several cases, our initial reaction was simply to move on to the next alert. Prioritization quietly collapsed into intuition and habit: early signals were easy to acknowledge and just as easy to set aside. Without explicit assumptions to test, every log line competes equally for attention and volume becomes a liability rather than an asset.
This is where Future-Back Threat Modeling framing becomes operationally relevant: when honeypots are deliberately aligned to specific future-oriented assumptions, my interpretation changes. Timing begins to matter more than frequency; absence can become as informative as presence; deviations from expected patterns carry more weight than raw counts.
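As one illustration of timing mattering more than frequency: regular, low-jitter inter-arrival times between honeypot events tend to indicate automation, while irregular timing can be consistent with deliberate probing, and near-silence is a finding in itself. The heuristic and thresholds below are my own rough sketch, not a field-tested classifier:

```python
from statistics import mean, pstdev

# Illustrative timing heuristic: low-jitter gaps between events suggest
# automated scanning; irregular gaps may suggest deliberate probing.
# The 0.2 jitter threshold is a placeholder assumption, not a tested value.
def classify_timing(timestamps: list[float]) -> str:
    """Classify honeypot engagement from event timestamps (in seconds)."""
    if len(timestamps) < 3:
        return "absence/low engagement - informative in its own right"
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    jitter = pstdev(gaps) / avg if avg else 0.0
    return "automated scanning" if jitter < 0.2 else "possibly deliberate probing"
```

Evenly spaced events such as `[0, 10, 20, 30]` would be flagged as automated scanning, while bursty, irregular timing such as `[0, 5, 60, 65, 200]` would be flagged for closer review against the relevant assumption.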
Early signals do not dictate action; they inform judgment. I have seen organizations receive similar signals yet respond very differently, not because the data differed but because their underlying assumptions remained unexamined. When early signals challenge the timing or plausibility of a future attack path, they enable leaders to revisit risk priorities before incidents force the issue. This can lead to mitigation, conscious acceptance, or deferral. The difference is that the decision is made deliberately, informed by evidence rather than inherited belief.
I don’t have a complete solution to this problem yet and I’m cautious about implying that one exists. But assumption-driven honeypot deployment has materially improved the quality and timing of my security decisions. It does so not by eliminating ambiguity, but by reducing it enough to support deliberate judgment rather than reactive triage. Used this way, honeypots support Future-Back Threat Modeling by challenging confidence before assumptions quietly fail, rather than by prescribing specific actions.
Vu Van Than, CISSP, SSCP, CC, has 10 years of experience in cybersecurity, risk management and compliance. He has led security strategy and teams in both technical and management roles. His work focuses on threat modeling, strategy development and network defense.

