
15 Years of Zero Trust

The term is generally accepted as having originated in 2010 in a Forrester report by John Kindervag. 15 years, as the saying goes, is a long time in cybersecurity. So, have Kindervag’s thoughts and observations stood the test of time?

As we write this in October 2025, the notion of zero trust is 15 years old. It made its debut in Kindervag’s Forrester report “No More Chewy Centers: Introducing the Zero Trust Model of Information Security”.

The report begins by telling the story of Philip Cummings, who in 2004 orchestrated what the FBI termed “the largest case of identity theft ever investigated”, with 30,000 customer records exfiltrated from his employer, TeleData Communications. For $60 a time, Cummings sold personal data to his Nigerian paymasters, and even installed a hidden mechanism that allowed the leaks to continue for two years after he had left the company.

Defining a Need for Zero Trust

Kindervag goes on to discuss the concept of “Trust but Verify”, noting that it originates in a Russian proverb (it initially became well known through its repeated use by U.S. President Ronald Reagan when negotiating nuclear disarmament with Soviet leader Mikhail Gorbachev in the 1980s). “Forrester has found that most security professionals trust a lot but verify very little”, he wrote. “By default we trust people, but it’s hard to perform the verification, so we don’t do it”. He goes on to cite the example of Cynthia Whitehead, who exploited her elevated status within her organization to embezzle $300,000.

On a more technical note, the author noted that we cannot permit our networked devices to trust the data packets that land on them. “All we can truly know about network traffic is what is contained in packets”, he stated, “and packets can’t tell us about the veracity of the asserted identity”. The conclusion: “We can’t trust packets”.

The message is clear. While the Cummings data leak resulted in 14 years’ prison time and a $1m fine, this would have been little consolation to the thousands of customers whose data was leaked. “Trust but Verify” does not work because we trust but don’t verify. And at its most basic, zero trust means assuming that any packet of data on our infrastructure could be malicious, because we cannot be certain that it is innocent.

Mitigating Measures

What about the author’s recommendations for mitigating the risks? Have they aged as gracefully as the risks themselves? Kindervag promoted three “concepts” at the time.

First is to “ensure that all resources are accessed securely regardless of location”. That is, we should assume that someone could plant a packet sniffer on our (supposedly) private network and suck up useful information that would enable them to perpetrate an attack. He recommended that we “assume that all traffic is threat traffic until it is verified that the traffic is authorized, inspected, and secured”, and that the security team should “protect internal data from insider abuse in the same manner as they protect external data on the public internet”.

Concept two is: “Adopt a least privilege strategy and strictly enforce access control”. Role-Based Access Control (RBAC) is recommended as a mechanism for implementing the Principle of Least Privilege (PoLP). RBAC does not prevent people with legitimate access to a resource from abusing that privilege, but it does at least keep out those who have no business accessing it. When recommending RBAC, Kindervag noted that “Other technologies and methodologies will evolve over time”. Yet here we are 15 years later, still pushing RBAC and PoLP because they remain fundamental ingredients in the security of our systems.
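The second concept can be illustrated with a minimal sketch (the role and permission names below are invented for illustration, not taken from the report): every access decision is looked up against an explicit role-to-permission mapping, and anything not explicitly granted is denied.

```python
# Minimal role-based access control (RBAC) sketch illustrating the
# Principle of Least Privilege: every check defaults to "deny".
ROLE_PERMISSIONS = {
    "analyst": {"reports:read"},
    "dba":     {"reports:read", "customers:read"},
    "auditor": {"logs:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Unknown roles get an empty permission set, so the default is deny.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "reports:read"))    # True
print(is_allowed("analyst", "customers:read"))  # False: not granted to the role
print(is_allowed("intern", "reports:read"))     # False: unknown role, default deny
```

The key design choice is the default-deny lookup: a role that is missing from the mapping gets no permissions at all, which is the PoLP posture Kindervag argued for.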

The third concept is to “inspect and log all traffic”. This is perhaps the most interesting and forward-thinking recommendation because, unlike the other two, the technology available at the time was arguably underdeveloped or underused to meet the need. Although log analysis tools did exist, it was common for organizations to retain only a fraction of the log messages their IT systems generated, because of the network bandwidth and disk storage required to ship them around and store them for analysis. Security Orchestration, Automation and Response (SOAR) platforms did not exist at the time (SOAR systems as we know them began to appear in the mid-2010s), so finding the malicious needle in the vast haystack of stored logs was an almost impossible challenge.
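To make the “inspect and log all traffic” idea concrete, here is a sketch of the kind of analysis a modern SOAR pipeline automates (the event format, IP addresses, and threshold are assumptions for illustration): count suspicious events per source and surface the outliers for analyst review.

```python
from collections import Counter

# Hypothetical parsed log entries: (source_ip, event) pairs standing in
# for the auth/network logs an organization would collect.
events = [
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.9", "login_ok"),
    ("10.0.0.5", "login_failed"),
]

def flag_suspicious(events, threshold=3):
    # Count failed logins per source and flag any source that meets or
    # exceeds the threshold -- the "needle in the haystack" step that
    # once had to be done by hand.
    failures = Counter(ip for ip, ev in events if ev == "login_failed")
    return [ip for ip, n in failures.items() if n >= threshold]

print(flag_suspicious(events))  # ['10.0.0.5']
```

The point is not the five-line heuristic itself, but that this filtering now runs continuously over the full log stream rather than over the small sample organizations could afford to keep in 2010.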

More Than a Technology Challenge

Having defined his three concepts, Kindervag looked again at the people aspect. Yes, logging everything gives us visibility of everything that is going on (assuming, of course, that we have the people and the time to actually look at the data), but he delivered another powerful message: “If individuals know that security is monitoring their actions, they will be less tempted to do things that are questionable”. The undertone is clear: some people are inherently untrustworthy, but we can at least reduce the temptation to act on that untrustworthiness.

The author ended with a final message: “Zero trust is not a one-time project”, urging us to “change how [we] think about trust” and to “integrate zero trust into future planning”.

Technology has grown quickly over the years and concepts that were relevant even a few years ago fade into obscurity and irrelevance. In the case of zero trust, though, the opposite has happened. This 2010 report could have been published verbatim in 2025: it identified, 15 years ago, all the key risks associated with trust in IT that still face us today. And into the bargain, the mitigating concepts and actions also remain as relevant today as they were back then.

Reassuringly, then, there are two modern factors that help us do something about the threats outlined in the report.

First, the technology limitations cited when discussing logging everything that happens have largely dissolved: fast networks and inexpensive storage make it possible to log much more than ever before, and SOAR technology (which generally includes AI/ML capabilities) has vastly reduced the need for humans to scour gigabytes of data looking for tiny anomalies.

Second, and more importantly, is user acceptance. Years ago, openness was the way we worked. Systems were there to help us, and security was often minimal because it got in the way of people doing their jobs. Users complained if they were continually prompted for passwords, or if the person next to them had more access than they did – even if they did not need that access to do their work. Today, though, that battle is being won: cyber attacks are so common and so widely reported that the users of our systems now understand the need for security. They accept that responding to a few MFA prompts each day is a better outcome than being out of work because their organization went out of business due to a cyber attack. They are OK with the fact that they can only access what they need, not what they want.

Because the concept of zero trust was defined so long ago and has remained largely unchanged in the 15 years since, it is now so well understood that organizations are in a better position than ever to implement it, to embed it into future developments, and to be more secure as a result.
