To a security person zero trust is, theoretically, a great idea. It gives fine-grained control over who can access what, regardless of whether it’s in the cloud or on-premise, and defaults to a state where nothing is permitted unless it has specifically been allowed. In practice, though, zero trust is a shining example of the classic security trade-off: by being more secure, you make your world way more inconvenient to manage and to use.
What Do We Mean by Zero Trust (ZT)?
Security vendor CrowdStrike puts it rather nicely, describing it as a “security framework requiring all users, whether in or outside the organization’s network (such as in the cloud), to be authenticated, authorized, and continuously validated for security configuration and posture before being granted or keeping access to applications and data”. In short: assume nothing, trust nothing and nobody.
If you’re thinking: surely that’s the same as the Principle of Least Privilege (PoLP) … well, not quite – it’s actually a lot more. The PoLP is all about granting the right access capabilities to each user and system once they’re logged in; ZT is the layer on top that both: (a) decides whether access is even granted in the first place; and (b) checks regularly during the session to see whether access should continue.
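The distinction can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the session fields, posture checks, and timeout value are invented for the example, not taken from any real product): PoLP is the one-off capability check, while the ZT layer re-verifies identity, device posture, and session freshness on every single access.

```python
import time

# Assumed policy value for illustration: how long a session may go
# without re-validation before access is cut off.
SESSION_MAX_IDLE = 300  # seconds

def polp_check(user, resource, acl):
    """PoLP: does this user hold the capability at all?"""
    return resource in acl.get(user, set())

def zt_check(session, resource, acl):
    """ZT layer on top of PoLP: identity, device posture and session
    freshness are re-verified on every access, not just at login."""
    if not session["authenticated"]:
        return False
    if not session["device_compliant"]:  # e.g. patched, disk-encrypted
        return False
    if time.time() - session["last_validated"] > SESSION_MAX_IDLE:
        return False  # stale session: force re-validation
    return polp_check(session["user"], resource, acl)

acl = {"alice": {"finance-app"}}
session = {"user": "alice", "authenticated": True,
           "device_compliant": True, "last_validated": time.time()}

print(zt_check(session, "finance-app", acl))   # True: all checks pass
session["device_compliant"] = False
print(zt_check(session, "finance-app", acl))   # False: posture failed mid-session
```

Note that the second call fails even though Alice's PoLP entitlement never changed; that mid-session revocation is exactly what ZT adds on top of least privilege.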
This is particularly valuable for cloud services: because the application runs off-site, you often have reduced visibility and less real-time awareness of what might be happening.
Let’s take a real-life example from the banking world – the SWIFT transaction interchange co-operative. SWIFT has a comprehensive set of security requirements – the 132-page Customer Security Controls Framework, or CSCF – and each bank has to attest annually that they’re compliant with all the mandatory requirements it contains. Each year new things are added. In 2023, the new mandatory requirement was all about network segmentation and ensuring people can’t access the secure SWIFT-connected part of the network from other parts of the infrastructure.
To quote the requirement: “A separated secure zone safeguards the customer's infrastructure used for external connectivity from external environments and compromises or attacks on the broader enterprise environment”. And this is, of course, what ZT is all about – defending one part of your infrastructure from attack in the event of another part being compromised. For the average organization a non-trivial amount of system change and ongoing oversight will be needed in order to comply.
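The principle behind the secure-zone requirement can be sketched as a default-deny flow policy between network segments. This is a hypothetical illustration only: the zone names and allow-list below are invented for the example and bear no relation to real SWIFT or CSCF configuration.

```python
# Default-deny segmentation sketch: a flow between zones is permitted
# only if it appears on an explicit allow-list. Everything else,
# including traffic from the broader enterprise network, is refused.

ALLOWED_FLOWS = {
    ("ops-jumphost", "swift-zone"),   # only the hardened jump host may enter
    ("swift-zone", "swift-gateway"),  # the zone talks outward to its gateway
}

def flow_permitted(src_zone, dst_zone):
    """Return True only for explicitly allowed zone-to-zone flows."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

print(flow_permitted("ops-jumphost", "swift-zone"))   # True: allow-listed
print(flow_permitted("finance-zone", "swift-zone"))   # False: default deny
```

The point is the default: a compromised finance server gets no path into the secure zone because no rule ever said it should.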
To the team running the systems, ZT is a gargantuan job creation scheme and can bring massive inconvenience. The utopia for a system manager is the ability to manage all infrastructure, on and off-premise, through a single platform (or, more usually, a manageable number of platforms). It’s great, for instance, to have a central monitoring system to keep an eye on everything and generate alerts when something goes down or is performing sub-optimally. That’s actually not a huge problem in a ZT world, because if all you’re doing is monitoring (that is, the software has read-only rights to the infrastructure) then the risk is modest.

The issue comes when you want to be able to make changes from a central point: a single management platform becomes a single point of failure and a super-convenient way for an intruder to damage the entire infrastructure in the event of a compromise. In order to inconvenience the bad actors, you have to inconvenience yourself. You should already have production, pre-production, test and development systems segregated from each other, and this segregation should take into account internal and external software services as well. ZT adds a whole new level, forcing further separation within each of those environments so that if (say) a bad actor compromises the finance system, they can’t jump off onto the email system, the file server, and so on.
Can We Automate or Use AI to Lighten the Load?
In truth, not really!
The problem is simple: the moment we let a machine decide whether to allow access to something, we’ve broken our model. One might try to argue that the decisions we make on allowing access to something are based on reasoning: when one resource (human or computer) requests access to something, we follow a process of reasoning to decide whether this should be allowed.
If the new starter in Accounts Payable requests access to the finance system, we logically deduce that this is a sensible thing to grant, and we go ahead and grant it. But that’s outside the realms of AI – that’s just basic Role Based Access Control (RBAC). Letting a machine reason for itself whether a user asking for access to X should be given that access because they already have access to A, B and C is asking for trouble. Yes, we can automate things, but only in a very basic sense – by implementing an RBAC approach where we define the rules and implement a system that grants and revokes access based on them … and even then, we still have a job on our hands to define those rules and configure them into the provisioning systems.
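That kind of rule-driven automation is mechanically simple, which is the point: humans author the role-to-permission mapping, and the system just applies it. Here is a minimal sketch, with role and resource names invented purely for illustration:

```python
# Minimal RBAC sketch: the rules (role -> permissions) are defined by
# people; the code only grants and revokes access mechanically based
# on role membership. No machine "reasoning" is involved.

ROLE_PERMISSIONS = {
    "accounts-payable": {"finance-system"},
    "developer": {"source-repo", "ci-pipeline"},
}

USER_ROLES = {}

def assign_role(user, role):
    """Put a user into a role (e.g. when they join a team)."""
    USER_ROLES.setdefault(user, set()).add(role)

def revoke_role(user, role):
    """Remove a user from a role (e.g. when they move on)."""
    USER_ROLES.get(user, set()).discard(role)

def can_access(user, resource):
    """Access is derived entirely from role membership."""
    return any(resource in ROLE_PERMISSIONS.get(r, set())
               for r in USER_ROLES.get(user, set()))

assign_role("new-starter", "accounts-payable")
print(can_access("new-starter", "finance-system"))  # True: role grants it
revoke_role("new-starter", "accounts-payable")
print(can_access("new-starter", "finance-system"))  # False: role revoked
```

Notice where the real work sits: not in this code, but in deciding what goes into `ROLE_PERMISSIONS` in the first place, which is exactly the rule-definition job the paragraph above describes.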