Securing Hybrid Working Environments
Hybrid working environments combine on-premise and remote systems (as well as users), creating a challenging environment to secure and monitor.
Any organization with systems in the cloud is likely also to have some on-premise computing power. Maybe there are one or two systems that are simply more economical to run on on-premise servers than in a public cloud setup, for example. Unless you’ve gone entirely thin-client at the user end or your entire workforce works from home, there will be some user PCs in the office (and hence on-premise) too. So, your average “cloud-based” company is actually running a hybrid infrastructure. Which brings the challenge of securing something that’s in several places at once.
Before we get into the specifics, the concept of zero trust is worth a mention. We won’t go into the depths of zero trust here, as we have already done so in several other features, and there are plenty of learning materials available on the subject too. Suffice it to say that zero trust has enormous value in any IT infrastructure, and that value is magnified in a hybrid environment: by its nature, hybrid has far more moving parts than a solely on-premise setup.
The goal for managing any aspect of a hybrid infrastructure is: get as close to a single security and management layer as we can. While the term “single pane of glass” is a rather tired cliché, the concept is a valid one: the simpler we can make something to manage and secure, the harder it will be to run it insecurely.
AD and Access Management
As the majority of hybrid infrastructures are Windows-based, you are probably using Microsoft’s Active Directory (AD) as the underlying authentication mechanism. The rule here is: have a single AD world spanning the on-premise and cloud elements of your infrastructure. Leaving aside the fact that it’s unnecessary to make your users log in with different credentials on the two different elements of the IT estate, it’s a simple security fact that having a single, unified portfolio of user IDs means that when someone leaves, it’s a one-step job to lock them out of everything. Whether you choose to use Microsoft’s cloud-based Entra ID directory service or run AD yourself within your own server estate is entirely up to you, as is the precise way you architect the domain(s) within it. What matters is that you have a single, well-managed authentication world.
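To make the “one-step lockout” point concrete, here is a rough sketch of what disabling a leaver’s single AD account might look like in a small automation, using the Python ldap3 library. The domain controller, service account and distinguished names are placeholders rather than a real environment, and in practice you would pull credentials from a secrets store rather than hard-coding them.

```python
# A minimal sketch of the "one-step leaver lockout" idea: with a single
# directory, disabling one account removes access everywhere that trusts it.
# Assumes the ldap3 library and a reachable domain controller; the server,
# credentials and OU below are placeholders, not a real environment.
from ldap3 import Server, Connection, MODIFY_REPLACE

ACCOUNTDISABLE = 0x2  # userAccountControl flag that disables an AD account


def disable_leaver(user_dn: str) -> bool:
    server = Server("ldaps://dc01.example.internal")
    with Connection(server, user="EXAMPLE\\svc-joiner-leaver",
                    password="change-me", auto_bind=True) as conn:
        # Read the current flags so we only add the disable bit.
        conn.search(user_dn, "(objectClass=user)",
                    attributes=["userAccountControl"])
        current = int(conn.entries[0].userAccountControl.value)
        return conn.modify(user_dn, {
            "userAccountControl": [(MODIFY_REPLACE, [current | ACCOUNTDISABLE])]
        })

# disable_leaver("CN=Jane Doe,OU=Staff,DC=example,DC=internal")
```

Because every on-premise and cloud system trusts the same directory, that one change is the whole lockout; there is no second list of accounts to chase.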
Securing On-Premise Systems
Moving up a layer, the task of implementing the basic security requirements of the servers and user devices is in fact one of the simpler ones. In this category we’re talking about anti-malware, endpoint detection and response (EDR), vulnerability scanner agents and the like: all the best-known products support both on-premise and cloud-based hosts. The key challenge is to think about where the central management server will reside. If you want to host it yourself, you might choose to put it in the cloud element of your setup rather than on-premise, since putting it in your own server room will require inbound firewall rules and hence a risk, albeit admittedly a small one. An in-house server will be easier to hook into the directory service (our mantra of not having separate logins for individual systems still applies, after all). If you choose to use the vendor’s own SaaS service for the console, that’s a preferred option in many ways (they feed, water and upgrade it for you), but authentication is more of a challenge, particularly if you’ve chosen not to go for a SaaS-based authentication engine.
Moving up another layer, we come to our applications, and as we said at the beginning, even if most of them are in the cloud, there is every chance that at least some are on-premise. The same rule applies here: the apps you use should integrate with your directory service. Again, we want a world where users aren’t having to remember a multitude of logins (but are also not reusing credentials across disparate systems) – and where, as we already said, disabling a leaver doesn’t mean turning off a load of different logins either.
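As an illustration of what “integrate with your directory service” can mean in practice, here is a minimal sketch of an application trusting the central identity provider’s tokens instead of keeping its own password store, using the Python PyJWT library. The issuer, JWKS endpoint and audience values are placeholders for whatever your directory service actually publishes.

```python
# A minimal sketch of an app trusting the central identity provider rather
# than keeping its own password store. It validates an OIDC/OAuth2 access
# token against the provider's published signing keys. Requires PyJWT
# ("pip install pyjwt[crypto]"); the issuer, JWKS URL and audience below
# are placeholders, not real endpoints.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://login.example.com/common/discovery/keys"          # placeholder
EXPECTED_ISSUER = "https://login.example.com/your-tenant-id/v2.0"     # placeholder
EXPECTED_AUDIENCE = "api://internal-finance-app"                      # placeholder


def validate_token(token: str) -> dict:
    # Fetch the signing key that matches the token's key ID (kid header).
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # Reject anything not signed by, issued by, or intended for us.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=EXPECTED_ISSUER,
    )

# claims = validate_token(bearer_token_from_request)
# claims["preferred_username"] identifies the centrally managed user.
```

The important property is that the application never holds a password of its own: turn the account off in the directory, and the tokens stop coming.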
Legacy and Proprietary
There is a small elephant in our server room, though: sometimes it is not technically possible to do what we want – especially if we have legacy apps that pre-date today’s security techniques and standards. It is common to have a handful of applications or systems that simply cannot be integrated into a common authentication mechanism, or that do not support one or other of your security tools. Exceptions are the enemy of security (a concept we will explore in its own right in a separate feature soon), so you must come up with a robust way of dealing with the risks that come from systems that don’t conform to your standard approach.
In this context, automation is your friend. If an application cannot work with your unified authentication mechanism, for instance, consider what you can do with automations to make it look as if it does – because relying on manual workarounds for security limitations is both a risk and a challenge. Let us take an example: we know of a financial organization with a core app that cannot authenticate against AD and instead keeps its own user database in a back-end relational database. Its provisioning and de-provisioning process has a post-processing step in which a script runs automatically and makes the corresponding changes in that application’s user database. It isn’t a perfect approach, as it adds a layer of complexity, testing and maintenance that anyone could do without, but it mitigates the risk of non-integration very nicely.
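To show the shape of that post-processing step, here is a hedged sketch of a script that replays leaver changes into a legacy application’s private user table once the directory has been updated. The table and column names are invented for illustration, and sqlite3 stands in for whichever relational database the real application uses.

```python
# A minimal sketch of the post-processing idea described above: after the
# directory has been updated, a scheduled script replays the same leaver
# changes into a legacy app's private user table. The table and column
# names are invented for illustration, and sqlite3 stands in for whatever
# relational database the real application runs on.
import sqlite3
from datetime import datetime, timezone


def deprovision_in_legacy_app(db_path: str, leaver_usernames: list[str]) -> int:
    """Disable the given usernames in the legacy app's own user table."""
    disabled = 0
    with sqlite3.connect(db_path) as conn:
        for username in leaver_usernames:
            cursor = conn.execute(
                "UPDATE app_users SET is_active = 0, disabled_at = ? "
                "WHERE username = ?",
                (datetime.now(timezone.utc).isoformat(), username),
            )
            disabled += cursor.rowcount
    return disabled

# Typically run straight after the directory deprovisioning step, e.g.:
# deprovision_in_legacy_app("/srv/legacy/finance.db", ["jdoe", "asmith"])
```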
Testing Security
The final element to consider is one we hinted at when discussing the basic security layer and vulnerability scanner agents: testing. Even in a purely on-premise setup it should be taken as read that we test our systems for vulnerabilities – inadvertent firewall misconfigurations, software versions with known security flaws – and in a hybrid environment this testing is far more critical, because hybrid equals more complex, and more complex equals greater risk. No matter how good we are at designing and securing our systems, we will never get it 100% right.
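As a flavour of what that testing can look like when it is automated rather than occasional, here is a small sketch of a drift check that runs identically against on-premise and cloud hosts and flags ports that answer when they shouldn’t. The hostnames and expected-port map are placeholders, and this is an illustration of the habit, not a replacement for a proper vulnerability scanner.

```python
# A minimal sketch of the "test it, don't assume it" point: one small check
# that runs identically against on-premise and cloud hosts, flagging ports
# that answer when they shouldn't. The hosts and expected-ports map are
# placeholders for illustration only.
import socket

# What we *intend* to be reachable from this network segment.
EXPECTED_OPEN = {
    "fileserver.example.internal": {445},   # on-premise
    "app01.cloud.example.com": {443},       # cloud-hosted
}
PORTS_TO_PROBE = [22, 80, 443, 445, 3389]


def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if the TCP port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def report_drift() -> None:
    for host, expected in EXPECTED_OPEN.items():
        open_ports = {p for p in PORTS_TO_PROBE if probe(host, p)}
        for port in sorted(open_ports - expected):
            print(f"UNEXPECTED: {host}:{port} is reachable")
        for port in sorted(expected - open_ports):
            print(f"MISSING:    {host}:{port} should be reachable but is not")

# report_drift()
```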
As a parting thought, some readers may be thinking: it feels like we need to be doing all this in reverse. We can’t design the directory service or the interconnects until we know what apps we are using. We can’t move to the new infrastructure until we have the automations in place to deal with the exceptions, as to do so leaves us vulnerable to risky manual workarounds. We can’t do those automations until we know the gaps. If you were thinking this, you are not wrong … because the rule with securing the hybrid working environment is that you have to design and implement it holistically, not bit by bit.