Organizations of all sizes rely on shared, third-party services, solutions and hardware at some point in their IT estate. It could be something as simple as a mail or web server, or something as complex as an entire CRM solution. In all cases, security remains a risk factor, because these shared services sit far outside the perimeter and are maintained beyond the direct reach of your IT or cybersecurity teams.
There is a Swedish saying: “Shared joy is double joy; shared sorrow is half sorrow”. Since the references we have found suggest that this proverb is at least 100 years old, it is forgivable that there is no third clause stating that: “shared cybersecurity is potentially a major risk if not managed properly”.
Shared Service Environments (SSEs) exist for one primary and very understandable reason: cost. If you are a service provider supplying services to multiple clients, it is prohibitively costly to have completely separate equipment and systems for each paying customer. Similarly, a shared services client is also looking to benefit from the economies of scale that shared resources afford. Even where a service provider sets aside separate physical servers for a particular customer, there will still be shared elements: the underlying storage, the management layer that allows the customer to administer the setup, the physical data center, the power supplies the servers are connected to and so on. Each shared element presents a risk.
The Seed of Risk
The risk presented by the physical elements of the infrastructure is primarily that of availability: that is, the service becoming degraded or unavailable. These risks are generally well handled. You would expect the provider’s storage infrastructure to survive disk failures without you noticing, for example, because resilience and seamless failover are so well understood and generally catered for in modern storage hardware. Likewise, the failure of a server power supply, a network adaptor, or an electrical distribution strip in a server cabinet. Duplication of equipment for resilience and redundancy should be – and usually is – absolutely standard in SSEs.
An additional risk to service availability is the service provider making a mistake when introducing new features, patching services or modifying configurations. As there are so many service providers across the world, and so many of them have had sysadmin-inflicted outages, it would be unfair to highlight a handful here and have them perceived as in some way “worse” than the rest: suffice to say that it is a risk in most service areas – from cloud providers to telcos, from security software vendors to business software hosts. As we all know from making changes in our own infrastructure, the risk of someone making a mistake is non-zero and these things will inevitably happen.
Fourth-Party Risk
A further consideration in our relationship with SSEs relates to a concept that has come into the industry’s general awareness only fairly recently: fourth-party risk. A fourth party is a supplier to our supplier. In June 2025, for example, Cloudflare experienced an outage in some of its services. In the frank and informative blog article that followed the incident, the authors note that: “the proximate cause (or trigger) for this outage was a third-party vendor failure”, though they are quick also to acknowledge that “we are ultimately responsible for our chosen dependencies and how we choose to architect around them”. In this example, if Cloudflare is our third party, their supplier (the “third party” they refer to) is our fourth party. If you are wondering whether this means there are fifth, sixth and seventh parties (and so on), the answer is yes. The risk here is that the more steps of detachment there are, the less visible any risks will be to us.
Access to Underlying Systems
The most significant risk in an SSE, though, is access to the management layer – the part that customers use to configure, control and monitor the services they are renting from the provider. While this may sound a little unfair, the reason the management layer is such a risk is that we – the customers buying the service – are generally the weakest link in the chain: compared to the provider’s own technicians and administrators, our average level of knowledge of the platform is inferior. This does not, of course, mean that everything is insecure just because we cannot possibly know everything about how to secure it properly; it simply acknowledges that we will make mistakes from time to time, either through carelessness or through not knowing something. The risk is borne out by the sheer number of news stories about user-inflicted issues in cloud services, such as making cloud-based storage areas generally accessible on the internet. Most security issues in SSEs, then, are not particularly related to those services being shared; they are down to the management interfaces being generally available on the internet and to those of us using the services not securing them properly.
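The classic misconfiguration described above – a cloud storage area left open to the internet – is often a one-flag mistake that is easy to catch with a routine review. The following is a minimal, hedged sketch of such a check; the bucket names and the grant dictionaries are hypothetical simplifications for illustration, not any specific provider’s API:

```python
# Hedged sketch: flag storage "buckets" whose access grants include an
# all-users principal. The dict layout is a simplified stand-in for the
# ACL/policy structures that real cloud providers return.

PUBLIC_PRINCIPALS = {"AllUsers", "*", "anonymous"}

def publicly_accessible(acl_grants):
    """Return True if any grant targets an anonymous/all-users principal."""
    return any(g.get("grantee") in PUBLIC_PRINCIPALS for g in acl_grants)

def audit_buckets(buckets):
    """Return the names of buckets that appear to be world-readable."""
    return [name for name, grants in buckets.items()
            if publicly_accessible(grants)]

# Hypothetical inventory for illustration:
inventory = {
    "finance-backups": [{"grantee": "ops-team", "perm": "READ"}],
    "marketing-assets": [{"grantee": "AllUsers", "perm": "READ"}],
}
print(audit_buckets(inventory))  # → ['marketing-assets']
```

In practice you would feed such a check from your provider’s real ACL or policy API and run it on a schedule, so that an accidental public grant is spotted before a journalist or an attacker spots it for you.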
Three Ways to Mitigate Shared Service Risk
So, what can we do about the various risks? As with many aspects of cybersecurity, there is a “top three” of actions that are straightforward to implement but which go a long way toward achieving what we need.
First, we need to acknowledge the key principle of risk management: we manage security risks, we do not eliminate them. If we cannot accept fundamental risks such as inadvertent vendor-caused outages, then we need to look for a different solution. The same applies to fourth-party risk; while we could potentially write clauses into the contract with the SSE provider that oblige them to place certain controls around the upstream suppliers they use, we still cannot guarantee that someone in the chain will not make a mistake or even maliciously cause an incident. We identify and examine the risks, then decide how we deal with them – particularly when it comes to accepting the residual risks that cannot be completely avoided, which we must document so that, if something does happen, we can show why that risk was accepted. If we can neither accept the risk of the SSE nor devise an alternative solution, that is a problem risk teams and management will need to review closely, because the only remaining option is to not use the SSE at all.
Assuming we have accepted that some risk is inevitable, our second task is to evaluate the supplier we are dealing with – to explore their security principles and policies, oblige them to inform us of their upstream suppliers, and have them demonstrate suitable system resilience and incident response standards and procedures, and so on. As noted above, we may well have to accept that they carry risks – potentially unknown ones if they have many suppliers – and consider whether those risks are acceptable and what controls to put around them.
Third, to address the management/administrative risk, there is a whole list of technical actions we can look at which, between them, will reduce the risk as close to nothing as we can realistically get. First, Multi-Factor Authentication is essential: any cloud service without MFA – particularly for privileged logins – is a significant and avoidable risk factor. Next, inquire whether your instance of the SSE can be restricted using controls like Microsoft’s Conditional Access (which, for those who are not familiar, is a zero trust tool that prohibits connections to services unless they are coming from devices enrolled in your Entra ID directory service). And if Conditional Access is not an option, then at the very least you should work with the vendor to restrict access to your door into the SSE to devices in your organization’s range(s) of internet-registered IP addresses.
Can we guarantee that all the SSEs we use are secure? Of course not – we cannot guarantee that of any system that we use. Is there a potential for the vendor’s technical team to configure something in our instance that impacts another customer on that shared platform? Of course there is – and it has happened many times over the years.
What we can do, however, is go into adopting an SSE with open eyes, an inquisitive mind and a risk-based attitude. If the supplier shares your security-minded view they will entirely understand this approach and will work with you to arrive at a mutually satisfactory outcome.