The thought, let alone the reality, of so many emerging technologies and attack surfaces converging could overwhelm most cybersecurity professionals. However, according to a panel of cybersecurity leaders at ISC2’s SECURE London event last month, dealing with this tidal wave of future shock means first applying time-honored principles of risk management.

In a lively session chaired by ISC2's director for the UK and Europe, Ed Parsons, CISSP, ChCSP, it was noted early on that AI looms over every tech conversation at present: an unstoppable juggernaut on which cybercriminals are hitching a ride.

Santander International’s head of technology operations & risk/CISO, Dave Cartwright, CISSP, noted that AI could be used by the good guys too. In fact, this was essential, as “With more and more automated attacks, there is no way you can keep up with things unless you’re using automation, ML and AI.”

At the same time, he said, it was important to remember the technology’s limits. “Real intelligent people make mistakes…as does ML and AI.” Proof? When Cartwright asked ChatGPT for five songs where the title isn’t in the lyrics for a trivia quiz, “It gave me five instrumentals.”

On a more tangible level, he said, a security team might configure an intrusion prevention system to cut the internet link if it sees a particular attack. But the board and the executive must be aware of and accept such outcomes – and not sack the CISO if they occur. “That's actually the level at which you need to manage the risk. It's the risk of consequences rather than the risk of what it does to a large extent.”
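As a rough illustration of the trade-off Cartwright describes, the Python sketch below wires a pre-approved list of attack signatures to a drastic automated response. Every name in it (handle_alert, disable_uplink and so on) is hypothetical; a real deployment would hook into an actual IDS/IPS and firewall API.

```python
# Minimal sketch of an automated IPS "kill switch". All names here are
# hypothetical placeholders, not a particular vendor's API.

APPROVED_SIGNATURES = {"ransomware-c2-beacon", "mass-data-exfil"}  # board-accepted triggers

def handle_alert(alert: dict) -> None:
    """Cut the internet link only for attacks the business has agreed justify it."""
    if alert.get("signature") in APPROVED_SIGNATURES:
        disable_uplink(reason=alert["signature"])   # drastic, but pre-approved consequence
        notify_executives(alert)                    # the board knew this could happen
    else:
        log_for_review(alert)                       # everything else: human triage

def disable_uplink(reason: str) -> None:
    print(f"[ACTION] uplink disabled: {reason}")    # placeholder for a firewall API call

def notify_executives(alert: dict) -> None:
    print(f"[NOTIFY] execs informed of {alert['signature']}")

def log_for_review(alert: dict) -> None:
    print(f"[LOG] queued for analyst review: {alert.get('signature')}")

if __name__ == "__main__":
    handle_alert({"signature": "ransomware-c2-beacon"})
```

The design point is the APPROVED_SIGNATURES set: the drastic consequence only fires for triggers the business has already signed off, which is exactly the level of risk acceptance Cartwright is arguing the board must own.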

When is a problem really a threat?

More broadly, Cartwright said it was important to take a step back and consider what is a real problem with AI and what is not. When it comes to AI and SaaS services alike, he suggested that “the biggest problem with them is people are uploading sensitive data to them. It doesn't matter if they're AI or not.”
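That risk lends itself to a simple technical control. The Python sketch below shows the kind of pre-upload check a team might bolt in front of an AI or SaaS integration; the patterns and function names are illustrative assumptions, not a production DLP ruleset.

```python
# Minimal sketch of a pre-upload check for the risk Cartwright names:
# sensitive data leaving for an AI or SaaS service. The patterns below are
# illustrative; a real DLP control would use a far richer ruleset.

import re

SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "api_key_hint": re.compile(r"(?:api[_-]?key|secret)\s*[:=]\s*\S+", re.I),
}

def findings(text: str) -> list[str]:
    """Return the names of sensitive patterns present in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def safe_to_upload(text: str) -> bool:
    hits = findings(text)
    if hits:
        print(f"blocked: looks like it contains {', '.join(hits)}")
        return False
    return True
```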

Alister Shepherd, CISO at the Financial Conduct Authority, said that the organization was looking at how it can safely adopt AI. It was clear that security principles needed to be embedded from the start, and “We have long worked with narrow or analytical machine learning and other forms of AI.” There were “pretty well embedded” ways of modeling risk around these, he said.

The critical question when it comes to generative AI is “What are you going to use the system for?”

The complexity and scale of generative AI tend to drive much of the discussion and hype. “This is why it has to be very much on a case-by-case basis, as a lot of the controls and tools and technology around generative AI risk management are emergent or non-existent. So, then you have to start thinking about how you are going to control those risks outside of a dedicated tool.”

AI models and platforms were only going to become more complex, he said. “So I think we have to shift our mindset from looking at a model and knowing exactly technically how it's working to the output validation and human understandable justifications.” At the same time, he said, there had to be ongoing model monitoring for anomalous behavior and changes.
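As a sketch of what that mindset shift could look like in code, assuming Python and thresholds chosen purely for illustration, the snippet below pairs cheap structural checks on each output with a rolling statistical watch for behavioral drift. The names validate_output and DriftMonitor are hypothetical, not any particular vendor's API.

```python
# Minimal sketch of the "output validation plus ongoing monitoring" approach
# Shepherd describes. Thresholds and checks are illustrative assumptions.

from collections import deque
from statistics import mean, stdev

def validate_output(answer: str, max_len: int = 2000) -> bool:
    """Cheap structural checks on a generative model's output."""
    if not answer or len(answer) > max_len:
        return False
    if "BEGIN PRIVATE KEY" in answer:      # example content rule: never emit secrets
        return False
    return True

class DriftMonitor:
    """Flag anomalous behavior by watching a rolling window of output lengths."""
    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.lengths = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, answer: str) -> bool:
        """Return True if this output looks anomalous versus recent history."""
        n = len(answer)
        anomalous = False
        if len(self.lengths) >= 30:        # need a baseline before judging
            mu, sigma = mean(self.lengths), stdev(self.lengths)
            anomalous = sigma > 0 and abs(n - mu) / sigma > self.z_threshold
        self.lengths.append(n)
        return anomalous
```

The point of the split is that neither half needs to understand the model internally: one judges each output on its own terms, the other judges it against the model's recent behavior.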

Chris Ensor, the deputy director for cyber growth at the National Cyber Security Centre (NCSC), added that we need to consider the impact of AI going wrong, because in a lot of cases it simply doesn’t matter.

Performing medical operations or being part of the nuclear firing chain was a different matter. “You may decide you can’t get confidence in it, and you use something else. You don’t have to apply it to everything.”

Risk management is also central to how we think about supply chain security, said Ensor, who oversees the UK government’s Cyber Essentials program. Too often people were unclear whether something like a Cyber Essentials certificate implied security of the supplied goods or services, or of the supplier, or both.

Hygiene still matters

“The best analogy I can find is you go to restaurants and they’ve got a food hygiene certificate on the door,” he explained. “That tells you something about the restaurant. In theory, you shouldn't get food poisoning. What it doesn't tell you is the quality of the food or anything about what the food's going to be.”

Cybersecurity leaders needed to be clear exactly what they expected from a supplier – and be sure that the supplier was aware of this. “Ultimately, if we want to secure the supply chain, it won't happen by accident. We've got to say what we want in our supply chain.”

Bridget Kenyon, CISSP, CISO at public sector supplier Shared Services Connected, added that too often organizations forget that their attitude to risk might not match that of partners or suppliers.

“There is the wonderful possibility of imposing your risk management process on a third party who actually has one already and has chosen to impose theirs on another third party,” she said. “There's an element where you have to kind of step back and say, ‘No, I don't want to actually take over your company.’”

“What happens if it's Amazon?” she added. “They're not going to adopt your exact security controls unless magically they're the same as the ones they've got already.”

That might not even be the biggest problem with the cloud. Cartwright said it’s still just “really, really easy” to put stuff in the cloud. “How many cloud installations have you guys seen where effectively it's winked into existence, and it started to grow legs, and then all of a sudden, it's been in production?”

That lack of design – never mind security by design – is itself a major threat. “Before long, you've got the shadow IT from hell with no design, and hence no security. And it's a massive, massive problem that you see everywhere.”
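One way to start hunting for that shadow IT, sketched below under the assumption of an AWS estate with boto3 available, is simply to flag compute that carries none of the tags a design process would have attached. The Owner/Project tag convention is an illustrative assumption, not a standard.

```python
# Minimal sketch of hunting for "shadow IT" in an AWS account: flag EC2
# instances that carry no owner/project tags, a common sign that something
# winked into existence outside any design process. Assumes boto3 and AWS
# credentials; the required tag names are an illustrative convention.

import boto3

REQUIRED_TAGS = {"Owner", "Project"}  # hypothetical tagging policy

def find_untagged_instances(region: str = "eu-west-2") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    orphans = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if not REQUIRED_TAGS.issubset(tags):
                    orphans.append(inst["InstanceId"])
    return orphans

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"possible shadow IT: {instance_id}")
```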

But if AI, the cloud, and supply chain risk are all big challenges, it’s important not to be overly daunted by their apparent complexity.

Kenyon added it was easy to run away with an image of AIs generating AIs, which would become too complex for human understanding. But cybersecurity leaders should be used to dealing with complex challenges.

Just consider the Windows codebase, she suggested. “That is already complexity beyond human understanding. [AI] is possibly the skeleton key that unlocks all the information we already have.”