The recent ISC2 AI Spotlight tackled a range of artificial intelligence (AI) questions over the course of the two-day virtual event. The opening session addressed a theme running through a growing number of conversations across the industry: with agentic AI seemingly about to become the next big thing, what about the human element?

Host Brandon Dunlap chaired a conversation between Alex Haynes, CISO at IBS Software; Naresh Karunda, director of cloud security engineering for Deloitte Canada; and Mike Spisak, managing director of proactive security with Unit 42, Palo Alto Networks.

The Basics: What is Agentic AI?

Spisak summed up succinctly what agentic AI is: it’s all about autonomy, specifically “AI systems that act autonomously, perceive their environment, set out to pursue various types of goals and execute multi-step plans with minimal human intervention”.

Use Cases and Practical Applications

Regarding the uses for agentic AI, Haynes pointed out that there are always two sides to the cybersecurity battle. “It has a lot of scope for being implemented in the offensive security space”, he said, but on the flipside: “Secure operations is an obvious candidate for applying Gen AI and agentic”. As with most technology, and especially AI, if the attackers are using it, the defenders can only keep up by using it too.

He expanded on the potential defensive uses for agentic AI. Compliance operations was a key area because “it’s all busy work, it’s not very interesting”. Repetitive tasks lend themselves to agentic AI, he said, along with “completely boring” tasks like firewall reviews, though he admitted that you still need a human in there to validate what the AI has produced. One area in particular he considered ripe for agentic AI was vulnerabilities and the exploitability of vulnerabilities. “We are just completely flooded with these, right? Any enterprise will look at their vulnerability dashboard and you'll have 100,000 of them, but out of those 100,000 how many of those can harm you? And a human has to look at them”.
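Haynes’s point about whittling 100,000 findings down to the handful that can actually harm you is, at heart, a triage problem, and one that lends itself to automation whether agentic or not. The sketch below is a minimal illustration of the idea in Python; the field names, thresholds and sample data are hypothetical, not something discussed by the panel.

```python
# Illustrative vulnerability triage: reduce a large findings list to the
# few items most likely to be exploitable in context.
# Field names, thresholds and data are hypothetical, not from the panel.

findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "exploit_available": True, "internet_exposed": True},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "exploit_available": False, "internet_exposed": True},
    {"cve": "CVE-2024-0003", "cvss": 9.1, "exploit_available": True, "internet_exposed": False},
]

def worth_a_human_look(finding: dict) -> bool:
    """Keep only high-severity findings with a known exploit on an exposed asset."""
    return (
        finding["exploit_available"]
        and finding["internet_exposed"]
        and finding["cvss"] >= 7.0
    )

# Sort the survivors so analysts (or an agent) see the worst first.
shortlist = sorted(
    (f for f in findings if worth_a_human_look(f)),
    key=lambda f: f["cvss"],
    reverse=True,
)

for finding in shortlist:
    print(finding["cve"], finding["cvss"])
```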

Challenges and Unexpected Outcomes

What about the potential downsides? The host noted that he had seen some incredibly clever outputs from AI, but that “there’s always this bit of distrust”. Karunda commented: “When you put something into an AI prompt, you are expecting certain outcomes, it makes logical sense. But you could rerun it, it might make better sense, or it might make worse sense, right?” In short, we cannot take the outputs for granted – we need to challenge the model. Spisak concurred: “As we move into this world of agentic, where you've got, again, more autonomy, you've also got engineers that are very bullish and see the promise. Even non-engineer types now see they can give instructions to agentic AI and have things materialize in front of them. There's a lot of risks that come with that, like over-sharing credentials to give an agent access to a database”.

The discussion around implementing agentic AI revolved around the controls we use to regulate AI. Dunlap asked whether the group considered guardrails an important thing to put around AI systems. As Spisak put it: “You have to really get your hands around the governance of all of that. Understand and sort of supervise it again. You don't want to stifle innovation, but you need to do it in a safe way”. His view was clear: “Data is the center of the universe as it relates to all things AI. So, understand what should be public, what's restricted, what's confidential, what's top secret and understand the governance around those data types, and what types of AI and what types of use cases can be allowed, because that'll help you, all of a sudden, start to do this. What's sanctioned versus unsanctioned, right? What can we allow versus disallow?”.
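Spisak’s data-centric framing – classify the data first, then decide which AI use cases are sanctioned for each class – can be thought of as a simple policy lookup. The sketch below illustrates the idea in Python; the classification labels echo the ones he listed, but the mapping itself is an assumed example rather than the panel’s guidance.

```python
# Hypothetical policy: which data classifications may be used with which
# category of AI tooling. The mapping itself is illustrative only.
POLICY = {
    "public":       {"public_llm", "internal_agent"},
    "restricted":   {"internal_agent"},
    "confidential": {"internal_agent"},
    "top_secret":   set(),  # no AI use sanctioned for this class
}

def is_sanctioned(classification: str, ai_use: str) -> bool:
    """Return True if the given AI use case is sanctioned for this data class."""
    return ai_use in POLICY.get(classification, set())

print(is_sanctioned("public", "public_llm"))        # True
print(is_sanctioned("confidential", "public_llm"))  # False
```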

The Balance of Humans and AI

Will agentic AI mean, for example, that we no longer need entry-level positions such as SOC analysts? The panel responded to a question that had come in from the audience: “We're hearing all this stuff about ‘I don't need any level one SOC analysts anymore’. Well, where are you going to get your level two and level three, if you're not upskilling the ones you already have?”

Haynes was of the view that there can be an over-reliance on AI, which can result in a skills vacuum among human analysts. “Traditionally, a SOC analyst would learn gradually to understand alerts and the context of the alerts. The problem with AI-assisted alert interpretation is, I feel, especially [with] juniors who have come to rely on it”. He continued: “Eventually, over time, if you offload your critical thinking to the AI, you just start to trust it blindly and you don't interpret anything outside the context of the AI load, which I feel is dangerous”. He had seen it happen in real life, particularly with those new to cybersecurity.

Ethical Considerations

There is, of course, an ethical side to any exercise in balancing human and machine. Developers, noted Dunlap, seem to be hit particularly hard, with AI systems writing code for less skilled individuals, so what can we do to ensure we are not tempted to, in his words, try to “automate our way out of everything”?

Karunda was very clear that people still have their place. They will continue to do their jobs, using AI to assist and complement them rather than to replace them. “It's really the evolution of our security profession”, he said, “we're not displacing it, we are augmenting and evolving our security profession”. He went on to talk about developing people and letting the technology work with them: “Evolving them [staff] to go to that higher level and [getting] AI to help them out”. Haynes picked up on the somewhat mythical status AI has acquired in some quarters: “The hype is strong”, he said, “and it's not necessarily grounded in something that's realistic”.

Final Thoughts

Wrapping up, the panel were of the view that proper controls are key to the sensible use of agentic AI in our organizations. We must guard against “shadow AI”; ensure people do not introduce risk into the tools we use by, say, hard-coding credentials into them; and avoid generating unmanageable numbers of similar tools with overlapping functionality and no real control.
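On the hard-coded credentials point, the remedy is the same one that applies to any script or tool: keep secrets out of the source and supply them at runtime. A minimal sketch follows, assuming the credential lives in an environment variable or secrets manager; the variable name is hypothetical, not from the panel.

```python
import os

# Illustrative only: fetch a database credential from the environment (or a
# secrets manager) at runtime instead of hard-coding it into the tool an
# agent is given access to. The variable name is hypothetical.
def get_db_password() -> str:
    password = os.environ.get("AGENT_DB_PASSWORD")
    if not password:
        raise RuntimeError("AGENT_DB_PASSWORD is not set; refusing to run")
    return password
```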

Haynes was particularly upbeat with his closing thoughts, musing that the onset of AI is, to an extent, not much different from a particular emerging technology of a few years ago, which has turned out to be rather popular and useful. “AI is just a new asset type”, he said, “and one experience which is analogous is when we shifted to cloud from on-premise. It was a new asset type. Your on-premise tools didn't work in the cloud”. His view was that, as with cloud services: “Within five years, it'll be trivial to manage multiple AI models in agentic deployments with overlapping frameworks in the same way that you manage multiple clouds from different providers in a single dashboard”.

A thread that came up more than once during the discussion was that, new as LLMs and agentic AI may be, there is still a place for the technology we have all been used to for years. As Haynes pointed out during the conversation, to a palpable ripple of agreement: “We forget that regular automation is absolutely just fine for certain security operation tasks, audit tasks, compliance tasks”. “Regular automation scripts work”, he said. “Bash scripts still work. They've worked for 20 years. They'll keep working”.

One must never forget the budget implications of all this new technology. “I think what we found”, said Haynes, “is that it [AI] gets expensive very quickly”.
