Securing AI Agents 101

AI agents are rapidly emerging across enterprise environments: powering automation, chaining tools, and acting across systems.

Securing AI Agents 101 is a one-page resource to help teams build a clear understanding of what AI agents are, how they operate, and where key security considerations show up.

Download Now

AI-Driven Defense and Autonomous Attacks

AI-driven defense and autonomous attacks are defining a new era of cybersecurity challenges and opportunities, both operating at a speed and scale beyond human intervention. AI speeds, scales, and automates both offensive and defensive actions to great effect, with bad actors using AI for adaptive malware, rapid vulnerability exploitation and social engineering, while defenders leverage AI for predictive threat detection and autonomous, real-time response.

Most people reading this will have used some kind of artificial intelligence (AI) application or service, either at work or at home. Although AI has a tendency to be wrong quite often (after all, it can only work with the data it has been trained on, which may well be inaccurate to begin with), AI can also be insanely useful.

Regular readers of ISC2 Insights will also be aware that AI can be used for bad as well as good. As such, there are plenty of very significant risks to consider when adopting it. Well over a decade ago – in 2014, in fact – the observation was made that: “The primitive forms of AI we already have, have proved very useful”, but with a stern warning that: “I think the development of full AI could spell the end of the human race”.

In 2018 the comment was made that: “AI is far more dangerous than nukes”. Wind the clock on to 2026 and these comments feel rather worrying – because the AI of today really does feel capable of world domination. Interestingly, these quotations were not just dredged up from a big barrel of random quotes in the hope of a sensational prediction: they came, in fact, from some very well-known people who both have track records in advanced science and engineering: Professor Stephen Hawking and SpaceX founder Elon Musk.

Automation of Attacks

The thing is, there is nothing new about creating software that will automate cyber-attacks against organizations and systems. Even when the internet was in its infancy, people around the world – particularly academics in the early days – realized the potential benefits of connecting to peers across long distances. Of course, there was a minority that figured that if they could connect to something, they could do something bad to it such as stealing the data that was stored there.

Some of these bad actors figured that they could write software that carried out the attacks for them, instead of having to sit for hours at primitive terminals making connections manually over very slow connections. For example, as long ago as 1988 the Morris Worm ran rampant and infected many thousands of machines worldwide, replicating itself as it went, seeking out new victim systems that were connected to every system it had landed on. This was an early example of a fully autonomous attack. Interestingly, it was one that inadvertently had a much more devastating effect than the perpetrator intended. (For those not old enough to remember, the worm did not steal data but the processing workload it inflicted on systems it infected brought about an early example of a Denial of Service (DoS) attack).

In the 2020s autonomous attacks are now several of orders of magnitude simpler to develop and to perpetrate, and the growth of AI has simply accelerated the growth of this autonomy. A novel example is a research project that created a proof-of-concept dubbed the Morris II Worm (with more-than-cursory nod to its 1980s ancestor): the trick of the project was to create an AI prompt that fools the engine into creating more malicious prompts and effectively makes it into a self-feeding attack machine.

Another approach is to secrete malicious prompts in web sites: when AI LLMs are trained on those web sites the malicious elements are downloaded and run, potentially hijacking the LLM and using it to conduct attacks. Even images have been used – because modern AI is very skilled at working with images, malicious prompts can be hidden in images and then processed when the LLM is working with the image. These are fairly complex examples of very cunning attack types that have been devised by AI specialists, though: at a lower level of complexity attackers can simply use AI to make it easier to conduct more straightforward attacks – for example to get an LLM to write some Python code to conduct an attack rather than spending days writing and debugging the code by hand.

Combating Automation with Automation

To contend with such complex automated attacks, then, we need a complex, automated suite of defenses. As CrowdStrike founder Dmitri Alperovitch noted in 2018: “To win a battle in cyberspace, speed is paramount. The only way you beat an adversary is by being faster than them”. Nicole Eagan of Darktrace is of a similar view: “Human teams simply can’t keep up without the help of AI”. But as with some of the examples cited earlier, we have in fact been using technology to help us defend ourselves for some years. If we think back to the intrusion detection systems (IDSs) that many of us run in our infrastructures, these came from an initial model devised by Dorothy Denning and Peter Neumann back in 1986 – 40 years ago. Machine learning (ML) as we know it today has also been around for 20+ years, with spam classification and behavior analysis of malware coming along in the 2000s. So, just as in the use of AI as an attack mechanism, automation and AI in the context of cyber defense can largely be summed up as: more, but bigger, better and faster.

Before we end, though, let us reflect on the title of this article. Although this implies we might seek to rely on AI, the general view is that in a defensive sense AI is there to assist humans, not to replace them. For example, Microsoft CEO Satya Nadella noted in his blog that we should “always think of AI as a scaffolding for human potential vs a substitute”.

Of course, AI has vast benefits and may well come up with ideas that wouldn’t necessarily occur to people; at a conference in 2022 technologist Bruce Schneier reminded everyone that: “AIs don’t solve problems the way humans do. Their imitations are different; they consider more possible solutions than humans; they go down paths we don’t even consider”.

To lose the human touch – particularly our ability to have flashes of inspiration and head down entirely random, seemingly illogical paths and happen upon a serendipitous discovery – would be detrimental to our security and hence the longevity of our organizations. Nadella has pointed out that we have used technology through the years without it replacing us. He commented in 2025 that: “Computing throughout its history has been about empowering people and organizations to achieve more and AI must follow the same path”. Jeetu Patel, president and chief product officer at Cisco, is of the same view: “The magic truly happens when you take human instinct and judgement and you combine it with an AI scale of automation”, he said.

Finally, we should consider on a personal level our motivation for embracing and adopting AI in our organizations, which includes its inclusion in our cyber functions. Former CEO of IBM Ginni Rometty is commonly cited as the source of a poignant comment in this sense: “AI will not replace humans,”, she said, “but those who use AI will replace those who don’t”.

Related Insights