At the 2025 ISC2 Security Congress, Joseph Carson, CISSP, explored the growing tension between AI-powered cybersecurity and AI-driven threats. As autonomous systems reshape digital defense, Carson challenged us to rethink risk, responsibility and the future of securing AI in an increasingly complex world.

Cybersecurity stands at a critical juncture. On one side, AI is becoming increasingly embedded across digital infrastructure with the goal of increasing speed, efficiency and defensive capability. On the other, AI brings fast-growing risks. The question we all must answer is this: How much risk are we willing to accept from AI adoption? And, by extension, who becomes responsible when that risk presents real-world consequences?

In AI vs. AI – The Future of Automated and Autonomous Cybersecurity, Joseph Carson, CISSP, Chief Security Evangelist and Advisory CISO at Segura, shared his perspective at ISC2 Security Congress in Nashville. He discussed these pressing questions at a time when autonomous systems are starting to take over decision-making processes once managed by humans. At the root of the AI vs. AI conundrum is a deepening web of complexity not only for cybersecurity professionals but for all humans in the way we work, live and interact within the pervasively digital world.

The Complexity Crisis

Carson recalled that computers used to be simple systems: “Computers actually had to have, basically, punch cards” to use them. “I could touch the data,” he said, referring to tape backups. Now, as Carson emphasized, “The world we live in today has never been more complex.” The rapid transformation to “cloud, hybrid cloud, multi-cloud, bring your own device, bring your own identity and bring your own office” has created a world where “the complexity is so challenging that for us to manage it and for us to be able to protect it is so difficult today.”

Carson shared a compelling example of this complexity. Working with a large transportation company, he discovered 20,000 more devices on their network than the IT team thought they had. Despite the organization being “adamant” they had only 120,000 devices, the actual count reached 140,000, revealing a significant visibility gap in their asset management. Their refrain? “That’s what our spreadsheet says.”
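The kind of visibility gap Carson describes can be sketched as a simple diff between a recorded asset list and live scan results. This is a minimal, hypothetical illustration; the device names and the helper function below are invented for the example, not taken from Carson's talk:

```python
# Hypothetical sketch: diff a static, spreadsheet-style inventory
# against devices discovered by a live network scan.
# All identifiers here are invented for illustration.

def inventory_gap(recorded: set[str], discovered: set[str]) -> dict:
    """Compare a recorded asset list with scan results."""
    return {
        "unknown": discovered - recorded,  # on the network, not in the books
        "stale": recorded - discovered,    # in the books, not seen on scan
    }

recorded = {"host-001", "host-002", "host-003"}
discovered = {"host-001", "host-002", "host-003", "printer-9", "iot-cam-4"}

gap = inventory_gap(recorded, discovered)
print(sorted(gap["unknown"]))  # the devices the spreadsheet missed
```

In Carson's example, the "unknown" bucket held 20,000 devices: assets on the network that the organization's spreadsheet never recorded.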

After running a proper, real-time inventory scan, Carson and his team discovered the extra devices. The finding highlighted the security risks of outdated licenses and unpatched legacy systems still present on the network. That thought should make every cybersecurity professional shudder. Carson's example is one of many prompting a widespread wakeup call about AI’s place on an ever-growing list of cybersecurity threats. According to Carson, “The reality check is that we have so many threats to deal with,” including the top issues keeping security leaders up at night:

  • Data breaches
  • Ransomware
  • Financial fraud
  • Insider risks
  • Malware
  • Revenue/brand damage
  • Data poisoning
  • Compliance failure
  • Service downtime
  • Application outage
  • Securing AI

“Securing AI” is just one more threat cybersecurity professionals must deal with daily. “The complexity of threats is so much greater today, and it increases all the time,” said Carson, especially as we shift to an AI-powered world.

AI-Powered Everything

Weighing the implications of a future in which AI powers everything is no light task. AI is clearly the shiniest object in tech today, but Carson cautioned, “AI is one thing, but, without context, it's important to understand how it is helping you.” He raised crucial questions: “Is it making your employees’ lives better?” and “How can it empower employees?”

Carson turned to frontline employees as a use case for answering these questions. He explained, “We have to look at AI as an enabler—as something that we can do to make their lives better. It's not here to replace; it's here to augment.” In short, Carson argued that AI can give back time, “the most valuable thing in this world.” By acting like the Super Mario Kart mushroom that makes you bigger, stronger and faster, AI “gives you superpowers,” and “it is meant to help us accelerate.” Still, Carson cautioned that “AI may make me a little bit bigger and a little bit faster, but it doesn't necessarily make me better.”

While AI can help frontline employees and others work faster, data veracity and accuracy are crucial to making good decisions with it. That’s the bright side of AI. Unfortunately, a dark side lurks on the AI vs. AI battlefield.

Using AI to Evolve Real-Time Cyber Attacks

While AI is making work faster and more efficient, threat actors are seizing it too, rapidly scaling their nefarious tactics, techniques and procedures (TTPs). While defenders race to build secure, ethical AI systems, attackers are exploiting the same technologies. Carson explained that threat actors are using AI to rewrite their code in real time: “when they find that there is some defensive tool that's preventing them from laterally moving, or from getting initial access, they find that they're being blocked in real time. In turn, they're analyzing that feedback and augmenting the code to bypass it.” In short, he said, “They're doing basically real-time attacks.”

Attackers simulate scenarios, test vulnerabilities and refine their strategies using AI. They are not just creating fake images or text; they are building entire identities via social engineering, blurring the now-thin line between truth and fiction. Defense tools must evolve to keep pace with these TTPs and their ever-changing algorithms. As Carson noted, “We're moving to the point where AI is defending organizations, and AI is attacking organizations, and cybercriminals are augmenting themselves in real time.”

Training AI Like Your Best Analyst

Against this backdrop, a major takeaway is that “we can never live any longer in a static world of security. We can't live where policies are static and where security is static.” Indeed, cybersecurity “has to be almost like a living organism.” Adopting an AI vs. AI posture means that cybersecurity professionals must create policies and defenses that evolve, self-train and self-modify; otherwise, organizations “will become bystanders,” passively observing as attacks happen.

Perhaps the highlight of Carson’s talk was his final piece of advice: “Train your AI like it's your best cybersecurity analyst. One day, it might be.” We’re moving toward a future where humans become navigators who observe, guide and correct AI systems in real time. The machines will handle the bulk of detection and response, but we must remain vigilant stewards of their behavior.
