At the recent ISC2 SECURE Washington, D.C. event, a fireside chat explored the challenge of balancing security with the rapid innovation and adoption of AI technologies. We take a look at what was discussed.

The need for protection and prevention within cybersecurity is sometimes seen as a barrier to innovation, slowing down progress. On the other hand, those safeguards are necessary, particularly given the potential threat posed by rapid adoption and uncontrolled or untrained use of AI tools and technologies, which could lead to AI-based attacks and data leaks. Striking a balance between innovation and security is always tricky. To discuss this and the wider security vs. innovation challenge facing AI, James Packer, Chair of ISC2’s Board of Directors, and Tim Rohrbaugh, founder and principal at LLM Strategic Solutions, sat down for a detailed look at the issues.

Rohrbaugh began by setting the scene regarding what AI actually is. “AI doesn't exist and it probably won't exist for maybe a couple of generations, because … what we're thinking in our brain is HAL 9000 – we're thinking about human-level intelligence. That doesn't exist, but we do have augmented intelligence”. A fairly blunt starting point.

Packer asked Rohrbaugh for his view on the age-old question of whether security technology helps us do business or simply gets in the way and slows things down. “From your perspective,” he asked, “an organization is very early in its journey, and is a little worried about what it could do, but is also really excited that it could bring benefits. What would your mindset be to get started on that journey?” The easy answer: both. On the downside are the hosted AI engines many of us use. Rohrbaugh said: “If you use a proprietary model like OpenAI, Anthropic, Copilot, Gemini, to get the benefit of that, you have to send your data out. So, this is why I termed the last two and a half years as the great leak, because people are doing it to themselves”. A legitimate observation, and the reason that so many organizations worldwide block access to public AI engines on their web filters.

Rohrbaugh had a very encouraging upside, noting that: “Gen AI is the greatest privacy tool since the creation of the internet” – which no doubt raised a few eyebrows in the audience. How could this be? “If you use an Open Weight model, the ones that you can download,” he noted, “you can actually take a model and download it and run it on your computer. You're not sending the data out, obviously, to get the benefit, and you also cease to give your data away”.
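To make that concrete, here is a minimal sketch of what local inference with a downloaded Open Weight model can look like. The Hugging Face transformers library and the specific Qwen model named below are illustrative assumptions, not tools discussed in the session; the point is simply that prompts and outputs stay on your own machine.

```python
# Minimal sketch, assuming the Hugging Face transformers library and an
# illustrative open weight model (Qwen2.5-0.5B-Instruct). The weights are
# downloaded once and cached locally; after that, generation runs entirely
# on this machine, so no prompt data leaves it.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # any downloadable open weight model
)

result = generator(
    "List three benefits of running language models on local hardware.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```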

Using AI Without Knowing Why

Packer moved on to ask about the tendency, in these relatively early days of AI as we know it, for organizations to fixate on using AI without actually knowing what they will do with it. “Something that I've observed a lot in my day job is the rush to adopt AI and apply AI to things, ‘just because’”, he said, going on to add: “How do you rein in reckless enthusiasm to apply AI to everything so that leadership can take security and risk more seriously?” Rohrbaugh seemed very determined to keep everyone’s feet on the ground with regard to AI, describing it as “the most advanced text completion tool in human history”. He noted that you can do one of two things with it: long-term knowledge storage and retrieval, and simulated reasoning – the latter of which he considered “the interesting one”, because it lets us “mirror some of the things that we do and get automation without writing algorithms”.

Packer then diverged slightly into a comparison of continents and their attitudes to AI, noting that in North America the focus tends to be on rapid adoption of AI as a road to competitive advantage, while Europe is much more conservative, with a focus on regulation and privacy. Rohrbaugh harked back to his previous point, that by using Open Weight models in private infrastructure it can be perfectly safe to exploit AI. “Even knowing what I know about threat landscape, would I use a Chinese Open Weight model? 100%! Qwen, DeepSeek … absolutely. But I will run those in isolation, right?”

Supply Chain AI Risk

Rohrbaugh then shared some words of warning about one of today’s most topical security subjects: supplier risk. He noted that organizations in our supply chains are generating efficiencies by using AI, but what do we know about how they are doing it? “The first question that should come to your mind is, where are you sending my data? Which models are you using? I'm going to tell you, almost all of them are proxying it to one of four organizations, regardless of whether you're in Europe or not”. Packer echoed Rohrbaugh’s feeling about organizations using AI, pointing out that if “these [supply chain partners] decide they want to flick a switch and turn on something AI-enabled in their service, that could have a very negative impact on you as an organization”.

Rohrbaugh then reiterated his point about data leaks, turning his attention to new features that have winked into existence on many of our devices. “How many people have Outlook on their phones?” he asked. “How many of you have noticed a little button that showed up called Copilot? I've changed the name of that button everywhere … it's called the data exfiltration button”. He then made a pertinent expansion on his point: our phones don’t have the power to run an AI model, so the only place the data can be going is somewhere out on the internet, namely to a cloud service that can do the processing and heavy lifting needed to make the service work.

However, there were positives to be had. Rohrbaugh had some words of encouragement about where AI is as a technology, telling us to imagine that it’s 1992, but we already know where development of the internet is going. “We’re at the beginning of this [AI]”, he said, “so now is the time to get in on the ground floor”.

AI and Jobs

There were more words of encouragement, too, about whether AI will be taking our jobs. Packer asked: “You mentioned that in your experience there's a lot of Gen AI projects that have a hidden ceiling. Maybe elaborate a little bit more?” Rohrbaugh referred to the idea that Gen AI will displace jobs as “a misunderstanding”, going on to say: “The only one that can judge that [a model] is you, the professional in a domain area. With Gen AI, it cannot exceed the knowledge of the individuals on that project. That's the ceiling. You are a mentor [for the AI tool] and you will be working side-by-side. You're not being displaced”.

The conversation moved back to the risks of cloud-based AI. What, Packer queried, about commercial AI systems where we have contracts with the providers? Rohrbaugh replied: “If you have an enterprise contract and [the provider] says that it’s not going to use your data for training, that's great, then maybe it won't. But it doesn't mean that your data is not in the logs, where it could be taken from”. He referenced a 2024 paper on situational awareness, citing one section in particular (IIIb, which states: “The nation’s leading AI labs treat security as an afterthought”). “Does it have a security program that is able to defend against every nation state in the world?” he asked rhetorically, to which the inevitable answer was: “I doubt it”.

Key Takeaways

Packer asked about practical takeaways that the audience could go home with. “They ask you: hey, Tim, write us a policy. Let's get started with something really simple. What would your policy say?” Simple, said Rohrbaugh: “Absolutely use it, but use it in a very specific way. If an organization wants to sponsor a proof of concept project, I would make one rule, that we must use Open Weight models”. The reasons were the same as before: the data does not leave the organization, and whatever successes you have are now your own intellectual property. Regarding running models on in-house equipment, he was clearly a fan of Apple kit which, seemingly by accident, is great for AI processing: “High-end MacBooks can run the same models that I run on servers – not as fast, but it can do it on battery all day long. It's mind blowing! So, give your developers MacBooks!”
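As a rough illustration of why consumer Apple hardware can serve here, the sketch below assumes PyTorch (the runtime many open weight model tools build on; not something demonstrated in the session) and simply selects the Apple silicon GPU when it is available, falling back to a server-class GPU or the CPU otherwise.

```python
# Minimal sketch, assuming PyTorch: pick the best available local device for
# running an open weight model, e.g. the Apple silicon GPU (MPS) on a MacBook,
# a CUDA GPU on a server, or the CPU as a fallback.
import torch

if torch.backends.mps.is_available():
    device = "mps"   # Apple silicon GPU on a MacBook
elif torch.cuda.is_available():
    device = "cuda"  # NVIDIA GPU, e.g. on a server
else:
    device = "cpu"

print(f"Running local inference on: {device}")
```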

Packer adopted an industry perspective, asking: “What do you think of the role of the government and industry working together in evolving enforceable guardrails for AI?” Rohrbaugh’s view was that with a technology like AI, where the output is not predictable, it is difficult to know where to put the guardrails. Instead, “What the regulators should be focused on is that individuals and companies today are giving their data away and they don't realize it. Regulators should focus on the data that's leaving right now, where it's going, what the sovereignty is of it”. Regulatory requirements, he suggested, should be more about reporting breaches and the like.

What about protecting AI, asked Packer, particularly ethics and bias? On ethics, Rohrbaugh’s view was that “I would only be concerned with this if I expose the model as a chat interface to customers”. On bias, he wants the model to stick to the facts: he used the example of an AI model analyzing developments in the context of published vulnerability data. He seemed not to be overly concerned about the model saying something he might take offense at: “I want it to tell me as a contrarian, without regard for my emotional state or how it's going to affect me or anything else. I just want the facts”.

Packer wrapped up the fireside chat by asking Rohrbaugh about how to get AI accepted up and down an organization. “Thinking about the culture of the organization and the gap between innovation and the right level of caution – up into the management and board level – is there something that you've done a lot of work with?”

Rohrbaugh’s response: “I am more of the side where we try to put in the systems that benefit users for work, but put some controls around it. I would probably lean towards management and board just explaining to them the benefit of not giving data away that they didn't realize that they were giving away and putting in the infrastructure for the company to use and teaching them how to use it”.
