Cybersecurity professionals will have a critical role to play as their organizations develop and deploy AI, a panel of legal experts told attendees at ISC2 Security Congress in Nashville, Tennessee this week, given that infosecurity disciplines and tasks are woven throughout the NIST AI Risk Management Framework (RMF).
The NIST AI RMF was unveiled late last year, coincidentally a month after the launch of ChatGPT. It aims to help organizations manage the risks around designing, developing, deploying, or using AI.
It spells out potential harms to individuals, organizations and ecosystems, and the characteristics of trustworthy AI – including that it be “secure and resilient” and “privacy-enhanced”. It also lays out “core functions” for achieving this, with governance at its heart.
Legal standards
Adam Cohen of Baker Hostetler said, “Standards in law, when it comes to cybersecurity, come from industry best practices.” With a paucity of case law in this area, he continued, “When you’re looking for an anchor to explain why what you’ve done is reasonable, these kinds of frameworks are what we turn to.”
As with other security frameworks, it’s not going to be mandatory, he said. “But this will help you in looking at these issues and having a structured way to do that, by showing that you align with a standard that can support your justification for how you did things, or a legal defensibility.”
More practically, the NIST AI RMF Playbook sets out “suggested actions for achieving the outcomes” described in the framework. Infosec professionals should expect to be involved in virtually every one of the seven AI system lifecycle stages it covers, the panel said, and they highlighted the framework’s focus on ensuring the “resilience” of AI systems.
The framework spells out responsibilities around the planning and designing of AI systems, including how to build in security from the outset, and the obvious confidentiality and security implications around the collection and processing of training data.
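The panel did not go into implementation detail, but a minimal sketch of that data-handling point might look like the following: scrubbing obvious identifiers from records before they are used as training data. The regex patterns, helper names and salting approach here are illustrative assumptions, not anything specified by the framework.

```python
import hashlib
import re

# Illustrative only: naive patterns for a couple of common identifier formats.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    """Replace a sensitive value with a salted, truncated hash."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def scrub_record(text: str) -> str:
    """Redact obvious identifiers before the text enters a training pipeline."""
    text = EMAIL_RE.sub(lambda m: f"<email:{pseudonymize(m.group())}>", text)
    text = SSN_RE.sub("<ssn:redacted>", text)
    return text

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com, SSN 123-45-6789, about the claim."
    print(scrub_record(sample))
```

In practice, a real pipeline would rely on purpose-built data-loss-prevention tooling rather than hand-rolled regexes; the sketch only illustrates where such controls sit relative to data collection.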
There are also specific infosec-related aspects to the build and use, verification and validation, deployment and use, and use or impacted stages, ranging from detecting hidden functionality and red teaming to vulnerability disclosure and bug bounties.
Cohen noted that the framework was not focused solely on information security risk. Nor, he added, should cybersecurity professionals be under the illusion that any of this detracts from or supplants their existing range of duties.
“It doesn't mean that you're going to use this instead of all the other ways or frameworks or organizing principles, or elements of your security program that you use to approach other kinds of applications,” said Cohen. “You're still going to have to think about vulnerability and patch management, you're going to have to think about logging and monitoring.”
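As a rough illustration of Cohen’s point that fundamentals such as logging and monitoring carry over to AI workloads, the sketch below wraps a stand-in inference call with structured audit logging. The function names (call_model, logged_inference) and the fields recorded are assumptions chosen for the example, not anything prescribed by the framework or the panel.

```python
import json
import logging
import time
import uuid

# Illustrative only: instrument whatever inference call is actually in use
# (API client, local model, etc.) with structured audit logging.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai.audit")

def call_model(prompt: str) -> str:
    # Stand-in for the real inference call.
    return "stub response"

def logged_inference(prompt: str, user_id: str) -> str:
    request_id = str(uuid.uuid4())
    start = time.time()
    response = call_model(prompt)
    audit_log.info(json.dumps({
        "request_id": request_id,
        "user_id": user_id,
        "prompt_chars": len(prompt),      # log sizes rather than raw content,
        "response_chars": len(response),  # in case prompts carry sensitive data
        "latency_ms": round((time.time() - start) * 1000),
    }))
    return response

if __name__ == "__main__":
    logged_inference("Summarise this incident report for the SOC.", user_id="analyst-42")
```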
Understanding the legalities of AI
While AI tools might have some unique characteristics and risks, Cohen said, “that doesn’t mean you take a completely different approach in dealing with it from a security point of view.”
David Patariu, of Venable LLP, added that when it came to AI, there were many “fuzzy” areas. “AI’s new, so it’s a little unknown [as to] what is reasonable.”
Nevertheless, said Patariu, the framework gave a solid foundation and “a lot of tools and ways to think about how to assess risk, how to get the right process in place.”
However, security pros need to be aware of the broader legal context too, the panelists said. While the current state of the US Congress means federal legislation is unlikely any time soon, state legislation can fill the gap, as it has with privacy. And the EU’s AI Act will effectively become the “law of the land”, at least for organizations operating across borders.
Ultimately, when considering how AI affects security at your organization, Cohen said, professionals had to “go back to fundamentals of your security programs and apply them to these kinds of applications. Don’t think this is new and unique… if you’re not doing these fundamentals, you’re lost.”