After taking on a new role, Amey Thatte, CISSP, went through a thought exercise and realized that the core concepts of CISSP domains offered him a way to navigate through an area – AI – that is witnessing groundbreaking advancements at a rapid pace.
Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.
Once, I was just a new graduate starting out in the field of cybersecurity, surrounded by subject matter experts with decades of experience and knowledge across different areas. While I was excited to work in cybersecurity, I remember feeling something of an imposter.
At the time, studying for my CISSP felt like a way to overcome this sensation. In hindsight, though, it gave me much more, serving as the foundation for everything that followed. Earning my CISSP gave me more than just technical validation; it provided a structured approach, a framework for thinking about the different aspects of the cybersecurity ecosystem, both technical and non-technical. It has, undoubtedly, been the pivotal experience of my career.
Leveraging CISSP Domains
My primary goal at the time was to gain knowledge, feel comfortable as a cybersecurity professional and build credibility along the way. While I was pursuing my CISSP, almost everyone I spoke to told me some variation on the theme that the CISSP examination is a mile wide and an inch deep. Yet what I gained was deeper than that. The CISSP domains helped me elevate my perspective from the “tasks at hand” – focused on technical defense – to enterprise-level thinking: from risk assessment to policy creation to control evaluation. This shift in mindset helped me communicate security priorities to leadership more effectively and think about security as a strategic priority for the business.
That same breadth of thinking is just as essential today. The emergence of artificial intelligence (AI) technologies demands a whole new set of safeguards: technical controls, governance, risk frameworks and ethical guardrails.
The Rise of AI and the Evolving Role of Security
AI adoption has given rise to a variety of new threats (or evolutions of existing ones). By now, most of us are familiar with the likes of:
- Adversarial Attacks: In which slightly altered inputs cause models to misclassify – such as a manipulated stop sign that an autonomous vehicle then misreads.
- Data and Model Poisoning: In which malicious records are inserted into training sets to bias outcomes or embed backdoors.
- Model Inversion and Theft: Where sensitive training data is extracted or proprietary models are duplicated through repeated querying.
- Supply Chain Attacks: Involving the infiltration of an enterprise’s AI systems by compromising third-party components, such as software libraries, tools, or models.
- Data Leakage: Where attempts are made to extract personal details from training data and previously generated model outputs.
- Compliance Gaps: In which organizations struggle to align AI use with the EU AI Act, the NIST AI RMF, or other emerging AI regulations.
While these are just a few examples of AI-related risks, they also illustrate precisely where the CISSP domains can play a big part in addressing them.
Mapping CISSP Domains to AI Security
My recent thought exercise made me realize that a control framework can help build a holistic AI security program in a structured manner. In my case, the underlying principles are, of course, the domains covered by the CISSP:
- The Security and Risk Management domain maps onto the requirement to establish AI governance frameworks aligned with or derived from industry best practices, such as ISO/IEC 42001 (AI management systems), to manage AI system and model risks, and to codify ethical AI practices. For example: we can log AI-specific risks in enterprise risk registers.
- The Asset Security domain maps onto the need to identify the AI assets that are your ‘crown jewels’ (training data, custom models and hosting environments) and the controls required to protect them. This could include data integrity checks, logging and monitoring controls, and encryption at rest/in transit. For example: we can use hashing techniques to detect unauthorized dataset modification, as shown in the first sketch after this list.
- The Security Architecture and Engineering domain maps onto the design of secure machine learning pipelines, with secure feature engineering and protections against adversarial attacks. For example, we might need to create reference architectures that enable development teams to build secure AI applications.
- The Communication and Network Security domain maps onto the requirement to protect model APIs and cloud AI services with TLS 1.3, segmentation and rate-limiting to mitigate model extraction attacks.
- The Identity and Access Management (IAM) domain maps onto the objective to enforce strict access control policies on environments, API keys, model repositories and AI assets. For example: we can tie model retraining privileges to authorized users with Just-in-Time access.
- The Security Assessment and Testing domain maps neatly onto the expansion of “red teaming” to include adversarial testing of AI models against a wide array of attack techniques, including crafted prompts intended to break the model’s context.
- The Security Operations domain maps onto the requirement to perform regular vulnerability scans of deployed models and to use the resulting information to implement guardrails that protect them from attacks such as jailbreaks. For example, we might think about integrating model telemetry into SIEM systems to detect unusual query patterns, as in the second sketch after this list.
- The Software Development Security domain maps onto the practice of applying secure DevOps principles to MLOps, through dependency scanning, signed model artifacts, AI supply chain security and real-time monitoring capabilities. For example, this can help us detect malicious prompts and/or responses in transit.
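To make the Asset Security example concrete, here is a minimal sketch of a hash-based integrity check over a training dataset. It is illustrative only: the `training_data` directory, the chunk size and the idea of storing the baseline somewhere tamper-resistant are assumptions for the sketch, not a prescribed implementation.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_baseline(dataset_dir: Path) -> dict[str, str]:
    """Record a trusted hash for every file in the dataset."""
    return {
        str(p.relative_to(dataset_dir)): sha256_of_file(p)
        for p in sorted(dataset_dir.rglob("*"))
        if p.is_file()
    }


def detect_modifications(dataset_dir: Path, baseline: dict[str, str]) -> list[str]:
    """Return files that changed, disappeared or appeared since the baseline was taken."""
    current = build_baseline(dataset_dir)
    changed_or_missing = [name for name, digest in baseline.items() if current.get(name) != digest]
    added = [name for name in current if name not in baseline]
    return changed_or_missing + added


if __name__ == "__main__":
    data_dir = Path("training_data")      # hypothetical dataset location
    baseline = build_baseline(data_dir)   # store the baseline somewhere tamper-resistant
    # ... later, before a training run ...
    suspicious = detect_modifications(data_dir, baseline)
    if suspicious:
        print(f"Dataset integrity check failed for: {suspicious}")
```

In practice the baseline would be captured at data-approval time and checked again before every training run, so any poisoned or swapped file surfaces before it can influence the model.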
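And to illustrate the Security Operations example, here is a deliberately simple sketch of how exported model telemetry might be screened for unusual query patterns before raising a SIEM alert. The `QueryEvent` record, the thresholds and the client names are hypothetical; a real deployment would tune these against its own baseline traffic.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class QueryEvent:
    """One model API call as it might appear in exported telemetry (hypothetical schema)."""
    client_id: str
    prompt_length: int  # characters in the request, used here as a crude signal


def flag_unusual_clients(events: list[QueryEvent],
                         max_queries_per_window: int = 500,
                         max_avg_prompt_length: int = 4000) -> set[str]:
    """Flag clients whose query volume or average prompt size in a telemetry window
    exceeds simple thresholds -- a possible sign of model extraction or probing."""
    counts = Counter(event.client_id for event in events)
    total_length: Counter = Counter()
    for event in events:
        total_length[event.client_id] += event.prompt_length

    flagged = set()
    for client, count in counts.items():
        average_length = total_length[client] / count
        if count > max_queries_per_window or average_length > max_avg_prompt_length:
            flagged.add(client)
    return flagged


if __name__ == "__main__":
    window = [QueryEvent("svc-reporting", 300)] * 20 + [QueryEvent("unknown-bot", 6000)] * 800
    for client in flag_unusual_clients(window):
        print(f"Raise a SIEM alert for client: {client}")
```

The same idea scales up naturally: the flagged clients become enrichment data for SIEM correlation rules rather than the whole detection logic.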
What I’m suggesting is that, while AI brings significant novelty to the attack surface, AI security is merely an extension of traditional cybersecurity – and that we can address these new risks by applying existing CISSP domain thinking. In essence, it is an evolution that reuses familiar principles in a novel context.
Building AI Security on Solid Foundations
For me, the last decade has proven that cybersecurity principles endure even as technologies shift, and the CISSP provided that foundation – one that still holds even as the frontier changes in the form of AI.
My takeaway from this exercise is that the principles behind the CISSP certification remain timeless, precisely because the CISSP teaches breadth of perspective. My humble view is that AI security is simply an extension of the CISSP philosophy, marrying classical security controls with defenses specific to the AI ecosystem, plus a little lateral thinking.
So, for any organization unsure about where to begin: start your AI security journey with the approach outlined above and focus on the fundamentals even as you explore this emerging domain.
Amey Thatte, CISSP, has 10 years of experience in cybersecurity strategy, operations and risk management. He has held technical and operational roles, with responsibility for delivering cybersecurity strategy initiatives, risk assessment engagements and remediation projects. His cybersecurity work spans Generative AI, cloud, infrastructure and endpoint security.

