Generative AI technology has the potential to improve, simplify and automate many things. As Devdatta Mulgund, CISSP, CCSP, explains, these potential benefits come with a cybersecurity overhead that can be more complex to deal with than it first appears.

Generative AI technology has the potential to unlock many benefits. At the same time, it introduces significant security challenges, so organizations and users must do proper due diligence before implementing it.

The list of major security and privacy concerns when an organization adopts Generative AI technology is extensive, and includes:

  • Sensitive information disclosure
  • Data storage
  • Compliance
  • Information leakage
  • Model security
  • Vulnerabilities in Generative AI tools
  • Bias and fairness
  • Transparency
  • Trust
  • Ethics
  • Infringement of intellectual property and copyright laws
  • Deepfakes
  • Hallucinations (nonsensical or inaccurate outputs)
  • Malicious attacks

There is plenty of information available on these concerns and on the proactive measures that organizations can take to address them. Typically, these measures include the creation of organizational policies, data anonymization, the principle of least privilege, threat modeling, data leak prevention, secure deployment and security audits.

Here, I want to share with you how we addressed these issues in my organization.

A Holistic Approach with Specific Measures

We adopted a holistic approach to Generative AI security. This encompasses the entire AI lifecycle, including data collection and handling, model development and training, and model inference and use. Simultaneously, we secured the infrastructure on which the AI model was built and run. Finally, we established an AI governance process in the organization. We achieved this through the following practical measures:

Acceptable Usage Policy (AUP). We established an AUP for Generative AI tools, outlining the principles that employees of our organization must follow when using them. The purpose of the policy is to ensure that employees use Generative AI systems in a manner that is consistent with our organization’s values and ethical standards. For example, our policy states that employees should not submit any personally identifiable information (PII) or copyrighted materials to Generative AI tools. As part of this initiative, we delivered awareness training to ensure employees understand these policies and know how to follow them.
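To make the PII rule concrete, here is a minimal sketch of the kind of pre-submission screen that could sit in front of a Generative AI tool. The patterns and function names are hypothetical illustrations rather than our production tooling; a real deployment would lean on a dedicated DLP or PII-detection service.

```python
import re

# Illustrative AUP screen: block prompts containing obvious PII patterns
# before they reach a Generative AI tool. Patterns are simplified examples.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def aup_violations(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    findings = aup_violations("Contact me at jane.doe@example.com")
    if findings:
        print(f"Prompt blocked: possible PII detected ({', '.join(findings)})")
```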

Data security. To secure data at rest and in transit, we started with a data discovery and classification process to establish the sensitivity of our data and determine which data should go into the Generative AI model. To protect sensitive data, we anonymize it in the training data sets, we encrypt data at rest and in transit with strong encryption algorithms to ensure confidentiality, and we restrict access to AI training data sets and the underlying IT infrastructure by implementing access controls within the enterprise environment. The risk of data leakage lies primarily at the application layer rather than at the chat LLM layer, so we built a custom front-end application that replaces the ChatGPT interface and calls the OpenAI chat LLM APIs directly. By bypassing the ChatGPT application, we mitigated the risk of losing sensitive data. We prioritize secure data capture and storage at the application level to mitigate the risks associated with data breaches and privacy violations.
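To illustrate the pattern, the sketch below shows a back-end function that calls the OpenAI chat completions API directly, with an anonymization step applied before any text leaves the enterprise boundary. The model name and the anonymize logic are placeholder assumptions, not details of our actual deployment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def anonymize(text: str) -> str:
    """Placeholder anonymization step; a real implementation would redact
    or tokenize sensitive fields before the text leaves the enterprise."""
    return text.replace("ACME Corp", "[ORG]")  # illustrative only

def ask_llm(user_prompt: str) -> str:
    # The custom front end calls the chat completions API directly, so
    # prompts never pass through the public ChatGPT application.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": anonymize(user_prompt)}],
    )
    return response.choices[0].message.content
```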

To ensure sensitive data remains under our direct control, we created a sandbox that isolates sensitive data from non-sensitive data. The sandbox acts as the gateway for the consumption of LLM services, and additional filters are applied to safeguard data and reduce bias.
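A rough sketch of the gateway’s routing decision follows, under the assumption that a classifier labels each request; the keyword-based classifier here is a stand-in for illustration only.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    SENSITIVE = "sensitive"

def classify(text: str) -> Sensitivity:
    """Stand-in classifier; a real gateway would call the data
    classification service built during data discovery."""
    keywords = ("account number", "salary", "medical record")
    if any(k in text.lower() for k in keywords):
        return Sensitivity.SENSITIVE
    return Sensitivity.PUBLIC

def gateway(prompt: str) -> str:
    # Sensitive requests stay inside the sandbox; only non-sensitive
    # content is forwarded to the external LLM service.
    if classify(prompt) is Sensitivity.SENSITIVE:
        return "Handled inside the sandbox; not sent to the external LLM."
    return f"Forwarding to LLM service: {prompt}"
```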

We conducted security risk assessments to evaluate the security risks associated with Generative AI applications. To manage the risk associated with shadow IT, we ran a Cloud Discovery scan to see what was happening on our organization’s network.

Securing our AI model. Supply chain attacks are common during the development of Generative AI systems because of the heavy use of pretrained, open-source ML models readily available on the internet to speed up development. Application programming interface (API) attacks are another concern. Organizations rely on APIs to consume the capabilities of prepackaged, pretrained models because they lack the resources or expertise to build their own large language models (LLMs). Attackers recognize that this will be a major consumption model for LLMs and will target the API interfaces to access and exploit the data transported across them.
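One simple mitigation for the supply chain risk is to verify every pretrained artifact against a published digest before loading it. The sketch below shows the idea; the manifest and file names are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list mapping approved pretrained artifacts to the
# SHA-256 digests published by their maintainers.
APPROVED_MODELS = {
    "sentiment-base.onnx": "<published sha-256 digest>",
}

def verify_artifact(path: Path) -> bool:
    """Refuse to load a pretrained model whose digest does not match
    the approved supply chain manifest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return APPROVED_MODELS.get(path.name) == digest

model_path = Path("models/sentiment-base.onnx")
if not verify_artifact(model_path):
    raise RuntimeError(f"Supply chain check failed for {model_path.name}")
```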

Data is at the core of LLMs, and using models that were partially trained on bad data can destroy your results and reputation. The outputs of Generative AI systems are only as unbiased and valuable as the data they were trained on. Inadvertent plagiarism, copyright infringement, bias and deliberate manipulation are obvious examples of the problems bad training data can cause.

To address this concern, we take the following proactive measures to ensure the security of our AI model:

  • We continuously scan for vulnerabilities, malware and corruption across the AI/ML pipeline
  • We review and harden all API and plug-in integrations to third-party models
  • We configure enforcement policies, controls and role-based access control (RBAC) around ML models, artifacts and data sets. This ensures that no one person or system has access to all the data or all of the model functions (see the sketch after this list)
  • We are always open and upfront about the data used to train the model, to provide much-needed clarity across the business
  • We created guidelines around bias, privacy, IP rights, provenance, and transparency to give direction to employees as they make decisions about when and how to use Generative AI
  • We used Reinforcement Learning from Human Feedback (RLHF) to fine-tune the model and secure it from harm
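The separation-of-duties idea behind the RBAC measure above can be sketched in a few lines. Role and permission names here are hypothetical; in practice these policies live in the identity platform, not in application code.

```python
# No single role holds permissions across training data, model
# artifacts and deployment, so no one person can touch everything.
ROLE_PERMISSIONS = {
    "data-engineer": {"read:training-data", "write:training-data"},
    "ml-engineer": {"read:model-artifacts", "write:model-artifacts"},
    "ml-ops": {"deploy:model"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Check a role's permission against the policy table."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("ml-ops", "deploy:model")
assert not is_allowed("data-engineer", "deploy:model")  # duties stay separated
```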

We also need to address attempts by hackers to use malicious prompts to jailbreak models and gain unwarranted access, steal sensitive data, or introduce bias into outputs. One example is a “prompt injection” attack, where a model is instructed to deliver a false or bad response for nefarious ends. For instance, including words like “ignore all previous directions” in a prompt could bypass controls that developers have added to the system. Another concern involves model denial of service, where attackers overwhelm the LLM with inputs that degrade the quality of service and incur high resource costs. We also consider and secure against model theft, in which attackers craft inputs to harvest model outputs and train a surrogate model that mimics the behavior of the target model. To secure our Generative AI tools against these types of attacks, we took the following steps:

  • Monitored for malicious inputs, such as prompt injections, and for outputs containing sensitive data or inappropriate content (a monitoring sketch follows this list)
  • Implemented new defenses such as machine learning detection and response (MLDR), which can detect and respond to AI-specific attacks such as data poisoning, model evasion and model extraction
  • Configured alerts and integrated them with our Security Information and Event Management (SIEM) system
  • Focused on our network infrastructure security. This is important in Generative AI security because LLMs are built and run on top of infrastructure. To secure the infrastructure, we deployed the AI systems on a dedicated network segment; using a separate segment with restricted access to host AI tools enhances both security and availability. We also hardened the operating systems, databases and network devices to ensure security around the Generative AI tools
  • Conducted regular penetration testing exercises against Generative AI tools, aiming to identify security vulnerabilities before deployment into the production environment
  • Established a governance process around Generative AI tools, covering policies, designated roles and responsibilities, security controls and security awareness training. All relevant documentation is readily available to employees when they need it
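As a concrete example of the monitoring step mentioned above, the sketch below flags prompts matching known injection phrases and emits a structured event that a SIEM can ingest and alert on. The pattern list and event schema are illustrative assumptions, not our actual MLDR or SIEM configuration.

```python
import json
import logging
import re

logger = logging.getLogger("genai.security")

# Illustrative signatures for common prompt injection phrasings.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous (directions|instructions)", re.I),
    re.compile(r"disregard (the )?system prompt", re.I),
]

def inspect_prompt(prompt: str, user: str) -> bool:
    """Return True if the prompt looks like an injection attempt, and
    log a structured event for the SIEM to alert on."""
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        logger.warning(json.dumps({
            "event": "prompt_injection_suspected",
            "user": user,
            "prompt_excerpt": prompt[:200],
        }))
        return True
    return False
```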

Generative AI is an emerging technology that simplifies many tasks and may positively impact our everyday lives. But every technology comes with inherent security risks that business leaders must understand, so before implementing Generative AI solutions, organizations should do proper due diligence. You must strike a proper balance between agility and security for the benefit of your organization.

There are ever-evolving models, frameworks and technologies available to help guide AI programs forward with trust, security and privacy throughout. Focusing on trustworthy AI strategies, trust by design, trusted AI collaboration and continuous monitoring helps build and operate successful systems.

Devdatta Mulgund, CISSP, CCSP, has over a decade of experience in banking, financial services, insurance and telecoms. Mulgund currently holds a senior security consultant role, with responsibility for application security testing, DevSecOps integration, penetration testing, cloud security, security architecture review, container security and Kubernetes security.