Faced with users adopting AI tools at a variety of levels, Kelven Leverett, CISSP, realized that he and his organization not only needed to develop and implement clear rules and policies on AI use, but that it was also time to rethink the scope of his own role.
Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.
When AI tools first started gaining traction in the workplace, I wasn’t alarmed. Like many in cybersecurity, I assumed we were still a few years away from serious integration beyond casual chatbot use. What changed my perspective was seeing employees actively seek out AI tools to enhance their work; they weren’t just using tools to draft emails, but to solve meaningful challenges. It was the moment I realized that AI wasn’t theoretical but was already here. And, if we didn’t establish guardrails, we’d be left reacting instead of guiding.
This realization sparked a shift in how I viewed my role. My job was no longer just about protecting systems; it was equally about leading conversations on how to balance innovation with responsibility.
My Learning Journey: AI-Assisted C-File Reviews
One of the first major applications of AI in our environment came through the District Attorney’s office, which was piloting a tool designed to assist with the review of Criminal Files (C-Files). These files contain detailed records of an individual's criminal history, including arrests, charges, sentencing information and incarceration data. They are used to support sentencing recommendations and to identify individuals who may qualify for resentencing or early release.
The tool promised improved efficiency and consistency. However, as someone responsible for information security, I had immediate questions. Where is this data processed? Are the servers located in the U.S.? Is the data encrypted? This was Criminal Justice Information Services (CJIS) regulated data and it needed to be handled with extreme care.
We decided against starting our pilot using a cloud-based version of the tool; the risk of processing this data externally, even with contractual safeguards, felt too high. After discussions, our chosen vendor agreed to provide a locally processed version of the tool. This gave us better control, increased our comfort level and allowed us to move forward more responsibly.
Ethical questions also surfaced. Would this tool influence decisions about who might be released from jail? Would there be proper human oversight? These concerns helped shape the pilot and reinforced the need for clearly defined boundaries.
Leading with Governance
To limit risk, we allowed only one department manager to test the tool. This approach minimized data exposure and gave us time to evaluate both the technology and the process in a tightly scoped environment.
I led the effort to vet the vendor’s security posture and to ensure its controls aligned with legal and policy requirements. We reviewed documentation, evaluated access protocols and confirmed compliance with CJIS standards. As AI adoption accelerated, it became clear that our existing vendor evaluation process needed an update. I revised our Vendor Security Assessment forms to include AI-specific questions that help determine how a technology functions and how it protects data.
The updated form now includes questions such as:
- Does your solution use machine learning or generative AI capabilities?
- Is customer or user data used to train your models?
- Where is AI processing performed? On-premises, in the cloud, or a hybrid?
- Can AI-generated outputs be audited or explained to end users?
- What controls are in place to prevent unauthorized access to AI insights?
- How does your solution comply with regulations such as CJIS, HIPAA, or CCPA?
These additions were not about creating roadblocks. They were designed to ensure we asked the right questions before introducing AI into environments that involve sensitive data.
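To make the intent concrete, here is a minimal sketch of how questions like these could be captured as structured data in an internal tracking tool, so vendor responses can be compared and unanswered high-risk items flagged. The field names, risk weights and helper function are illustrative assumptions, not our actual assessment form.

```python
# Illustrative sketch only: capturing AI-specific vendor assessment questions
# as structured data so responses can be tracked across vendor reviews.
# Risk ratings and field names are hypothetical, not the actual form.
from dataclasses import dataclass


@dataclass
class AssessmentQuestion:
    question: str
    risk_if_unanswered: str  # "high", "medium" or "low"
    vendor_response: str = ""


AI_VENDOR_QUESTIONS = [
    AssessmentQuestion("Does your solution use machine learning or generative AI capabilities?", "medium"),
    AssessmentQuestion("Is customer or user data used to train your models?", "high"),
    AssessmentQuestion("Where is AI processing performed (on-premises, cloud, or hybrid)?", "high"),
    AssessmentQuestion("Can AI-generated outputs be audited or explained to end users?", "medium"),
    AssessmentQuestion("What controls prevent unauthorized access to AI insights?", "high"),
    AssessmentQuestion("How does your solution comply with CJIS, HIPAA, or CCPA?", "high"),
]


def unanswered_high_risk(questions):
    """Return the high-risk questions that have no vendor response yet."""
    return [q.question for q in questions
            if q.risk_if_unanswered == "high" and not q.vendor_response]


if __name__ == "__main__":
    for item in unanswered_high_risk(AI_VENDOR_QUESTIONS):
        print("Outstanding high-risk item:", item)
```

Treating the questionnaire as data rather than prose makes it easier to see at a glance which answers are still outstanding when several vendors are being evaluated at once.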
We also began requiring departments to submit a business case for any AI tool they wanted to implement. This allowed us to understand the use case, to evaluate potential risks and to provide targeted guidance for the specific role or function. This shift improved internal alignment. Departments stopped viewing information security as an obstacle. They started seeing us as partners, helping them use technology responsibly and safely.
Lessons Learned
Our work reinforced a simple truth: there is no finish line when it comes to responsible AI implementation. The tools will continue to evolve and the pressure to adopt quickly will remain. But that does not mean we should move forward without thoughtful planning. Even when the path feels steep, it’s worth the effort. The goal isn’t perfection, but deliberate progress.
I’ve also learned that technical solutions alone are not enough; culture, training and policy matter just as much. Encryption and access control are essential, but they’re only effective if users understand how to make smart decisions. That’s why I prioritize communication: I meet regularly with department leaders, provide briefings, and help design training tailored to their operational realities. A generic approach doesn’t work because every team has different responsibilities and levels of risk.
Continuing the Work
There is still much to do. As enterprise AI tools become more embedded into daily workflows, our focus is shifting to developing more detailed safeguards. This includes refining role-based access models, expanding data classification efforts across departments and ensuring that data loss prevention (DLP) enforcement aligns with how people actually work, rather than just how policies are written.
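As a rough illustration of what a role-based model tied to data classification might look like, the sketch below maps hypothetical roles to the highest classification label they may submit to an approved AI tool. The role names, labels and ceilings are assumptions for illustration, not our deployed policy.

```python
# Illustrative sketch only: expressing role-based access to AI tools in terms
# of data classification labels, so DLP rules can follow how people actually
# work. Role names, labels and the allow-list are hypothetical.
CLASSIFICATION_ORDER = ["public", "internal", "confidential", "cjis_regulated"]

# Highest classification each role may submit to an approved AI tool.
ROLE_MAX_CLASSIFICATION = {
    "general_staff": "internal",
    "department_manager": "confidential",
    "da_reviewer_pilot": "cjis_regulated",  # only within a locally processed pilot tool
}


def ai_use_permitted(role: str, data_label: str) -> bool:
    """Return True if the role may send data with this label to an approved AI tool."""
    ceiling = ROLE_MAX_CLASSIFICATION.get(role, "public")
    return CLASSIFICATION_ORDER.index(data_label) <= CLASSIFICATION_ORDER.index(ceiling)


# Example: a department manager working with confidential data is allowed,
# but general staff submitting CJIS-regulated data would be blocked.
assert ai_use_permitted("department_manager", "confidential")
assert not ai_use_permitted("general_staff", "cjis_regulated")
```

Anchoring DLP rules to the same classification labels means enforcement follows the data itself rather than any one tool.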
We’re also beginning to explore how to bring more transparency and auditability to AI-generated outputs. This will become more important as these tools start influencing decisions in areas such as legal review and personnel matters.
Another key area is training. We’re designing sessions that are tailored to each department’s needs and responsibilities. These sessions go beyond teaching features and instead emphasize the risks, expectations and protections specific to our environment. The goal is to create a culture where responsible AI use becomes second nature.
The work will continue to evolve. As technology changes, our frameworks, questions and cross-departmental collaboration will change with it. This isn’t about achieving a final state; it’s about making steady progress and building a foundation that can adapt and improve over time.
Kelven Leverett, CISSP, has over 15 years of experience in cybersecurity and IT, and has held technical and leadership roles with responsibility for securing sensitive systems, managing audits and building governance programs. His cybersecurity work spans vulnerability management, incident response, data privacy and the ethical adoption of emerging technologies such as AI.