For a long time, artificial intelligence (AI) governance mostly lived in presentations and strategy discussions. Regulation was something we tracked, but it didn’t strongly shape day-to-day decisions. That changed once the EU AI Act became something we must realistically prepare to operate under, says Ali Nouman, CISSP.

Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.

The EU AI Act focuses less on model sophistication and more on whether we can demonstrate that our AI systems are controlled, explainable and accountable – which is where many of us are discovering uncomfortable gaps. From my work leading AI risk and governance initiatives, I’ve found that the real difficulty is not understanding the regulation but rather translating its expectations into practical controls within existing business processes.

One of the first mindset changes we had to make was shifting the conversation. Teams often asked whether a specific AI solution was allowed. A more useful question became: what happens if this system makes a mistake, who is affected, and how serious could the consequences be? That question led us directly into the Act’s risk-based framework.

When AI influences hiring, access to services, financial outcomes or critical operations, it may fall into high-risk territory under the regulation. For such systems, the Act introduces structured requirements around risk management, data governance, documentation, traceability and human oversight that go beyond traditional IT controls.
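A first-pass screen along these lines can be captured in a few lines of code. This is a minimal sketch, not the Act’s actual classification logic: the domain list is illustrative shorthand for the kinds of areas named above, not the full Annex III enumeration, and a positive result only flags a use case for detailed legal review.

```python
# Illustrative domain labels; the real high-risk categories are defined
# in Annex III of the Act and require legal interpretation.
HIGH_RISK_DOMAINS = {
    "hiring",
    "access_to_services",
    "financial_outcomes",
    "critical_operations",
}

def may_be_high_risk(use_case_domains: set) -> bool:
    """First-pass screen: flag a use case for detailed review if it
    touches any domain commonly associated with high-risk territory."""
    return bool(use_case_domains & HIGH_RISK_DOMAINS)
```

In practice a screen like this is only the trigger for the structured requirements that follow; it never replaces the formal classification.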

In several cases, I found that our technology capability had moved faster than our governance maturity.

Data Quality is a Governance Issue

For high-risk systems, the Act requires that training, validation and testing datasets are relevant, representative and, to the extent possible, free of errors and complete. This is to reduce bias and discriminatory outcomes. It also expects appropriate data governance and documentation practices.

In one initiative, we aligned a machine learning workflow with these expectations. The model performed well, but the data revealed deeper issues. Training datasets came from multiple systems, with inconsistent labels and historical records that reflected outdated practices.

We learned that clean data is not a one-time technical fix. It requires ongoing governance, including lineage tracking, documented assumptions, bias review and clear ownership. Perfect data was not realistic. What mattered was being able to demonstrate that we understood the limitations and were actively managing the associated risks.
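The ongoing governance items above can be made concrete as a checklist attached to each dataset. The sketch below is a hypothetical structure of my own, assuming field names like `owner` and `last_bias_review` that the Act does not prescribe; the point is that gaps become visible and assignable rather than implicit.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Governance metadata for one training/validation/test dataset.

    All field names here are illustrative, not mandated by the Act.
    """
    name: str
    source_systems: list                 # lineage: where the records came from
    owner: str = ""                      # accountable data owner
    documented_assumptions: list = field(default_factory=list)
    last_bias_review: str = ""           # ISO date of most recent bias review
    known_limitations: list = field(default_factory=list)

def governance_gaps(record: DatasetRecord) -> list:
    """Return the governance items still missing for this dataset."""
    gaps = []
    if not record.owner:
        gaps.append("no accountable owner assigned")
    if not record.last_bias_review:
        gaps.append("no bias review on record")
    if not record.documented_assumptions:
        gaps.append("assumptions not documented")
    if not record.known_limitations:
        gaps.append("limitations not documented")
    return gaps
```

A record with open gaps is not automatically blocked; what matters is that someone owns closing or accepting each gap, which is exactly the demonstrable risk management described above.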

Your Role in the Supply Chain May Change

The Act distinguishes between providers and deployers, with different compliance obligations attached to each. A deployer may assume provider-level responsibilities if it substantially modifies a system or places it on the market under its own name or trademark.

In one case, a business unit integrated a third-party AI tool and initially assumed we were acting solely as a deployer. However, we embedded the tool into a customer-facing process, added our own decision logic and presented the output under our brand.

After reviewing the Act’s definitions, we recognised that our level of integration and branding could alter our regulatory responsibilities. This meant stronger documentation, clearer accountability and more structured oversight than we had first assumed.

As a result, we introduced a formal role classification step for every AI use case to determine whether we are acting as a provider, a deployer or both. That clarity at the beginning has reduced ambiguity later in the lifecycle.
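The role classification step can be sketched as a simple decision function. This is a simplified reading of the Act’s triggers, not a legal determination: the parameter names are my own shorthand, and the output is a prompt for review rather than a final classification.

```python
def classify_role(developed_in_house: bool,
                  substantially_modified: bool,
                  own_branding: bool,
                  operates_system: bool) -> set:
    """Rough first-pass classification of our role for one AI use case.

    Triggers are simplified from the Act's definitions; legal review
    still makes the final call, especially in borderline cases.
    """
    roles = set()
    # Provider triggers: building the system, substantially modifying it,
    # or placing it on the market under our own name or trademark.
    if developed_in_house or substantially_modified or own_branding:
        roles.add("provider")
    # Deployer trigger: using the system in our own operations.
    if operates_system:
        roles.add("deployer")
    return roles
```

Applied to the third-party tool described above, the branding and added decision logic would have flagged possible provider obligations at intake instead of mid-lifecycle.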

Human Oversight Must Be Real

For high-risk systems, the Act requires that human oversight is designed into the system in a way that allows individuals to understand limitations, interpret outputs and intervene or override when necessary.

We discovered that human-in-the-loop can easily become symbolic. In one workflow, analysts were reviewing model outputs before final decisions were made. On paper, this appeared to satisfy oversight expectations. In practice, workload pressure meant reviews were often quick and procedural.

To address this, we redesigned the process so reviewers could see confidence indicators and contextual explanations. We adjusted workloads so they had the time and authority to question outputs. Oversight only became meaningful when people had both the insight and the operational capacity to intervene.
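One way to operationalise that redesign is to attach the reviewer’s context to each output and route low-confidence cases to a full review. The sketch below assumes a hypothetical confidence threshold and field names; the Act requires effective oversight, not any particular implementation.

```python
def review_queue_entry(output: str,
                       confidence: float,
                       explanation: str,
                       threshold: float = 0.85) -> dict:
    """Package a model output with the context a reviewer needs.

    The 0.85 threshold is an illustrative value chosen for this sketch,
    not one the Act prescribes; tune it to the workflow's risk appetite.
    """
    return {
        "output": output,
        "confidence": confidence,
        # Contextual explanation shown alongside the output, e.g. the
        # main factors behind the recommendation and known limitations.
        "explanation": explanation,
        # Low-confidence cases get a full review with extra time budgeted,
        # rather than a quick procedural sign-off.
        "requires_full_review": confidence < threshold,
    }
```

Pairing this with workload limits on how many full reviews land on one analyst is what kept the override authority real rather than symbolic.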

Moving Governance into Operations

The EU AI Act is pushing AI governance out of policy documents and into daily operations. Progress depends less on advanced algorithms and more on structure. That structure includes understanding use cases, clarifying regulatory roles, documenting decisions and monitoring systems over time.

In our experience, the foundations that made the greatest difference were straightforward. We built a clear inventory of AI use cases, formally classified risk and organisational role, established minimum data governance standards, ensured oversight was practical rather than symbolic, and clarified vendor responsibilities early.

Each of these steps builds traceability. Traceability then becomes the foundation for demonstrating control.

While the Act is often discussed in terms of penalties, its longer-term impact is about trust. Regulators and customers increasingly want to understand not only what AI systems do, but how they are governed. Compliance may be required, but credible governance will differentiate mature organisations from reactive ones.

The EU AI Act is accelerating a shift that was already necessary. AI is moving from experimentation toward accountable and well-governed operation. Those who make that transition early will be better positioned with regulators and with the people affected by their systems.

Ali Nouman, CISSP, has 18 years of experience across retail, enterprise, fintech and highly regulated environments. He has held global cybersecurity leadership roles with responsibility for maturing security capabilities and translating complex risk into measurable operational controls. His work spans strategy, risk transformation, SOC design and incident response across distributed and cloud environments.
