Could (and should) artificial intelligence (AI) be used to autonomously grant or deny access to systems, data or software infrastructure without human oversight? Aadithya Francis, CISSP, who has spent years watching identity and access management (IAM) programs struggle under the weight of modern enterprise complexity, offers his perspective.

Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.

Whether it’s to scale IAM processes that place a heavy cognitive demand on decision-makers (access approvers and access reviewers) or to reduce the ‘rubber-stamping’ of access decisions, the idea of using AI to make access decisions on behalf of humans has been gaining momentum. I understand why so many people are keen to let AI step in: human approvers are overwhelmed, and reviewers rubber-stamp because they simply don’t have the bandwidth. Every year, our cloud footprints expand, APIs multiply and regulatory demands tighten. Could and should AI be used to autonomously grant or deny access to systems, data or software infrastructure without human oversight?

I’ll start by acknowledging the appeal: in theory, AI and machine learning (ML) could analyze patterns that a human decision-maker would never spot, assess risk in real time and adapt access controls dynamically – things that humans can’t do at scale. The idea that AI could streamline role lifecycles or fine-tune entitlements is undeniably attractive.

However, in practice, I think it’s far too early to trust an AI model with the authority to grant or deny access to sensitive systems or data, especially in highly regulated sectors such as financial services or for organizations adopting zero trust or least-privilege access models. Here’s why.

Lack of Explainability and Auditability

Access decisions require traceability. Regulators expect it. Auditors expect it. I expect it. Yet most AI systems, especially deep learning models, can’t give a clear, human-understandable rationale for why a decision was made. This is an issue. If I can’t articulate why, say, a user retained a privileged permission, then how am I supposed to stand behind that decision when challenged?

While I acknowledge there are ways to generate post hoc rationales for why a decision was made, such rationales often lack context, are impossible for someone without a good grasp of the system to understand and require nuanced human interpretation to make sense.

Drift, Bias and Fragility

I’ve worked and experimented enough with machine learning models to know they don’t stay accurate forever; they drift and degrade. AI models pick up patterns in old, biased data and then repeat that bias at scale. In my experience, when historic approval patterns have been flawed, AI has simply automated and amplified those flaws.
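
To make that concrete, here is a minimal sketch of the kind of drift check I’m describing: comparing a feature’s distribution at training time with its live distribution using a two-sample Kolmogorov–Smirnov test. The feature, the data and the alert threshold are invented for illustration, not drawn from a real IAM deployment.

```python
# Minimal drift check: compare a feature's distribution at training time
# with its live distribution using a two-sample Kolmogorov-Smirnov test.
# The feature, data and alert threshold are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# e.g., "access requests per user per week" observed when the model was trained...
training_window = rng.normal(loc=10.0, scale=2.0, size=5_000)
# ...versus the same feature observed this month (the workload has shifted).
current_window = rng.normal(loc=13.0, scale=3.0, size=5_000)

statistic, p_value = ks_2samp(training_window, current_window)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}): schedule retraining.")
else:
    print("No significant drift detected.")
```

In production the same idea extends to every model input, with retraining triggered once enough features have shifted.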

Risk of Adversarial Manipulation

I’ve seen how easily adversaries can exploit ML models. Attacks such as poisoning training data and crafting adversarial inputs aren’t theoretical anymore – and I’ve personally encountered enough badly (albeit inadvertently) crafted inputs producing completely unexpected and unacceptable outputs to know the problem is real. If AI becomes the new gatekeeper, then the decision-making system itself becomes a new attack surface.

Inference and the Risk of Flat Networks

One of the more subtle risks I think people underestimate is AI’s ability to infer sensitive information from patterns. If an AI system has visibility into the entire access dataset, it may inadvertently reveal sensitive information it should not have. Most human reviewers don’t have this breadth of visibility, or have it only in limited form, due to organizational restrictions and segregation-of-duties policies. Traditional access control models and interfaces aren’t designed to manage this kind of risk, so additional controls and monitoring are required.

Governance, Liability and Trust

If an AI model grants inappropriate access, I am still accountable. The organization is still accountable. But we don’t yet have established standards for certifying or auditing machine learning systems in access control, so the liability picture is murky and dangerous. That alone is enough for me to hit pause on letting AI make access decisions for me.

What AI Can Do for Access Control Today

Despite my skepticism about autonomy, I’m still optimistic about using AI as a decision support system. That is to say: I think AI shines when it augments human judgment, not replaces it. As of now, I’ve experimented with and successfully deployed AI and ML models in IAM in the following areas:

  • Highlighting Risky Access: Using learning systems to understand and identify high-risk access. Exposing this risk and significance data to approvers and reviewers lets them decide more confidently, since decision quality depends on the quality of the context provided to the decision-maker.
  • Detecting Anomalies: Using ML to identify access clusters and detect outliers from normal access patterns. Information about anomalous access is surfaced to our decision-makers, with visual cues and colors that focus attention on problematic access (the first sketch after this list illustrates this kind of scoring).
  • Suggesting and Enhancing Roles or Policies: Using pattern-analysis models to continuously look for groupings of access and groupings of users. These groupings are then used to suggest, create and enhance roles and policies, simplifying access governance (see the role-mining sketch below).
  • Making Decisions Understandable: Leveraging generative AI to infer and add context to access decisions. We used generative AI to improve the readability of descriptions and to infer context from metadata attached to the elements of the decision – including the beneficiary, the group, the organization and the justification. The system then uses this data to give the decision-maker a summary of the access decision (the last sketch below shows the prompt-assembly step).

These are clearly solutions where AI adds intelligence without making decisions autonomously.
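
To give a flavor of the first two items, here is a minimal sketch of unsupervised anomaly scoring over access events, using an IsolationForest and mapping its scores to risk tiers a reviewer might see. The features, thresholds and tier names are invented for illustration.

```python
# Sketch: unsupervised anomaly scoring over access events, mapped to risk
# tiers for a human reviewer. Features and cut-offs are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Hypothetical per-event features: [hour_of_day, resources_touched, failed_logins_7d]
historical_events = rng.normal([10, 3, 0.2], [2, 1, 0.3], size=(1_000, 3))
model = IsolationForest(contamination=0.01, random_state=7).fit(historical_events)

def risk_tier(event: np.ndarray) -> str:
    """Map the anomaly score to a tier shown to the reviewer (never an auto-deny)."""
    score = model.decision_function(event.reshape(1, -1))[0]  # lower = more anomalous
    if score < -0.1:
        return "HIGH"
    if score < 0.0:
        return "MEDIUM"
    return "LOW"

# An off-hours burst touching many resources should score worse than the norm.
print(risk_tier(np.array([3.0, 40.0, 5.0])), risk_tier(np.array([10.0, 3.0, 0.0])))
```

The point is that the model produces a signal for the reviewer; the grant-or-deny decision stays human.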
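
The role-mining item can be sketched as clustering a binary user-by-entitlement matrix: users with near-identical entitlement sets become candidate roles. The matrix below is toy data; a real system would build it from the IAM entitlement store.

```python
# Sketch: candidate-role mining by clustering a binary user x entitlement matrix.
# Toy data; a real system would derive the matrix from IAM entitlement records.
import numpy as np
from sklearn.cluster import KMeans

entitlements = ["crm_read", "crm_write", "hr_read", "payroll_admin", "vpn"]
users = np.array([
    [1, 1, 0, 0, 1],   # sales-like users
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],   # HR-like users
    [0, 0, 1, 1, 0],
    [0, 0, 1, 0, 0],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)

for role_id in sorted(set(labels)):
    members = users[labels == role_id]
    # Entitlements held by most members of the cluster form the candidate role.
    common = [e for e, held in zip(entitlements, members.mean(axis=0)) if held >= 0.5]
    print(f"Candidate role {role_id}: {common}")
```

The candidate roles are suggestions for a human role owner to review and refine, not policies applied automatically.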
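
For the last item, the interesting step is assembling the decision’s metadata into a prompt. The sketch below does that and sends it through an OpenAI-compatible chat client; the client, the model name and the metadata fields are my assumptions for illustration, not the system described above.

```python
# Sketch: summarizing an access decision for a reviewer with a generative model.
# Assumes an OpenAI-compatible client; model name and metadata fields are illustrative.
from openai import OpenAI

decision = {
    "beneficiary": "j.doe (Sales Analyst)",
    "group": "CRM-Prod-ReadWrite",
    "organization": "EMEA Sales",
    "justification": "Quarterly pipeline reporting",
}

prompt = (
    "Summarize this access request in two plain-English sentences for a reviewer, "
    "and note anything the reviewer should double-check:\n"
    + "\n".join(f"{k}: {v}" for k, v in decision.items())
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # shown alongside, never instead of, the raw data
```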

Where AI Will Eventually Fit into the Process

I don’t believe autonomous AI access decisions are a pipe dream – far from it. With advances in explainable AI, hybrid rule-plus-ML models, tighter guardrails and better governance frameworks, I can see a future where AI is granted a narrow, carefully controlled domain of autonomy. This is what I think we need:

  • Explainable AI and Hybrid Systems: Advances in explainable AI could make it possible for IAM teams to easily understand and explain why a model made a particular access decision (the first sketch after this list shows what per-decision feature attribution looks like). Hybrid models that combine rules-based logic with machine learning may offer more predictable behavior with AI-enhanced context. With such advancements, I’d be better equipped to explain an access decision made autonomously by AI.
  • Guardrails: Even if AI systems are allowed to make decisions, secondary systems can validate them in real time. For example, when unexpected access is detected, the system could require an IAM team member’s confirmation after the decision is made, or could trigger alerts for review (the decision-gate sketch below illustrates one shape this could take). With the option to provide my confirmation for high-risk decisions, and with continuous monitoring, decision-making systems would become more acceptable to adopt.
  • Continuous Retraining: IAM teams should commit to continuous retraining, validation and adversarial testing of AI models used for access decisions. This could include ‘red teaming’ the AI to probe its behavior and identify potential failure points (the robustness-probe sketch below is a trivial starting point). With such an approach, I’d be more confident adopting these systems, with some assurance that drift and bias are being kept in check.
  • Scoped Autonomy: IAM teams could use AI to grant or deny access only within narrow, low-risk domains; high-value systems and sensitive data would remain under strict human control. This reduces the blast radius of any AI error (the decision-gate sketch below combines this scoping with the guardrail idea). Large-scale adoption of autonomous systems for low-risk decisions is something I can foresee in the near future.
  • Governance Frameworks and Regulatory Evolution: As standards such as NIST’s AI Risk Management Framework are adopted more widely by organizations, I look forward to identifying standard pathways to adopt AI responsibly, with clear audit trails, explainable models and built-in oversight mechanisms.
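
To illustrate the explainability item, here is a minimal sketch of per-decision feature attribution using the shap library on a toy tree-based risk model. The features, the data and the model are all invented for the example; the point is only the shape of the output an IAM team would want to see.

```python
# Sketch: per-decision feature attributions for a tree-based risk model,
# assuming the shap library is installed. Features and data are illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["tenure_months", "peer_overlap", "requests_last_30d"]

X = rng.normal(size=(500, 3))
risk = 0.7 * X[:, 2] - 0.3 * X[:, 1]  # toy risk score the model learns

model = RandomForestRegressor(random_state=0).fit(X, risk)
explainer = shap.TreeExplainer(model)

decision = X[:1]                        # the single access event to explain
contributions = explainer.shap_values(decision)[0]
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")      # which features pushed the score up or down
```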
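
The guardrail and scoped-autonomy items can be combined in a single decision gate: the AI is allowed to act only on low-risk, in-scope requests, and everything else is escalated to a person. The tier threshold, the low-risk scope and the escalation hook below are all hypothetical.

```python
# Sketch: a decision gate giving an AI model scoped autonomy with human guardrails.
# The risk threshold, low-risk scope and escalation hook are all hypothetical.
from dataclasses import dataclass

LOW_RISK_SCOPE = {"wiki", "sandbox", "training-portal"}  # assumed low-value systems

@dataclass
class AccessRequest:
    user: str
    resource: str
    ai_decision: str   # "grant" or "deny", proposed by the model
    ai_risk: float     # model's risk estimate in [0, 1]

def gate(request: AccessRequest) -> str:
    """Let the AI act only inside a narrow, low-risk domain; escalate the rest."""
    if request.resource in LOW_RISK_SCOPE and request.ai_risk < 0.2:
        audit_log(request, actor="ai")   # every autonomous decision is logged
        return request.ai_decision
    escalate_to_human(request)           # guardrail: a person confirms
    return "pending-human-review"

def audit_log(request: AccessRequest, actor: str) -> None:
    print(f"[audit] {actor} decided {request.ai_decision}: {request.user} -> {request.resource}")

def escalate_to_human(request: AccessRequest) -> None:
    print(f"[queue] human review: {request.user} -> {request.resource} (risk {request.ai_risk:.2f})")

print(gate(AccessRequest("j.doe", "wiki", "grant", 0.05)))        # auto-decided
print(gate(AccessRequest("j.doe", "payroll-db", "grant", 0.05)))  # escalated despite low score
```

Note that the sensitive resource is escalated even when the model is confident; scope, not model confidence, is what bounds the blast radius.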
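
And red-teaming an access model can start very simply: perturb a single request slightly and count how often the decision flips. The sketch below does exactly that on a toy model; the model, features and perturbation scale are invented for illustration.

```python
# Sketch: a simple robustness probe ('red teaming' a decision model) that
# perturbs one request slightly and counts decision flips. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = (X @ np.array([1.0, -0.5, 0.2]) > 0).astype(int)  # toy approve/deny labels
model = LogisticRegression().fit(X, y)

request = X[:1]
baseline = model.predict(request)[0]

flips = 0
for _ in range(1_000):
    noisy = request + rng.normal(scale=0.05, size=request.shape)  # tiny perturbations
    flips += int(model.predict(noisy)[0] != baseline)

print(f"Decision flipped on {flips}/1000 perturbed copies of the same request.")
# A high flip rate near realistic inputs is a cue to retrain or tighten the model's scope.
```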

The Bottom Line

For now, I believe AI should remain an advisor and not be positioned as the gatekeeper. The consequences of a bad access decision are simply too severe. The technology isn’t mature enough to shoulder that responsibility without human oversight.

If I were advising any cybersecurity team today, I’d suggest they start small and focus on low-risk use cases, use AI to inform decisions rather than make them, and monitor every AI-generated insight closely. I’d also stress the importance of demanding explainability from any AI system they adopt and of putting strong governance frameworks in place now, well before fully autonomous decision-making becomes a reality.

Fully autonomous access decisions may come sooner than we expect. When the time is right, I’ll be ready to revisit my stance. But today? I still believe the smartest choice is to keep humans firmly in control, with AI providing decision support.

Aadithya Francis, CISSP, has 20 years of experience in the engineering of IAM solutions and security models for large enterprises. He has held management and technical roles, with responsibility for enterprise-scale cybersecurity solutions and the management of large IAM programs. His cybersecurity work spans user-centric IAM, modern access models and governance at scale.
