Sainag Nethala, currently an ISC2 Candidate, shares his experiences and perspectives on the challenge of balancing the business need for visibility into users, operations and systems to maintain a strong security posture with the requirements and expectations of data privacy, regulatory compliance and maintaining trust.

Sainag Nethala, ISC2 Candidate

Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.

As a technical product specialist working in security and observability solutions, I’ve noticed a persistent conflict in today’s technology environment. On one side is the unquenchable desire for in-depth security visibility to identify and address constantly changing threats. On the other, there’s the unavoidable need to adhere to an expanding regulatory framework and safeguard data privacy. I frequently refer to it as "the observability dilemma." Navigating it has been a hands-on, frequently difficult, but ultimately rewarding experience for me. Here’s what I’ve learned about it.

The Modern Data Paradox

Like many others, I've seen companies start with a "collect everything" approach to observability. A unified view from a single monitoring stack often seems like a revolutionary step for finding threats and responding to incidents. Teams want to see everything because they think that more data means more security.

And, for a while, their dashboards are awash with new information. It's easier to find problems and conduct investigations. However, as the data lake gets bigger, so does a creeping sense of worry.

Companies often take in logs and metrics that are useful for security but also contain sensitive personally identifiable information (PII), customer financial information and non-public business data. The very asset being built for security begins to transform into a significant risk itself – a honeypot for breaches (as highlighted by broad data breach statistics) and a compliance headache, especially concerning the high cost of insider risks. The stark reality hits: their quest for visibility directly clashes with the duty to protect data, a core tenet of regulations like GDPR, CCPA and many others.

In the course of guiding clients, it became evident to me that data collection alone is insufficient and that meticulous control over who sees what, and why, is paramount. This is where robust access control systems transition from being a "nice-to-have" to being an essential component of any sound security strategy. The objective isn't to restrict security, but to enable responsible security. Achieving a strong access framework isn't instantaneous; it demands a methodical, multifaceted approach.

Creating a Sturdy Access Control System

A surprisingly impactful first step that I now advise is the establishment of rigorous role naming conventions and data classification standards. While seemingly mundane, a consistent policy is essential for a scalable and manageable access control system. Successful teams adopt clear naming patterns that indicate data sensitivity, access level and functional role, which significantly aids auditing and makes it far easier to understand who can access what.
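As a minimal illustration of what such a convention might look like (the pattern and tier names here are hypothetical, not a standard), a documented role-name format can be paired with a small validation check so that non-conforming roles never get created:

```python
import re

# Hypothetical convention: <function>_<data-sensitivity>_<access-level>,
# e.g. "soc_restricted_read" or "netops_internal_write".
ROLE_PATTERN = re.compile(
    r"^(?P<function>[a-z]+)_"
    r"(?P<sensitivity>public|internal|restricted)_"
    r"(?P<access>read|write|admin)$"
)

def validate_role_name(name: str) -> bool:
    """Return True only if a proposed role name encodes function, sensitivity and access level."""
    return ROLE_PATTERN.match(name) is not None

assert validate_role_name("soc_restricted_read")
assert not validate_role_name("AnalystsTempRole2")  # rejected: sensitivity and access level are not encoded
```

Because the sensitivity tier is embedded in the name itself, an auditor can tell at a glance which roles touch restricted data without opening each role definition.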

With this foundation, I then guide the organizations I work with to structure access control around three primary role types:

  • Data Segregation Roles
  • Functional/Application Roles
  • Baseline/General Roles

Data Segregation Roles: Guardians of Your Data

I recommend meticulously inventorying data sources and classifying them by sensitivity (e.g., highly sensitive, internal, public). Specialized roles are then established, granting access only to necessary data sets.

  • Example: An analyst might access general logs but require explicit, time-limited, approved access via a restricted role for highly sensitive data during an investigation. The principle of least privilege is crucial here.
  • Result: A dramatic reduction in the "blast radius" of accidental data exposure, ensuring analysts see only pertinent data.

This becomes the core mechanism for data segregation.
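To make the idea concrete, here is a minimal, platform-agnostic sketch of data segregation with time-limited elevated access. The data sets, tiers and role names are illustrative; a real observability platform would express the same logic through its own role and index/filter configuration.

```python
from datetime import datetime, timedelta, timezone

# Illustrative data-source classification: highly_sensitive / internal / public.
DATASETS = {
    "auth_logs": "internal",
    "web_access_logs": "public",
    "payment_events": "highly_sensitive",
}

# Each role is granted only the sensitivity tiers it needs (least privilege).
ROLE_TIERS = {
    "soc_internal_read": {"public", "internal"},
    "fraud_restricted_read": {"public", "internal", "highly_sensitive"},
}

# Time-limited, approved grants for highly sensitive data during an investigation.
temporary_grants: dict[tuple[str, str], datetime] = {}

def grant_temporary_access(user: str, dataset: str, hours: int = 8) -> None:
    """Record an approved, expiring grant for a specific user and data set."""
    temporary_grants[(user, dataset)] = datetime.now(timezone.utc) + timedelta(hours=hours)

def can_access(user: str, role: str, dataset: str) -> bool:
    """Default-deny check: allow only via the role's tier or an unexpired temporary grant."""
    tier = DATASETS.get(dataset)
    if tier in ROLE_TIERS.get(role, set()):
        return True
    expiry = temporary_grants.get((user, dataset))
    return expiry is not None and datetime.now(timezone.utc) < expiry

# An analyst with a general role cannot see payment events until a grant is approved.
assert not can_access("analyst1", "soc_internal_read", "payment_events")
grant_temporary_access("analyst1", "payment_events", hours=4)
assert can_access("analyst1", "soc_internal_read", "payment_events")
```

The default is deny: access is granted only through a role's tier or an explicit, expiring grant, which keeps least privilege enforceable rather than aspirational.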

Functional/Application Roles: Controlling Tool Access

Modern security and observability involve not just raw data but also powerful applications and dashboards. Organizations need to control who uses these tools and their in-app capabilities.

  • Implementation: Teams create roles granting users access to specific dashboards or investigation workbenches, separate from administrative functions. A read-only role, for instance, might only view service health dashboards.
  • Importance: This ensures users access only the tool functionality relevant to their tasks, preventing accidental misconfiguration or the use of advanced features they have not been trained on. A minimal sketch of this separation follows.
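Assuming a simple capability model (the role and feature names below are hypothetical), the separation between viewing, investigating and administering can be sketched like this:

```python
# Illustrative capability sets per functional role; feature names are hypothetical.
APP_ROLE_CAPABILITIES = {
    "viewer": {"view_service_health_dashboard"},
    "investigator": {
        "view_service_health_dashboard",
        "open_investigation_workbench",
        "run_saved_searches",
    },
    "platform_admin": {"manage_data_inputs", "edit_roles", "configure_alerts"},
}

def authorized(role: str, capability: str) -> bool:
    """Default-deny: a user may use a feature only if their functional role grants it."""
    return capability in APP_ROLE_CAPABILITIES.get(role, set())

# A read-only viewer can see service health but cannot reconfigure alerting.
assert authorized("viewer", "view_service_health_dashboard")
assert not authorized("viewer", "configure_alerts")
```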

Baseline/General Roles: Broad Strokes with Fine Control

These roles serve as building blocks or provide baseline access. A "basic user," for example, might run only pre-defined reports, while a "power user" could have broader search capabilities across non-sensitive data but still lack administrative rights.

  • Key Lesson: I've consistently seen teams learn caution with inheritance. General roles can inadvertently grant excess access if they are not carefully designed and regularly audited; the sketch below shows how inherited permissions quietly accumulate.
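A small illustration of that pitfall, with hypothetical role and permission names: the danger is rarely in what a general role declares directly, but in what it picks up through inheritance.

```python
# Illustrative baseline roles; "inherits" shows how permissions can silently accumulate.
ROLES = {
    "basic_user": {"permissions": {"run_predefined_reports"}, "inherits": []},
    "power_user": {"permissions": {"search_nonsensitive_data"}, "inherits": ["basic_user"]},
    # A carelessly designed general role: inheriting power_user pulls in everything below it.
    "team_general": {"permissions": {"share_dashboards"}, "inherits": ["power_user"]},
}

def effective_permissions(role: str) -> set[str]:
    """Resolve inherited permissions so audits see what a role actually grants."""
    perms = set(ROLES[role]["permissions"])
    for parent in ROLES[role]["inherits"]:
        perms |= effective_permissions(parent)
    return perms

# Auditing effective (not declared) permissions surfaces accidental over-grants.
print(sorted(effective_permissions("team_general")))
# ['run_predefined_reports', 'search_nonsensitive_data', 'share_dashboards']
```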

Beyond role assignment, effective teams leverage platform capabilities such as search and resource filters, and explore data masking, anonymization or tokenization for sensitive fields, especially in aggregated views that must exclude raw PII.
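The exact mechanism depends on the platform, but the idea can be sketched in a few lines. The patterns below catch only obvious cases, and the hash-based token is purely illustrative; a production deployment would typically rely on a vault-backed tokenization service or format-preserving encryption instead.

```python
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def mask_pii(text: str) -> str:
    """Redact obvious PII patterns before a field reaches an aggregated view."""
    text = EMAIL.sub("<email-redacted>", text)
    return CARD.sub("<card-redacted>", text)

def tokenize(value: str, salt: str = "rotate-me") -> str:
    """Replace an identifier with a stable token so events can still be correlated."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

print(mask_pii("login failure for jane.doe@example.com card 4111 1111 1111 1111"))
print(tokenize("jane.doe@example.com"))
```

Masking hides the raw value from viewers, while tokenization preserves the ability to correlate events about the same entity without exposing who that entity is.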

A crucial tip for data onboarding: implement clear disclaimers in data onboarding portals or request processes, stating that raw PII or other highly sensitive information should not be collected unless essential and approved. I recommend mandating that data owners are directly responsible for ensuring data is pre-processed or filtered to remove sensitive information before ingestion into security or observability tools. This proactive step significantly reduces risk at the source.
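A minimal sketch of what that source-side filtering could look like, assuming the data owner controls the forwarding step and has agreed on a list of disallowed fields (the field names are illustrative):

```python
# Fields the data owner has agreed never to forward; names are illustrative.
DISALLOWED_FIELDS = {"ssn", "card_number", "date_of_birth"}

def scrub_event(event: dict) -> dict:
    """Drop disallowed fields at the source, before the event is shipped to the platform."""
    return {k: v for k, v in event.items() if k not in DISALLOWED_FIELDS}

raw = {"user": "u123", "action": "payment", "card_number": "4111111111111111", "status": "ok"}
print(scrub_event(raw))  # {'user': 'u123', 'action': 'payment', 'status': 'ok'}
```

Filtering here, rather than after ingestion, means the sensitive values never land in the observability data store in the first place.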

Navigating the Journey: Challenges and Best Practices

Implementation brings inherent challenges. For example, initial friction is common; some analysts feel their investigative capabilities are curtailed, often stating, "I need to see everything to do my job!" Here is what we’ve learned.

  • Overcoming Resistance: My team and I, with the organization, invest heavily in communication and training. We conduct workshops explaining the why behind changes – regulatory mandates, the financial and reputational costs of non-compliance, and ethical obligations. We demonstrate how to work effectively within the new framework, often showing that targeted access combined with strong querying skills yields better results.
  • The Power of Iteration: Perfection isn't achieved on day one. I encourage customers to solicit feedback, review access patterns and make adjustments. For instance, data roles that started out overly granular and led to role proliferation were later consolidated where risk profiles permitted.
  • The Challenge of Maintenance: A frequent and often underestimated difficulty is keeping access controls current. The framework requires constant updates as new data sources are onboarded or applications deployed, demanding dedicated effort and clear processes.
  • Controlled Role Creation: While creating roles via a UI is often easy, it's crucial to monitor who has this capability and ensure changes are managed through version control and automated workflows, rather than encouraging ad-hoc creation.
  • Critical Collaboration: Service Desk & IdM: A significant success factor I’ve observed is partnering with IT Service Desk and Identity Management (IdM) teams. Close collaboration between security/platform teams and these groups ensures smoother operations. Automating role assignments based on HR attributes (e.g., department/job title via SAML/SCIM) or integrating access requests into existing service desk ticketing systems (with proper approval workflows) streamlines user onboarding and modifications, reducing manual toil and ensuring consistency. A minimal sketch of attribute-to-role mapping follows this list.
  • Essential Documentation – Service Maps and Diagrams: I also stress the importance of maintaining clear IT service maps, architecture diagrams and a historical data inventory. Understanding data lineage – which IT services feed which platforms, what's missing, its business context, and original purpose – is fundamental for accurate RBAC, especially as environments evolve. These become living documents vital for effective access control.
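That attribute-driven assignment can be sketched as a simple mapping. In practice the attributes would arrive via SAML assertions or SCIM provisioning rather than a Python dictionary, and the departments, titles and role names here are hypothetical.

```python
# Illustrative mapping from identity attributes (delivered via SAML/SCIM in practice)
# to platform roles; departments, titles and role names are hypothetical.
DEPARTMENT_ROLE_MAP = {
    ("security operations", "analyst"): ["soc_internal_read", "investigator"],
    ("security operations", "team lead"): ["soc_internal_read", "fraud_restricted_read", "investigator"],
    ("platform engineering", "engineer"): ["viewer"],
}

def roles_for_user(attributes: dict) -> list[str]:
    """Derive platform roles from HR attributes instead of relying on ad-hoc manual grants."""
    key = (
        attributes.get("department", "").strip().lower(),
        attributes.get("job_title", "").strip().lower(),
    )
    return DEPARTMENT_ROLE_MAP.get(key, ["basic_user"])  # conservative default

print(roles_for_user({"department": "Security Operations", "job_title": "Analyst"}))
# ['soc_internal_read', 'investigator']
```

Driving assignments from authoritative HR attributes also means access changes automatically when someone moves teams, rather than lingering until a manual review catches it.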

Responsible Visibility, Reduced Risk

The transformation in committed organizations is significant: improved data privacy posture with provable control over sensitive data, aiding compliance and building trust. Security teams remain effective with clearer data understanding. They reduce insider threat risks and minimize breach impact, aligning with key cybersecurity trends identified by industry analysts.

Key Takeaways

My experience of guiding organizations through the observability dilemma with strong access controls has yielded broadly applicable insights:

  • Plan Carefully: Before creating roles, be sure to understand, classify and define clear access requirements for your data. Consider future metrics, business expansion, or third-party access needs for a flexible, future-proof RBAC design. A solid naming convention is vital for long-term manageability.
  • Embrace Least Privilege: Grant only necessary job access – it's easier to add than revoke. Standardize roles organization-wide: replicate core permissions for similar roles across departments, adjusting only for department-specific data, to simplify management and auditing.
  • Collaborate Widely: Work closely with data owners, legal, compliance, service desk, identity teams and platform users. You will find their buy-in crucial.
  • Educate Continuously: Users need to understand the why and how; ongoing education is key for adoption and adherence.
  • Audit Relentlessly: RBAC isn't static. Regularly review roles, permissions and logs for effectiveness and compliance. Governance frameworks and industry advice can inform this. Audit diagrams and inventories against actual data ingestion. Platform teams can aid this with automated audit workflows (a small sketch follows this list) and brief, regular office hours to proactively educate on RBAC best practices and prevent role sprawl.
  • Iterate and Adapt: Your first attempt won't be perfect, so adjust based on feedback and evolving needs. Attribute-Based Access Control (ABAC) might be a future step for finer granularity.
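As an example of what a lightweight automated audit might check (the user names, role names and approval source are hypothetical), comparing the roles users actually hold against an approved inventory flags drift for review:

```python
def audit_role_drift(approved: dict[str, set[str]], actual: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per user, any roles held that are missing from the approved inventory."""
    return {
        user: extra
        for user, roles in actual.items()
        if (extra := roles - approved.get(user, set()))
    }

approved = {"alice": {"soc_internal_read"}, "bob": {"viewer"}}
actual = {"alice": {"soc_internal_read", "fraud_restricted_read"}, "bob": {"viewer"}}
print(audit_role_drift(approved, actual))  # {'alice': {'fraud_restricted_read'}}
```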

Ultimately, taming the observability dilemma requires an organizational mindset shift: from choosing between security visibility and data privacy, to achieving secure, responsible visibility. For many customers I've assisted, a well-structured access control framework, built on clear principles and practical experience, is the key. This ongoing commitment enables them to confidently harness data's power while upholding their fundamental duty to protect it.

ISC2 Candidate Sainag Nethala has 10 years of experience in cybersecurity, observability and machine learning applications. He has held technical roles with responsibility for architecting enterprise security and observability solutions. His cybersecurity work involves transforming data into actionable security insights and threat detection.
