When Suresh Akkemgari, CISSP, transitioned into a cloud security leadership role, he was surprised by how challenging it was to maintain visibility across all cloud assets. Here he explains what happened and what he did to address the issue.

Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.

We believed things were under control; we had policies, procedures, processes and regular reports. However, visibility was far from complete. Despite using compliance programs and security frameworks, we found numerous gaps when applying them in the cloud.

During the early days of my new role, we experienced a couple of security incidents. None were catastrophic, but each one revealed a fundamental weakness. One eye-opening issue involved a compromised development environment outside our monitoring scope. It hadn’t been onboarded to our security tools and asset ownership was unclear. Although it was just a development environment, it became a gateway for attackers to probe for weaknesses. Development systems often lack the same level of controls as production, making them easier targets and footholds for lateral movement into more sensitive systems and data.

These early experiences highlighted the need for a strategic shift. I refocused on three core principles: asset inventory, asset ownership, and logging and monitoring. Simple, not flashy – but implementing them consistently transformed our posture from reactive to proactive and resilient.

Maintain an Accurate Asset Inventory

“You can’t protect what you don’t know exists” became my team’s mantra after a painful incident that exposed a major gap in our cloud security. We were alerted to abnormal activity, but when we attempted to respond, we hit a wall: no asset inventory, no owner and no creation history. The missing information delayed our ability to assess the risk and contain the activity. Only after digging through audit logs were we able to trace the resource’s origin. The experience made clear that a centralized asset inventory isn’t just a tool; it’s critical for real-time, effective security operations.

Real-Time Asset Discovery

We began with manual inventory updates and quarterly audits, which worked at first but quickly became unsustainable as we scaled. Partnering with the DevOps team, we implemented automated, real-time asset discovery using cloud-native and third-party tools. Despite challenges – especially reconciling resource classification across cloud providers – we soon had a reliable, continuously updated inventory. One lesson I learned was that automation is only effective with proper tagging. During reviews or incident response, we often found missing or inconsistent tags, making it difficult to identify owners or understand a resource’s purpose. To address this, we enforced mandatory tagging – owner, environment and compliance status – using AWS Tag Policies. There was some initial pushback from engineering and it took a few iterations to get it right, but over time, consistent tagging became standard practice and a key enabler for real-time visibility and control.
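To make that concrete, here is a minimal sketch of the kind of tag-compliance sweep such a setup enables, using the AWS Resource Groups Tagging API via boto3. The required tag keys mirror the policy described above but are illustrative placeholders, not our exact schema:

```python
"""Minimal sketch of a tag-compliance sweep, assuming boto3 credentials are
already configured. The tag keys below are assumptions for illustration."""
import boto3

REQUIRED_TAGS = {"owner", "environment", "compliance-status"}  # assumed keys


def find_untagged_resources(region: str = "us-east-1") -> list[str]:
    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    offenders = []
    # Walk every taggable resource the API can see in this region.
    for page in client.get_paginator("get_resources").paginate():
        for mapping in page["ResourceTagMappingList"]:
            tag_keys = {tag["Key"].lower() for tag in mapping.get("Tags", [])}
            missing = REQUIRED_TAGS - tag_keys
            if missing:
                offenders.append(f"{mapping['ResourceARN']} missing {sorted(missing)}")
    return offenders


if __name__ == "__main__":
    for line in find_untagged_resources():
        print(line)
```

A sweep like this, run on a schedule, is what turns tagging policy from a document into a continuously enforced control.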

Regular Audits

We integrated asset tracking into real-time access reviews and cost optimization. This allowed us to quickly spot unused resources, detect shadow IT and reduce unnecessary spending. More importantly, it gave us the visibility and context to respond with speed and confidence when things went wrong.

The most unexpected benefit came during an annual disaster recovery test. With properly tagged resources, we were able to immediately identify and prioritize critical systems, which accelerated our response and cut recovery time by nearly 40% compared with previous years.
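A tag-driven lookup of the most critical systems can be nearly a one-liner against the same tagging API. The "criticality" key and "tier-1" value below are assumptions for illustration, not our actual scheme:

```python
"""Illustrative lookup of top-priority systems during a DR exercise.
The tag key and value are hypothetical placeholders."""
import boto3

client = boto3.client("resourcegroupstaggingapi")
pages = client.get_paginator("get_resources").paginate(
    TagFilters=[{"Key": "criticality", "Values": ["tier-1"]}]
)
critical = [m["ResourceARN"] for page in pages for m in page["ResourceTagMappingList"]]
print(f"{len(critical)} tier-1 resources to restore first")
```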

Define Asset Ownership

Getting asset inventory under control felt like a major win. Until, that is, we faced a deeper challenge: ownership ambiguity. Knowing what existed was only half the battle; knowing who owned it was the other.

This hit us hard during a critical vulnerability patching exercise. Vulnerability scanning flagged multiple critical issues across cloud workloads, but triage was chaotic: no clear owners, no escalation paths and, in some cases, no record of who had provisioned the affected resources. That moment pushed us to prioritize asset ownership as a core part of our security and operations strategy.

Ownership wasn’t just metadata. It needed to be a living, enforced part of our provisioning and governance model. So, we made a few key changes:

  • Ownership tags became mandatory: Every new resource required a valid email tied to a business unit or DevOps team. If this information was missing from Terraform or automation scripts, the pipeline failed by design (see the sketch after this list).
  • Unowned assets were flagged and escalated automatically: Using AWS Config rules and custom scripts, we scanned for orphaned assets and piped those into communication channels and ticket queues for triage.
  • Review cycles were built into our ops rhythm: Monthly, we ran reports on stale or unowned resources, notifying asset owners of misconfigurations, security risks, or cost anomalies. Ownership became part of teams’ operational hygiene, not just a tag.
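As a rough sketch of that pipeline gate, the following check reads a Terraform plan exported with `terraform show -json plan.tfplan > plan.json` and fails the build when a newly created resource lacks a plausible owner email. The "owner" tag key and the email pattern are illustrative assumptions:

```python
"""Sketch of a CI gate that rejects Terraform plans creating untagged
resources. Assumes the plan has been exported to JSON; tag key and email
regex are placeholders for illustration."""
import json
import re
import sys

EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # assumed validity check


def main(plan_path: str) -> int:
    with open(plan_path) as f:
        plan = json.load(f)
    failures = []
    for rc in plan.get("resource_changes", []):
        if "create" not in rc["change"]["actions"]:
            continue  # only gate newly provisioned resources
        after = rc["change"].get("after") or {}
        # AWS provider exposes tags as "tags" and/or "tags_all".
        tags = after.get("tags_all") or after.get("tags") or {}
        if not EMAIL.match(tags.get("owner", "")):
            failures.append(rc["address"])
    if failures:
        print("Missing or invalid owner tag:", *failures, sep="\n  ")
        return 1  # non-zero exit fails the pipeline by design
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```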

We didn’t get it perfect at first. In fact, my first iteration relied too much on tagging compliance without validating ownership. We had tags, but sometimes they pointed to distribution lists or users who had long since left the company. So, we added a validation layer: automation scripts cross-checked owners against our identity provider, flagging stale entries for cleanup. Over time, these practices evolved into a culture shift. Teams began proactively managing their cloud resources. Infrastructure engineers added ownership annotations even outside the mandatory tags, just to make incident response smoother. Security stopped being something handled only by a central team, instead becoming part of how every team naturally built and managed their systems.
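A simplified sketch of that validation layer follows. The inventory format and the set-based lookup are assumptions; a real implementation would query the identity provider directly (LDAP, SCIM, Microsoft Graph, etc.) rather than a local export of active users:

```python
"""Sketch of owner validation against an identity provider, assuming an
inventory of (resource_arn, owner_email) pairs. The set-based check is a
hypothetical stand-in for a live directory query."""


def find_stale_owners(
    inventory: list[tuple[str, str]], active_users: set[str]
) -> list[str]:
    stale = []
    for arn, owner in inventory:
        # Catches empty tags, departed users and distribution lists
        # that don't correspond to a real account in the IdP.
        if owner.lower() not in active_users:
            stale.append(f"{arn}: stale or invalid owner '{owner}'")
    return stale


if __name__ == "__main__":
    active = {"alice@example.com"}  # stand-in for an IdP export
    demo = [("arn:aws:s3:::team-bucket", "bob@example.com")]
    print(find_stale_owners(demo, active))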

Enforce Robust Logging

"We have no logs for that time period" – these words nearly derailed our response to the potential issues. Despite having advanced detection tools, we couldn’t answer basic questions about who accessed sensitive resources during a critical timeframe. Logging had significant gaps: logs were scattered, inconsistently collected, and varied in format and retention across platforms. This experience proved that even the best security tools are ineffective without comprehensive logging. It forced us to rethink our entire approach to cloud logging and monitoring.

When I first assessed the system, cloud logging was enabled in some accounts but not all. Some teams implemented custom logging solutions, while others relied on default configurations. This fragmentation made it difficult to correlate events and establish a clear timeline during incidents.

I brought together stakeholders to define logging requirements. Our first major decision was selecting a centralized logging architecture. After evaluating several options, we decided to move to cloud-native logging services and forward everything to a SIEM platform. This enabled seamless integration with our cloud providers and provided powerful analysis capabilities across our entire environment.

Next came automation. We deployed native agents and configured CloudTrail to automatically forward logs to the centralized system, fully managed through Infrastructure as Code (IaC) and enforced organization-wide. To ensure integrity, logs were stored in tamper-resistant storage within dedicated log accounts secured by strict access controls.
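For illustration, here is a minimal boto3 equivalent of that trail configuration. In practice the setup lived in IaC; the trail and bucket names below are placeholders, the bucket must already exist with a policy allowing CloudTrail to write, and organization trails require the management or delegated administrator account:

```python
"""Minimal sketch of an organization-wide CloudTrail trail with log file
validation. Names are placeholders; prerequisites (bucket policy,
organization permissions) are assumed to be in place."""
import boto3

cloudtrail = boto3.client("cloudtrail")

trail = cloudtrail.create_trail(
    Name="org-security-trail",           # placeholder trail name
    S3BucketName="central-log-archive",  # dedicated, access-restricted bucket
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,            # one trail covering every member account
    EnableLogFileValidation=True,        # digest files make tampering detectable
)
cloudtrail.start_logging(Name=trail["Name"])
```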

Cost management became a concern as logging volumes grew. To control expenditure, we implemented intelligent filtering to reduce noise while preserving security-relevant events, cutting costs by nearly 40% without losing critical visibility. Filtered logs were stored in S3 for future analysis and compliance. 
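The filtering logic itself was simple in principle: always keep failures and identity-related activity, keep mutating calls, and drop the high-volume read-only chatter. The toy keep/drop decision below for CloudTrail events illustrates the idea; the specific heuristics are examples, not our exact ruleset:

```python
"""Illustrative noise filter applied to CloudTrail events before SIEM
ingestion. The source list and rules are assumptions for illustration."""

SECURITY_SOURCES = {"iam.amazonaws.com", "sts.amazonaws.com", "kms.amazonaws.com"}


def should_forward(event: dict) -> bool:
    # Always keep failures and anything touching identity or key material.
    if event.get("errorCode") or event.get("eventSource") in SECURITY_SOURCES:
        return True
    # Keep all mutating calls; drop read-only Describe*/List*/Get* volume.
    return not event.get("readOnly", False)
```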

Over time, logging evolved from a compliance task into a vital real-time operational tool. Now, when issues arise, we investigate with confidence, backed by structured, ready-to-use data.

Focus on the Fundamentals

After years working in hybrid and multi-cloud environments, I’ve learned that strong security doesn’t come from flashy tools, but from nailing the basics and being consistent.

If I had to give one piece of advice to any team building or maturing their cloud or on-prem security operations, it would be to focus relentlessly on the three fundamentals I’ve described above:

  • Maintain an accurate asset inventory: Real-time discovery and a dynamic inventory are essential, whether for patching, incident response or compliance.
  • Enforce clear asset ownership: Visibility means little without accountability. Assigning and validating ownership reduces risk and accelerates response.
  • Centralize logging and monitoring: Logs tell your story. Without comprehensive logging, you’re flying blind. With structure, they enable fast, confident detection and response.

These aren’t new ideas, but they are often overlooked. By consistently applying these core practices, you move from reactive firefighting to proactive security. If you're revisiting your strategy this year, start here: visibility, accountability and observability.

Suresh Kumar Akkemgari, CISSP has 18 years of experience in cybersecurity, cloud computing, DevSecOps, AI/ML-driven security solutions and risk management across IT and hybrid infrastructures. He has held security architecture, engineering and management roles, with responsibility for designing and implementing zero-trust security for cloud and on-premises systems aligned with business needs and industry standards.
