Picklu Paul, CISSP, discusses how working across enterprises of varying sizes has revealed a consistent, uncomfortable truth: finding the bugs is the easy part. The real struggle is fixing them. He shares how he has approached the complex hurdle of remediation and mitigation across different stages of organizational maturity.

Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.

In the context of vulnerabilities, we often celebrate the "hunt." We deploy sophisticated scanners and generate reports with thousands of entries. But visibility without action is just liability. Throughout my career, I’ve seen organizations drown in alerts: paralyzed by the sheer volume of "critical" vulnerabilities while their actual risk posture remains unchanged.

The challenge isn’t usually technical; it is cultural and logistical. In my experience, effective remediation requires moving away from a "scanning" mindset to a "fixing" mindset.

Breaking Down Silos

During my early days at a fintech startup, it felt like we were two teams talking past each other. Security huddled in one corner, developers in another, and the tension was palpable. We ran monthly scans and tracked every finding in a shared spreadsheet. Back then, I measured success by how fast we deployed patches, often rushing to developers with a mandate to "fix this now." Naturally, I received pushback: they viewed me as a roadblock, and I viewed them as negligent.

I realized we’d built silos. I couldn’t simply ‘throw a report over the wall’ to DevOps and expect results; I needed to change how I communicated. Instead of issuing directives, I began adding context to every ticket: why the bug mattered, what the business impact was and clear steps to resolve it. When I stopped being the "nagging security guy" and started providing actionable data, completion rates climbed.
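In practice, that context can be captured in a simple, structured finding record before a ticket is ever filed. The sketch below is illustrative only (the field names are not any particular tool's schema), but it shows the shape of ticket I aimed for:

```python
from dataclasses import dataclass, field

@dataclass
class EnrichedFinding:
    """One scanner finding, enriched with the context developers kept asking for.
    Field names are illustrative, not any particular tool's schema."""
    title: str
    affected_service: str
    why_it_matters: str            # plain-language explanation of the flaw
    business_impact: str           # what happens if it is exploited
    remediation_steps: list[str] = field(default_factory=list)

    def to_ticket_body(self) -> str:
        steps = "\n".join(f"  {i + 1}. {s}" for i, s in enumerate(self.remediation_steps))
        return (
            f"[{self.affected_service}] {self.title}\n\n"
            f"Why it matters: {self.why_it_matters}\n"
            f"Business impact: {self.business_impact}\n"
            f"Suggested fix:\n{steps}"
        )
```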

We also learned a hard lesson about ownership. We initially assigned fixes to specific engineers. That sounded logical, but fixes languished whenever the assignee went on vacation or got pulled into a fire drill. One critical patch that was left half-finished on a Friday had been forgotten by Monday. We duly shifted our model to assign vulnerabilities to teams rather than individuals, ensuring that the team's queue remained active even if one person was away. This change also helped turn security from a personal burden into a shared quality metric.

Automation and the Shift to Rebuilding

When I moved to a larger technology company, my manual spreadsheet approach collapsed under the weight of data. To mature our process, I integrated our scanning tools with our IT service platform so that reports of high-severity vulnerabilities automatically generated tickets.
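A minimal version of that integration can be sketched in a few lines of Python. The endpoints and field names below are placeholders rather than any specific scanner or ITSM product, but the flow is the one we automated: pull high-severity findings, then open a ticket in the right team's queue.

```python
import requests

SCANNER_API = "https://scanner.example.com/api/findings"   # hypothetical endpoints
ITSM_API = "https://itsm.example.com/api/tickets"

def sync_high_severity_findings(api_token: str) -> None:
    """Pull high/critical findings from the scanner and open one ticket each.
    Both APIs and the finding fields are placeholders; a real integration
    would also deduplicate against tickets that already exist."""
    headers = {"Authorization": f"Bearer {api_token}"}
    findings = requests.get(
        SCANNER_API, params={"severity": "high,critical"}, headers=headers, timeout=30
    ).json()

    for f in findings:
        ticket = {
            "title": f"[VULN] {f['cve_id']} on {f['asset_name']}",
            "severity": f["severity"],
            "assignment_group": f["owning_team"],   # route to a team queue, not a person
            "description": f.get("description", ""),
        }
        requests.post(ITSM_API, json=ticket, headers=headers, timeout=30).raise_for_status()
```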

However, automation isn’t just about creating tickets; it’s about changing the remediation strategy entirely. In large IT estates, the effort to patch thousands of machines is overwhelming. I saw teams delay critical updates for months because of "technical debt" and the fear that a patch would break a legacy application.

To solve this, I pushed for a shift toward immutable infrastructure. Instead of patching a live server and, essentially, hoping it rebooted cleanly, we started rebuilding images with updated packages and redeploying them. By incorporating security patches directly into infrastructure as code (IaC) templates, we ensured that new instances would spin up already patched. This prevented old vulnerabilities from recurring and made security hygiene "baked in" rather than applied as a bandage.
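As a simplified illustration of what "baked in" means, the sketch below renders a container image definition from a template with a freshly patched base image and rebuilds it, rather than touching a running server. The template, paths and build command are assumptions for the example; in practice this lived in CI against proper IaC templates.

```python
import subprocess
from pathlib import Path
from string import Template

# Hypothetical image definition: the base image is the only thing we vary.
DOCKERFILE_TEMPLATE = Template(
    "FROM $base_image\n"
    "RUN apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*\n"
    "COPY app/ /opt/app/\n"
)

def rebuild_patched_image(base_image: str, tag: str) -> None:
    """Render the image definition with a patched base and rebuild the image,
    so every new instance spins up already patched."""
    Path("Dockerfile").write_text(DOCKERFILE_TEMPLATE.substitute(base_image=base_image))
    subprocess.run(["docker", "build", "-t", tag, "."], check=True)

# Example (illustrative tag and base image):
# rebuild_patched_image("ubuntu:24.04", "myapp:2025-06-01")
```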

The Essential Precursor: Visibility and Discovery

I realized early on that you can’t fix what you can’t see: the remediation pipeline stalls before it even begins if the asset inventory is incomplete or inaccurate.

At the startup, we faced the common issue of having no formal inventory at all. Developers deployed new services daily, sometimes using unmanaged repositories or cloud accounts I hadn't yet onboarded. We were operating with dangerous blind spots: forgotten servers and old applications that attackers love to exploit. So, my first tactical effort wasn't scanning, but building a live inventory of hardware, software, cloud resources and codebases. I had to regularly reconcile our limited scan results against known assets and their owners to ensure coverage.
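That reconciliation can be as simple as a diff between what the inventory says we own and what the scanner actually touched. A minimal sketch, assuming the inventory maps hostnames to owning teams:

```python
def reconcile_coverage(inventory: dict[str, str], scanned_hosts: set[str]) -> dict[str, list[str]]:
    """Compare the asset inventory (hostname -> owning team) against what the
    scanner actually covered, returning gaps grouped by owner so each team can
    see its own blind spots. The input shapes are assumptions for this sketch."""
    gaps: dict[str, list[str]] = {}
    for hostname, owner in inventory.items():
        if hostname not in scanned_hosts:
            gaps.setdefault(owner, []).append(hostname)
    return gaps

# Example: two assets were never scanned; both belong to the payments team.
inventory = {"api-01": "payments", "api-02": "payments", "web-01": "frontend"}
print(reconcile_coverage(inventory, scanned_hosts={"web-01"}))
# {'payments': ['api-01', 'api-02']}
```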

The challenge changed completely when I moved to a large enterprise. Suddenly, I wasn't dealing with a lack of inventory, but with asset sprawl. I had to wrestle with ephemeral cloud workloads, containers and serverless functions that would appear and disappear in minutes. Standard scanning tools couldn't keep up, so to gain visibility I leveraged API integrations to capture these resources automatically as they appeared.
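For AWS, for example, a snapshot of running EC2 instances can be pulled straight from the API rather than waiting for a network scan to stumble across them. This is a minimal, single-region sketch using boto3; a real pipeline would iterate over every account and region and also cover containers and serverless resources.

```python
import boto3

def snapshot_running_instances(region: str) -> list[dict]:
    """Capture currently running EC2 instances via the API so ephemeral
    workloads land in the inventory before any scanner sees them."""
    ec2 = boto3.client("ec2", region_name=region)
    instances = []
    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                instances.append({
                    "instance_id": inst["InstanceId"],
                    "image_id": inst["ImageId"],
                    "launch_time": inst["LaunchTime"].isoformat(),
                })
    return instances
```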

I also realized that true visibility meant tackling Shadow IT. I started by using cloud-billing logs, DNS monitoring and identity audits to find rogue workloads. Modern vulnerability management platforms helped highlight configuration drift that traditional network scanning simply missed. It was common to find that one team's assets were being scanned multiple times while another team's critical cloud accounts were missed entirely – a recipe for dangerous gaps. I learned to prioritize tools and processes that gave deep, constant asset visibility because, without it, any remediation effort would be chasing a rapidly moving target.
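The billing-log check in particular is cheap to automate. The sketch below compares account IDs seen in a billing export against the accounts we had actually onboarded; the CSV column name is an assumption, since exports differ by provider.

```python
import csv

def find_unmanaged_accounts(billing_csv_path: str, managed_accounts: set[str]) -> set[str]:
    """Flag cloud accounts that appear in the billing export but not in the
    managed-account list -- a cheap first pass at Shadow IT discovery.
    The 'account_id' column name is an assumption for this sketch."""
    seen: set[str] = set()
    with open(billing_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            seen.add(row["account_id"])
    return seen - managed_accounts
```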

Scaling Through Root Cause and Prioritization

The steepest learning curve of my career came when I joined a ride-sharing company with tens of thousands of microservices. The scale was enormous. Each day brought a flood of new CVE alerts. I realized that reacting to every flaw was mathematically impossible; we couldn't possibly hire enough people to patch everything. We needed to stop asking "When can we patch this?" and start asking "Why does this exist?"

As an example of how that approach worked: we had a container image that kept triggering security bugs. Rather than patching the image daily, we fixed the image definition itself, updating the base OS and removing unneeded packages. That single root-cause fix eliminated dozens of future alerts.

We also had to get smarter about prioritization. During the early part of my career, I assumed every "Critical" CVSS score demanded immediate action. But I quickly learned that, if everything is a priority, nothing is – so I began applying context-aware remediation. An over-permissive role on a production database is an emergency; the same flaw on a sandbox server is not.
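Context-aware remediation can be expressed as a simple weighting of the raw score by where an asset lives and what it touches. The weights below are illustrative, not a standard; the point is that a 9.8 in a sandbox can rank far below a 7.5 on an internet-facing production database.

```python
def remediation_priority(cvss: float, environment: str, internet_facing: bool,
                         data_sensitivity: str) -> float:
    """Weight a raw CVSS score by asset context. The weights are illustrative
    assumptions, not a published scoring standard."""
    env_weight = {"production": 1.0, "staging": 0.6, "sandbox": 0.3}.get(environment, 0.5)
    exposure_weight = 1.3 if internet_facing else 0.9
    data_weight = {"regulated": 1.4, "internal": 1.0, "public": 0.8}.get(data_sensitivity, 1.0)
    return round(cvss * env_weight * exposure_weight * data_weight, 1)

# A "critical" in a sandbox ranks well below a "high" on an exposed production database:
print(remediation_priority(9.8, "sandbox", False, "public"))
print(remediation_priority(7.5, "production", True, "regulated"))
```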

When immediate patching wasn’t possible – say, because a vendor no longer existed or business uptime was paramount – I relied on compensating controls. This meant deploying a specific Web Application Firewall (WAF) rule to block the exploit or using micro-segmentation to isolate the asset, enabling us to mitigate the risk without breaking the business.
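In our case the compensating controls lived in the WAF and the network layer, but the same "virtual patch" idea can be sketched in application terms. The WSGI middleware below is a hypothetical, simplified stand-in for a real WAF rule: it blocks requests matching a known exploit signature until the proper fix ships.

```python
import re

# Illustrative signature list; a real rule set would come from the WAF vendor or threat intel.
BLOCKED_PATTERNS = [
    re.compile(r"\$\{jndi:"),   # e.g. Log4Shell-style payloads
]

class VirtualPatchMiddleware:
    """WSGI middleware acting as a lightweight virtual patch: reject requests
    that match a known exploit signature. A sketch only -- in production this
    logic belongs in the WAF, not in application code."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        probe = environ.get("QUERY_STRING", "") + environ.get("HTTP_USER_AGENT", "")
        if any(p.search(probe) for p in BLOCKED_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Blocked by virtual patch"]
        return self.app(environ, start_response)
```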

Security Really is a Continuous Journey

The most important metric I tracked wasn't the number of bugs found, but the Mean Time to Remediate (MTTR). When our MTTR for critical flaws dropped from weeks to days, I knew the process was working. More importantly, I realized that vulnerability management is very much a continuous journey: one of aligning people, processes and technology. It's not just about buying the best scanner; it's about building a well-oiled machine that can digest risk and output security.
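Measuring MTTR doesn't require a fancy platform; at its simplest it is the average gap between when a ticket opens and when it closes. A small sketch, assuming each ticket carries ISO-8601 timestamps:

```python
from datetime import datetime
from statistics import mean

def mttr_days(tickets: list[dict]) -> float:
    """Mean Time to Remediate, in days, across closed tickets.
    Each ticket is assumed to carry ISO-8601 'opened' and 'closed' fields."""
    durations = [
        (datetime.fromisoformat(t["closed"]) - datetime.fromisoformat(t["opened"])).total_seconds() / 86400
        for t in tickets
        if t.get("closed")
    ]
    return round(mean(durations), 1) if durations else 0.0

tickets = [
    {"opened": "2024-03-01T09:00:00", "closed": "2024-03-04T09:00:00"},
    {"opened": "2024-03-02T12:00:00", "closed": "2024-03-07T12:00:00"},
]
print(mttr_days(tickets))  # 4.0
```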

Now, when I focus on shared ownership, leverage automation to reduce manual toil and apply intelligent mitigations when patching isn't an option, the friction between security and engineering begins to dissolve. By building relationships with developers and focusing on root causes rather than symptoms, we stop fighting fires and start building resilience.

Picklu Paul, CISSP, is a cybersecurity and engineering leader with over 10 years of experience across fintech, ride-hailing and large-scale technology platforms. He has held technical and leadership roles with responsibility for DevSecOps strategy, infrastructure security and AI-driven risk reduction. His cybersecurity work spans cloud-native security, scalable vulnerability management, automation and platform resilience.
