Shilpi Mittal, CISSP, CCSP, shares her experiences of changing and improving cybersecurity processes, central to which is raising awareness of cybersecurity and its criticality throughout software development and application lifecycles.

Shilpi Mittal, CISSP, CCSP

Disclaimer: The views and opinions expressed in this article belong solely to the author and do not necessarily reflect those of ISC2.

A few years ago, I joined a consumer lending company that was simultaneously rebuilding a public portal and a set of internal services. Releases were often delayed whenever a late security issue was discovered. Developers dreaded the words "security review", while product managers dreaded the words "release risk".

I joined the organization with a remit to strengthen our application security posture by modernizing tools, improving governance and embedding cybersecurity best practices across development and operations. In my first month, I saw smart people doing their best, yet our existing process pushed most cybersecurity work to the very end. I set out to change the process without slowing delivery.

Understanding the Workflow

My first action was to sit with two product teams for a sprint. I didn’t arrive with a new policy; I simply carried a notebook and asked to join standups and design sessions. This enabled me to map where security decisions were being made, and how and where they were getting delayed. Two patterns quickly emerged.

The first pattern was that threat discussions occurred only when someone requested a penetration test; the second was that, while our pipeline had scanners, the noise drowned out the signal. This resulted in people clicking past warnings because they didn’t know which ones truly mattered.

My plan to address these issues comprised three simple elements:

  • Bring the first conversation forward
  • Bring the first fix forward
  • Maintain a strong signal to noise ratio

And here are the steps I took – and now recommend to others – to achieve that plan.

Creating a Planning Process Engineers Can Work With

For each new feature, we start with a simple whiteboard session. I refer to it as a walkthrough, not as a threat model. The product owner explains what is being built, and I ask three blunt questions: What are we protecting? Who might try to break it? Where would they start?

This helps us focus only on what matters, by which I mean data flows, entry points and trust boundaries. We now leave these sessions with 3-5 specific checks to build or verify. The output is stored in the ticket system alongside the work and makes planning feel like part of delivery.

I have found this to work well: attendance is high because the sessions are short and focused, and junior engineers speak up because the format is casual. We catch missing authorization checks early, avoiding painful remediation later. However, keeping it short is essential; people initially tuned out when I tried to cover every possible risk.

Implementing Pull Requests with Real Security Intent

A pull request (PR) is a collaborative tool in software development that allows developers to propose changes to a codebase. Pull requests are standard in modern development workflows, enabling code review and collaboration while maintaining code quality.

However, our pull request template included a checkbox that read “Security review if applicable”. Uncomfortably, few ever checked that box, so at the first opportunity I replaced it with three yes/no prompts:

  • Does this pull request touch authentication or authorization?
  • Does this introduce a new data store or modify an existing schema?
  • Does this change input parsing, output encoding, or file handling?

If any answer is “yes”, I or another security champion is tagged for a fast review. We look for concrete issues – such as direct object references, access decisions in the client, secrets in the code and unsafe string handling – and work with the author on fixes when needed.
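As an illustration, the prompts can also be backed by a lightweight automatic check on a pull request's changed file paths. This is a minimal sketch, assuming hypothetical path conventions rather than our actual tooling:

```python
# Minimal sketch (not our real tooling): flag a pull request for security
# review when its changed file paths suggest it touches authentication,
# data schemas, or input/output handling. The path patterns are hypothetical.
import re

SECURITY_PATTERNS = [
    r"auth",                 # authentication / authorization code
    r"migrations?/",         # new data stores or schema changes
    r"parser|encod|upload",  # input parsing, output encoding, file handling
]

def needs_security_review(changed_paths):
    """Return True if any changed path matches a security-sensitive pattern."""
    return any(
        re.search(pattern, path, re.IGNORECASE)
        for path in changed_paths
        for pattern in SECURITY_PATTERNS
    )

print(needs_security_review(["src/auth/session.py"]))  # True
print(needs_security_review(["docs/README.md"]))       # False
```

A check like this can never replace the human prompts – it simply tags a reviewer automatically when a sensitive path changes, so nothing depends on the author remembering to ask.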

This works well: the prompts are clear and easy to answer, and we endure fewer last-minute surprises – simply because someone reviews changes through a cybersecurity lens while the code is still fresh. I had to learn to avoid leaving lengthy comments and numerous links, as this slowed people down. I learned to give one or two specific changes and move on.

Tuning the Scanners So People Trust Them

We didn’t replace our tools, but we fine-tuned how we used them. Static analysis now runs automatically on every code change, and it only blocks progress when it detects a small set of critical issues that we’ve already validated against our codebase. All other findings are shared as guidance, with links to short internal examples for developers to follow. For dependency risks, we generate a simple software bill of materials (SBOM), flagging known critical issues in daily reports that we share with the team via our chat channel. When a widely-used library is found to have a vulnerability, we already know where it lives and where it is used, cutting the time to identify affected services and assess the impact from days to hours.
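To make the bill-of-materials lookup concrete, here is a minimal sketch of the kind of query it enables. The service names, components and simplified CycloneDX-style shape are illustrative, not our real inventory:

```python
# Minimal sketch: given simplified SBOMs for each service, find every
# service that includes a named component. Service names, the SBOM shape
# and the components are illustrative examples only.

SBOMS = {
    "public-portal": {
        "components": [
            {"name": "log4j-core", "version": "2.14.1"},
            {"name": "spring-web", "version": "5.3.20"},
        ]
    },
    "internal-api": {
        "components": [
            {"name": "jackson-databind", "version": "2.13.2"},
        ]
    },
}

def services_using(component_name, sboms=SBOMS):
    """Return the services whose SBOM lists the given component."""
    return sorted(
        service
        for service, sbom in sboms.items()
        if any(c["name"] == component_name for c in sbom["components"])
    )

print(services_using("log4j-core"))  # ['public-portal']
```

The point is not the code but the data: because the inventory already exists, answering "where does this library live?" is a lookup, not an investigation.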

Dynamic scanning runs nightly against a stable preview environment. Whenever an issue is detected, we reproduce it once and write a 'how to verify' note. That note travels with the ticket; critically, this enables developers to confirm the fix without guesswork.

Again, parts of this process worked well from the start, while others didn’t. Positively, developers trust the alerts because false positive rates have dropped, while nightly scans keep noise out of working hours. Less positively, my first attempt gated every pull request at the static analysis stage. That increased build times, with consequent complaints, so I refocused the gate on a few patterns we all agreed were dangerous.

Creating a Release Cadence That Allows For Security Work

Ship dates create pressure. I ask the product team to reserve one day per sprint – our “buffer day” – for security tasks discovered during the sprint. Allocating this small, predictable slice of time each sprint is far more economical than finding a whole extra week later.

We also schedule a short penetration test two weeks before a significant release. Findings are allocated to named owners and a target date for remediation is set by the engineering manager (i.e., not me). My role in the overall process is to keep everyone honest about risk and support fixes.

Two things work well here: the buffer day creates breathing space, while the penetration test burst gives a real-world check without derailing the release. We learned that when we tried to skip the penetration test burst, we paid for it in production.

Staying Curious After Release

We have added a few high-value events to our logging, including repeated failed logins on admin routes, large data exports and changes to access control lists. I work with the operations team to write simple detections for these patterns; then we meet monthly to review what we’ve seen and what we’ve missed. When something goes wrong, we hold a brief but non-blaming review with the team that built the feature. We ask what made the issue easy to forget, with a view to making that harder to do next time.
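As a sketch of what one such detection might look like – with illustrative event fields and thresholds, not our production logic – consider repeated failed logins on admin routes:

```python
# Minimal sketch of one detection: alert when an IP accumulates N or more
# failed logins on admin routes within a time window. The event shape,
# threshold and window are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

def failed_admin_login_alerts(events, threshold=5, window=timedelta(minutes=10)):
    """events: dicts with 'ip', 'path', 'ok' and 'ts' (a datetime).
    Returns the sorted list of IPs that should raise an alert."""
    failures = defaultdict(list)
    for e in events:
        if e["path"].startswith("/admin") and not e["ok"]:
            failures[e["ip"]].append(e["ts"])
    alerts = set()
    for ip, times in failures.items():
        times.sort()
        # Slide over sorted timestamps: any `threshold` consecutive
        # failures inside `window` triggers an alert for that IP.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                alerts.add(ip)
                break
    return sorted(alerts)

base = datetime(2024, 1, 1, 12, 0)
sample = [
    {"ip": "10.0.0.5", "path": "/admin/login", "ok": False,
     "ts": base + timedelta(minutes=i)}
    for i in range(5)
] + [
    # Failed login on a non-admin route: out of scope for this detection.
    {"ip": "10.0.0.7", "path": "/login", "ok": False, "ts": base},
]
print(failed_admin_login_alerts(sample))  # ['10.0.0.5']
```

Keeping detections this narrow is deliberate: each one answers a question we actually ask in the monthly review, rather than trying to cover everything.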

Having the engineers who built the system help write the detections makes them better and easier to maintain. Reviews are quick and valuable because they focus on learning and improvement. However, our early attempts at dashboards tried to show everything and were unsuccessful; we replaced them with a focus on answering the questions we ask most often.

Consider Firewalls, Training and Culture

Web application firewalls help to reduce obvious attacks, but don’t excuse weak code. We treat firewalls as a guardrail, not a cure. We found that annual training slides failed to change habits and that what actually moved the needle was a weekly allotted hour during which any developer could bring a branch of code and pair with me or another security champion to discuss it. People quickly learned by fixing their own code.

The most important impact of this process was social. I stopped trying to sell cybersecurity as a separate goal, instead framing it as “a way to make Friday evenings boring again”. That line got laughs, then buy-in.

A Real-World Test of Our Effectiveness

Perhaps inevitably, a critical flaw was detected in a popular open-source component we used. We had two active releases and one hotfix in progress, so we pulled the inventory from our bill of materials to determine where the component was located and used. This enabled us to patch the highest risk service first and to add a runtime mitigation for one edge case.

We wrote a brief, plain-language note for the customer support and legal teams about exposure and action. Then, we closed the loop with a brief post on the changes we made in build and deployment so that we can be faster next time. Our entire response took only one day because we had already done the leg work.

How Things Changed

There are still issues. We just find them sooner. But release days are calm, engineers schedule walkthroughs without prompting and I’m invited to roadmap reviews because cybersecurity is no longer a surprise expense. Our incident reviews have become shorter – and kinder. Most of all, people feel proud of the system they are shipping. If these sound like positives to you, then you’re right.

Security is not a gate at the end. It is a set of small choices we make all the way from the first diagram to the last decommission notice. When those choices are easy and part of the routine, good outcomes follow.

Shilpi Mittal, CISSP, CCSP, has 13 years of experience in cloud security, application security, identity management, governance, risk and compliance. She has held technical and leadership roles, with responsibility for enterprise secrets management, API security, application protection, cloud governance and compliance. Her cybersecurity work encompasses critical infrastructure protection, DevSecOps automation, vendor risk management, supply chain resilience and cloud-native security.

ISC2 Webinar

New to the security industry? Or thinking about transitioning into an information security role? If so, this webinar is for you. Please join us for a virtual webinar, Security Industry 101: What Every Newcomer Needs to Know, on October 15 at 1:00 p.m. ET.

The session will cover what you need to know about the cybersecurity field including:

  • Size and growth of the security industry
  • Useful vocabulary terms and buzz words
  • Five types of cyberthreat actors
  • Modern cyberthreats and tactics
  • Categories of security defenses
  • Common security job roles
  • Security industry ecosystem
