By: Robert Fritz, CISSP, CSSLP
Companies, consumers, and citizens depend on a shared Internet and the services that connect to it. This “cyber domain” interconnects cloud services and vendors that store information about our finances, politics, activities, and locations. Supporting a safe cyber domain in which everyone can live and work allows freedom from worry about how our information could be shared or (mis)used.
Repeated failures by stewards of private information like Twitter/X, Uber, or Equifax increase the general risk of identity theft. Failures of infrastructure companies make our world less safe. In one recent example of the latter, Colonial Pipeline's failures caused gas shortages in part of the U.S. In another, an unauthorized release of lye into a water treatment plant north of Tampa, Florida had the potential to compromise water safety.
The outlook remains challenging. Without systemic intervention, well-respected organizations that assess cyber-domain risk see bad actors as a continued source of pain, and governments are responding with corporate accountability:
- “According to the World Economic Forum Global Risks Report 2020, over the next 10 years, cyber-attacks will be the second greatest risk businesses will face.”
- “Cyber threat actors use increasingly sophisticated capabilities to undermine ... our economy and democracy, steal intellectual property, and sow discord.”
  - CISA Strategic Plan, September 2022
- The U.S. announced a new system of incentives and accountability for companies that create the nation’s critical infrastructure.
  - US Cyber Implementation Plan, July 2023
The officials we elect, and the organizations that employ us and drive our shared economy, have a duty to secure the cyber domain that we all share. How can we make this actionable?
Fortunately, there are well-established principles and tools that organizations can use to significantly lower risks to their stakeholders. Vendor-agnostic groups like the National Institute of Standards and Technology (NIST), the Cybersecurity and Infrastructure Security Agency (CISA), and others have created standards that can measure an organization’s or technology’s preparedness for attack. They have also created forums for knowledge sharing and even direct support.
Unfortunately, such technical tools only work when they are used, and using them takes time, money, and staff, which can be an especially hard sell to participants outside IT. Low prioritization of cybersecurity engagement with the broader business can burn out existing security staff. A 2020 Korn Ferry survey of departing CISOs, cited by Forbes, found that while CEOs and CIOs average tenures of 8 and 5 years respectively, the average CISO tenure was about 2 years. The survey cited the following drivers for CISO departures:
- Corporate culture: One-third ... stated that as CISOs, they would change their job when they feel their employer doesn't have a culture that emphasizes cybersecurity.
- No visibility: One-third ... stated they would [leave] if they were not actively engaged with the executive leadership team.
- No resources: One-third ... stated they would [leave] if they felt the budgets were not realistic to the risk associated with the company's size or industry.
As the survey implies, and my own experience bears out, these conditions prevail for understandable reasons: while financial risk is a core competency of most boards, cybersecurity is a much more recent accountability. Often, cybersecurity is left to “IT” to figure out as a technical issue, even though many cybersecurity issues depend on work throughout the company. In addition, IT is often positioned within the company as a cost center, so cyber investment and risk rarely line up organically in the way financial investment and risk are understood and managed.
Past Attempts at Security Accountability
In practice, current regulatory forces have not yet effected sufficient change. Insurance companies, regulators, and even boards still often ask overly specific questions, or vague, easy questions about general security. These kinds of questions encourage narrow or vague answers, and frequently miss opportunities to drive the kinds of frank and uncomfortable conversations that effect change. Structured and probing discussions are best driven top-down for a few reasons. First, there is a high bar for sending bad news up-chain; second, accurately communicating security risk in business and financial terms requires a reporting program. Such a mandate requires board initiative to ensure sufficient transparency.
Absent the impetus of accurate and transparent reporting coupled with accountability, security priorities outside the security team are often driven by personalities or vendor relationships.
More Detailed Technical Standards: Hard to Align to Business Risk
How can we collectively improve accountability? It’s likely not by creating a law or regulation mandating one of the 1,000+ page federal technical standards directly. Such an approach can become a cumbersome and expensive exercise, resulting in little change. Even the U.S. government, which already operates under such requirements, can’t keep up. In a 2021 Senate report, “Federal Cyber Security: America’s Data Still at Risk” (https://www.hsgac.senate.gov/imo/media/doc/Federal%20Cybersecurity%20-%20America's%20Data%20Still%20at%20Risk%20(FINAL).pdf), the Committee on Homeland Security noted that since a similarly scathing report three years prior, there had been very little progress on security. Furthermore, when requirements are highly technical, they are naturally delegated to technical staff.
Even when executive accountability is required, as in the NERC CIP and PCI standards, the expense of compliance tends to ensure that regulators only require compliance in very narrow parts of the enterprise. As businesses rely more and more on the integration of business and operational systems, such narrow focus is less viable. As we saw in the case of Colonial Pipeline, it wasn’t the regulated Industrial Control Systems (ICS) but the less-regulated business systems that caused the company to shut down its control systems. In a similar recent attack on Raytheon development systems, the breach was not in the more heavily regulated operational systems, yet it still had an impact: Raytheon felt the need to ask its customers to disable Raytheon remote services, which disrupted routine operations in hundreds of critical infrastructure companies.
Cyber-Business-Risk Focus: Empower Companies to Define Risks and Engage Boards
There is another approach to regulation, one that arose after the Enron incident and the dot-com collapse. In one example, Sarbanes-Oxley imposed principles of market transparency on behalf of investors. In another, the 2003 Global Settlement created a conflict-of-interest separation “wall” for investment analysts. These approaches focused on the board, the part of the organization best positioned to drive ethical behavior and allocate resources.
Boards are institutions designed to align the behavior of the company with the interests of stakeholders. The fiduciary duty to oversee management of the company on behalf of investors is historically the core function of a board. Boards already have Risk (often only financial) and Audit Committees dedicated to meaningful understanding and governance of risk.
Boards need not wait for regulation. I’ve seen where accountability has worked and where it has not. In one experience, as the cybersecurity lead in a holding company, I saw a board seek and achieve meaningful change by continuing to demand more and more detail on the security posture of its operating companies. This mandate allowed us to hire and empower a neutral party (with no stake in the outcome) to build holistic security metrics. As you’d expect, there was a lot of work needed. Our board rightly demanded a plan and held us to it. In contrast, at another company in a similar position, I saw a similar set of metrics built by in-house staff and “pushed up” the chain. The attempt met executive resistance to sharing unflattering news that the board hadn’t asked for, and so never reached the board.
I hear this story regularly from peers: if a board demands accountability, transparency, and concrete planning, it gets them. Alternatively, pushing risk-management policy and costs up-chain, especially in cases where the true costs may fall outside the company, is a hard sell.
Ethics-based board accountability is a natural extension of the approaches that have served the public in managing investor and financial risk in the past. As security practitioners, citizens, consumers, and board members, we have a joint stake not just in the global financial markets, but in the global cyber domain as well. The good news is that there are institutions already advancing the organic development of voluntary standards:
- Groups like ISACA and ISC2 already certify security-risk professionals and hold them to an ethics standard.
- Groups like the ICD in Canada already recommend an individual-board-member code of conduct that includes principles of individual and collective responsibility, and a requirement for first-hand knowledge.
Organizations like CISA have a recently expanded mandate and implementation plans to coordinate security work for the federal government and to support private and public companies. Their offer of greater public/private partnership should be encouraged by cyber professionals and organizations.
Boards, board education programs, and organizations like the ICD include cybersecurity awareness in their Environmental, Social, and Governance (ESG) initiatives. We should encourage current and aspiring senior cyber leaders to join those groups, and our professional associations to collaborate with them.
If we continue to nudge these organizations to work together, in directions that are already trending, we can continue to foster board empowerment in an emerging area of business risk where boards are often the last to be engaged. Many board members want to get engaged. They need to be shown how, by their peers, partners, and stakeholders.