
InfoSecurity Professional INSIGHTS Archive: April 2019


Look Before You Leap: What to Know Before Diving into Machine Learning

By Deborah Johnson

IDC anticipates that worldwide spending on cognitive and artificial intelligence (AI) systems will reach $57.6 billion by 2021, which means there’s a good chance your company is considering, if not already buying or building, AI and machine learning (ML) solutions. And not just to improve business processes; companies are also considering adding AI and ML solutions to security operations centers.


In a recent telephone interview, Paulo Shakarian, CEO and co-founder of CYR3CON, which uses AI to predict cyberattacks, offered some words of advice—and a few warnings—to make sure AI and ML implementations work as intended and do not lead to data leakage and other potential cybersecurity threats.

Beware of the hype
Do your homework before you spend a dime (or thousands of dimes), cautions Shakarian. “The hype is mainly coming from vendors.… The CISOs then feel pressure from the executive suite. We actually have met with quite a few [CISOs] who are under pressure from their boards to leverage AI in their work for cybersecurity.”

Shakarian warns that companies could jump into machine learning/artificial intelligence for the wrong reasons. “It is a dangerous situation where you have companies that are potentially using AI and machine learning primarily as a tool for marketing as opposed to a very specific use case. And that is dangerous to the industry because it leads to a lot of products being both created and purchased just because it has that label as opposed to having any real value.”

What to do before you buy
Shakarian recommends doing adequate due diligence before an AI/ML purchase.

Engage the board. “Board members often come across innovations—and they may be intrigued enough to bring it to the attention of the CSO,” he says. “I think board members do this not to pressure the CSO, but for the purpose of investigating if a new technology can add value.”

It’s up to the CISO, Shakarian says, to coach board members. “The CISO’s job isn’t to yield to everything that comes across the board member’s email or web browser, but rather to guide the board member as a subject matter expert on what they actually need.”

Know your business needs. Not every solution requires AI, Shakarian counsels. “What is the underlying business need that’s being addressed and is that relevant?”

There are, Shakarian agrees, specific business needs that would benefit from the newest technologies. “If you’re looking to predict something; if you’re looking to find something that is abnormal and that would normally require human interaction; if you’re looking to optimize the decision-making process in an automated way—I see those as the holy trinity of AI, probably 90 percent of what you need AI and machine learning for.”

Challenge the vendor. When listening to a pitch from a vendor, Shakarian advises information security professionals to get answers in some crucial areas.

Peer review: The first question to ask, Shakarian says, is whether the underlying technology in the product has undergone peer review. “And if the answer is ‘yes,’ it will be a resounding ‘yes.’ The vendor will have that as part of their marketing. If it’s not, you’re going to hear a lot of hemming and hawing and hesitation or maybe they’ll bring up a technical report…that was produced by marketing. That’s not real science, because that’s created and vetted by the same organization. That should be a big alarm bell if they’re vetting their own stuff.”

Relevant data: Does the data fed to algorithms make sense? This is a huge issue within all avenues of data science, including predictive analytics that relies heavily on machine learning: garbage in equals garbage out. How does this solution prevent dirty or faulty data from producing poor results? “If you work for a dairy company, you may be interested in software to predict the consumption of cheese. But would you buy a tool to make such a prediction based on the number of people who die by becoming tangled in their bedsheets? These items are actually correlated, but it doesn’t mean one necessarily has anything to do with the other,” Shakarian wrote in a blog post on this subject. “Regardless of how fancy an algorithm or piece of software is, it’s making the prediction based on some piece of data—and you should ask the vendor what that is and ask him or her why it makes sense.”
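Shakarian’s cheese example is easy to reproduce. The sketch below (the yearly figures are made up for illustration, not taken from the article) computes the Pearson correlation of two unrelated but similarly trending series, showing how strongly an algorithm can “connect” data that has no causal relationship:

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Illustrative, invented yearly figures: per-capita cheese consumption
# and an unrelated statistic that happens to trend upward over the same years.
cheese = [29.8, 30.1, 30.5, 30.6, 31.3, 31.7, 32.6, 32.7, 32.8]
unrelated = [327, 456, 509, 497, 596, 573, 661, 741, 809]

r = pearson(cheese, unrelated)
print(f"correlation: {r:.2f}")  # close to 1.0, yet causally meaningless
```

A model trained on the second series would “predict” cheese consumption impressively on historical data and fail the moment the coincidence breaks, which is why the vendor should be able to explain why its inputs make sense, not just that they correlate.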

Data security and reliability: Unless your company is large enough to afford a data scientist or data science department, you’re going to outsource to an AI/ML provider to develop algorithms and feed your data into them. That outsourcing demands added scrutiny to ensure these providers keep your data safe and available at all times. “Does this solution require me to send data to the vendor, and what are the ramifications of doing so? Does the vendor use data from a third-party provider? If so, what happens if that third party goes away—will it limit the new AI/ML based capability, or even cause it to stop working?”

“Transparent” algorithms: In order to monitor accuracy, you need transparency, Shakarian warns. If the algorithm is a “black box,” that could be a problem. Understanding why an AI or ML solution provided a certain result not only leads an organization to better trust the solution, but it also allows for troubleshooting if accuracy suffers. “If it’s a black box, you can’t tell the difference between failure and your normal error rate. Whereas, if there’s some level of transparency of how it’s producing the results, the user can check up on it.”
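The difference Shakarian describes is easiest to see in a toy model. In the sketch below (the feature names and weights are hypothetical, not from the article), a transparent linear alert score can decompose its output into per-feature contributions, so an analyst can see why an event scored high; a black box would return only the final number:

```python
# Hypothetical linear alert score. Because the model is a simple weighted
# sum, each feature's contribution to the final score can be reported,
# which lets users audit results and troubleshoot accuracy problems.
WEIGHTS = {
    "failed_logins": 0.5,      # per failed login attempt
    "new_geolocation": 2.0,    # login from a never-before-seen location
    "off_hours_access": 1.0,   # access outside business hours
}

def score_event(features):
    """Return (total score, per-feature contributions) for one event."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items() if name in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_event({"failed_logins": 6,
                          "new_geolocation": 1,
                          "off_hours_access": 0})
print(f"alert score: {total}")        # prints: alert score: 5.0
for name, c in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {c:+.1f}")      # largest contributors first
```

With this kind of breakdown, a sudden change in which features drive the score is visible and investigable; with a black box, as Shakarian notes, a failure is indistinguishable from the normal error rate.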

Updates to the machine learning model: Today’s technologies are constantly evolving. “Especially in a domain like cybersecurity where you have adaptive stress, changes in technology.… You would expect that the model is being updated on a regular basis by the vendors. If it’s not, that, I think, is a major red flag because there’s a high chance that the product might not work as advertised.”

Before succumbing to the siren song of machine learning as the business solution, Shakarian believes you should ask if such a solution is needed at all. “Does the business need/require AI or machine learning to address it in an impactful, sustainable way? It really comes down to a close technical evaluation of the business need.” If the answer is “yes,” then you have a roadmap here to follow.

Deborah Johnson is managing editor at InfoSecurity Professional magazine.
