AI is predicted to change cybersecurity forever. In this complex new era, defenders will not only have to protect networks and devices against attack but also ensure the deeper integrity of the AI systems themselves. John Dunne takes a closer look.

How might cybercriminals of the near future deploy AI to attack networks, cloud systems, and perhaps AI applications themselves?

If you’re a CISO, answering this question probably sits low on your list of priorities right now. Today’s non-AI cloud cyberattacks are hard enough to defend against, so the possibility that criminals might start using machine intelligence to aid their campaigns can seem a distant concern.

And yet the market for cyber-AI must be growing for a reason. According to a 2022 estimate by Acumen Research, the market for defensive AI systems is currently worth around $15 billion per annum. That’s small fry, but by 2030 it is projected to reach $133 billion, growth that points to a coming surge in demand.

The task, then, is to understand what this means for each organization. This is not easy. AI and its terminology are unfamiliar, its effectiveness unproven. Defenders will use it but so will attackers. Meanwhile, AI is becoming a part of mainstream business processing. In time, these systems will become targets too.

What is AI?

Broadly speaking, AI is a catch-all term for two approaches:

Machine learning is based on supervised and unsupervised techniques in which machines spot patterns in data and make predictions from algorithms, without each rule needing to be explicitly programmed by humans. They ‘learn’, adjusting these predictions as they are fed new data.

Deep learning is a type of advanced machine learning based around artificial neural networks. It is structured so that the output of one stage – the ‘learning’ – serves as the input to another, deeper layer of analysis, in which the machine learns without being explicitly instructed by humans.

Both ML and DL are often built on publicly available algorithmic models, which underpin their inferences and decision-making.
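
To make the layered structure described above concrete, here is a minimal sketch in plain Python/NumPy of a tiny two-layer network. The input features, weights and the ‘probability of malicious’ framing are illustrative assumptions only, not part of any real product or trained model.

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0, x)

# Hypothetical input: numeric features describing, say, a network flow
x = np.array([0.2, 0.7, 0.1])

# Layer 1: transforms the raw input into an intermediate representation
W1, b1 = np.random.randn(4, 3), np.zeros(4)
hidden = relu(W1 @ x + b1)            # the output of this stage...

# Layer 2: ...becomes the input to a deeper layer that produces a prediction
W2, b2 = np.random.randn(1, 4), np.zeros(1)
score = 1 / (1 + np.exp(-(W2 @ hidden + b2)))

print(score)  # untrained and illustrative only: a notional 'probability of malicious'
```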

AI cyber-defense

The primary uses of AI right now are automation, basic decision-making, and threat/anomaly detection. The theory holds up quite well. A major limitation of traditional cybersecurity systems is that they must be programmed to recognize known patterns and behaviors, which makes them labor-intensive and retrospective. False positives make automation harder. AI should do better because it can spot and infer more complex patterns in new data without needing these to be defined.
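
As a concrete illustration of that difference, here is a minimal, hypothetical sketch of anomaly-based detection using scikit-learn’s IsolationForest. The connection features (bytes sent, duration, distinct ports) and the contamination setting are assumptions made for the example; no signature for ‘bad’ traffic is written anywhere.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Train only on examples of ordinary traffic (hypothetical features:
# bytes sent, connection duration in seconds, distinct ports touched)
normal = np.random.normal(loc=[500, 2.0, 3], scale=[100, 0.5, 1], size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New, unseen traffic is scored without anyone defining what an attack looks like
new_traffic = np.array([[520, 2.1, 3],        # resembles normal behaviour
                        [50000, 30.0, 200]])  # wildly different behaviour
print(model.predict(new_traffic))  # +1 = looks normal, -1 = flagged as anomalous
```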

However, the technology is complex and demands skilled staff who are in perennially short supply. Setup can take time as the models are refined. Despite big claims, the effectiveness of AI threat detection and response remains unproven.

Adversarial AI

If defenders can use AI, so can attackers, who can exploit the same open-source models. A lot of white papers talk about AI-based cyberattacks as if they’re already a fact, even though hard evidence for this is patchy. Machine intelligence doesn’t announce itself in an attack – it looks like any other technique. What we have instead is a lot of proof-of-concept research and ‘this is what I would do if I were an attacker’ inference. Probable current uses include:

Ransomware automation – using machine learning to automate reconnaissance of targets and vulnerability discovery at scale.

Phishing attacks – supercharging phishing attacks by personalizing credential phishing, including convincingly impersonating colleagues or family members using publicly available data. Eventually, this might extend to video deepfakes and voice phishing.

AI stealth – cloaking C&C traffic by mimicking legitimate traffic patterns for a given network, thus evading anomaly, behavioral and even deep packet inspection defenses.

Defeating anti-botnet CAPTCHAs – this has already been demonstrated, for example by GSA Captcha Breaker.

Plausibly, AI attacks could churn through the first six or seven stages of the MITRE ATT&CK kill chain an order of magnitude more effectively than traditional hacking tools.

AI is also a target

The ability to target AI itself is one reason why this era will be unlike the cybersecurity everyone is used to. AI promises to do a lot of things well but that means that anyone able to target its vulnerabilities could gain a strategic advantage. For example:

Input and poisoning attacks – messing with the data being fed into AI models without anyone detecting that this has happened, so that the system either malfunctions or makes an incorrect inference advantageous to the attacker (a toy sketch of this idea follows below). Even if defenders detect the tampering, the fallback of manual processing might be impractical.

Edge AI – another trend overlapping the cloud and AI is the rise of edge computing. This includes a wide range of sensors and low-level devices that will use AI to become more self-managing. At the same time, locating these devices away from centralized oversight increases their vulnerability to sophisticated attacks.
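
To see why the first example above matters, here is a toy sketch of a label-flipping poisoning attack on synthetic data, using scikit-learn. The ‘malicious vs benign’ labels, the 40% flip rate and the logistic-regression detector are all assumptions chosen purely to illustrate the effect.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # synthetic labels: 1 = malicious, 0 = benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression().fit(X_tr, y_tr)

# Poisoning: the attacker silently relabels 40% of the malicious training samples as benign
y_poisoned = y_tr.copy()
malicious_idx = np.flatnonzero(y_tr == 1)
flip = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)), replace=False)
y_poisoned[flip] = 0
poisoned_model = LogisticRegression().fit(X_tr, y_poisoned)

# The poisoned detector typically misses noticeably more of the real attacks
print("clean recall:   ", recall_score(y_te, clean_model.predict(X_te)))
print("poisoned recall:", recall_score(y_te, poisoned_model.predict(X_te)))
```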

What Gartner thinks

Gartner has developed an AI risk framework called Artificial Intelligence Trust, Risk and Security Management (AI TRiSM). The company predicts organizations will be compelled to integrate compliance and best practices into their operations because “by 2028, AI-driven machines will account for 20% of the global workforce and 40% of all economic productivity.” Gartner’s research also suggests, acidly, that “organizations have also deployed hundreds or thousands of AI models that IT leaders can’t explain or interpret.”

Cloud implications

Without the cloud, AI would be stuck in datacenters where it would not scale. Combining the two allows organizations of every size to access AI cybersecurity technology through software-as-a-service, but the same goes for attackers. Cloud AI makes possible powerful AI data models, cloud self-management and automation, while creating systemic risk both to the AI itself and to the data it is fed. Adversarial AI will hijack and exploit cloud platforms in the same way as traditional cyberattacks, but at greater speed and scale.

Conclusion

AI in 2023 will be more like a slow-motion revolution. Attackers such as nation states will use it sporadically for specific tasks, as will defenders. However, the technology’s capabilities and limitations are still not well understood. Ironing this out will require experimentation as the technology matures. The biggest hindrance of all will be the acute shortage of skills in this field, something that won’t be solved quickly or cheaply for either attackers or defenders.

Further reading:

The Security Threat of AI-enabled Cyberattacks – a recent Finnish Transport and Communications Agency (Traficom) report on how AI technologies might be used to optimize attack kill chains. The report also offers useful near- and long-term predictions about where AI might go.