Author: Phil Muncaster
Date published: January 13, 2025
We are witnessing an AI revolution in cybersecurity. On the one hand, artificial intelligence (AI) is introducing dangerous new cyber risks and upskilling threat actors. On the other hand, it offers network defenders new ways to detect threats, accelerate incident response, and enhance cyber resilience. The key to harnessing AI's potential without succumbing to its threats is to understand where the risk lies and how it can best be mitigated.
AI cybersecurity risks are growing. The United Kingdom National Cyber Security Centre (NCSC), which works closely with U.S. security agencies, warned in January 2024 that AI will "almost certainly increase the volume and heighten the impact of cyberattacks over the next two years." The NCSC highlights several ways this could happen.
The ability of AI to analyze, process and summarize large amounts of data will enable it to identify high-value devices, systems and assets that may be vulnerable to attack. This information can help threat actors to scale attacks with little effort, improving their return on investment (ROI).
Generative AI has the ability to interact in natural language and generate flawless, grammatically correct text in a variety of languages—a capability that could help threat actors improve the effectiveness and scale of their phishing campaigns.
As the number of successful data exfiltration cyberattacks increases, the data feeding malicious AI will improve, leading to "faster, more precise cyber operations," according to the NCSC.
As is often the case with advances in technology, the latest AI systems are used for both nefarious purposes and positive ones. We can divide the risks into three main categories:
Legitimate systems can be subverted or "jailbroken" in order to fit the needs of an attacker. Opportunistic hackers are selling jailbreak-as-a-service kits on the dark web. These kits use a prompt injection attack to trick legitimate large language models (LLMs) like ChatGPT into behaving in an unintended manner. This could be used to reveal confidential information but is more often employed to manipulate the LLM into answering questions that violate its own policies—such as how to construct a phishing campaign. Malicious users are given an anonymized connection to a legitimate LLM app as part of the jailbreak service package.
Another way to subvert AI is through a more complex technique known as data poisoning. In this approach threat actors tamper with the data used to train the underlying model in order to produce outcomes aligned with their goals.
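To make the concept concrete, here is a minimal, hedged illustration (not a real attack) of how data poisoning degrades a model: it trains a simple classifier on synthetic "email feature" data twice, once on clean labels and once after a fraction of the training labels has been flipped. The dataset, feature count and flip rate are all assumptions chosen purely for demonstration.

```python
# Toy illustration only: flipping a fraction of training labels to show how
# poisoned data degrades a simple classifier. Assumes NumPy and scikit-learn
# are installed; the dataset is synthetic, not real email telemetry.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: 2,000 samples, 20 numeric features; class 1 skews higher.
y = rng.integers(0, 2, size=2000)
X = rng.normal(loc=y[:, None] * 0.8, scale=1.0, size=(2000, 20))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# "Poison" the training set by flipping 30% of the labels.
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

print(f"Clean training data:    {train_and_score(y_train):.2%} accuracy")
print(f"Poisoned training data: {train_and_score(poisoned):.2%} accuracy")
```

The clean run should score noticeably higher than the poisoned one, which is exactly the effect an attacker is after when tampering with training data.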
Although jailbreaking legitimate AI systems like ChatGPT is an attractive prospect, cybercriminals are concerned that their activities may be flagged. That's why some turn to malicious large language models that are deliberately created to help threat actors launch attacks. Malicious LLMs such as WormGPT and FraudGPT have no security guardrails and protect user anonymity. Available through hacking forums for a subscription fee, they help users create malicious code, find leaks and vulnerabilities, and build phishing pages and campaigns.
There are also malicious LLMs on the dark web that help create deepfake content such as audio, still images or video designed to impersonate a real individual. They can be used to bypass Know-Your-Customer (KYC) checks, facilitate scam campaigns and even carry out sophisticated virtual kidnapping schemes.
Sometimes AI cybersecurity risks don't involve malicious actors at all. In several instances, corporate users accidentally shared sensitive information—such as proprietary code, corporate strategy or meeting notes—with commercial LLMs. This kind of leak could invite regulatory scrutiny under the European Union’s General Data Protection Regulation (GDPR) or other privacy regulations in the United States and around the world, especially if customer or employee information is compromised.
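One practical mitigation is to scrub prompts before they leave the organization. The sketch below, a minimal example rather than a complete data loss prevention (DLP) control, redacts a few obviously sensitive patterns before a prompt is sent to any external LLM. The regex patterns and the redact_prompt() helper are illustrative assumptions.

```python
# Minimal sketch of a pre-submission filter that redacts obviously sensitive
# strings before a prompt is sent to an external LLM. Patterns and the
# redact_prompt() helper are illustrative; production DLP is far broader.
import re

REDACTION_PATTERNS = {
    "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize: contact jane.doe@example.com, key sk-abc123def456ghi789"
    print(redact_prompt(raw))  # sensitive values removed before any API call
```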
Generative AI is also known to present incorrect statements as facts, a phenomenon known as hallucinations, which could introduce enterprise risk if the appropriate guardrails and policies aren't put in place.
Fortunately, positive use cases for AI in cybersecurity continue to emerge. In fact, network defenders have used the technology for years in various forms such as legacy intrusion detection and spam filters. Here are some more recent and emerging use cases for AI in cybersecurity defense.
AI can establish a baseline of normal behavior by finding patterns in extremely large datasets, enabling it to flag suspicious activity and prioritize alerts across endpoint, email, network and other IT systems. Depending on the incident, it can then trigger automated response playbooks to isolate and contain threats, then remediate and recover.
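A minimal sketch of this idea, assuming scikit-learn is available: an isolation forest is fitted on synthetic "normal" session telemetry and then scores new sessions, flagging the ones that deviate from the baseline. The feature choices (login hour, data volume, failed logins) are placeholder assumptions, not a real telemetry schema.

```python
# Minimal sketch of behavior-based anomaly detection with an isolation forest.
# Real deployments train on far richer telemetry; these features are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline of "normal" activity: business-hours logins, modest data transfer.
normal = np.column_stack([
    rng.normal(13, 2, 1000),     # login hour of day
    rng.normal(50, 15, 1000),    # MB sent per session
    rng.poisson(0.2, 1000),      # failed logins per session
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new sessions: a 3 a.m. login pushing 900 MB after repeated failures.
new_sessions = np.array([
    [14.0, 55.0, 0],    # typical
    [3.0, 900.0, 6],    # suspicious
])
print(model.predict(new_sessions))  # 1 = normal, -1 = flagged for review
```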
Machine learning algorithms can be trained to spot deviations from a user's normal writing style—a useful capability for flagging phishing or business email compromise (BEC). These techniques can be combined with analysis of other tell-tale indicators of malicious activity—such as risky sender domains—to risk score or block potentially malicious emails.
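Here is a hedged sketch of how such signals might be combined into a single risk score. The weights, thresholds, indicator lists and the EmailSignals structure are all illustrative assumptions, not a production BEC-detection model.

```python
# Minimal sketch of combining signals into an email risk score.
# Weights, thresholds and indicator lists are illustrative assumptions.
from dataclasses import dataclass

RISKY_TLDS = {".top", ".xyz", ".click"}
URGENCY_WORDS = {"urgent", "wire transfer", "immediately", "gift cards"}

@dataclass
class EmailSignals:
    style_deviation: float   # 0..1, e.g. from a model of the sender's usual prose
    sender_domain: str
    body: str

def risk_score(sig: EmailSignals) -> float:
    score = 0.6 * sig.style_deviation
    if any(sig.sender_domain.endswith(tld) for tld in RISKY_TLDS):
        score += 0.25
    if any(word in sig.body.lower() for word in URGENCY_WORDS):
        score += 0.15
    return min(score, 1.0)

email = EmailSignals(0.8, "payments-helpdesk.xyz", "Urgent: wire transfer today")
print(f"risk = {risk_score(email):.2f}")  # quarantine or block above a threshold
```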
User behavior analytics (UBA) tools use machine learning to identify suspicious patterns similar to the examples above, which could help organizations catch malicious employees or threat actors inside the network.
Generative AI trained on specific datasets could help to lighten the load for IT administrators by triaging security updates for patching and identifying bugs in in-house code. It could also help to identify common cloud and other misconfigurations and create synthetic data sets to test application security posture.
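As a simple illustration of the triage step, the sketch below ranks a patch backlog by CVSS score with a boost for internet-facing assets. The field names, weights and CVE identifiers are placeholders; in practice a generative model could layer plain-language remediation notes on top of a ranking like this.

```python
# Minimal sketch of patch triage: rank outstanding updates by CVSS score plus
# exposure. Field names, weights and CVE IDs are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class PendingPatch:
    cve_id: str
    cvss: float             # 0.0 - 10.0
    internet_facing: bool
    asset: str

def priority(patch: PendingPatch) -> float:
    # Exposed assets get a flat boost so critical internal bugs still surface.
    return patch.cvss + (2.0 if patch.internet_facing else 0.0)

backlog = [
    PendingPatch("CVE-2024-0001", 9.8, False, "build-server-07"),
    PendingPatch("CVE-2024-0002", 7.5, True,  "vpn-gateway-01"),
    PendingPatch("CVE-2024-0003", 5.3, False, "intranet-wiki"),
]

for p in sorted(backlog, key=priority, reverse=True):
    print(f"{p.cve_id}  priority={priority(p):.1f}  asset={p.asset}")
```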
Multiple vendors are producing generative AI tools to help Security Operations (SecOps) teams work more productively. They can close skills gaps by explaining and contextualizing alerts, suggesting recommended actions, translating complex scripts into plain English and helping analysts develop advanced threat hunting queries.
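A minimal sketch of an "explain this alert" helper appears below, assuming the OpenAI Python SDK and an API key in the environment. The model name, system prompt and sample alert are placeholders; any commercial or self-hosted LLM with a chat-style API could be substituted, and this is not a description of any specific vendor's product.

```python
# Minimal sketch of an "explain this alert" helper, assuming the OpenAI
# Python SDK (`pip install openai`) and OPENAI_API_KEY set in the environment.
# Model name and prompts are placeholders; swap in any chat-capable LLM.
from openai import OpenAI

client = OpenAI()

def explain_alert(raw_alert: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Explain alerts in plain "
                        "English and suggest next investigative steps."},
            {"role": "user", "content": raw_alert},
        ],
    )
    return response.choices[0].message.content

alert = "EventID=4625 repeated 50x from 203.0.113.7 against svc-backup in 2 minutes"
print(explain_alert(alert))
```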
AI in cybersecurity can support Zero Trust initiatives by optimizing threat detection and performing continuous monitoring and validation for automated identity and access management (IAM). It's already used in Verizon Advanced Security Operations Center (SOC) services to monitor and alert organizations to potential threats.
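To show what continuous, risk-based validation can look like in practice, here is a hedged sketch of an access decision that weighs device posture, location anomaly and a behavior score. The signals, weights and thresholds are illustrative assumptions and do not describe Verizon's or any other vendor's implementation.

```python
# Minimal sketch of continuous, risk-based access evaluation in the spirit of
# Zero Trust. Signals, weights and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SessionContext:
    device_compliant: bool     # endpoint posture check passed
    geo_anomaly: float         # 0..1, distance from the user's usual locations
    behavior_score: float      # 0..1, output of a UBA model (1 = highly unusual)

def access_decision(ctx: SessionContext) -> str:
    risk = 0.4 * ctx.geo_anomaly + 0.4 * ctx.behavior_score
    if not ctx.device_compliant:
        risk += 0.3
    if risk < 0.3:
        return "allow"
    if risk < 0.6:
        return "step-up-mfa"    # re-verify identity before continuing
    return "deny"

print(access_decision(SessionContext(True, 0.1, 0.05)))   # allow
print(access_decision(SessionContext(False, 0.7, 0.6)))   # deny
```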
Above all, remember that AI is a fast-evolving field and responsible governance is essential. Machines should never be left to make critical decisions on their own. Human oversight is often required, and always advisable, to check the output of vulnerability scans, threat alerts, coding efforts and more.
To better understand how AI is currently used in your organization and where policies should focus, consider the following questions:
Network defenders can use AI tools to defeat malicious AI—or AI subverted for malicious ends—through threat and phishing detection, IAM, vulnerability management and more. Further measures can be taken to avoid AI cybersecurity threats and risks, including:
Learn more about how Verizon Network Security Solutions can help your organization to manage cyber risk more effectively across the AI attack surface.
The author of this content is a paid contributor for Verizon.