
Experts Warn of Cyber Risk Due to Rapid AI Tool Evolution

Expect Near-Term Spike in Frequency and Severity of Small Incidents, They Say
The window to patch vulnerable systems will close due to AI-enhanced scanning. (Image: Shutterstock)

Rapid advances in artificial intelligence tools presage their increased use by cybercriminals and nation-state cyber operators, cybersecurity officials and insurance experts warn.


Over the next 12 to 24 months, "it is likely that the frequency, severity and diversity of smaller-scale cyber losses will grow," due to attackers' use of ever more effective generative AI and large language models, "followed by a plateauing as security and defensive technologies catch up to counterbalance," says a recent report from Lloyd's of London.

The insurance and reinsurance marketplace expects gen AI and LLMs to alter the "cyber risk landscape" both for attackers and defenders, and says business resilience practices must adapt.

"Cyber resilience challenges will become more acute as the technology develops," says a January report released by Britain's National Cyber Security Center, which is part of intelligence agency GCHQ. The NCSC says that attackers of every stripe - from less-skilled cybercriminals all the way up to sophisticated nation-state groups - are "to varying degrees" already using AI.

Criminal and nation-state interest in such tools is high, based on underground chatter and attempts to use and refine the tools for malicious purposes, experts say.

"We are seeing this with phishing emails - the language and grammar used is better; the email is better constructed; and it is less obvious that the sender is a non-native speaker," Dan Caplin, head of British consultancy S-RM's European incident response practice, said in a blog post for law firm Pinsent Masons. "This increases the likelihood that recipients will take an action they want them to take, like disclosing data or clicking on a link to malware."

Coming Risks

The NCSC predicts that generative AI and LLM improvements over the next nine months "will make it difficult for everyone, regardless of their level of cybersecurity understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts."

The NCSC expects greater AI automation to speed attackers' ability to find unpatched software and exploit it during the window between the patch's release and its installation. It also expects to see improvements in reconnaissance and exfiltration as attackers use AI "to identify high-value assets for examination and exfiltration, enhancing the value and impact of cyberattacks," including ransomware.
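Defenders can shrink that patch window from their own side by watching vulnerability feeds programmatically. Below is a minimal defensive sketch in Python that polls NIST's public National Vulnerability Database for newly published critical-severity CVEs matching a product keyword, so patching can be prioritized before attackers weaponize a disclosure. It assumes the NVD 2.0 REST API's documented endpoint and parameter names, and the "apache" keyword is purely illustrative.

```python
# Minimal sketch: poll NIST's NVD 2.0 REST API for recent critical CVEs
# so patching can be prioritized while the exploit window is still open.
from datetime import datetime, timedelta, timezone

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def recent_critical_cves(keyword: str, days: int = 7) -> list[dict]:
    """Return critical-severity CVEs published in the last `days` days."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    params = {
        "keywordSearch": keyword,
        "cvssV3Severity": "CRITICAL",
        # NVD expects ISO-8601 timestamps with millisecond precision.
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])


if __name__ == "__main__":
    # Illustrative keyword only; substitute products from your own inventory.
    for item in recent_critical_cves("apache"):
        cve = item["cve"]
        print(cve["id"], "-", cve["descriptions"][0]["value"][:100])
```

Feeding such a query into a ticketing or configuration-management workflow is one way defenders can keep pace as attackers automate the other side of the same race.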

The report from Lloyd's details further upcoming potential cyber risks:

  • Automated vulnerability discovery: Given the potential payoff from a single zero-day vulnerability or widespread but poorly patched bug, "threat actor tooling is likely to outpace defensive tools created by the security industry due to asymmetric incentives," not just for software but also firmware, device drivers and other "domains which are very challenging for humans."
  • Hostile cyber operations: If AI improves nation-state groups' capabilities, expect to see more effective espionage and sabotage campaigns.
  • Lower barriers to entry: Better tools may have a snowball effect, driving more skilled attackers to use them as well as "lowering the barrier to entry" for less sophisticated criminals.
  • Phishing campaign optimization: More automated target discovery could lead to a greater number of more sophisticated campaigns being conducted by attackers, allowing them to better refine playbooks and hit preferred targets at lower cost and with greater frequency.
  • Single points of failure: Interruption or compromise of services based on - or closely integrated with - LLMs could have unforeseen consequences, such as "large blackout events, cyber-physical damage, data breaches or market failures."
  • Shifting risk-reward: Novel capabilities could drive attackers to be bolder, especially if AI tools can better hide digital forensic evidence, such as ties to nation-state group activity.

While more sophisticated and well-funded nation-state hacking groups will initially have the edge, the NCSC expects the cybercrime ecosystem to quickly catch up, which "will almost certainly increase the volume and impact of cyberattacks - including ransomware."

More sophisticated criminals will likely pass these capabilities on to others, for a price. "Commoditization of cybercrime capability, for example 'as-a-service' business models, makes it almost certain that capable groups will monetize AI-enabled cyber tools, making improved capability available to anyone willing to pay," the NCSC said.

Many AI-powered tools are also inherently designed so that less advanced users can employ them. "AI services lower barriers to entry, increasing the number of cybercriminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods," James Babbage, director general for threats at Britain's National Crime Agency, recently warned.

"Fraud and child sexual abuse are also particularly likely to be affected," he said.

In the long term, as "AI lowers the barrier for novice cybercriminals," the NCSC said it expects to see "novice cybercriminals, hackers-for-hire and hacktivists" conducting more "effective access and information-gathering operations."

Disappearing Barriers

So far, malicious use of AI has faced numerous hurdles, including the "effectiveness of AI model governance, cost and hardware barriers, and content safeguards," the report from Lloyd's says.

Microsoft said that based on its work with OpenAI, nation-state interest remains high, although "our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely."

At the same time, the technology giant last month reported seeing extensive experimentation by advanced persistent threat groups tied to China, Russia, North Korea and Iran, and said that accounts tied to such activity have been blocked (see: OpenAI and Microsoft Terminate State-Backed Hacker Accounts).

The groups appeared to be testing the use of LLMs for a variety of purposes, such as reconnaissance, including by a Russian group to support the country's invasion of Ukraine. Some also appeared to be seeking help with developing malicious code - Microsoft said built-in ethical safeguards blocked such requests - as well as help with translations, likely in support of social engineering attacks.

Ongoing refinements mean existing barriers to illicit use might soon fall away, especially as attackers gain access to alternatives to cloud-based chatbots such as OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini and PaLM, including open-source models such as Meta's LLaMA.

"The release of unrestricted frontier models, plus recent algorithmic efficiency discoveries, represent a pivotal breakdown in AI governance," Lloyd's said. "There are now many publicly available models which can create explicitly harmful content, and they can now be run on commodity hardware cheaply."

In short, more sophisticated gen AI tools and LLMs are increasingly available to all, regardless of their intent.


About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and oversees European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.



