Get ready for 2026, the year of AI-aided ransomware

Cybercriminals, including ransomware crews, will lean more heavily on agentic AI next year as attackers automate more of their operations, Trend Micro’s researchers believe.
The prediction comes hot on the heels of Anthropic publishing a report – disputed by some – claiming it saw the first example of agentic AI being used to orchestrate a cyberattack by a Chinese state-sponsored team.
Ryan Flores, Trend’s lead for data and technology research, said the rise of agentic AI technology is “something that we are very wary about,” and that state-sponsored groups will lead the way in innovating on it before cybercriminals start using it for themselves.
According to the security shop’s intelligence, there are no signs that cybercriminals are using agentic AI in attacks right now, since it’s often the state-sponsored groups that lead early adoption of technologies, but once the technology proves to be successful and scalable, that’s when cybercriminals will start adopting it.
He added that agentic AI is a technology that will certainly appeal to cybercriminals’ lazy approach to attacks: Getting the maximum reward for expending as little effort as possible.
For the uninitiated, agentic AI is the next big evolution from generative AI, a technology with which most people, whether they like it or not, have become familiar.
The technology deploys AI agents, which give AI-powered systems the freedom and autonomy to perform actions on behalf of an organization.
It differs from generative AI in that the technology can take action without human input. In an ideal world, this would see a properly permissioned, secure agentic AI system deployed to handle tasks such as employee onboarding.
One example Flores offered was in an HR scenario, where instead of manually creating an email address, Active Directory account, and all the various platform setup processes involved in introducing a new employee to a company, an agentic AI system could fully automate the entire procedure.
The major potential pitfall of agentic systems is the fact that they can take action without human input – a boon to cybercriminals looking to launch attacks.
Flores said: “For example, if you’re a cybercriminal and then you design a similar system and then just say, you know what, I’m interested in this company or website located in this country with this particular IP address, scan it for vulnerabilities, and if you find a vulnerability, exploit it, gain access, and then create a remote shell that would allow me to access that compromised system and that would allow me to see what’s in there.
“That’s a cybercriminal or threat actor use of an agentic AI, and this is something that we are very wary about because the technology is there. The LLMs can actually reason to solve this ‘problem’ for the cybercriminal and design the steps that are needed to make [this happen].
“The tools that are needed for this are actually also available. The tools to identify companies’ websites and domains, and IP addresses, are there. The tools to scan for vulnerabilities are there. The tools to exploit these vulnerabilities are there, and the tools to install a backdoor or remote access are already there. So, the capabilities are there, it’s just a matter of threat actors adopting them.”
David Sancho, senior threat researcher at Trend Micro Europe, said this won’t be a change we see overnight, and the full automation of the entire attack chain won’t be something we see in the short term.
It will start with just one or a few elements of attacks being powered by agentic AI, and increase steadily until the cybercriminal model is overhauled, he said.
Regardless, as Trend Micro noted in its AI-ification of Cyberthreats report today, the shift toward agentic automation of cyberattacks will mark a “major leap” for the cybercrime ecosystem.
The report stated: “The continued rise of AI-powered ransomware-as-a-service (RaaS) will allow even inexperienced operators to conduct complex attacks with minimal skill, reducing reliance on traditional RaaS affiliates and making independent ransomware operations increasingly more common.
“We predict that this democratization of offensive capability will greatly expand the threat landscape.”
Sancho went on to predict that it may start with more sophisticated cybercriminals offering these agentic services to others, creating a new underground market for these capabilities, which will help to drive agentic AI-driven attacks into the mainstream.
As ever in the attacker-defender dynamic, the initial advantage will always go to the cybercriminal, Flores said, and defenders will be tasked with keeping up with the crooks’ tradecraft.
The same principles of the pre-agentic times still apply: Vendors are economically incentivized to adopt the latest technology and protect their customers, and the assume-breach mentality must still persist throughout organizations.
And as AI agents become more prevalent, they should be treated in the same way that any other user who can take action on a system or network should, by assigning them the least amount of privileges as necessary and applying access management controls in the same way they’re used currently.
Just as human users can have their accounts compromised, the key consideration for defenders will be to protect the agents from being taken over by attackers, for example, to issue payments from organization funds, create accounts, send emails, and so on.
As Trend noted in its report, attackers also don’t need to use or even exploit AI agents directly to cause harm.
“They can also manipulate the surrounding infrastructure, inject poisoned modules, or exploit shared orchestration layers, therefore subverting trusted AI agents into performing malicious actions,” the security shop said. “Subtle attacks, such as prompt injections, can silently hijack multi-agent workflows and influence downstream behavior without leaving obvious traces.
“By mapping which services and platforms organizations are adopting, attackers can strategically position themselves to exploit the associated weaknesses.”
Researchers at Hudson Rock also recently highlighted the dangers of embedding agentic AI into operating systems, which many believe is where the technology is heading.
Specifically looking at Windows 11, the team said the new Copilot taskbar essentially creates a centralized data hub that could be exploited by infostealers.
Infostealer malware, and the data it scoops up, are used frequently by financially motivated attackers, such as ransomware crews, to offer the resources needed to gain initial access to a victim’s network.
Hudson Rock said that agentic infostealer attacks are already being observed today. Instead of running a malware payload that scans a machine for secrets, such as credentials, the “agentic-aware stealer” targets these centralized data hubs, like Copilot, to steal the data on the attackers’ behalf.
“Agentic-aware stealer,” in this context, refers to a benign-looking document with hidden instructions, written in white text to appear on a white background.
If a user then opens this and asks Copilot to summarize, analyze, or otherwise parse the file, the agentic AI will follow the hidden instructions and exfiltrate the data for the attacker, often without triggering any security alerts. ®



