AI-powered large language models (LLMs) are fuelling next-generation cyberattacks. Stefanie Schappert reports
There is a critical shift in the cyber-threat landscape – and it is all about AI. That is according to an updated report from the Google Threat Intelligence Group.
The shift centres on so-called just-in-time AI malware, which Google says marks a ‘new operational phase of AI abuse’. Moreover, it is already being used by low-level cybercriminals and nation-state actors alike.
Google makes it clear that attackers have moved from using AI as a simple productivity tool to creating first-of-its-kind adaptive malware that weaponises large language models (LLMs) to dynamically generate scripts, obfuscate its own code, and adapt on the fly.
Make no mistake: attackers are still using artificial intelligence to generate basic yet hard-to-detect phishing lures for social engineering attacks. But adding to their arsenal are ready-to-deploy modular, self-mutating tools that can evade conventional defences.
As Google puts it: “These tools can leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware. While still nascent, this represents a significant step toward more autonomous and adaptive malware.”
And while the research indicates that some of these novel AI techniques are still in the experimental stage, they are a surefire harbinger of things to come.
What also makes this evolution particularly worrying is the lowered barrier to entry. Google found that underground marketplaces are offering multifunctional AI toolkits for phishing, malware development, and vulnerability research, so even less-sophisticated actors can tap into the toolset.
Meanwhile, nation-state groups from Russia, North Korea, Iran, and China have already figured out how to leverage AI tools across the full attack lifecycle – from reconnaissance and initial compromise to maintaining a persistent presence, moving laterally through the target network, developing command-and-control capabilities, and exfiltrating data.
In effect, defenders must now prepare for an era of adaptive, autonomous malware and AI tools that learn, evolve, and evade in real time. This generation of cyber defenders will have to combat self-rewriting code, AI-generated attack chains, and an underground AI toolkit economy.
Traditional static signature defences will soon become ineffective, leaving already burnt-out CISOs scrambling to pivot to anomaly-based detection, model-aware threat intelligence, and real-time behavioural monitoring.
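For engineers wondering what that pivot looks like in practice, the sketch below shows one simplified, hypothetical form of behavioural anomaly detection: each process is baselined against its own recent activity, and a sudden deviation (say, a burst of outbound connections from self-modifying code) raises an alert. The telemetry feed, process names, and thresholds are illustrative assumptions, not details from Google’s report.

```python
# Minimal sketch of anomaly-based behavioural detection, assuming a
# hypothetical feed of per-process telemetry. Illustrative only; not a
# production detector and not based on any specific vendor's product.
from collections import deque
from statistics import mean, pstdev


class BehaviouralBaseline:
    """Flags processes whose activity deviates sharply from their own recent history."""

    def __init__(self, window=50, threshold=3.0):
        self.window = window        # recent samples kept per process
        self.threshold = threshold  # z-score beyond which an alert is raised
        self.history = {}           # process name -> deque of recent samples

    def observe(self, process, events_per_minute):
        """Record a new sample; return True if it looks anomalous."""
        samples = self.history.setdefault(process, deque(maxlen=self.window))
        anomalous = False
        if len(samples) >= 10:      # wait for some history before judging
            mu, sigma = mean(samples), pstdev(samples)
            if sigma > 0 and abs(events_per_minute - mu) / sigma > self.threshold:
                anomalous = True    # e.g. a sudden burst of outbound connections
        samples.append(events_per_minute)
        return anomalous


if __name__ == "__main__":
    detector = BehaviouralBaseline()
    # A steady baseline, then the kind of spike self-rewriting code calling
    # out to an external model API might produce (hypothetical numbers).
    for rate in [4, 5, 4, 6, 5, 4, 5, 6, 4, 5, 5, 90]:
        if detector.observe("office_macro.exe", rate):
            print(f"ALERT: unusual activity rate {rate} for office_macro.exe")
```

The point is not the statistics but the shift in mindset: rather than matching known signatures, the defence watches for behaviour a process has never exhibited before.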
Furthermore, AI-enabled tooling will almost certainly raise attackers’ success rates: not because every attack is flawless, but because automation, real-time adaptation, and hyper-personalised lures will massively widen the attack surface.
And let us not forget the trickle-down effect that these AI-driven cyberattacks will have on the average person.
What happens when AI, which can already ingest a person’s public posts, bios, photos, and leaked data to mimic their language, references, and relationships, begins to tailor its attack strategy against its target in real time?
AI-fuelled scams, phishing emails, fake websites, and voice or video deepfakes will sound and look far more convincing than ever before, putting personal finances, privacy, and even digital identity at greater risk.
The result? An era where cyber deception feels authentic, the line between real and fake blurs, and the average person is exposed to attacks that feel real, personal, and nearly impossible to detect.
Stefanie Schappert is a Senior Journalist at Cybernews, the independent media outlet where journalists and security experts debunk cyber threats through research, testing, and data.