Microsoft Warns That Hackers Are Operationalizing AI to Accelerate Tradecraft

Microsoft has warned that threat actors are operationalizing AI across the cyberattack lifecycle to accelerate tradecraft, abusing both intended model capabilities and jailbreaking techniques to bypass safeguards and carry out malicious activity. Attackers are embedding AI into their workflows to increase the speed, scale, and resilience of cyber operations.

Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure. For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions.

This dynamic is especially evident in operations likely focused on revenue generation, where efficiency directly translates to scale and persistence. To illustrate these trends, this blog highlights observations from North Korean remote IT worker activity tracked by Microsoft Threat Intelligence as Jasper Sleet and Coral Sleet (formerly Storm-1877), where AI enables sustained, large‑scale misuse of legitimate access through identity fabrication, social engineering, and long‑term operational persistence at low cost.

More details can be found here: https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/

Ensar Seker, CISO at SOCRadar:

“AI is rapidly becoming embedded across the entire cyberattack lifecycle, but not always in the ways people expect. In many cases, threat actors are not building their own advanced AI models; instead, they are operationalizing existing generative AI tools to accelerate traditional attacker workflows. We are seeing AI used to scale reconnaissance, generate convincing phishing content in multiple languages, automate vulnerability research, and refine social engineering campaigns. The real shift is not sophistication alone, it is the speed and scale at which attackers can now execute tasks that previously required significant manual effort.

“The biggest impact of AI in cyber operations is efficiency rather than completely new attack techniques. Attackers are using AI to shorten the time between reconnaissance and exploitation. For example, AI can help analyze large datasets of leaked credentials, generate exploit scripts, or summarize technical documentation for vulnerabilities. This lowers the barrier to entry for less experienced actors while allowing more advanced groups to increase operational tempo and run campaigns in parallel across multiple targets.

“However, AI does not replace traditional attacker tradecraft or eliminate the need for human expertise. Sophisticated campaigns, especially those conducted by nation-state groups, still rely heavily on manual reconnaissance, custom tooling, and operational security discipline. AI is acting more as a force multiplier than a replacement for established tactics. Threat actors still need access, infrastructure, and a clear objective; AI simply helps them move faster once those elements are in place.

“For defenders, the most important takeaway is that AI-driven attacks will increasingly look more polished, personalized, and scalable. Security teams should expect a rise in high-quality phishing, automated reconnaissance against external assets, and AI-assisted malware development. The response should not be panic about AI itself, but investment in visibility, especially around identity, external attack surface, and threat intelligence, so organizations can detect attacker activity early in the intrusion lifecycle before AI-assisted campaigns gain momentum.”

Martin Jartelius, AI Product Director at Outpost24:

“We are seeing the same trend in our own research. In one recent investigation, we observed a threat actor using ChatGPT to assist with vulnerability research related to potential zero-day exploitation. In this case, the attacker’s operational security was weak enough that their activity left a visible trail, giving us rare insight into how generative AI is being used as a ‘research assistant’ during attack preparation. What this highlights is that AI is increasingly acting as a force multiplier for attackers, accelerating reconnaissance, scripting, and vulnerability analysis while lowering the technical barrier to entry.”

AI can do a lot of cool things. But it can also do a lot of bad things if given the chance. It illustrates that those who defend against attacks should expect more attacks than ever before, which is of course a bad thing.
