Threat actors are increasingly leaning on generative AI (GenAI) tools to drive their malicious activity. That's according to CrowdStrike's "2025 Threat Hunting Report", which was published today and outlines several trends across the threat landscape. For example, the number of voice phishing (or vishing) attacks in the first half of 2025 exceeded the number of attacks tracked in all of 2024, and hands-on keyboard intrusions increased 27% year over year. However, some of the most eye-opening stats concerned how threat actors are using GenAI to enhance their operations. Historically, many of the emerging attacks featuring GenAI involved prompt injections or some kind of basic social engineering trick (like using ChatGPT to come up with a phishing email), but that is changing quickly. Those basic concepts are being applied in more sophisticated ways, and we see "technical" AI attacks popping up more and more.
CrowdStrike highlighted this increasing sophistication in its report, emphasizing how attackers — such as those involved in IT tech worker scams — are using LLMs to enhance their tactics, techniques, and procedures (TTPs). As CrowdStrike put it, threat actors have capitalized on GenAI models for social engineering, technical, and information operations in ways that "have likely contributed to increased speed, access to expertise, and scalability in threat actor operations." Moreover, organizations' increasing integration of AI means threat actors have a new attack surface to exploit. CrowdStrike highlighted how, in April, it saw multiple threat actors exploit a Langflow AI vulnerability tracked as CVE-2025-3248 to achieve unauthenticated remote code execution. Langflow AI is a popular tool used for building AI agents. "This activity demonstrates that threat actors are viewing AI tools as integrated infrastructure rather than peripheral applications, targeting them as primary attack vectors," the report stated. "As organizations continue adopting AI tools, the attack surface will continue expanding, and trusted AI tools will emerge as the next insider threat." On the social engineering front, attackers are using AI to generate natural language phishing emails, generate digital personas with matching online presence, and convincingly develop the documents and files needed to secure target responses. "This isn’t just about GPT-generated phishing emails. GenAI is becoming a core part of how adversaries operate. We’re seeing both nation-state and eCrime actors use it to move faster, scale operations, and stay undetected in victim environments," Meyers says. "A definitive sign that AI-powered attacks have moved from theoretical to operational is what we’re seeing with adversaries using GenAI to write scripts and build malware. This is happening now."
To counter Famous Chollima (and presumably other attackers like them), CrowdStrike recommends implementing enhanced identity verification processes during hiring that checks professional online profiles; implementing real-time deepfake challenges; hardening remote access security controls "with a particular focus on geolocation masking and endpoint security circumvention attempts"; and creating relevant training programs for hiring managers and IT personnel. Asked about the scale of this employment scam problem, Meyers explains it's a "fully scaled revenue operation for the [North Korean] regime." "Over the past year alone, CrowdStrike OverWatch tracked more than 320 cases of FAMOUS CHOLLIMA operatives fraudulently employed as remote IT workers – that’s a 220% increase year-over-year," he tells Dark Reading. "Based on the volume and consistency of this activity, we assess the campaign could be generating hundreds of millions of dollars annually for North Korea. We’ve attributed it to the 75th Bureau, which supports the regime’s weapons program." Meyers adds that the scheme is pervasive, spanning multiple countries and targeting global organizations.