Fraudsters Have Cracked the Code With Gen AI Email Scams

Companies are losing the AI-infused email fraud battle despite robust resistance.

No doubt about it, scammers are getting better at Generative AI email scams – and it’s costing companies some serious cash.

Over 65% of U.S. companies experienced some form of AI-based fraud in 2022 alone, the Association for Financial Professionals reported, and 71% of those companies said the fraud was conducted through email channels. The problem was even more pervasive at larger companies with over $1 billion in annual revenue, which the AFP said were “more susceptible” to email scams.

The issue has grown more prevalent with the rise of Gen AI, which currently boasts 180.4 million users.

Traditionally, email scams could be dealt with efficiently because scammers had to rely on templates to conduct their raids. Most security software packages could easily detect template-based fraud attempts, in which the hacker invariably reused the same domain name or malicious email link.

Not anymore.

“Generative AI, however, allows scammers to craft unique content in milliseconds, making detection that relies on matching known malicious text strings infinitely more difficult,” noted Mike Britton, CISO of Abnormal Security, a software security provider, in a December 2023 research note. “Generative AI can also significantly increase the overall sophistication of social engineering attacks and other email threats.”

For example, Britton cites bad actors’ ability to leverage the ChatGPT API to “create realistic phishing emails, polymorphic malware, and convincing fraudulent payment requests.” He adds that cybercriminals have gotten so good they can now build their own malicious generative AI chatbots.

The deepfakes are so effective that people talking with an AI avatar on the phone or over video believe they’re speaking with a real human being when they’re not. A recent study hosted by the U.S. National Library of Medicine found that “People cannot detect AI deep fakes but think they can.”

$25 Million Lost in Hong Kong

The deepfake trend took an ominous turn last week when Hong Kong police reported that an employee of a global finance company paid $25 million to deepfake fraudsters, believing the payment request came from his chief financial officer.

The worker was lured into a video call populated by what he believed were real co-workers and managers, including the fake CFO.

“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching told Hong Kong’s public broadcaster RTHK.

In fighting back against email fraud, the financial sector seems to be leading the way. Using its own Gen AI models, Mastercard detects fraudulent payment requests by uncovering the so-called “mule accounts” scammers use to move stolen funds.

“Banks have found these scams incredibly challenging to detect,” Ajay Bhalla, president of cyber and intelligence at Mastercard, noted in a recent statement. “Their customers pass all the required checks and send the money themselves; criminals haven’t needed to break any security measures,” he said.

The use of AI-powered algorithms, as Mastercard is doing, can reveal fake transaction requests, Bhalla said.

Other technologies, like advanced facial recognition software, should eventually replace increasingly archaic ID- and password-based digital authentication processes.

“We’ve reached a point where only AI can stop AI, and where preventing these attacks and their next-generation counterparts requires using AI-native defenses,” Britton said. “To stay ahead of threat actors, organizations must look to email security platforms that rely on known good rather than bad.”

“By understanding the identity of the people within the organization and their normal behavior, the context of the communications, and the content of the email, AI-native solutions can detect attacks that bypass legacy solutions,” he added. “This is the only way forward—it is still possible to win the AI arms race, but security leaders must act now to prevent these threats.”
