Cyberthieves Are Leveraging AI to Make Fraud “Almost Undetectable”

Cybercriminals are using AI to slip past fraud detection, and new research warns the problem is growing.

The worldwide average cost of a data breach was $4.45 million in 2023, according to a recent report from IBM.

That’s a 15% increase since 2020, and evidently there’s more bad news to come.

As AI technology advances and as cybercriminals figure out how to harness generative AI to create more sophisticated emails and fraudulent ads, expect those cost figures to rise in 2024 and beyond, one new study stated.

“In the past, signs such as misspelled words or the awkward use of language could often be used to detect the use of emails or web ads to trick users into providing sensitive information — a method known as ‘phishing.’ But with the high quality of human language generation provided by these new AI-based language generators, detecting such emails and fake ads is much harder than it used to be,” said Eric C. Larson, an associate professor in the SMU Lyle School of Engineering’s Department of Computer Science.
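The legacy heuristic Larson describes — flagging a message when too many of its words look misspelled — can be sketched as a toy filter. Everything below (the tiny word list, the 20% threshold, the function names) is an illustrative assumption, not anything from the study, but it shows why the approach collapses against fluent, AI-polished text.

```python
# Toy sketch of a misspelling-based phishing heuristic:
# flag a message when too many of its words fall outside a known-word list.
# The word list and 20% threshold are illustrative assumptions.

import re

KNOWN_WORDS = {
    "please", "verify", "your", "account", "bank", "click", "the",
    "link", "below", "to", "confirm", "password", "we", "detected",
    "unusual", "activity", "on", "thank", "you",
}

def misspelling_ratio(message: str) -> float:
    """Fraction of alphabetic tokens not found in the word list."""
    words = re.findall(r"[a-z]+", message.lower())
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_like_phishing(message: str, threshold: float = 0.2) -> bool:
    """Crude filter: too many unrecognized words suggests a clumsy lure."""
    return misspelling_ratio(message) > threshold

# A typo-ridden lure trips the filter...
print(looks_like_phishing("Plese verfy yuor acount pasword belw"))  # True
# ...but a fluent, LLM-polished version of the same lure sails through.
print(looks_like_phishing(
    "We detected unusual activity on your account. "
    "Please click the link below to verify your password."
))  # False
```

The second message is exactly the kind of output a generative model produces effortlessly, which is why this class of signal no longer separates fraud from legitimate mail.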

Larson, along with Mitchell Thornton, executive director of SMU’s Darwin Deason Institute for Cybersecurity, looked into how large language models used for AI chatbots are being leveraged by cyber fraudsters as an “evolving threat.”

Getting Smarter All the Time

The problem for consumers and businesses is that LLMs are getting smarter, day by day.

Increasingly, data thieves use algorithms and AI to mimic human intelligence, making it virtually impossible to tell the difference between text generated by an LLM and something written by an actual human, the duo reported.

“A malicious use of these applications would be to use them in generating ‘fake’ emails that are designed to elicit personal information from victims, or via an online chat agent that a victim may think is a real person,” Thornton noted. Hackers using AI-generated images created with programs like DALL-E or Midjourney to make phishing emails look even more authentic “could be another potential threat,” Thornton added.

As advanced AI-powered phishing threats grow, “it’s especially important that people double- and triple-check sources before providing sensitive information,” Larson said.

“For example, if your financial institution contacts you through email and asks for information, it’s probably best to check the source carefully before answering.”
