CFO Watch: Keep a Sharp Eye Out for AI Email Fraud

Email fraud is serious and all too real – and it’s up to corporate financial officers to ensure their companies are taking the issue seriously.

The U.S. Federal Bureau of Investigation (FBI) received 21,832 Business Email Compromise (BEC) complaints in 2022, with total financial losses standing at $2.7 billion. Additionally, there was a 65% increase in worldwide exposed BEC losses in the same period, signaling escalating attacks by cyber-fraudsters.

One emerging factor in email scams is artificial intelligence, especially generative AI tools like ChatGPT, which criminals are leveraging to steal both data and money from companies, according to Fortra, a cybersecurity management firm.

A case in point:

In the first quarter of 2023, 25% of all business emails received were “malicious or untrustworthy,” Fortra notes.

“99% of these threats were email impersonation threats, such as BEC and credential theft lures, that lack attachments or URLs delivering malware payloads,” Fortra reports. “Cybercriminals continue to bypass traditional email security tools and reach end users by impersonating individuals, suppliers, and trusted brands.”

Cybersecurity experts say the motives haven’t changed much on the email fraud front, but the tactics certainly have.

“Third-party targeting, AI, and phishing-as-a-service (PhaaS) have enhanced what was already working, putting the pressure on security teams to identify and mitigate social engineering threats before employees fall victim,” Fortra says.

In the past, law enforcement officials and company cybersecurity managers could easily identify email scams by their poor grammar and clumsy wording. With the advent of AI and machine learning, it’s getting harder to spot potentially fraudulent emails, as scammers can use generative AI to polish phony messages and can do so in a wide range of languages.

“Today we see these same scams attempted in French, Polish, German, Swedish, Dutch, and several other languages,” Fortra threat research analyst John Wilson said in comments to CFO Dive. “While we cannot be certain if generative AI was used to improve the grammar or to perform translation beyond the capabilities of Google Translate on any specific message, the timing and volume of the improved grammar and expanded language coverage would suggest the use of generative AI.”

It’s Not Just Email

Other forms of AI-generated cyber-fraud include “sophisticated voice cloning” used in phone-based scams, where family members’ voices are convincingly mimicked to extort cash from the victim, according to the U.S. Federal Trade Commission.

The FTC says it’s keeping a “close watch” on AI-generated cyber-scams “as more AI-related fraud cases emerge.”

“We are ultimately invested in understanding and preventing harms as this new technology reaches consumers and applying the law,” the FTC reports. “In doing so, we aim to prevent harms consumers and markets may face as AI becomes more ubiquitous.”
