Risk Management

The next era of fraud is here

CFOs may not be able to believe their eyes.

Illustration: a detected deepfake displayed on a glitchy computer screen. (Francis Scialabba)

As much as finance professionals and tech evangelists love to tout the power of generative AI to positively change the workplace, there’s a downside that doesn’t get talked about as much: AI is supercharging technology-enabled fraud that is costing organizations trillions.

It’s a problem taking many forms. Fraudsters are creating deepfake videos impersonating CEOs to fool employees and grab millions. Malicious foreign actors are using AI to get hired for remote jobs at US tech companies, gaining access to sensitive company processes and data. Business email compromise, already a significant problem for finance departments, is accelerating with the use of AI.

Companies are feeling the impact of the fraud increase in very expensive ways. A March study by TrustPair found “a 118% year over year increase in advanced generative AI tactics like deepfakes and deepaudio,” according to CFO.com. A 2024 Deloitte study predicted that “GenAI could drive a substantial increase in fraud losses in the United States: from some $12 billion in 2023 to $40 billion by 2027.”

So it makes sense that CFOs are increasingly interested in protecting themselves against AI fraud, according to Patricio Munoz, a director in KPMG’s forensic practice.

“We’re seeing an uptick in client interest with regard to AI fraud, what they can do to mitigate that risk, what they can do to get prepared for that risk,” Munoz told CFO Brew.

To help figure that out, CFO Brew spoke with experts about what CFOs need to understand about the changing AI-enabled fraud risk landscape, and what companies can do to protect themselves.

The new world of fraud. Fraud is nothing new, but AI is putting the speed and scale of scams into hyperdrive. The velocity, breadth, quality, and quantity have all increased dramatically, according to the experts we spoke with.

AI-enabled fraud is making it harder for companies to believe their own eyes and ears. Fraudsters are increasingly using deepfake audio and video technology to dupe companies out of money, hack into their systems, and steal data, according to Munoz.

Munoz has seen executives impersonated in virtual meetings and voice calls. For example, a CFO might receive a voicemail, short Zoom call, or voice note that purports to be from the CEO. It says—in the CEO’s voice—“I’m about to get on a flight, but I want to transfer money to this bank to start heading up our new office abroad. I’ll send you the details via text.”

This is exactly what happened to “a finance worker at a multinational firm” who transferred $25 million to scammers posing as the company’s CFO, according to CNN. The scam included a video meeting with deepfakes of company staff realistic enough to convince the victim to wire the money.

“Normally, we would have seen this via email, but we’re seeing this now moving to other messaging and communication platforms,” Munoz said.

The call is coming from inside the house. With AI, scammers are likely to uncover and exploit even more holes within organizations. Jim Wetekamp, CEO of Riskonnect, has seen increasingly devious tactics for getting inside companies, including job applicants created with deepfake impersonations. AI makes that task simpler for scammers, according to Wetekamp.

“They can code faster than they’ve ever coded,” Wetekamp said. “They can adapt faster than they’ve ever been able to adapt. They can exploit and find cracks in the infrastructure faster than they’ve ever been able to.”

The defense will catch up, but according to Wetekamp, that’s going to take a while. In the meantime, the crooks will be moving quickly, and AI will probably create more scam artists, Munoz said.

“[AI’s] going to make fraudsters more sophisticated and harder to detect,” Munoz said. “I think we’re also going to see traditionally less tech-savvy fraudsters use AI tools.”

Magic words. Because of AI’s power, new security steps will need to be layered on top of controls already in place. The standard security protocols “will still be relevant, but they just won’t be as effective,” according to Munoz.

One of the most important security features companies should add is a unique password or keyword for identity management. With deepfake voices and video in play, a passphrase known only to executives can help verify whether the person on a video or audio call is real or counterfeit.
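
That check can be as low-tech as a phrase that lives only in executives’ heads. For readers who want to see the mechanics, here is a minimal sketch in Python of one way to handle such a phrase on the systems side: store only a salted hash and compare in constant time. This is our hypothetical illustration, sample phrase included; none of the experts quoted here prescribe a specific implementation.

# Hypothetical sketch: verifying a shared challenge phrase without ever
# storing the phrase itself. Illustration only, not an expert-prescribed tool.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # PBKDF2 work factor

def enroll(phrase: str) -> tuple[bytes, bytes]:
    # Keep a salted hash of the phrase, never the plain text.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, ITERATIONS)
    return salt, digest

def verify(phrase: str, salt: bytes, digest: bytes) -> bool:
    # hmac.compare_digest runs in constant time, so response timing leaks nothing.
    candidate = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("purple giraffe invoice")  # made-up sample phrase
print(verify("purple giraffe invoice", salt, digest))  # True
print(verify("wrong phrase", salt, digest))            # False

Hashing with a salt means that even if the stored record leaks, the phrase itself does not, and the constant-time comparison avoids handing an attacker hints through timing.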

Companies are also taking other preventive measures. Some of Munoz’s clients are automatically kicking unknown numbers out of conference calls and forcing everyone to have video on to identify anomalies like suspicious face-covering shadows or delayed responses.

For the past decade, we’ve lived with “triangulated multiple points of validation,” as Wetekamp puts it. But he thinks that a post-Covid snapback to face-to-face interactions and hard copies is imminent because of AI.

Munoz agrees.

“[AI fraud] is going to probably create new standards for verification,” he said.
