Generative AI financial scammers are getting very good at duping employees over work email
More than one in four companies now ban their employees from using generative AI. But that does little to protect against criminals who use it to trick employees into sharing sensitive information or paying fraudulent invoices.
Armed with ChatGPT or its dark web equivalent, FraudGPT, criminals can easily create realistic fake profit and loss statements, fake IDs and false identities, or even convincing deepfakes of a company executive's voice and image.
The statistics are sobering. In a recent survey by the Association of Financial Professionals, 65% of respondents said that their organizations had been victims of attempted or actual payments fraud in 2022. Of those who lost money, 71% were compromised through email. Larger organizations with annual revenue of $1 billion were the most susceptible to email scams, according to the survey.
Among the most common email scams are phishing emails. These fraudulent emails appear to come from a trusted source, like Chase or eBay, and ask people to click on a link leading to a fake but convincing-looking site. The site asks the potential victim to log in and provide personal information. Once criminals have this information, they can get access to bank accounts or even commit identity theft.
Spear phishing is similar but more targeted. Instead of generic mass emails, these are addressed to a specific individual or organization. The criminals may have researched the target's job title, the names of colleagues, and even the name of a supervisor or manager.
Old scams are getting bigger and better
These scams are nothing new, of course, but generative AI makes it harder to tell what’s real and what’s not. Until recently, wonky fonts, odd writing or grammar mistakes were easy to spot. Now, criminals anywhere in the world can use ChatGPT or FraudGPT to create convincing phishing and spear phishing emails. They can even impersonate a CEO or other manager in a company, hijacking their voice for a fake phone call or their image in a video call.
That’s what happened recently in Hong Kong when a finance employee received what he thought was a message from the company’s UK-based chief financial officer asking for a $25.6 million transfer. Though initially suspicious that it could be a phishing email, the employee’s fears were allayed after a video call with the CFO and other colleagues he recognized. As it turns out, everyone on the call was deepfaked. It was only after he checked with the head office that he discovered the deceit. By then, the money had already been transferred.
“The work that goes into these to make them credible is actually pretty impressive,” said Christopher Budd, director at cybersecurity firm Sophos.
Recent high-profile deepfakes involving public figures show how quickly the technology has evolved. Last summer, a fake investment scheme showed a deepfaked Elon Musk promoting a nonexistent platform. There were also deepfaked videos of Gayle King, the CBS News anchor; former Fox News host Tucker Carlson and talk show host Bill Maher, purportedly talking about Musk’s new investment platform. These videos circulate on social platforms like TikTok, Facebook and YouTube.
“It’s easier and easier for people to create synthetic identities, using either stolen information or made-up information using generative AI,” said Andrew Davies, global head of regulatory affairs at ComplyAdvantage, a regulatory technology firm.
“There is so much information available online that criminals can use to create very realistic phishing emails. Large language models are trained on the internet, know about the company and CEO and CFO,” said Cyril Noel-Tagoe, principal security researcher at Netacea, a cybersecurity firm with a focus on automated threats.
Larger companies at risk in world of APIs, payment apps
While generative AI makes the threats more credible, the scale of the problem is getting bigger thanks to automation and the mushrooming number of websites and apps handling financial transactions.
“One of the real catalysts for the evolution of fraud and financial crime in general is the transformation of financial services,” said Davies. Just a decade ago, there were few ways of moving money around electronically. Most involved traditional banks. The explosion of payment solutions — PayPal, Zelle, Venmo, Wise and others — broadened the playing field, giving criminals more places to attack. Traditional banks increasingly use APIs, or application programming interfaces, that connect apps and platforms, which are another potential point of attack.
Criminals use generative AI to create credible messages quickly, then use automation to scale up. “It’s a numbers game. If I’m going to do 1,000 spear phishing emails or CEO fraud attacks, and I find one in 10 of them work, that could be millions of dollars,” said Davies.
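Davies' "numbers game" is simple expected-value arithmetic. The sketch below uses his one-in-ten figure; the campaign size and the average loss per successful attack are illustrative assumptions, not numbers from the article:

```python
# Back-of-the-envelope math for the "numbers game" described above.
# The success rate comes from Davies' quote; the other figures are
# hypothetical assumptions for illustration.
emails_sent = 1_000          # spear phishing emails in one automated campaign
success_rate = 0.10          # "one in 10 of them work"
avg_fraud_loss = 50_000      # assumed average payout per successful attack, USD

expected_take = emails_sent * success_rate * avg_fraud_loss
print(f"Expected take: ${expected_take:,.0f}")  # Expected take: $5,000,000
```

Even at a modest assumed payout, automation turns a 10% hit rate into millions, which is why generative AI's ability to mass-produce credible messages matters so much.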
According to Netacea, 22% of companies surveyed said they had been attacked by a fake account creation bot; in the financial services industry, the figure rose to 27%. Of the companies that detected an automated bot attack, 99% saw an increase in the number of attacks in 2022. Larger companies were the most likely to see a significant rise, with 66% of companies with $5 billion or more in revenue reporting a “significant” or “moderate” increase. And while every industry reported some fake account registrations, financial services was the most targeted, with 30% of attacked financial services businesses saying 6% to 10% of new accounts are fake.
The financial industry is fighting gen AI-fueled fraud with its own gen AI models. Mastercard recently said it built a new AI model to help detect scam transactions by identifying “mule accounts” used by criminals to move stolen funds.
Criminals increasingly use impersonation tactics to convince victims that the transfer is legitimate and going to a real person or company. “Banks have found these scams incredibly challenging to detect,” Ajay Bhalla, president of cyber and intelligence at Mastercard, said in a statement in July. “Their customers pass all the required checks and send the money themselves; criminals haven’t needed to break any security measures,” he said. Mastercard estimates its algorithm can help banks save by reducing the costs they’d typically put towards rooting out fake transactions.
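Mastercard hasn't published its model, but the general idea behind mule-account detection can be illustrated with a toy heuristic: accounts that receive funds and forward nearly all of them shortly afterward look like pass-throughs. This sketch is an illustrative assumption of the technique, not Mastercard's actual method, and the thresholds are made up:

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    account: str
    direction: str    # "in" or "out"
    amount: float     # USD
    timestamp: float  # seconds since epoch

def looks_like_mule(transfers: list[Transfer], window_s: float = 3600,
                    passthrough_ratio: float = 0.9) -> bool:
    """Toy heuristic: flag an account that forwards >=90% of incoming
    funds within an hour of the first deposit. Both thresholds are
    illustrative assumptions, not any network's real parameters."""
    inflow = sum(t.amount for t in transfers if t.direction == "in")
    if inflow == 0:
        return False
    first_in = min(t.timestamp for t in transfers if t.direction == "in")
    outflow = sum(t.amount for t in transfers
                  if t.direction == "out" and t.timestamp - first_in <= window_s)
    return outflow / inflow >= passthrough_ratio
```

A real system would score patterns like this across the whole payment graph rather than one account at a time, which is where machine learning earns its keep.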
More detailed identity analysis is needed
Some particularly motivated attackers may have insider information. Criminals have gotten “very, very sophisticated,” Noel-Tagoe said, but he added, “they won’t know the internal workings of your company exactly.”
It might be impossible to know right away whether a money transfer request from the CEO or CFO is legitimate, but employees can find ways to verify. Companies should have specific procedures for transferring money, said Noel-Tagoe. If the usual channel for money transfer requests is an invoicing platform rather than email or Slack, a request that arrives some other way should be treated as suspect and verified by contacting the requester through a known channel.
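The out-of-band check Noel-Tagoe describes boils down to a simple policy: act on a transfer request only if it arrives through the expected channel, and escalate everything else. A minimal sketch, where the channel names and the dollar threshold are illustrative assumptions rather than any company's real policy:

```python
# Minimal sketch of an out-of-band verification policy for transfer requests.
# Channel names and the escalation threshold are illustrative assumptions.
EXPECTED_CHANNEL = "invoicing_platform"

def requires_manual_verification(request_channel: str, amount_usd: float) -> bool:
    """Flag any transfer request arriving outside the expected channel,
    or any unusually large request, for verification via a known contact."""
    if request_channel != EXPECTED_CHANNEL:
        return True                  # e.g. email, Slack, or a video call
    return amount_usd >= 100_000     # assumed threshold for extra checks

# The $25.6 million video-call request in the Hong Kong case would be flagged:
print(requires_manual_verification("video_call", 25_600_000))  # True
```

The point is that the policy keys on the channel, not the apparent sender, so a perfect deepfake of the CFO on a video call still fails the check.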
Another way companies are looking to sort real identities from deepfaked ones is through a more detailed authentication process. Right now, digital identity companies often ask for an ID and perhaps a real-time selfie as part of the process. Soon, companies could ask people to blink, speak their name or perform some other action in the moment to distinguish real-time video from something pre-recorded.
It will take some time for companies to adjust, but for now, cybersecurity experts say generative AI is leading to a surge in very convincing financial scams. “I’ve been in technology for 25 years at this point, and this ramp up from AI is like putting jet fuel on the fire,” said Sophos’ Budd. “It’s something I’ve never seen before.”