Cyber-thieves are learning to “fake it till they make it,” much to the chagrin of the financial services institutions (FSIs) that are falling prey to this latest spate of scams.
So-called “deepfakes” — where fraudsters use various technologies to skillfully mimic the video, images or voices of other people — have captured popular attention, with deepfake videos emerging that convincingly portray celebrities and politicians doing and saying things they never have. Unlike the hoaxes of the past, these fakes are built with more advanced technologies, including artificial intelligence, making them so realistic and seamless that even the most skilled cybersecurity professionals struggle to spot the deception.
According to Nick Santora, CEO of security awareness training platform Curricula, while we’re only beginning to see the emergence of deepfakes in finance, “it’s not outlandish to think that it will be used in ways for cyberattacks that we’re not even expecting right now.”
“The threat of voice and visual deepfakes will continue to increase as the tools for these hackers become more widely used and readily available,” said Santora. “Today, there’s no technology readily available for smaller financial services institutions, as well as consumers, to help prevent them being duped by these scams.”
While deepfakes have already been used for a host of nefarious purposes — from creating hoax pornographic videos to fake news and other misleading images and commentary — these fabrications are quickly becoming fodder for cyber-thieves looking to steal money and data from FSIs. Using AI, black-hat hackers earlier this year created a voice simulation of an executive at a prominent bank in the United Arab Emirates and, in combination with phony emails, convinced bank personnel to release $35 million to them. (Two of the accounts through which the stolen funds were funneled were actually with a U.S. bank; hence, U.S. FSIs and regulators are following this case quite keenly.)
Deepfakes are not a new development. For at least two years, skillful scammers have been using deepfakes — particularly in tandem with phishing, fake news, fake social media accounts and other fraud techniques — to perpetrate business email compromise. (In March 2019, cybercriminals copied the voice of a U.K. energy company CEO to rip off nearly a quarter of a million dollars.) As AI, and with it deepfake technology, continues to improve, these scams are getting more difficult to spot, and the heists are getting larger, too.
“As deepfake accuracy improves and as the tools to make them get better, they could become a real problem in the future,” said Roger Grimes, data-driven defense evangelist for KnowBe4, a security training firm. “People are pretty easy in general to socially engineer into doing the wrong thing.” Used in tandem with convincing phishing emails or spoofed phone numbers, these scams are quite persuasive.
Deepfakes become more dangerous when the authentication process is automated and humans are taken out of the equation, said Andrew Howard, CEO of Kudelski Security. If an organization relies on a vulnerable service for authentication, an attacker could use compromised facial recognition, for example, to carry out attacks on multiple locations, Howard said. Covering deepfakes in security awareness training and adding verification steps for high-value transactions could help mitigate such attacks, he added.
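To make Howard’s suggestion concrete, here is a minimal sketch in Python of what an added verification step might look like, assuming a hypothetical payment-approval flow; the names used here (TransferRequest, HIGH_VALUE_THRESHOLD, approve_transfer) are illustrative, not any vendor’s API. The idea is simply that above a set amount, a voice or video request is never sufficient on its own:

```python
from dataclasses import dataclass

# Hypothetical policy: any transfer at or above this amount requires a second,
# out-of-band confirmation (e.g., a call back to a number already on file),
# so a single spoofed voice or video request can never move money by itself.
HIGH_VALUE_THRESHOLD = 10_000.00

@dataclass
class TransferRequest:
    amount: float                # requested transfer amount
    requested_by: str            # identity asserted on the incoming channel
    channel: str                 # "voice", "video", "email", ...
    out_of_band_confirmed: bool  # confirmed via an independent channel on file

def approve_transfer(req: TransferRequest) -> bool:
    """Return True only if the request passes the layered checks."""
    # Below the threshold, normal single-channel authentication applies.
    if req.amount < HIGH_VALUE_THRESHOLD:
        return True
    # At or above the threshold, never trust voice or video alone:
    # require confirmation over a second, independently sourced channel.
    return req.out_of_band_confirmed

if __name__ == "__main__":
    # A convincing deepfake voice call alone is rejected...
    print(approve_transfer(
        TransferRequest(35_000_000, "exec@bank.example", "voice", False)))  # False
    # ...but approved once the callback to the number on file confirms it.
    print(approve_transfer(
        TransferRequest(35_000_000, "exec@bank.example", "voice", True)))   # True
```

The design point is that the confirmation must travel over a channel the attacker does not control, such as a call back to a number the institution already holds — precisely the check that the voice-cloning heists described above would have failed.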
In January 2021, CyberCube, a cyber risk analytics firm tracking the insurance industry, released a report saying that due to exponential improvements in using AI to create realistic-looking video and audio fakes — and businesses’ growing dependence on using phone, email and video to connect internally and externally — deepfakes will be “a major cyber threat to businesses within the next two years.” New enhancements are continually being developed that make these digital fakes more plausible.
For example, developers at the University of Washington recently introduced a technology called “mouth mapping,” which matches the movement of the mouth so accurately that a person appears to be saying exactly what the voice simulation has them saying, according to CyberCube.
“As the availability of personal information increases online, criminals are investing in technology to exploit this trend,” said Darren Thomson, CyberCube’s head of cyber security strategy and the report’s author, in a prepared release. “New and emerging social engineering techniques like deepfake video and audio will fundamentally change the cyber threat landscape and are becoming both technically feasible and economically viable for criminal organizations of all sizes.”
David Blaszkowsky, head of strategy and regulatory affairs for Helios Data, said deepfakes are really a “worst-case scenario, because nearly any protection dependent on visual or audio metrics can be cracked because they can be, in fact, perfect beyond any margin of error.”
“Fingerprints? Retinal scans? One by one, all the 'unique' metrics that protect access to data and accounts are being wiped out, like antibiotics against ever-mutating infectious diseases,” Blaszkowsky said. “It has always been easy to fool human gatekeepers, but with deepfakes it is easier than ever to fool the computers, too.”
Thomson said it’s “only a matter of time before criminals apply the same technique to businesses and wealthy private individuals. It could be as simple as a faked voicemail from a senior manager instructing staff to make a fraudulent payment or move funds to an account set up by a hacker.”
“There is no silver bullet that will translate into zero losses,” Thomson said. “Training employees to be prepared for deepfake attacks will also be important.”