
Can ‘good’ AI defeat ‘bad’ AI?


Artificial intelligence (AI) – specifically generative AI – and deepfakes are grabbing a lot of headlines, generating both excitement about the technology’s potential and concern about how it may be applied as development accelerates. From applications that create an art-inspired portrait to ones that can write an email for a user, generative AI has popped up everywhere.

Deepfake technology takes this to the next level, creating life-like images, videos and voices. Generated images of the Pope in a white puffer coat and video of Will Smith eating spaghetti are just two examples of how AI can create realistic avatars from pictures, recorded video, and audio.

These proofs of concept are two of many fantastical and entertaining examples of AI. But what about the 98% of us who are not VIPs or celebrities? Are we targets? Is anyone interested in cloning our image or voice? As these technologies become more sophisticated, how realistic is the threat of deepfake fraud to businesses and the average consumer today?

While we’re not quite there yet, we should really start to worry when this technology becomes cost-effective and scalable – and cyber criminals find ways to use it for lucrative fraud. That doesn’t mean we should brush it off, either. AI and deepfake technology have evolved quickly enough that companies should review their current identity protections and assess how futureproof they are.

Because while AI and deepfakes make for interesting discussion, there are much more pressing concerns than this proverbial boogeyman. It’s the less sensational, tried-and-true fraud like document forgery and synthetic identity fraud that businesses should really worry about.

Deepfakes: The reality of fabrication

The examples of deepfakes seen in the news and social media are focused on celebrities, politicians, and CEOs of big companies – all of whom have a high volume of photos, videos and voice recordings easily accessible to the average person. And while the news paints deepfakes as a threat, given the state of audio/video deepfake and AI technology today, it’s important to understand that the average person will not become a target of this type of fraud – at least for now.

But technology moves fast. Generative machine learning (ML), advances in image generation, synthetic voice generators, and AI/ML-generated avatars are moving at a pace that will soon make deepfakes easier for hackers to develop and harder for traditional cyber defenses to detect. Today, the technology, while illustrative of its power, is neither scalable nor cost-efficient enough for bad actors to deploy broadly.

While most consumers are aware of deepfakes, they’re not worried quite yet. A recent study showed that while two in three consumers are aware of deepfakes, only 12% said they are worried that the technology could be used to impersonate them.

But as history has shown, if there’s an incentive and a low barrier to entry, it’s only a matter of time before cyber criminals figure out how to effectively scale an attack. And as more consumers use video technologies and their image and voice signatures become publicly available, businesses will need to deploy more sophisticated identity proofing and authentication technologies.

Businesses should worry, but not to the point where they are distracted by this particular threat and aren’t shoring up their current identity protection. In other words, it doesn’t make sense for everyone to worry about the lock on their front door when all their windows are open.

Ordinary threats, extraordinary consequences

Although deepfakes are not an immediate threat, there are other identity threats businesses should prepare for. Document forgery has become the most common type of identity fraud, and synthetic identity fraud, the fastest-growing form of financial crime today, costs financial institutions an estimated $20 billion per year. Synthetic identities combine real data points, such as Social Security numbers often stolen from children and deceased persons, with false personally identifiable information (PII) to form a credible identity.

Single-channel, single-factor security systems, including voice authentication in a contact center, are the weakest and most prone to attack. Because many banks in the U.S. don’t require two-factor authentication, it’s easy for hackers to get into accounts with barely any information on the user. And even with two-factor authentication in place, the SMS text verifications commonly used are highly vulnerable to interception and SIM-swapping.
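To make the contrast concrete, here is a minimal sketch of an app-based one-time password check (TOTP, RFC 6238) using the open-source pyotp library. TOTP isn’t named in this article; it’s simply one widely deployed second factor that, unlike an SMS code, never crosses the carrier network. The account and issuer names below are hypothetical.

```python
# A minimal sketch, assuming the open-source pyotp library; account and
# issuer names are hypothetical. With TOTP, the shared secret never
# transits the phone network, so SIM-swapping and SMS interception
# attacks don't apply.
import pyotp

# Enrollment: generate a per-user secret, store it server-side, and share
# it with the user's authenticator app (usually rendered as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleBank"))

# Login: verify the 6-digit code the user reads from their app.
submitted_code = input("Enter the 6-digit code: ")
if totp.verify(submitted_code, valid_window=1):  # allow one 30-second step of clock drift
    print("Second factor accepted")
else:
    print("Second factor rejected")
```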

Businesses can combat this fraud through sophisticated identity proofing and authentication technology. Biometric authentication, including facial and voice recognition, can combat the very real identity fraud threats of today and will prepare businesses for evolving threats in the future.

There are a few important technologies businesses can implement to detect fraud, including liveness detection, voice cloning detection, and identity proofing and authentication.

First, security teams can use liveness detection: AI-based algorithms trained to distinguish the face or voice of a real human from a presentation attack. A presentation attack happens when a fraudster uses masks, photos, videos, or voice recordings, combined with ever more sophisticated technology, to pass themselves off as a genuine person with a “true” identity to commit identity fraud. In addition, the capability to detect liveness on government-issued documents (confirming a physical document is present rather than a photo of a screen or a printout) also thwarts fraud attempts at scale.
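As a rough illustration of where such a check sits in a verification flow, here is a simplified sketch. The model, scores, and threshold are hypothetical placeholders, not any vendor’s API; production presentation-attack-detection (PAD) systems are typically evaluated against standards such as ISO/IEC 30107-3.

```python
# A simplified sketch of gating a selfie flow on a liveness check.
# `model`, the 0.85 threshold, and all names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LivenessResult:
    score: float       # 0.0 = certain spoof, 1.0 = certain live person
    threshold: float

    @property
    def is_live(self) -> bool:
        return self.score >= self.threshold

def check_liveness(frames: list[bytes], model) -> LivenessResult:
    """Score a short burst of selfie frames with a presentation-attack-
    detection model trained to separate live faces from masks, printed
    photos, and replayed videos."""
    return LivenessResult(score=model.predict(frames), threshold=0.85)

# Usage: only proceed to face matching once the capture passes liveness.
# result = check_liveness(captured_frames, pad_model)
# if result.is_live:
#     run_face_match(captured_frames, id_document_photo)
```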

Voice cloning detection algorithms are tuned to detect synthetic voices generated by a wide range of systems. Security teams need to constantly update and retrain these algorithms as new synthetic voice technology is introduced.

Sophisticated identity proofing and authentication tools can manage a customer or employee’s identity through advanced document verification, biometric authentication, multi-factor authentication, and device-based authentication like passkeys. These technologies are all available on cloud-based platforms that offer continuous updates on the latest detection technology.
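To show how these layers might fit together, here is a hypothetical sketch of a combined authentication decision. Every name, signal, and threshold below is an assumption for illustration, not a real vendor API.

```python
# A hypothetical sketch of combining the layered checks described above
# into a single decision. All thresholds and field names are assumptions.
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    document_verified: bool      # advanced document verification passed
    face_liveness_score: float   # presentation-attack detection, 0..1
    face_match_score: float      # selfie vs. document photo, 0..1
    voice_clone_score: float     # likelihood the voice is synthetic, 0..1
    passkey_present: bool        # device-bound WebAuthn/FIDO2 credential used

def authenticate(s: IdentitySignals) -> str:
    """Return 'allow', 'step_up', or 'deny' from the combined signals."""
    # Hard failures: a detected spoof or synthetic voice ends the session.
    if s.face_liveness_score < 0.5 or s.voice_clone_score > 0.8:
        return "deny"
    # Strong path: verified document, good biometric match, and a passkey.
    if s.document_verified and s.face_match_score >= 0.9 and s.passkey_present:
        return "allow"
    # Ambiguous signals: request another factor instead of guessing.
    return "step_up"
```

The design point is that no single signal decides the outcome: a fraudster must defeat document, biometric, and device checks simultaneously, which is what makes layered proofing harder to attack than any single-channel system.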

Outpacing the "bad" with the "good"

With rapid advancements in AI and deepfakes, here’s the big question: can “good” AI – the AI and machine learning technologies being developed by industry leaders – outpace the “bad” AI that bad actors use for fraud and other nefarious purposes?

Consumers, for their part, are concerned and aren’t confident the industry can keep up. A recent study showed that 92% of consumers believe that cybersecurity threats will continue to outpace cybersecurity technology.

Businesses should invest in a continuously adaptive, ML-based approach to human verification and authentication. It’s the only viable defense against fraudsters today, and it prepares businesses for sophisticated deepfakes in the future by ensuring that only the legitimate user actually participates in the activity. Equally important, these protections are not just about preventing loss from fraud: they can increase revenue by creating a frictionless customer experience that promotes customer retention and satisfaction.

Ralph Rodriguez, chief product officer and president, Daon
