
Why security teams need to understand the risks of deepfakes

A session on deepfakes at the World Economic Forum Annual Meeting 2020 in Davos. Today’s columnist, Alex Romero of Constella Intelligence, warns that security teams need to watch for groups that use the technology to spread misinformation.

In years past, the novelty of deepfake videos garnered more intrigue than fear, but synthetic media technology continues to improve markedly, to the point where experts predict deepfakes will become indistinguishable from real content.

The term, which refers to a type of synthetic media that manipulates a person’s likeness in an image or video, only entered wide circulation three or four years ago, when videos made with open-source face-swapping tools began to gain traction on sites like Reddit. At first, “deepfake” referred to pornographic material that incorporated this face-swapping technology in specific online forums and other spaces. But the notion of a deepfake has since expanded to cover a broader range of synthetic media: realistic images or videos of people who either don’t exist or were not the original subjects of the manipulated content.

The technology also has constructive applications, particularly for businesses that leverage it for marketing and branding. Many well-known brands have used synthetic media to work around the physical and in-person limitations the COVID-19 pandemic imposed on content and media production. However, as these technologies improve, so does the ability of malign actors to inflict harm on companies and high-profile individuals.

Most importantly, deepfakes have become one more building block that bad actors can stack on top of the many ways they already attack executives and brands through our digital ecosystem. As these digital attacks grow more sophisticated, I predict they will deploy deepfakes within distributed, multi-layered campaigns targeting high-profile individuals and brands.

Impact on brand and executive reputation
In mid-2019, Moody’s published a research announcement warning that artificial intelligence (AI) will make it easier to damage companies with fake videos and images, and that as AI advances these deepfakes could harm a company’s creditworthiness. We already know that synthetic media can be used to propagate misinformation, mislead audiences, and influence public opinion, creating financial and reputational risks for individuals and organizations alike.

Social engineering, or duping employees into sharing confidential information, can expose sensitive data or facilitate unapproved transactions. There have already been cases of corporate funds being transferred to criminals who used synthetic audio to impersonate high-level executives and request credentials or direct transfers from employees. On a macro level, the ability to influence public opinion at critical moments can affect stock prices and client or consumer confidence, not to mention confidence in the integrity of electoral processes and public institutions.
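One practical defense against this kind of impersonation is procedural rather than technical: treat voice and video requests as unverified by default. The Python sketch below illustrates such an out-of-band verification gate; the class, function names, and threshold are hypothetical, meant only to show the shape of the control, not any particular organization’s implementation.

```python
from dataclasses import dataclass

# Hypothetical policy gate: any request above a dollar threshold, or arriving
# over an impersonation-prone channel (voice, video), must be confirmed
# through a second, independently established channel before execution.
HIGH_RISK_CHANNELS = {"voice", "video"}
APPROVAL_THRESHOLD_USD = 10_000

@dataclass
class TransferRequest:
    requester: str          # claimed identity, e.g. "CFO"
    channel: str            # "voice", "video", "email", ...
    amount_usd: float
    confirmed_out_of_band: bool = False  # callback to a directory-listed number, etc.

def may_execute(req: TransferRequest) -> bool:
    """Return True only if the request passes the verification policy."""
    risky = req.channel in HIGH_RISK_CHANNELS or req.amount_usd >= APPROVAL_THRESHOLD_USD
    return req.confirmed_out_of_band if risky else True

# Example: a synthetic-voice "executive" asking for an urgent wire is blocked
# until someone confirms the request over a known-good channel.
request = TransferRequest(requester="CEO", channel="voice", amount_usd=250_000)
assert may_execute(request) is False
```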

Rise in deepfake content

The applications of deepfakes are diverse, and the overall volume of deepfake content has exploded. About a year ago, Forbes reported that the number of deepfake videos online had nearly doubled from 2019 to 2020. And that figure counts only videos, leaving out the proliferation of still images and audio content that can also be put to an expansive range of uses.
What’s driving the proliferation of synthetic media content? It’s compelling and easy to create. And because the algorithms and programs required to produce deepfakes are open source, with a community continually improving and refining them, the quality and ease of production have improved at a much higher rate than initially anticipated.

How to safeguard against synthetic media

Despite increased vigilance from everyday users, deepfakes continue to improve in quality, making it increasingly difficult for any single person to identify well-produced synthetic media. Deepfakes and similar AI-enabled advances will ultimately require businesses to adopt security approaches that go beyond malware detection and employee training: methods and teams that are more agile and holistic, protecting devices, applications, data, and cloud-service ecosystems.
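To make “beyond malware detection” concrete, here is a minimal sketch of how an automated media-screening step might sit in such a pipeline. The `score_frame` classifier is a hypothetical stand-in for whatever trained detector a team actually uses; the sketch shows the triage logic, not a production deepfake detector.

```python
from typing import Callable, Iterable

# Hypothetical detector interface: returns a 0..1 likelihood that a single
# frame is synthetic. In practice this would wrap a trained model.
FrameScorer = Callable[[bytes], float]

def screen_video(frames: Iterable[bytes],
                 score_frame: FrameScorer,
                 sample_every: int = 30,
                 review_threshold: float = 0.7) -> bool:
    """Sample frames and return True if the video should go to human review."""
    scores = [
        score_frame(frame)
        for i, frame in enumerate(frames)
        if i % sample_every == 0
    ]
    if not scores:
        return False
    # Flag on the strongest signal rather than the mean: a handful of heavily
    # manipulated frames is enough to warrant a closer look by an analyst.
    return max(scores) >= review_threshold

# Usage with a placeholder scorer (always benign) on dummy frame data.
dummy_frames = (b"\x00" * 16 for _ in range(300))
needs_review = screen_video(dummy_frames, score_frame=lambda f: 0.1)
print("escalate to analyst:", needs_review)
```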

The mass distribution of synthetic content across the digital public sphere, especially when it’s weaponized by malicious actors, can inflict severe reputational damage almost instantaneously. Given the rapidly accelerating sophistication of deepfake technology, security teams need to watch for misinformation efforts that tailor machine learning (ML) models for targeted purposes and deploy deepfakes as yet another capability in the malign actor’s arsenal. A comprehensive monitoring program that continuously analyzes the footprint of an organization and its top executives and managers across the multitude of data points, actors, and sources in the digital ecosystem has become critical to securing internal and external assets.
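In outline, the triage stage of such a monitoring program might look like the sketch below. The `Mention` record, watchlist entries, and sources are all placeholders for whatever collection infrastructure a team already operates; the point is the filtering step that routes media-bearing mentions of monitored people and brands to deeper synthetic-media analysis.

```python
from dataclasses import dataclass, field
from typing import List

# Placeholder record for a mention of a monitored person or brand observed
# in some external source (social platform, forum, video-sharing site, ...).
@dataclass
class Mention:
    subject: str                              # e.g. an executive's name
    source: str                               # where it was observed
    media_urls: List[str] = field(default_factory=list)

WATCHLIST = {"Jane Doe (CEO)", "Acme Corp"}   # illustrative entries, not real

def triage(mentions: List[Mention]) -> List[Mention]:
    """Keep only watchlist mentions that carry media worth deeper analysis."""
    return [m for m in mentions if m.subject in WATCHLIST and m.media_urls]

# In a real deployment, collectors would feed this continuously, and the
# surviving mentions would be queued for synthetic-media analysis and
# analyst review.
batch = [
    Mention("Jane Doe (CEO)", "video-sharing site", ["https://example.com/clip"]),
    Mention("Unrelated Person", "forum"),
]
for m in triage(batch):
    print(f"queue for deepfake analysis: {m.subject} via {m.source}")
```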

Alex Romero, co-founder and COO, Constella Intelligence
