Deepfakes of victims used in sextortion attacks spike, FBI warns

Federal authorities are warning of a recent uptick in sexualized deepfake images designed for use in a new wave of so-called sextortion campaigns. On Monday, the Federal Bureau of Investigation said such attacks have been on the rise since April.

"The FBI has observed an uptick in sextortion victims reporting the use of fake images or videos created from content posted on their social media sites or web postings, provided to the malicious actor upon request, or captured during video chats," the warning states.

Targets are then extorted for money under the threat of having the deepfake images or videos shared with family members, or with friends via social media feeds, the FBI warns.

Sinister new twist on extortion

Sextortion – the threat of leaking sexually compromising content featuring a victim if they don’t pay up – is a well-thumbed chapter in the cybercrime playbook.

But threat actors have begun taking sextortion to the next level: using deepfake technology to generate explicit images or videos that appear to show the target. The fact that the content is fake may not diminish the threat if the victim fears exposure could nonetheless embarrass or harm them.

According to the FBI, attacks can start with sextortionists scraping content their victims have posted on social media and using it to create the deepfakes. In other cases, victims are duped into handing over images and videos of themselves, or discover that content captured during a video call has been repurposed.

“The FBI continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content,” the bureau said in a new alert from its Internet Crime Complaint Center (IC3).

“The photos or videos are then publicly circulated on social media or pornographic websites, for the purpose of harassing victims or sextortion schemes.”

Victims typically reported that the malicious actors threatened to share the deepfakes with family members or social media friends if they didn’t pay or, in some cases, if they refused to send real sexually explicit content.

Even before the deepfake element was added into the mix, sextortion was a growing industry for cybercriminals.

Problem getting worse

In 2021, the bureau said $8 million was known to have been extorted from Americans over seven months. And in January of this year the FBI and partner agencies issued an alert about an “explosion” of cases involving children and teens being sextorted.

“Over the past year (2022), law enforcement agencies have received over 7,000 reports related to the online sextortion of minors, resulting in at least 3,000 victims, primarily boys. More than a dozen sextortion victims were reported to have died by suicide,” the alert said.

“The FBI, U.S. Attorney’s Office, and our law enforcement partners implore parents and caregivers to engage with their kids about sextortion schemes so we can prevent them in the first place.”

The growing problem of malicious deepfakes – not just as a sextortion tool, but also for other nefarious activities including spreading misinformation – has attracted the attention of lawmakers. Some states, including California and Virginia, have already banned deepfake porn.

One deepfake victim, identified in an Insider report as QTCinderella, tweeted about her exploitation: "The amount of body dysmorphia I’ve experienced since seeing those photos has ruined me. It’s not as simple as ‘just’ being violated. It’s so much more than that."

Deepfake crackdown

Congressman Joe Morelle (D-NY) introduced federal legislation last month that would make the non-consensual sharing of intimate deepfake images illegal.

“The spread of A.I.-generated and altered images can cause irrevocable emotional, financial, and reputational harm—and unfortunately, women are disproportionately impacted,” Morelle said at the time.

“As artificial intelligence continues to evolve and permeate our society, it’s critical that we take proactive steps to combat the spread of disinformation and protect individuals from compromising situations online.”

Deepfakes can be created using open-source software frameworks such as DeepFaceLab, which are predominantly used by enthusiasts and overseen by robust, well-intentioned communities. But there are no such good intentions when the code is shared and exploited on the dark web.

Major technology companies and social media platforms have taken steps to address non-consensual deepfake production and distribution, which in turn should reduce the technology’s effectiveness as a tool for sextortion attacks.

Last year, Google prohibited the use of Colaboratory, its publicly available data-analysis and machine-learning platform, to train AI systems that generate deepfakes.

Facebook’s parent Meta has been working on deepfake detection technology with the aim of keeping harmful content from circulating on its platforms. Security researchers and vendors are working on similar solutions, in an ongoing race to stay ahead of the threat actors.
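Meta and other vendors have not published their production pipelines, but many research detectors share a common frame-level shape: sample frames from a video, run each through an image classifier trained to distinguish real from synthetic faces, and aggregate the scores. The PyTorch sketch below illustrates that structure only; the ResNet-18 backbone, the sampling rate, and the untrained two-class head are all illustrative assumptions, and a usable system would need fine-tuning on a labeled deepfake dataset.

```python
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

# Standard ImageNet preprocessing for the ResNet backbone.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_detector() -> nn.Module:
    # ResNet-18 with a two-class head (real vs. fake). The head is
    # untrained here; a real detector would fine-tune the whole model
    # on labeled real/deepfake examples before use.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.eval()
    return model

@torch.no_grad()
def score_video(path: str, model: nn.Module, sample_every: int = 30) -> float:
    """Return the mean 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(path)
    probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)  # shape: (1, 3, 224, 224)
            p_fake = torch.softmax(model(x), dim=1)[0, 1].item()
            probs.append(p_fake)
        idx += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0
```

Production systems typically add face detection and cropping ahead of the classifier, plus temporal models that look for inconsistencies across frames; the per-frame averaging here is the simplest possible aggregation.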

Deepfake porn problem

For years, explicit deepfake content has seeped into many sordid corners of the internet, victimizing celebrities and spawning a cottage industry around deepfake adult videos. In 2019, applications such as DeepNude (now defunct) were marketed for $50 and allowed anyone to create such images. Since then, reports of the scourge of synthetic images exploiting mostly women have grown.

According to a 2020 MIT Technology Review report, a Telegram-based service was blocked when it was discovered to have used deepfake technology to “strip” nearly 100,000 victims. Since then, with the advent of AI-generated deepfake images, the problem has been exacerbated.

A March 2023 NBC News report uncovered a bustling illegal industry hiding in plain sight. “You don’t need to go to the dark web or be particularly computer savvy to find deepfake porn. As NBC News found, two of the largest websites that host this content are easily accessible through Google. The website creators use the online chat platform Discord to advertise their wares and people can pay for it with Visa and Mastercard,” according to a post written by Arwa Mahdawi for The Guardian.

Mitigating damage and thwarting attacks

In the meantime, the FBI’s message is to be cautious about what you post or share online, through any platform or channel.

“Although seemingly innocuous when posted or shared, the images and videos can provide malicious actors an abundant supply of content to exploit for criminal activity,” the bureau says.

“Advancements in content creation technology and accessible personal images online present new opportunities for malicious actors to find and target victims. This leaves them vulnerable to embarrassment, harassment, extortion, financial loss, or continued long-term re-victimization.”

To check if your personal information (or your children’s) has been exposed and spread online, the FBI recommends running frequent searches for information such as your full name, address, and phone number.

The bureau also recommends using reverse image search engines to check for any personal photos or videos that may be circulating online without your knowledge.
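Both checks are easy to repeat on a schedule. As a minimal sketch, assuming hypothetical personal details and the Google and Google Lens URL formats current at the time of writing (both may change), the following Python script builds exact-match web searches for each detail and a reverse-image-search link for a publicly hosted photo:

```python
import urllib.parse

# Hypothetical personal details to monitor -- replace with your own.
PERSONAL_TERMS = [
    '"Jane Q. Public"',
    '"jane.q.public@example.com"',
    '"555-0123"',
]

def web_search_urls(terms: list[str]) -> list[str]:
    """Build one web-search URL per personal detail. Quoting each
    term forces an exact-match search, which cuts down on noise."""
    base = "https://www.google.com/search?q="
    return [base + urllib.parse.quote(term) for term in terms]

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google Lens reverse-image-search URL for a publicly
    hosted photo, to check where else it appears online."""
    return ("https://lens.google.com/uploadbyurl?url="
            + urllib.parse.quote(image_url, safe=""))

if __name__ == "__main__":
    for url in web_search_urls(PERSONAL_TERMS):
        print(url)
    print(reverse_image_search_url("https://example.com/profile.jpg"))
```

Running a script like this periodically, and opening the printed links, approximates the manual monitoring the FBI describes; dedicated reverse-image-search services offer the same checks with more coverage.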

Simon Hendery

Simon Hendery is a freelance IT consultant specializing in security, compliance, and enterprise workflows. With a background in technology journalism and marketing, he is a passionate storyteller who loves researching and sharing the latest industry developments.
