FBI warns of rising AI tools deployment in financial fraud schemes

The Federal Bureau of Investigation has issued a public service announcement warning of a rise in the use of generative artificial intelligence tools in financial fraud schemes, TechTarget reports.

According to the FBI, threat actors are employing AI-generated text, images, audio, and video to execute highly convincing scams, making it increasingly difficult for victims to recognize fraudulent activity. This includes using AI-generated text to create fake social media profiles, phishing emails, and fraudulent websites. AI-generated images are often used to bolster fake profiles or to impersonate real individuals in communications. Additionally, threat actors are using AI to forge identification documents, such as driver's licenses, to facilitate identity fraud.

The FBI also highlighted the use of voice cloning, in which attackers generate AI-powered audio mimicking the voices of public figures or of people close to their victims in an effort to gain access to financial accounts. AI-generated video has been used in real-time video chats and in promotional content for investment scams, further enhancing the believability of fraud schemes. To counter these threats, the FBI advises individuals to establish secret verification phrases with trusted contacts, limit the sharing of personal images and audio online, and carefully examine content for imperfections.
