AI/ML, Generative AI, Threat Intelligence, Phishing

Microsoft, OpenAI reveal ChatGPT use by state-sponsored hackers

Microsoft and OpenAI revealed Wednesday that Fancy Bear, Kimsuky and three other state-sponsored threat actors have used ChatGPT as part of their cybercrime operations.

Large language models (LLMs), including ChatGPT, were leveraged by Russian, North Korean, Iranian and Chinese nation-state hacking groups for scripting and phishing help, vulnerability research, target reconnaissance, detection evasion and more, as outlined in a blog post by Microsoft Threat Intelligence.  

OpenAI accounts associated with these threat groups were terminated as a result of collaborative information sharing with Microsoft, the company said in a corresponding post on the OpenAI blog.

“Microsoft and OpenAI’s priority is protecting platforms and customers. Within that context, security teams study the behavior of accounts, IP addresses and other infrastructure to learn attackers’ methods and capabilities, which can include blocking malicious connections and initiating suspension of malicious accounts and services attackers use that violate providers’ terms of service,” Microsoft Director of Threat Intelligence Strategy Sherrod DeGrippo told SC Media in an email.

Fancy Bear uses AI for recon amidst Russia-Ukraine war

Microsoft identified the following five nation-state threat actors utilizing ChatGPT: Russia-backed Forest Blizzard (Fancy Bear), North Korea-backed Emerald Sleet (Kimsuky), Iran-backed Crimson Sandstorm (Imperial Kitten), and China-backed Charcoal Typhoon (Aquatic Panda) and Salmon Typhoon (Maverick Panda).

“Microsoft threat intelligence uses industry-accepted best practices around threat actor attribution,” DeGrippo said. “This activity was observed within the signals Microsoft typically uses for attribution within our Microsoft Threat Intelligence capability. We worked closely with OpenAI to build confidence in our findings.”

Fancy Bear, a prolific cyberespionage group linked to Russian military intelligence agency GRU, was observed using LLMs to perform reconnaissance related to radar imaging technology and satellite communication protocols, which Microsoft says may be related to Russia’s military operations in Ukraine.

The group, which is also known as APT28 or STRONTIUM, has aggressively targeted the critical infrastructure of Ukraine and its allies throughout the conflict, including in attacks against energy facilities and campaigns exploiting Microsoft Outlook zero-days.  

Fancy Bear was further observed by Microsoft threat researchers using LLMs to help with scripting tasks like file manipulation and multiprocessing, potentially seeking to automate some operations.

“Microsoft observed engagement from Forest Blizzard [Fancy Bear] that were representative of an adversary exploring the use cases of a new technology,” the threat intelligence team stated. “As with other adversaries, all accounts and assets associated with Forest Blizzard have been disabled.”

Spear-phishing, coding help, translation popular uses of AI by threat actors

Microsoft noted that the threat actors identified appeared to be “exploring and testing” the capabilities of LLMs and that no significant cyberattacks leveraging this form of generative AI were discovered by the researchers.

“The activities of these actors are consistent with previous red team assessments we conducted in partnership with external cybersecurity experts, which found that GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools,” OpenAI stated in its post.

In this “early-stage” assessment of threat actor LLM use, generation of phishing and spear-phishing content, optimization and troubleshooting of scripting tasks, language translation assistance and general research were some of the popular activities observed, according to Microsoft.

Kimsuky, also known as Velvet Chollima, is a North Korea-sponsored threat actor that frequently targets think tanks, academic institutions and news organizations in spear-phishing campaigns impersonating fellow researchers, academics or journalists in order to gather intelligence. The group was spotted conducting a spear-phishing campaign to spread malware backdoors as recently as December 2023.

Kimsuky has used LLMs to help produce spear-phishing content, research potential targets with expertise on North Korea’s nuclear weapons program, perform scripting tasks and study vulnerabilities such as the Microsoft Office “Follina” vulnerability (CVE-2022-30190), according to Microsoft.

Crimson Sandstorm is affiliated with the Islamic Revolutionary Guard Corps (IRGC), a branch of the Iranian military, and has spread custom .NET malware such as IMAPLoader through watering hole and spear-phishing attacks.

Microsoft researchers found that Crimson Sandstorm leveraged LLMs to attempt to develop code for evading detection in infected systems, generate snippets of code to assist with tasks like web scraping and remote server interaction, and generate phishing emails. These emails included one impersonating an international development agency, and another targeting prominent feminists to lure them to an attacker-controlled feminism website, the researchers wrote.

Two Chinese state-sponsored attackers, Charcoal Typhoon and Salmon Typhoon, were observed performing “exploratory” actions related to LLMs. For example, Salmon Typhoon was seen using LLMs like a search engine, researching a range of topics including global intelligence agencies, notable individuals, other threat actors and “topics of strategic interest.”

Charcoal Typhoon, also known as ControlX, RedHotel and BRONZE UNIVERSITY, conducted cyberattacks in more than a dozen countries in 2023, including the United States, Taiwan and India, and previously compromised a U.S. state legislature and COVID-19 research entities. Its LLM use included efforts to potentially automate complex cyber operations, translate communications into different languages for potential social engineering, and assist with post-compromise activities such as gaining deeper system access and executing advanced commands.

Salmon Typhoon also used LLMs for translation, specifically related to technical papers and computing terms, and attempted to use the model to develop malicious code but was blocked by the model’s filters, Microsoft said.

Microsoft identifies nine new LLM-themed threat tactics

Nine specific threat actor tactics, techniques, and procedures (TTPs) related to LLMs were classified as part of Microsoft’s research.

“Recognizing the rapid growth of AI and emergent use of LLMs in cyber operations, we continue to work with MITRE to integrate these LLM-themed tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK framework or ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) knowledge base,” DeGrippo told SC Media in an email.

The TTPs mapped by Microsoft are:

  • LLM-informed reconnaissance
  • LLM-enhanced scripting techniques
  • LLM-aided development
  • LLM-supported social engineering
  • LLM-assisted vulnerability research
  • LLM-optimized payload crafting
  • LLM-enhanced anomaly detection evasion
  • LLM-directed security feature bypass
  • LLM-advised resource development

SC Media reached out to OpenAI for more information about threat actor use of ChatGPT; an OpenAI spokesperson declined to provide further information beyond what was included in the company's blog post.
