
‘LLM hijacking’ of cloud infrastructure uncovered by researchers


Attackers have leveraged “LLM hijacking” of cloud infrastructure for generative AI to run rogue chatbot services at victims’ expense, Permiso researchers reported Thursday.

Attacks on Amazon Bedrock environments, which support access to foundational large language models (LLMs) such as Anthropic’s Claude, were outlined in a Permiso blog post, with a honeypot set up by Permiso showing how hijackers used the stolen resources to run jailbroken chatbots for sexual roleplay.

Threat actors leverage AWS access keys leaked on platforms like GitHub to programmatically communicate with API endpoints, allowing them to check model availability, request model access and ultimately prompt the model using the victim’s resources. The blog identified a total of nine APIs targeted by the attackers, most of which are typically only accessed via the AWS Management Console, according to Permiso.
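For illustration only, the following Python sketch shows how a leaked long-term key can drive Bedrock programmatically with a handful of boto3 calls. This is not code from Permiso's report; the access key, region and model ID below are placeholders.

```python
import json

import boto3

# Hypothetical illustration: a leaked long-term access key (AKIA...) is enough
# to reach Bedrock from outside the victim's account. The values are placeholders.
session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLELEAKEDKEY",   # placeholder, not a real key
    aws_secret_access_key="EXAMPLE-SECRET-KEY",  # placeholder
    region_name="us-east-1",
)

# Enumerate which foundation models the account can reach.
bedrock = session.client("bedrock")
models = bedrock.list_foundation_models()["modelSummaries"]
print([m["modelId"] for m in models])

# Invoke a model (an assumed Claude 3 Sonnet model ID) on the victim's bill.
runtime = session.client("bedrock-runtime")
response = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    }),
)
print(json.loads(response["body"].read()))
```

Because every call runs under the victim's credentials, the per-token charges for those invocations accrue to the victim's account.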

“AWS services are operating securely, as designed, and no customer action is needed. The researchers devised a testing scenario that deliberately disregarded security best practices to test what may happen in a very specific scenario. No customers were put at risk,” an AWS spokesperson said in a statement to SC Media.  

Why do attackers target LLM cloud services?

Hijacking of cloud resources via exposed access keys is often used by threat actors for financially motivated activities such as spam email campaigns and cryptomining. After Permiso researchers observed attackers increasingly targeting LLM cloud services like Amazon Bedrock, they set up their own honeypot to better understand how attackers were leveraging the stolen infrastructure.

The Permiso researchers established their own Amazon Bedrock instance and intentionally leaked their access key in a file on GitHub to create their honeypot. Attackers began attempting to conduct activities on the honeypot “within minutes,” but the scale of LLM invocations by hijackers didn’t take off until about 40 days after the initial access key leak, when the environment saw more than 75,000 prompts processed over the course of two days.

With invocation logging enabled on their honeypot environment, the researchers were able to read the prompts and responses to and from the LLMs, revealing the vast majority to be related to sexual roleplay with various virtual characters. This suggested the hijacker was leveraging the ill-gotten infrastructure to run a chatbot service using jailbreaks to bypass the content filters of the LLMs. A “small percentage” of the content observed also involved roleplay related to child sexual abuse.
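Model invocation logging is not enabled by default in Bedrock. A minimal sketch of turning it on with boto3 follows; the log group name, role ARN and bucket name are placeholders, not values from Permiso's setup.

```python
import boto3

# Minimal sketch, assuming default credentials that are allowed to call
# bedrock:PutModelInvocationLoggingConfiguration. All resource names are placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocation-logs",                 # placeholder
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLogging",  # placeholder
        },
        "s3Config": {
            "bucketName": "example-bedrock-invocation-logs",  # placeholder
            "keyPrefix": "bedrock/",
        },
        "textDataDeliveryEnabled": True,  # capture prompt and completion text
    }
)
```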

The most common model abused by the hijackers was Anthropic’s Claude 3 Sonnet. Anthropic told SC Media its models are incapable of producing images and videos and that any child sexual abuse material (CSAM) input to the model is reported to the National Center for Missing and Exploited Children. The company also works with child safety advocacy organization Thorn to test and fine-tune its models against content related to child grooming and abuse.

“Jailbreaks are an industry-wide concern and our teams at Anthropic are actively working on novel techniques to make our models even more resistant to these kinds of attacks. We remain committed to implementing strict policies and advanced technologies to protect users, as well as publishing our own research so that other AI developers can learn from it. We appreciate the research community’s efforts in highlighting potential vulnerabilities,” an Anthropic spokesperson said in a statement.

During the honeypot experiment, which began on June 25, 2024, AWS automatically identified and notified Permiso about the leaked access key on GitHub the same day the file containing the key was uploaded. AWS applied the policy “AWSCompromisedKeyQuarantineV2” a few weeks later, on Aug. 1, 2024; however, services related to Bedrock were not blocked under this policy at the time, according to Permiso.

The honeypot account was ultimately blocked from using further Bedrock resources on Aug. 6, 2024, a day after invocations by the hijackers reached into the thousands. AWS updated its quarantine policies on Oct. 2, 2024, to include blocking of several of the APIs used by the hijackers, according to Permiso’s timeline.

Permiso’s blog post circumstantially links the honeypot activity to a service known as Character Hub, or Chub AI, which offers uncensored conversations with various chatbots. Blog author Ian Ahl, senior vice president of p0 Labs at Permiso, told SC Media this hypothesis was based on many of the character names from the honeypot logs being the same as characters available on Chub, as well as similar jailbreak techniques being identified, but acknowledged that with popular characters and publicly available jailbreak templates being used, it was “hard to count as hard evidence.”

Chub AI told SC Media it has no connection to any cloud attacks and uses its own infrastructure to run its own LLMs, emphasizing that the service does not enable or condone any illegal activity.

“The reason our characters come up in the prompts is not, and I cannot emphasize this enough, not because they are coming from us. The reason the prompts contain our characters is because we are the largest open repository of character prompts on the internet,” the Chub AI spokesperson told SC Media. “Chat prompts going from or to any API, for any language model, will contain a large volume of our characters.”

The spokesperson also said less than 1% of messages sent through its chat user interface go to Anthropic models and that “any individuals participating in such attacks can use any number of UIs that allow user-supplied keys to connect to third-party APIs.”

Chub AI’s terms of service also forbid any use of real, drawn or AI-generated sexual images of children.

How to protect your cloud environments from LLM hijacking

AWS said in a statement that customers should avoid the use of long-term access keys (AKIAs) whenever possible to prevent keys from being inadvertently leaked and misused.

“Instead, human principals should use identity federation and multi-factor authentication through Identity Center and/or external identity providers. Software should obtain and use temporary AWS credentials via IAM roles for EC2, ECS/EKS, and Lambda if running inside AWS, and IAM Roles Anywhere if operating outside of AWS,” the AWS statement read.
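A minimal sketch of that pattern, assuming a hypothetical role ARN that is permitted to invoke Bedrock, might look like the following; on EC2, ECS/EKS and Lambda the SDK resolves the attached role automatically, so no explicit AssumeRole call is needed there.

```python
import boto3

# Minimal sketch: exchange long-term credentials for short-lived ones via STS.
# The role ARN, session name and duration are placeholders.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/BedrockInvokeRole",  # placeholder
    RoleSessionName="bedrock-app",
    DurationSeconds=3600,  # temporary credentials instead of a long-term AKIA key
)["Credentials"]

# Use the temporary credentials for Bedrock calls; they expire on their own.
runtime = boto3.client(
    "bedrock-runtime",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```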

Ahl also told SC Media that AWS customers should monitor for the use of AKIAs to access APIs like “GetFoundationModelAvailability,” “PutUseCaseForModelAccess,” “GetUseCaseForModelAccess,” “CreateFoundationModelAgreement” and “PutFoundationModelEntitlement,” as these APIs are not meant to be accessed via AKIAs, but rather through the AWS Management Console.
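A rough sketch of that kind of check, using CloudTrail's LookupEvents API to flag long-term keys calling those APIs, might look like the following; the seven-day lookback window is an arbitrary example.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

# Minimal sketch: flag CloudTrail events where a long-term key (AKIA prefix)
# called Bedrock model-access APIs that are normally driven from the console.
SUSPECT_APIS = [
    "GetFoundationModelAvailability",
    "PutUseCaseForModelAccess",
    "GetUseCaseForModelAccess",
    "CreateFoundationModelAgreement",
    "PutFoundationModelEntitlement",
]

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
start = datetime.now(timezone.utc) - timedelta(days=7)  # arbitrary lookback

for api in SUSPECT_APIS:
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": api}],
        StartTime=start,
    )["Events"]
    for event in events:
        detail = json.loads(event["CloudTrailEvent"])
        key_id = detail.get("userIdentity", {}).get("accessKeyId", "")
        if key_id.startswith("AKIA"):  # long-term key rather than temporary credentials
            print(api, key_id, detail.get("sourceIPAddress"), detail.get("userAgent"))
```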

Users should also monitor the use of AKIAs without a user agent or with “Mozilla” as a user agent and investigate spikes in model invocations or cloud service billing that could indicate rogue use of generative AI resources.
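As one possible approach to spotting such spikes, a scheduled check against Bedrock's CloudWatch metrics could be used; the sketch below assumes the "Invocations" metric in the "AWS/Bedrock" namespace, and the alert threshold is an arbitrary example.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Minimal sketch: sum Bedrock invocations per hour over the last day and flag spikes.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock",      # assumed namespace for Bedrock runtime metrics
    MetricName="Invocations",
    StartTime=now - timedelta(days=1),
    EndTime=now,
    Period=3600,                  # hourly buckets
    Statistics=["Sum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    if point["Sum"] > 1000:       # arbitrary example threshold for a spike
        print(f"Possible rogue usage: {point['Sum']:.0f} invocations at {point['Timestamp']}")
```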

A full list of atomic indicators of compromise and attacker tactics, techniques, and procedures (TTPs) for LLM hijacking is provided under “Detections and Indicators” in the Permiso blog post.
