New attack method called ‘LLMjacking’ reported

Cybersecurity researchers have uncovered a new attack method, dubbed "LLMjacking," that uses stolen cloud credentials to target cloud-hosted large language model (LLM) services, with the attackers then attempting to sell access to those models to other threat actors, The Hacker News reports.

The attack, identified by the Sysdig Threat Research Team, begins with the exploitation of vulnerabilities in systems running the Laravel Framework to obtain Amazon Web Services (AWS) credentials, which are then used to access LLM services. An open-source Python script is used to validate keys for LLM services from providers including Anthropic, AWS, and Google Cloud. The attackers then leverage an open-source reverse proxy tool to broker access to the compromised accounts while concealing the underlying credentials, and they have also been observed attempting to manipulate logging settings to evade detection.
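Sysdig has not published the attackers' checker script, but the following minimal sketch illustrates the kind of validation the report describes: a cheap, read-only boto3 call that tells an attacker whether a stolen AWS key pair can reach Amazon Bedrock. The function name, default region, and choice of API call are illustrative assumptions, not the actual tooling.

```python
# Hypothetical sketch of a credential-validation check like the one Sysdig
# describes: it tests whether a stolen AWS key pair grants access to Amazon
# Bedrock. Region and function name are illustrative assumptions.
import boto3
from botocore.exceptions import ClientError

def can_reach_bedrock(access_key: str, secret_key: str,
                      region: str = "us-east-1") -> bool:
    """Return True if the key pair can query Bedrock's control plane."""
    client = boto3.client(
        "bedrock",
        region_name=region,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    try:
        # A cheap, read-only call: listing foundation models succeeds only
        # if the credentials are valid and Bedrock access is permitted.
        client.list_foundation_models()
        return True
    except ClientError:
        return False
```

A script like this lets an attacker triage large batches of stolen keys without incurring the cost (or the log footprint) of an actual model invocation.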

This approach differs from traditional attacks on LLMs, such as prompt injection and model poisoning, in that it focuses on monetizing access to the models themselves. Organizations are advised to enable detailed logging, monitor cloud activity for anomalies, and maintain robust vulnerability management to mitigate such attacks.
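To make the logging advice concrete, the sketch below checks whether Amazon Bedrock's model-invocation logging is actually enabled in each region, using boto3; since the attackers reportedly tamper with exactly this setting, a missing configuration is worth alerting on. The region list and output format are assumptions for illustration.

```python
# Minimal sketch of one mitigation step: verify that Amazon Bedrock's
# model-invocation logging is enabled per region. The region list below
# is an illustrative assumption; adjust it to your deployment.
import boto3
from botocore.exceptions import ClientError

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]  # assumed example regions

def logging_enabled(region: str) -> bool:
    client = boto3.client("bedrock", region_name=region)
    try:
        cfg = client.get_model_invocation_logging_configuration()
        # The response omits (or empties) loggingConfig when logging is off.
        return bool(cfg.get("loggingConfig"))
    except ClientError:
        return False

for region in REGIONS:
    status = "enabled" if logging_enabled(region) else "MISSING"
    print(f"Bedrock invocation logging in {region}: {status}")
```

Pairing a check like this with anomaly monitoring on cloud API activity (for example, unexpected model-invocation calls in CloudTrail) addresses both halves of the recommended mitigation.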
