Extensive dependence on large language models in the code development process could increase the risk of slopsquatting supply chain intrusions, in which attackers register nonexistent, AI-hallucinated open source package names to lure targets into downloading malicious packages, reports Infosecurity Magazine.
Researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma prompted more than a dozen code-generating LLMs to produce 576,000 Python and JavaScript code samples and discovered that 20% of the recommended packages were hallucinated, according to a report from Socket. Repeating each prompt 10 times reproduced 43% of the hallucinated package names in every run, indicating the viability of such an attack. "This threat scales. If a single hallucinated package becomes widely recommended by AI tools, and an attacker has registered that name, the potential for widespread compromise is real," said Socket, which urged developers to ensure proper tracking and verification of dependencies to prevent potential compromise.
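Socket's guidance amounts to confirming that every dependency an AI assistant suggests actually resolves to a legitimate registry entry before it is installed. As a minimal sketch of that idea (not Socket's tooling), the Python snippet below checks candidate package names against PyPI's public JSON metadata endpoint; the dependency list in the example is hypothetical, and a real pipeline would also pin versions and hashes in a lockfile.

# Sketch: flag dependencies that do not resolve to a real PyPI project,
# a basic guard against slopsquatting. Assumes network access to pypi.org;
# the package names below are hypothetical examples.
import json
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI has a project registered under this exact name."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # valid JSON metadata means the project exists
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # no such project: a candidate hallucinated name
        raise  # other HTTP errors are not evidence either way

if __name__ == "__main__":
    # Hypothetical dependency list, e.g. copied from LLM-generated code.
    suggested = ["requests", "flask-totally-real-auth"]
    for name in suggested:
        status = "found on PyPI" if exists_on_pypi(name) else "NOT FOUND - verify before installing"
        print(f"{name}: {status}")

A check like this only proves a name exists; because attackers can pre-register hallucinated names, verification still requires reviewing the package's maintainer, history, and contents before adding it to a project.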