Looking Back on 2024 – ASW #310
Full Audio
View Show IndexSegments
1. Looking Back on 2024 – ASW #310
We do our usual end of year look back on the topics, news, and trends that caught our attention. We covered some OWASP projects, the ongoing attention and promises of generative AI, and big events from the XZ Utils backdoor to Microsoft's Recall to Crowdstrike's outage.
Segment resources
- https://prods.ec
- https://owasp.org/www-project-spvs/
- https://genai.owasp.org/resource/owasp-top-10-for-llm-applications-2025/
- https://securitychampions.owasp.org/
- https://deadliestwebattacks.com/appsec/2024/11/14/ai-and-llms-asw-topic-recap
- https://www.scworld.com/podcast-episode/3017-infosec-myths-mistakes-and-misconceptions-adrian-sanabria-asw-279
Hosts
2. AI’s Junk Vulns, Web3 Backdoor, LLM CTFs, 5 GenAI Mistakes, Top Ten for LLMs – ASW #310
Curl and Python (and others) deal with bad vuln reports generated by LLMs, supply chain attack on Solana, comparing 5 genAI mistakes to OWASP's Top Ten for LLM Applications, a Rust survey, and more!
Announcements
Want to shape the future of identity? Identiverse 2025 is looking for dynamic speakers like you to share groundbreaking ideas with over 3,000 identity and access management leaders. Join the most influential voices in IAM and help drive innovation in our industry. Submit your presentation proposal today at securityweekly.com/idvcfp
Hosts
- 1. Where There’s Smoke, There’s Fire – Mitel MiCollab CVE-2024-35286, CVE-2024-41713 And An 0day
Just a quick reference for this one because of how it highlights the "..;/" payload for path traversal and gives a shout out to prior work by Orange Tsai.
- 2. curl | Report #2871792 – Buffer Overflow Vulnerability in strcpy() Leading to Remote Code Execution | HackerOne
One of many junk security reports that the curl project has received. There were always junk reports that were blatant copy-and-paste output from scanners. Now there are more and more junk reports that are blatant, but better worded, output from LLMs. It's a lot of time wasting for anyone who has to triage these.
I'll believe LLMs are effective at vuln discovery when we start seeing someone show off even a low five-figure bounty success rate of using them.
For even more context, here's the curl bug-bounty stats and a post from January 2024 about LLM-generated junk.
Curl isn't the only project impacted by generative AI slop, here's a similar post from Python land.
- 3. Cyber incident board’s Salt Typhoon review to begin within days, CISA leader says
Setting a reminder to check up on the progress of this. It's also an opportunity to talk about general appsec practices we'd hope to seeing being practiced to make these kinds of attackers easier to identify early and constrain their impact.
For a reasoned approach to secure communications, check out this post from Bob Lord.
- 4. Supply Chain Attack Detected in Solana’s web3.js Library
I've gone most of the year without bothering with crypto news -- they're all pretty much repetitions of rugpulls, scams, compromised private keys, and bad "smart" contract designs.
Here we have a supply chain example where a popular JavaScript library was backdoored. It surprised me that the reported losses weren't greater. One useful thing about crypto is that it's much easier to quantify impacts since it's always about the (ostensible) value of stolen tokens.
But it also raises a larger question of whether JavaScript and the browser is even appropriate for this kind of use case. I wouldn't have the same reaction for password managers, though.
- 5. Announcing the Adaptive Prompt Injection Challenge (LLMail-Inject) | MSRC Blog | Microsoft Security Response Center
LLM security is more than just prompt injection, but prompt injection is the most fun and accessible type of attack to experiment with -- you barely even have to bother with syntax or real words.
This CTF runs through January 20, 2025. We'll check it out at the end to see if there's anything more interesting than variations on "ignore previous instructions" or "pretend this is a bedtime story".
- 6. Human player outwits Freysa AI agent in $47,000 crypto challenge | The Block
Yeah, a second crypto post in one month. (That's like the entire limit for the year.)
It's another CTF-like competition against generative AIs that demonstrated the difficulty in defending against prompt injection. It also comes with a prize. However, I only consider it CTF-like because you have to pay to participate and, in the world of crypto, anything collecting money already has a scam-adjacent feel to it.
- 7. Oops! 5 serious gen AI security mistakes to avoid
Only half the length of a top 10 list!
I do like how these are presented and explained. Still, a lot of it reads like it could be about an API instead of an AI. Along those lines, two items that stand out for me are, "Excessive, overprovisioned access" and "Neglecting inherited vulnerabilities".
- 8. OWASP Top 10 for LLM Applications 2025
Since I included the Google Cloud blog post on gen AI mistakes, I might as well include the latest version of OWASP's Top 10 for Gen AI (or LLMs) since it was updated back in November.
- 9. Launching the 2024 State of Rust Survey
Influence the direction of Rust and its developer support! Or, if you still refuse to give up C programs, let them know what other keywords you'd like to see beyond "unsafe" -- maybe "inscrutable_code" or "yay_pointers" or just "doomed".
- 1. Bypassing WAFs with the phantom $Version cookie
Our friends at Portswigger work through how telling a WAF to treat cookies as the original "Version 1" standard from 1997 allows various tomfoolery regarding escaping and use of white spaces. Interesting(unfortunate?) to see how different frameworks handle this differently.
- 2. Improper access vulnerability in SailPoint IdentityIQ
In which we find a CVSS 10 rated vulnerability related to file access