Keeping Curl Successful and Secure Over the Decades – Daniel Stenberg – ASW #320
1. Keeping Curl Successful and Secure Over the Decades – Daniel Stenberg – ASW #320
Curl and libcurl are everywhere. Not only has the project maintained success for almost three decades now, but it's done that while being written in C. Daniel Stenberg talks about the challenges in dealing with appsec, the design philosophies that keep it secure, and fostering a community to create one of the most recognizable open source projects in the world.
Announcements
Security Weekly listeners save $100 on their RSA Conference 2025 Full Conference Pass! RSA Conference will take place April 28 to May 1 in San Francisco and on demand. To register using our discount code, please visit securityweekly.com/rsac25 and use the code 5U5SECWEEKLY! We hope to see you there!
Guest
Daniel Stenberg is a Swedish Internet protocol expert and developer who has participated in and worked with Open Source for 30 years. He is best known for being the founder and lead developer of the curl project, one of the world's most widely used software components. He also participates in protocol development within the IETF and has authored books on curl, Open Source, HTTP/2, HTTP/3 and more. He is a frequent public speaker.
2. QR Codes Replacing SMS, MS Pulls VSCode Extension, Threat Modeling, Bybit Hack – ASW #320
Google replacing SMS with QR codes for authentication, MS pulls a VSCode extension due to red flags, threat modeling with TRAIL, threat modeling the Bybit hack, malicious models and malicious AMIs, and more!
Announcements
Identiverse 2025 is returning to Las Vegas, June 3-6. Hear from 250+ expert speakers and connect with 3,000+ identity security professionals across four days of keynotes, breakout sessions, and deep dives into the latest identity security trends. Plus, take part in hands-on workshops and explore the brand-new Non-Human Identity Pavilion. Register now and save 25% with code IDV25-SecurityWeekly at https://www.securityweekly.com/IDV2025
- 1. VSCode extensions with 9 million installs pulled over security risks
This is the kind of article I used for a journey through text, subtext, and context.
The text is that Microsoft pulled a VSCode extension from its marketplace because of security concerns over several red flags in the code, from obfuscated JavaScript to "unreasonable dependencies including a utility for running child processes."
The subtext is that the extension's owner created several alternative entities to re-upload a cleaned version, which looked like attempts to bypass the ban.
The context I wanted to highlight was the broader aspect of dealing with dependencies, especially in the npm and JavaScript world. In the ensuing discussion, the maintainer claimed, "That dependency has been there since 2016..." and that removing it "was a quick 30-second fix…"
That prompts the question: how many dependencies do projects preserve out of inertia? In this case, the dependency was almost a decade old, apparently not meaningful to the extension's functionality, and trivially replaced. JavaScript and npm are a universe of polyfills and scaffolding around many simple or even trivial functions. The classic left-pad, also from 2016, boiled down to a 17-line file.
The ubiquity of browser auto-updates and the now minuscule variety of browser engines likely make a lot of decade-old polyfills and even modern packages either obsolete or redundant. I love the approach of removing code as a security technique -- it reduces attack surface, reduces dependencies, and can improve performance. But who's going to prioritize that kind of work? Is this a task where LLMs can demonstrate effectiveness and value?
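As an illustration of how trivial some of these removals can be: modern runtimes ship String.prototype.padStart natively, so a left-pad-style dependency can usually be deleted outright. A minimal sketch in TypeScript:

```ts
// Before: an npm dependency (and its supply chain) for a one-line operation.
// import leftPad from "left-pad";
// const padded = leftPad("42", 5, "0");

// After: the native method, standard since ES2017 in every modern runtime.
const padded = "42".padStart(5, "0"); // "00042"
console.log(padded);
```

Deleting the import removes the package, its maintainer, and its registry account from your threat model in one commit.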
- 2. Threat modeling the TRAIL of Bits way
Yep, the uppercase TRAIL is a nod to a new acronym, Threat and Risk Analysis Informed Lifecycle.
I would have gone for BITS, Building Infosec Threat Scenarios.
But the more important aspect of this article is its approach to threat modeling. I like its emphasis on security boundaries and reviewing designs to not "just uncover direct threats to the data that each component handles, but also emergent weaknesses that arise from improper interaction between components, and other architectural and design-level risks."
In other words, if you're doing a threat modeling exercise with developers that involves going through each and every endpoint to repeatedly ask if it might be vulnerable to XSS or SQL injection, you're wasting everyone's time. That kind of approach is better suited to scanners. Spend the time with developers asking how their core design intends to handle user-supplied data, where the trust boundaries are, and what the major workflows look like.
That approach is a more effective way to produce a useful result. I strongly agree with the article's point that, "A proper threat model exposes design-level weaknesses (of which individual vulnerabilities are symptoms)…"
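As an illustration of what boundary-focused review can look like, here's a minimal, hypothetical sketch in the spirit of threat-modeling-as-code tools. All component names, zones, and flows are invented for illustration:

```ts
// Hypothetical components mapped to trust zones.
type Zone = "internet" | "app" | "db";

const zones: Record<string, Zone> = {
  "browser": "internet",
  "api-gateway": "app",
  "orders-service": "app",
  "reporting-db": "db",
};

// Data flows between components; `control` names the mitigation, if any.
interface Flow {
  from: string;
  to: string;
  control?: string;
}

const flows: Flow[] = [
  { from: "browser", to: "api-gateway", control: "session auth + input validation" },
  { from: "api-gateway", to: "orders-service" }, // same zone, lower priority
  { from: "orders-service", to: "reporting-db" }, // crosses app -> db with no control
];

// Design-level review: flag trust-boundary crossings that lack a stated control.
for (const f of flows) {
  if (zones[f.from] !== zones[f.to] && !f.control) {
    console.log(`finding: ${f.from} -> ${f.to} crosses ${zones[f.from]} -> ${zones[f.to]} without a control`);
  }
}
```

The point isn't the tooling; it's that enumerating zones and flows surfaces the emergent, between-component weaknesses that per-endpoint questioning never reaches.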
- 3. All 50 States Have Now Introduced Right to Repair Legislation
The U.S. has close to 50 state-level data privacy acts, so why not have another 50 related to right-to-repair?
The point here is that both data privacy and right-to-repair are important topics with complex trade-offs between businesses and consumers. It's not surprising that states would experiment with various approaches, but navigating all the differences gets difficult fast.
Nevertheless, this is a positive step for the security research community, for owners of devices (where devices span the spectrum from tiny electronic gadgets to tractors and trains), and even for open source, which can provide alternatives to some software in these devices.
- 4. Risky Bulletin: North Korean hackers steal $1.5 billion from Bybit
Ah, crypto. The best self-funding bug bounty ever created.
I'm always torn about covering crypto hacks. They seem so commonplace and mostly result from scams and rugpulls. But many have interesting lessons in the category of "business logic" that appsec loves to talk about, where smart contracts have unintended consequences due to poorly thought through market logic, missing controls on who can take what types of actions, and manipulation through external sources.
This hack is obviously notable for its size, but also for the clever technical ways it apparently played out, from manipulating a UI to subvert a "secure" signing ceremony to using a malicious smart contract to take over the wallet. It looked like a good topic to pair with the threat modeling article; a sketch of the kind of independent signing check at stake follows the resource list.
Additional resources:
- https://announcements.bybit.com/article/incident-update---eth-cold-wallet-incident-blt292c0454d26e9140/
- https://research.checkpoint.com/2025/the-bybit-incident-when-research-meets-reality/
- https://www.web3isgoinggreat.com/?id=bybit-hack
- https://blog.trailofbits.com/2025/02/25/how-threat-modeling-could-have-prevented-the-1.5b-bybit-hack/
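One mitigation that comes up in the post-mortems is refusing to blind-sign: independently recompute the hash of the Safe transaction you're approving and compare it against what the hardware wallet displays, instead of trusting the web UI that rendered it. A minimal sketch, assuming ethers v6 and the SafeTx EIP-712 schema published for Safe contracts v1.3.0+; all addresses and values below are placeholders:

```ts
import { TypedDataEncoder } from "ethers";

// Safe's SafeTx EIP-712 schema (contracts v1.3.0+).
const types = {
  SafeTx: [
    { name: "to", type: "address" },
    { name: "value", type: "uint256" },
    { name: "data", type: "bytes" },
    { name: "operation", type: "uint8" }, // 0 = CALL, 1 = DELEGATECALL
    { name: "safeTxGas", type: "uint256" },
    { name: "baseGas", type: "uint256" },
    { name: "gasPrice", type: "uint256" },
    { name: "gasToken", type: "address" },
    { name: "refundReceiver", type: "address" },
    { name: "nonce", type: "uint256" },
  ],
};

// Placeholders: fill these in from the transaction you were actually asked to sign.
const domain = { chainId: 1, verifyingContract: "0x0000000000000000000000000000000000000001" };
const tx = {
  to: "0x0000000000000000000000000000000000000002",
  value: 0n,
  data: "0x",
  operation: 0,
  safeTxGas: 0n,
  baseGas: 0n,
  gasPrice: 0n,
  gasToken: "0x0000000000000000000000000000000000000000",
  refundReceiver: "0x0000000000000000000000000000000000000000",
  nonce: 42n,
};

// A delegatecall hands full control of the wallet's logic to `to`;
// reportedly what the signers approved without realizing it.
if (tx.operation === 1) console.warn("DELEGATECALL: treat with extreme suspicion");

// Compare this hash against the one shown on the hardware wallet's screen.
console.log("safeTxHash:", TypedDataEncoder.hash(domain, types, tx));
```

If the recomputed hash doesn't match the device display, the UI is lying to you, and no amount of ceremony around the signing makes that safe.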
- 5. OpenSSF Announces Initial Release of the Open Source Project Security Baseline
I know everyone has been worried about how they should "establish a minimum set of security-related best practices for open source software projects, relative to their maturity level."
Joking aside, it's still helpful to have a simplified way to map good practices for securing a project against existing standards. Not everyone wants to read standards. Plus, this effort notes that it fills in gaps that the OpenSSF Scorecard can't automate and adds criteria that aren't covered by the OpenSSF Best Practices Badges.
So, there's a reason for this to be around. Overall, it reflects welcome investment in open source: it demonstrates how projects manage the security processes behind curating critical software and provides another signal for where they need more help.
Check out the baseline at https://baseline.openssf.org/
- 6. Alpha-Omega 2024 Annual Report
An older article, but one I wanted to include as a companion to the OSPS Baseline article.
OpenSSF's investments in the open source ecosystem are helping staff security roles and create Rust-based implementations of TLS and a codec, and the program is looking to secure even more in 2025.
One of their goals for the end of 2025 is that "the top 10,000 open source projects are free of critical security vulnerabilities." It's a quantifiable number with a relatively quantifiable qualifier. We'll check back at the end of the year to see how this went.
More importantly, we'll track the strategies for achieving this goal. Chasing bug reports from a bunch of scanners might be correlated with success, but there will have to be more effective approaches, like eradicating vuln classes and applying automation, to make security sustainable at that scale.
Check out the full report (pdf).
- 7. Gmail Security Alert: Google To Ditch SMS Codes For Billions Of Users
Goodbye SMS delivery, hello QR codes.
Google is pushing for passkeys over passwords and now QR codes over SMS as a supplementary authentication mechanism. Deprecating SMS isn't surprising given the history of SIM hijacking attacks. Google also points out the financial impacts of SMS fraud, which is an important dimension to quantifying the negative impacts of relying on SMS.
I know many infosec folks will object to the lack of human readability of QR codes, but that objection seems superficial and based on a very shallow threat model.
My biggest complaint is that cybersecurity awareness training is going to replace one stupid name for phishing with an even more stupid name for phishing. I have some thoughts on this.
- 1. Clouded Judgment: The AMI Name Game Gone Wrong
It's a case of mistaken identity in the cloud: an automated script sees "latest version" and says, "Good enough for me!" without asking who's behind it. It's a reminder that what you don't see can hurt you, and trusting automation without verification is the cloud equivalent of leaving your door unlocked in a bad neighborhood.
This article dives deep into how attackers are exploiting AMI naming conventions to gain unauthorized access to cloud environments. By crafting malicious AMIs with lookalike names, they can trick scripts into deploying compromised instances.
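The usual fix is to never resolve "latest" by name alone: pin the owner so a lookalike name published from a stranger's account can never match. A minimal sketch with the AWS SDK for JavaScript v3; the region, name pattern, and owner alias are illustrative:

```ts
import { EC2Client, DescribeImagesCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

// Resolve the newest matching AMI, but ONLY from a trusted owner.
// Omitting Owners is the bug this attack exploits: any public AMI
// with a matching name becomes a candidate.
const { Images = [] } = await ec2.send(
  new DescribeImagesCommand({
    Owners: ["amazon"], // or the specific account ID you trust
    Filters: [{ Name: "name", Values: ["al2023-ami-2023*-x86_64"] }],
  }),
);

// Pick the most recently created image among the trusted matches.
const latest = Images
  .filter((img) => img.CreationDate && img.ImageId)
  .sort((a, b) => b.CreationDate!.localeCompare(a.CreationDate!))[0];

if (!latest) throw new Error("no trusted AMI matched the name pattern");
console.log("using", latest.ImageId, latest.Name);
```

The same owner-pinning applies to infrastructure-as-code AMI lookups, where a "most recent by name" query has the identical sharp edge.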
- 2. Malicious ML models discovered on Hugging Face platform
When machine learning meets malware: The recent “nullifAI” exploit showed how attackers can stash malicious payloads inside seemingly benign ML models, bypassing platform defenses with a simple file format twist. This isn’t just a data science headache—it’s a security threat: any code you run, even code disguised as a helpful AI model, can turn into a security liability.
The AppSec Lesson: treat ML models as a new breed of software dependency, verify their integrity, and apply the same stringent security practices you would for any third-party library.
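As one concrete way to "verify their integrity": record a digest when a model artifact is first vetted and refuse to load anything that doesn't match, the same way a lockfile pins a package hash. A minimal sketch in TypeScript; the path and pinned digest are placeholders:

```ts
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Digest recorded when the model was first reviewed (placeholder value).
const EXPECTED_SHA256 = "replace-with-the-digest-recorded-at-review-time";

// Hash the artifact on disk and compare it to the pinned digest.
function verifyModelArtifact(path: string): void {
  const actual = createHash("sha256").update(readFileSync(path)).digest("hex");
  if (actual !== EXPECTED_SHA256) {
    throw new Error(`${path} failed integrity check: got sha256 ${actual}`);
  }
}

// Gate the ML runtime behind the check, and prefer non-executable formats
// (e.g., safetensors) over pickle-based ones wherever possible.
verifyModelArtifact("./model.safetensors");
```

A digest pin won't catch a model that was malicious from the start, but it does stop the swap-after-review problem and forces provenance into the conversation.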