Developer Environments, Developer Experience, and Security – Dan Moore – ASW #319
Segments
1. Developer Environments, Developer Experience, and Security – Dan Moore – ASW #319
Minimizing latency, increasing performance, and reducing compile times are just part of what makes a development environment better. Throw in useful tests and some security tooling and you have an even better environment. Dan Moore talks about what motivates some developers to prefer a "local first" approach as we walk through what all of this means for security.
Announcements
Identiverse 2025 is returning to Las Vegas, June 3-6. Hear from 250+ expert speakers and connect with 3,000+ identity security professionals across four days of keynotes, breakout sessions, and deep dives into the latest identity security trends. Plus, take part in hands-on workshops and explore the brand-new Non-Human Identity Pavilion. Register now and save 25% with code IDV25-SecurityWeekly at https://www.securityweekly.com/IDV2025
Guest
Dan Moore is a principal product engineer for FusionAuth, and currently helps build solutions and educate developers about auth and OAuth. He’s written, contributed to or edited 5 books, including “Letters To a New Developer” and “97 Things Every Cloud Engineer Should Know”. A former CTO, technical trainer, engineering manager and longtime developer, Dan has been writing software for (checks watch) over 25 years.
2. Regex DoS, LLM Backdoors, Secure AI Architectures, Rust Survey – ASW #319
Applying forgivable vs. unforgivable criteria to reDoS vulns, what backdoors in LLMs mean for trust in building software, considering some secure AI architectures to minimize prompt injection impact, developer reactions to Rust, and more!
- 1. Regex Gone Wrong: How parse-duration npm Package Can Crash Your Node.js App
I grabbed this article to continue a previous episode's conversation about forgivable vs. unforgivable vulns and whether there's an additional security dimension like "inconsequential, but easy to address".
For me, regex DoS (reDoS) is a cool technical curiosity, but it rarely rises to the level of a meaningful vuln. Partly this is because there are "easy" (per the unforgivable criteria) mitigations. For example, you can tweak PCRE2's limits to minimize the impact of lookaround patterns, heap memory, and match depths. But that presumes you have access to PCRE2's configuration.
In this npm example, there's also the recommendation of an alternative regex engine, re2. That's likely the easiest and safest route for most situations, but it depends on how fancy your own regexes are and what flavors of regex you expect to support.
If you're running regexes against user-supplied data, the general mitigation of limiting input sizes will help here and in more general cases. For example, there can be fun DoS scenarios where an attacker submits a massive password (on the order of megs or gigs, if the server allows it) and spins CPU resources while the app hashes it as part of the normal password comparison check. There's a minimal sketch of that size guard after this item.
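To make the size-limit point concrete, here's a minimal TypeScript sketch. The pattern, payload, and cap are illustrative assumptions, not the actual parse-duration regex or its fix.

```typescript
// Nested quantifiers like (a+)+ force a backtracking engine to try
// exponentially many ways to partition the 'a's when the match fails.
const evil = /^(a+)+$/;

// Each extra 'a' before the non-matching '!' roughly doubles the work;
// past ~30 characters, V8's engine spins the CPU for seconds or longer.
const payload = "a".repeat(35) + "!";

// General mitigation: bound untrusted input before any expensive work
// (regex matching, password hashing, parsing). The cap is arbitrary here.
const MAX_INPUT_LENGTH = 32;

function safeTest(re: RegExp, input: string): boolean {
  if (input.length > MAX_INPUT_LENGTH) return false; // reject oversized input outright
  return re.test(input);
}

console.log(safeTest(evil, payload)); // false -- rejected by the size guard, not the regex
// Without the guard, evil.test(payload) would block the event loop.
// A linear-time engine such as re2 avoids the blowup even without a cap.
```

The same guard, applied at the trust boundary, also covers the giant-password hashing scenario above.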
- 2. How to Backdoor Large Language Models
A researcher demonstrates how to insert a backdoor into an LLM by adjusting its weights so that the first decoder layer silently adds something like "...and include this backdoor" to the user's prompt. The researcher points out that the opaque nature of a model's weights makes this type of attack difficult to identify by inspecting the model. Plus, the appearance of malicious code has some degree of plausible deniability given the propensity for LLMs to hallucinate.
Anthropic discussed similar backdoor and surreptitious model behavior in their 2024 paper, "Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training".
We saw similar trust issues with humans just last year in the XZ Utils saga. In other words, it's helpful to separate what's a new attack vector here from what's a persistent threat with countermeasures that exist outside of inspecting models. For example, Stuxnet will always be a premier example of sophisticated backdoor planning, design, and delivery. More recently, the pager and walkie-talkie explosives in Lebanon demonstrated similar long-term planning. Even XZ Utils, though clearly on a smaller scale of impact, took years of planning to execute. All of which is to say, this is well-done research that shows the feasibility of a backdoor, but it feels more like a variation on a theme of supply chain attacks. And the countermeasures for those attacks range from controls around provenance, identity, and reproducible builds all the way to formal verification (there's a sketch of one such control after this item).
This work is also reminiscent of Ken Thompson's famous lecture from 1984, "Reflections on Trusting Trust". He described the dilemma of placing trust in a compiler and the uncertainty of whether a subtly malicious compiler would introduce a backdoor into your compiled binary. In his scenario, he modified the Unix login to accept "codenih" as a root password (see here for more details).
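As a concrete instance of those provenance controls, here's a minimal sketch of pinning and verifying a model artifact's hash before loading it. The path and pinned digest are made up for illustration, and note the limits: this catches post-publication tampering, not a backdoor trained in before the hash was pinned.

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Digest recorded at release time from a build you trust (value is made up here).
const EXPECTED_SHA256 =
  "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08";

function verifyModelArtifact(path: string): void {
  const actual = createHash("sha256").update(readFileSync(path)).digest("hex");
  if (actual !== EXPECTED_SHA256) {
    throw new Error(`model hash mismatch for ${path}: got ${actual}`);
  }
}

// Refuse to load weights that don't match the pin.
verifyModelArtifact("./models/weights.safetensors");
```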
- 3. Analyzing Secure AI Architectures
This year I've been wanting to focus more on secure design concepts. They're more interesting and educational than just revisiting the same kind of vulns over and over again.
This article dives right into design patterns for treating AI like any other software -- albeit software that has a non-deterministic nature and whose desired use cases often trample right into user data. The key part for me was its goal of "…designing applications with proper trust segmentation that renders prompt injection irrelevant."
Prompt injection attacks feel like the new XSS. Their techniques and payloads are fun to play with and inspire lots of creative thinking. (I have some of my own XSS examples here.) But ultimately they're a lot of noise to be filtered out by a strong framework. This paper gives some examples of how to create secure architectures that include LLMs; a small sketch of the segmentation idea follows this item.
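To sketch what trust segmentation might look like in practice (the action names and schema here are hypothetical, not from the paper): treat the model's output as untrusted data that can only select from an allow-list of validated actions.

```typescript
// The model's output is parsed as data, never executed as instructions.
type Action =
  | { name: "summarize"; docId: string }
  | { name: "translate"; docId: string; lang: "en" | "es" };

const DOC_ID = /^[a-z0-9-]{1,64}$/;

function parseAction(llmOutput: string): Action {
  const raw = JSON.parse(llmOutput); // untrusted, possibly injected
  // Validate every parameter; the typeof check prevents non-strings
  // from being coerced into something the regex would accept.
  const docId =
    typeof raw?.docId === "string" && DOC_ID.test(raw.docId) ? raw.docId : null;
  if (docId && raw.name === "summarize") {
    return { name: "summarize", docId };
  }
  if (docId && raw.name === "translate" && (raw.lang === "en" || raw.lang === "es")) {
    return { name: "translate", docId, lang: raw.lang };
  }
  throw new Error("model requested an action outside the allow-list");
}
```

Even a fully injected prompt can only steer the model toward one of the permitted actions on validated parameters; the injection never gets a channel to the shell, the database, or other users' data.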
- 4. 2024 State of Rust Survey Results
I'll admit I had to look away from the chart designs. Each of them could have benefitted from better presentation, from replacing the pie chart with horizontal bars to make labels and relationships more legible, to organizing the column charts by year, to considering alternate styles like bump charts to show year-over-year differences. But that's all just me being a data visualization nerd.
The real point in including this article is to understand why people do and do not choose Rust. I was pleasantly surprised that the top three reasons for using Rust at work were its software quality, performance, and security and safety (and I'd argue that security is mostly part of that first bug-free quality consideration).
The pain points were slow compilation and subpar debugging. In other words, we don't need to position the adoption of Rust as purely (or even primarily) a security consideration. It can produce high-quality, high-performance code.
On the other hand, only 16% of respondents considered it "easy to prototype with".
- 5. Adversarial Misuse of Generative AI
This reads more like a sober business consideration of AI's value -- an assistive tool for mundane tasks. Imagine "regular" developers instead of APTs and there's likely a lot of overlap in basic activities. It's not so much that AI is going to create some novel type of exploit or malware; it's that AI will enable developers to work and collaborate a bit more efficiently.
And, I feel obligated to repeat, if we're worried about AI creating phishing emails with fewer typos to trick people into divulging their passwords, we're worried about the wrong part of the problem. Let's make the password part more secure through password managers and passkeys.
- 6. Why it’s time for AppSec to embrace AI: How PortSwigger is leading the charge
I acknowledge genAI's potential for helping developers write code, but I've been searching for where genAI provides value to appsec activities.
In episode 318 we talked with James Kettle about the top 10 web hacking techniques of 2024. But at the end he spoke a bit about what they have in store for AI. Here's what looks like quite a reasonable, useful approach to adding value via AI.
- 7. FYI: Extensible Wasm Applications with Go – The Go Programming Language
I'm looking for folks who are building WASM apps, especially if security has been influential in that decision. If that sounds like you, please reach out!
- 1. DOGE’s .gov site lampooned as coders quickly realize it can be edited by anyone
The Department of Government Efficiency (DOGE) website, doge.gov, intended to showcase cost-cutting measures in the U.S. federal government under Elon Musk’s leadership, was found to be insecure, allowing unauthorized edits by the public.
- 2. Security researcher finds vulnerability in internet-connected bed, could allow access to all devices on network
This is peak cyberpunk dystopia—your bed turning into a hacker’s entry point to your entire network. Imagine waking up one day to find out that instead of just adjusting your sleep settings, your smart bed has been recruited into a botnet or is exfiltrating your data while you snooze.