
How AI should – and shouldn’t – assist code developers


Demand for new software continues to surge, increasing the pressure on developers to ship more products, more rapidly. Given this challenging environment, it should come as no surprise that the vast majority of developers now use artificial intelligence (AI) to better position themselves to cross the finish line with time to spare.

More than nine out of 10 U.S. developers are deploying AI coding tools, citing advantages such as productivity gains (53%), the ability to focus on building and creating rather than on repetitive tasks (51%), and the prevention of burnout (41%). These tools are also cutting the time spent generating and documenting code by nearly half.

The benefits will likely pave the way for even greater AI adoption. In addition to reducing time spent on repetitive, even tedious, work, AI can suggest new lines of code and respond to technical questions with solid recommendations. The technology can even offer research assistance and explain processes that might otherwise trip up a developer in their quest to solve an ever-growing list of challenges.

But we cannot lose sight of the need to keep secure coding practices front of mind in software development, even when deploying AI tooling. We cannot blindly trust the output, as hallucination remains a leading concern when implementing its recommendations. Deciphering security best practices and spotting poor coding patterns, the type that can lead to exploitation, have emerged as skills that developers must hone, and that companies must invest in at the enterprise level. We cannot replace the critical "people perspective" that anticipates and defends against increasingly sophisticated attack techniques.
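
To make the stakes concrete, consider the kind of pattern a security-skilled reviewer must catch. The sketch below is hypothetical and not drawn from any particular assistant's output: an AI tool asked to "look up a user" can plausibly produce the string-built query, which is exploitable, when the parameterized version is what belongs in production.

```python
import sqlite3

# Hypothetical example of an insecure pattern an AI assistant can plausibly
# suggest: building SQL by string interpolation opens the door to injection.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    # A username of "' OR '1'='1" turns this into a query that dumps every row.
    return conn.execute(query).fetchall()

# The fix a security-aware reviewer would insist on: a parameterized query,
# where the database driver keeps user data separate from the SQL structure.
def find_user_secure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```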

Without this perspective, we'll have more developers around the world creating insecure software than ever before. It's a situation that carries immense risk: the productivity gains developers get from AI coding assistants are a boon to swift code delivery, but a lack of contextual security awareness can ramp up production of exploitable vulnerabilities. Speed is a virtue in software development, but it must share the spotlight with security best practices. Developers can strike that balance when they are properly upskilled.

In addressing the topic on a recent podcast, I said that we've yet to meet a standard for security in AI coding. The technology simply hasn't received enough training on insecure code to capture the intelligence required to identify the wide range of threats that exist. We may get there in a few years, but we're not there yet, and until that day comes, we cannot blindly trust AI to produce quality products that are protected.

We still need security-skilled developers to drive organizational strategy and produce protected code, taking on tasks such as the following:

  • Fix bugs and inconsistencies: While certain AI tools will flag potential vulnerabilities and inconsistencies, humans must still provide cautionary oversight. Detection is only as accurate as the developer's input and initial prompting, and developers need to understand how AI recommendations apply in the greater context of the project (see the sketch after this list).
  • Focus on complexities and the big picture: AI isn't ready to fly solo on complicated components, or to brainstorm new and creative solutions to DevOps challenges. Developers have the technical knowledge to understand the wider goals and potential outcomes, and they need to keep adding security best practices to their measures of success.
  • Implement new languages: AI can slow developers down when they work with unfamiliar languages or frameworks; building a comfort zone of understanding takes time, training and agile learning.
  • Collaborate through feedback: Developers say feedback makes a positive impact on their work and helps them do a better job. For now, at least, we should continue designating collaboration as a human-to-human process.
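
On the first point, here is a minimal sketch of why detection depends on project context. The function, path and guard below are hypothetical: the join looks harmless in isolation, and only a developer who knows that `filename` arrives from an untrusted request knows the traversal check is needed.

```python
from pathlib import Path

BASE_DIR = Path("/srv/app/uploads")  # hypothetical location for stored uploads

def read_upload(filename: str) -> bytes:
    # Reviewed in isolation, this join looks innocent, and a scanner without
    # project context may pass it. A developer who knows `filename` comes
    # from an HTTP request knows it can be something like "../../etc/passwd".
    candidate = (BASE_DIR / filename).resolve()
    # The contextual guard a security-aware reviewer adds (Python 3.9+):
    # refuse any path that resolves outside the upload directory.
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError("path escapes the upload directory")
    return candidate.read_bytes()
```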

We could think of the current state of AI much as we think of the toys of yesteryear, when manufacturers could put out potentially harmful products such as the Easy-Bake Oven and, believe it or not, lawn darts. Recognizing the need for a regulatory presence, we established the U.S. Consumer Product Safety Commission (CPSC) to protect children and their families from the risk of injury, or even death, that these products posed.

In similar fashion, we can't go "all in" on AI for coding without regard to the associated hazards of vulnerabilities and threats. We must create oversight that works much like the CPSC to ensure our products are secure and, in time, "train" AI to identify and eliminate all possible risks. But until that day comes, we will always need human perspective and input.

Pieter Danhieux, co-founder and CEO, Secure Code Warrior
