AI and AppSec: How to avoid insecure AI-generated code
AI is fundamentally changing how software is written. Developers are increasingly relying on code produced by generative AI tools, and application security experts are scrambling to understand and mitigate the resulting risks. To counter this emerging threat, the developer community must take proactive measures and foster a culture of continuous learning, which is essential to staying ahead in the ongoing battle for application security.
This webcast explores the intersection of AI and application security, helping you understand the new security challenges while emphasizing proactive measures to identify and address vulnerabilities. It will highlight the importance of:
- Integrating security from the outset of development
- Leveraging AI for efficient vulnerability detection
- Fostering collaboration between developers, security experts, and AI researchers
- Applying ethical considerations and responsible AI practices to build resilient, trustworthy software systems in the face of evolving cyber threats
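To make the risk concrete, here is a small illustrative sketch (not taken from the webcast itself) of a pattern AI code assistants are known to emit: building a SQL query by interpolating user input, which enables SQL injection. The fix is a parameterized query. The table and function names here are invented for illustration.

```python
import sqlite3


def find_user_insecure(conn, username):
    # Vulnerable: interpolating input directly into SQL enables injection
    # (e.g. username = "x' OR '1'='1" matches every row).
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()


def find_user_secure(conn, username):
    # Safe: a parameterized query treats the input as data, not as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # injection returns all rows
print(len(find_user_secure(conn, payload)))    # input treated literally: no rows
```

Reviewing generated code for patterns like this, rather than accepting it verbatim, is exactly the kind of "security from the outset" habit the session advocates.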
Speakers
InfoSec content strategist, researcher, director, tech writer, blogger and community builder. Senior Vice President of Audience Content Strategy at CyberRisk Alliance.
Chris has worked in cybersecurity for over ten years. His goal is to help equip organizations with the skills they need to protect themselves in the application security space. In his spare time, he can be found at security conferences, helping run BSides Cheltenham, or teaching the next generation of software engineers through code clubs.