AI Red Teaming and AI Safety – Amanda Minnich – ESW #371
In this interview we explore the new and sometimes strange world of red teaming AI. I have SO many questions, like what is AI safety?
We'll discuss her experience at Black Hat, where she delivered two days of training and participated in an AI safety panel.
We'll also discuss the process of pentesting an AI. Will pentesters just have giant cheat sheets or text files full of adversarial prompts? How can we automate this? Will an AI generate adversarial prompts you can use against another AI? And finally, what do we do with the results?
Guest
Dr. Amanda Minnich is a Senior AI Security Researcher at Microsoft on the Microsoft AI Red Team, where she red teams Microsoft’s foundational models and Copilots for safety and security vulnerabilities. Prior to Microsoft, Dr. Minnich worked at Twitter, focusing on identifying international election interference and other types of abuse and spam campaigns using graph clustering algorithms. She also previously worked at Sandia National Laboratories and Mandiant, where she applied machine learning research techniques to malware classification and malware family identification. Dr. Minnich is heavily involved with tech outreach efforts, especially for women in tech. She received her MS and PhD in Computer Science with Distinction from the University of New Mexico.