The U.S. National Institute of Standards and Technology (NIST) has launched a new program to address the role of AI in cybersecurity and privacy.
The program was announced Thursday and will kick off with the development, through the National Cybersecurity Center of Excellence (NCCoE), of a community profile for the “cybersecurity of AI and AI for cybersecurity,” which will help guide implementation of NIST’s Cybersecurity Framework (CSF) 2.0.
The new Cybersecurity, Privacy, and AI program is a response to the rapid spread of AI into many aspects of business and society, including the application of AI to cybersecurity technologies, the use of generative AI (GenAI) tools like ChatGPT in the workplace, and the use of AI by cybercriminals in areas like phishing and deepfakes.
“All of this highlights the critical need for standards, guidelines, tools, and practices to improve the management of cybersecurity and privacy in the age of AI, ensure the responsible adoption of AI for cybersecurity and privacy protection purposes, and identify important actions organizations must take to adapt their defensive response to AI-enabled offensive techniques,” Katerina Megas, program manager for NIST's AI and cybersecurity programs, wrote in a blog post.
The program announcement addresses several risks and benefits of AI for privacy and security, including the risk of re-identification of private and confidential information from AI training datasets, the use of AI for behavioral analysis and surveillance, the application of AI to cybersecurity tasks like threat hunting, and the use of AI to help users navigate privacy preferences online.
Community profiles are a new feature of NIST’s CSF 2.0 that help tailor the framework to groups with shared interests, such as certain business sectors, business sizes, threat types and technology types. The AI profile created as part of the Cybersecurity, Privacy, and AI program will focus on three main aspects of AI’s impact on organizations:
- Addressing cybersecurity and privacy risks of AI use at organizations by securing AI system components and machine learning (ML) infrastructures and minimizing data leakage.
- Determining how to defend against AI-enabled cyberattacks.
- Using AI in cyber defense activities and to improve privacy protections through AI-powered assistance.
The development of this community profile will help adapt the CSF 2.0 to the context of AI use, and can eventually help adapt other frameworks, such as the Privacy Framework, AI Risk Management Framework and NICE Framework, to organizations’ use of different AI tools.
The program builds on NIST’s other work related to AI safety and privacy, including the publication of its AI threat guide, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” in January 2024, the publication of the open-source ML model security testing tool Dioptra in July 2024, and last month’s partnership between NIST’s AI Safety Institute and the GenAI companies OpenAI and Anthropic to enable pre-release testing of the companies’ new models.