COMMENTARY: AI and ML technologies are revolutionizing industries, automating decisions, and optimizing workflows once thought impossible. From applications such as fraud detection in financial services to diagnostic imaging and disease detection in healthcare, we are just scratching the surface of AI/ML’s capabilities. However, the rapid integration of these technologies into business-critical functions introduces novel security risks.
The rise of threats like ML model tampering, data leakage, adversarial prompt injection, and AI supply chain attacks introduces risks that traditional software security methods can’t fully address. Organizations developing or deploying AI-powered technologies must enhance their current practices by integrating new tools and security activities tailored to AI vulnerabilities. Enter Machine Learning Security Operations (MLSecOps)—a comprehensive framework designed to build security into the AI/ML lifecycle.
[SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Read more Perspectives here.]
While often interchangeable, AI refers to systems mimicking human intelligence, while ML, a subset of AI, helps systems improve autonomously. For example, in fraud detection, AI monitors transactions, while ML adapts to detect new patterns. If data gets compromised, the system fails. Across industries, AI remains only as secure as the data powering it.
MLOps vs. DevOps
Scaling AI/ML models required more than data science skills, giving rise to MLOps—a practice for deploying and maintaining models. Similar to DevOps, it emphasizes automation and continuous integration.
But the two disciplines diverge when it comes to the challenges they address. Unlike traditional software, ML models are constantly retrained and updated as they ingest new data. This introduces the potential for new vulnerabilities, as malicious actors can manipulate training data to corrupt models or steal intellectual property by reverse-engineering models. Here’s where the importance of MLSecOps comes into play.
DevSecOps, a more evolved form of DevOps, integrates security into every phase of the software development pipeline. It emphasizes “secure by design” principles, ensuring that security becomes a foundational element of the software development cycle, not an afterthought. This approach has become the industry standard for securing applications in production environments.
We need a similar approach for ML pipelines, and it’s where MLSecOps comes into play. While MLOps focuses on the operational aspects of deploying and maintaining models, MLSecOps ensures that security gets baked into every phase of the AI/ML lifecycle, from data collection and model training to deployment and monitoring. Just as DevSecOps shifted the industry toward embedding security into traditional software pipelines, MLSecOps was designed to make security an integral part of every step in the MLOps process.
The role of MLSecOps in AI security
The AI/ML attack surface introduces several distinct security threats. The example of model serialization attacks involves the injection of malicious code into an ML model during the serialization process –where the model is saved into a specific format for distribution – effectively turning the model into a modern day Trojan Horse that can compromise systems upon deployment.
Data leakage presents another potential risk that can occur when sensitive information from an AI system gets exposed. Adversarial attacks, such as prompt injection, happen when inputs to Generative AI meant to deceive the models into produce incorrect or harmful outputs. Additionally, AI supply chain attacks pose risks by compromising ML assets or data sources that can impact the integrity of AI systems.
MLSecOps mitigates risks by securing pipelines, scanning models, and monitoring behaviors for anomalies. It also safeguards AI supply chains with thorough third-party assessments.
Moreover, MLSecOps promotes collaboration between security teams, ML practitioners, and operations teams to address these risks holistically. By aligning security with the workflows of data scientists, ML engineers, and AI developers, MLSecOps ensures that ML models remain high-performing, and AI systems are secure and resilient against new and evolving threats.
Implementing MLSecOps
MLSecOps goes beyond adopting new tools: it requires cultural and operational shifts. CISOs must advocate for greater collaboration between security, IT, and ML teams. In many organizations, these groups operate in silos, which can lead to security gaps in AI/ML pipelines.
CISOs can start by conducting an AI/ML security audit to assess current vulnerabilities. From there, organizations can establish security controls for data handling, model development, and deployment that are aligned with MLSecOps principles. Finally, continuous training and awareness are important to maintaining an MLSecOps culture as threats evolve.
AI will continue to play an increasingly important role in business operations. As these technologies mature, so too must our approach to securing them. MLSecOps does not function just as a framework—it’s a necessary evolution in security practices that aligns with the unique challenges throughout the lifecycle of AI technologies..
By adopting this approach to AI security that effectively combines people, processes, and tools, organizations can proactively ensure their systems are high-performing and also secure, resilient, and able to adapt to evolving threats.
Diana Kelley, chief information security officer, Protect AI
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.