COMMENTARY: AI isn't waiting for security teams to catch up. It's running full-steam ahead, without any regard for what may stand in its way.
The recent security issue that hit the news surrounding DeepSeek—where Wiz researchers uncovered extensive vulnerabilities including exposed databases, weak encryption, and susceptibility to AI-model jailbreaking—serves as a stark warning for organizations rushing to adopt AI technologies without proper security controls.
[SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Read more Perspectives here.]
The vulnerabilities discovered in DeepSeek reveal a pattern in how organizations approach AI security. When Wiz uncovered a publicly-accessible ClickHouse database containing sensitive chat histories and API secrets, it exposed more than just DeepSeek's technical oversights—it revealed the fundamental gaps in how we're securing AI systems.
The discovered vulnerabilities read like a security team's nightmare checklist. Beyond the exposed database, SecurityScorecard's STRIKE team identified outdated cryptographic algorithms and weak data protection mechanisms.
Researchers found SQL injection vulnerabilities that could give attackers unauthorized access to user records. Most concerning, the DeepSeek-R1 model showed alarming failure rates in security tests—91% for jailbreaking and 86% for prompt injection attacks.
DeepSeek is news, but AI threats aren't
DeepSeek isn't an anomaly. It's a canary in the coal mine, warning us about the security challenges that come with rapid AI adoption. The company's practice of collecting user inputs, keystroke patterns, and device data, highlights the complex data privacy implications of AI deployment.
Beyond DeepSeek's specific case, AI has introduced a range of security challenges across the technology landscape, including:
Take control of the company’s AI security
While the AI security landscape may seem daunting, organizations aren't powerless. Develop comprehensive exposure management strategies before rolling out AI technologies. From our experience working with enterprises across industries, here are the essential components of an effective program:
The DeepSeek incident serves as a critical wake-up call for organizations racing to implement AI technologies. As AI systems become increasingly integrated into core business operations, the security implications extend far beyond traditional cybersecurity concerns. Organizations must recognize that AI security requires a fundamentally different approach—one that combines robust technical controls with comprehensive exposure management strategies.
The rapid pace of AI advancement means security teams can't afford to play catch-up. Instead, teams must build security considerations into AI initiatives from the ground up, with continuous monitoring and testing becoming standard practice. The stakes are simply too high to treat AI security as an afterthought.
Organizations need to act now to implement comprehensive exposure management programs that address the unique challenges of AI security. Those that fail to do so risk not just data breaches and regulatory penalties, but potentially catastrophic damage to their operations and reputation. In the evolving landscape of AI technology, we can’t consider security an option. We need to make security fundamental to how we build and deploy AI systems.
Graham Rance, vice president, global pre-sales, CyCognito
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.