As AI tools rapidly infiltrate the enterprise, cybersecurity leaders are sounding the alarm: without foundational governance, visibility into AI usage, and strong internal collaboration, the technology’s benefits may be overshadowed by unanticipated risks.
That’s the consensus from a recent CISO eRoundtable hosted by CyberRisk Collaborative in partnership with Sophos.
The top takeaway? AI governance isn’t optional. With AI’s growing influence over business decisions, customer experiences, and operational efficiency, the lack of a clear, ethical, and secure framework invites both reputational and regulatory peril. Participants emphasized the importance of establishing governance committees, defining ownership, and integrating cybersecurity, compliance, and legal functions into the earliest stages of AI development and procurement.
Visibility also emerged as a weak point. Many organizations struggle to understand the full extent of their AI footprint. From shadow AI experiments to vendor-supplied solutions, the lack of a centralized AI inventory hampers compliance and security efforts. A robust AI registry not only supports internal oversight but also prepares businesses for audits, assessments, and future AI legislation.
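For a sense of what such a registry looks like in practice, the sketch below models one inventory record in Python. Everything here is illustrative: the field names, the example system, and the review logic are assumptions rather than anything prescribed at the roundtable.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInventoryEntry:
    """One record in a centralized AI registry (illustrative schema only)."""
    system_name: str          # e.g., "support-chatbot"
    owner: str                # accountable business owner
    vendor: str | None        # None for internally built models
    use_case: str             # what the system does, and with which data
    data_classification: str  # e.g., "public", "internal", "confidential"
    approved: bool = False    # has it passed governance review?
    last_reviewed: date | None = None

# A vendor-supplied tool discovered during an audit gets registered like this:
registry = [
    AIInventoryEntry(
        system_name="resume-screening",
        owner="HR Operations",
        vendor="ExampleVendor",
        use_case="Ranks inbound job applications",
        data_classification="confidential",
    )
]

# Unapproved or never-reviewed entries become the follow-up list for the
# governance committee, and the paper trail for future audits.
for entry in registry:
    if not entry.approved or entry.last_reviewed is None:
        print(f"Needs governance review: {entry.system_name} (owner: {entry.owner})")
```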
On the security front, participants described a shifting identity for cybersecurity teams: from reactive defenders to proactive enablers and watchdogs. As generative and predictive models expand, so too do the potential threat vectors. Citing a need for AI-specific playbooks and real-time monitoring tools, participants urged teams to embed controls into the software development lifecycle (SDLC) and third-party integrations.
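One lightweight way to embed such a control into the SDLC is a pre-merge gate that fails the build when code declares an AI component that is missing from, or unapproved in, the central registry. The Python sketch below is hypothetical; the component names and the idea of a repo manifest are assumptions.

```python
import sys

# Stand-in for a lookup against the central AI registry described above.
APPROVED_AI_COMPONENTS = {"support-chatbot", "code-review-assistant"}

def check_ai_dependencies(declared: list[str]) -> int:
    """Return a shell-style exit code: 0 if every declared AI component
    is registered and approved, 1 otherwise (which fails the CI stage)."""
    unapproved = [name for name in declared if name not in APPROVED_AI_COMPONENTS]
    if unapproved:
        print(f"Blocked: unregistered AI components: {', '.join(unapproved)}")
        return 1
    print("All declared AI components are registered and approved.")
    return 0

if __name__ == "__main__":
    # In practice the declarations would be read from a manifest in the repo;
    # they are hard-coded here to keep the sketch self-contained.
    sys.exit(check_ai_dependencies(["support-chatbot", "resume-screening"]))
```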
But policies and tech controls are only part of the equation. User education surfaced as a critical, yet often overlooked, line of defense. With the rise of tools like ChatGPT and GitHub Copilot, employees are experimenting with AI in ways that can inadvertently expose sensitive data or violate compliance mandates. Training initiatives, especially those focused on prompt engineering and acceptable use, were cited as essential for cultivating a culture of responsible AI use.
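To illustrate the exposure problem, an acceptable-use guardrail might redact obvious sensitive patterns before a prompt ever leaves the organization, as in this simplified Python sketch. A production deployment would rely on a real DLP service; the regex patterns here are stand-ins.

```python
import re

# Simplified stand-ins for real DLP detection rules.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before the prompt
    is sent to an external AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact_prompt(
    "Summarize the ticket from jane.doe@example.com; API key sk-abc123def456ghi789"
))
# -> "Summarize the ticket from [REDACTED EMAIL]; API key [REDACTED API_KEY]"
```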
Finally, cross-functional collaboration is the glue that holds it all together. The most mature organizations represented in the roundtable have already established AI working groups that include stakeholders from HR, legal, data science, and business leadership. These teams vet AI tools, review use cases, and ensure that policy frameworks evolve alongside technology. Without this kind of interdisciplinary governance, organizations risk falling behind or losing control of AI proliferation.