COMMENTARY: The rise of generative AI (GenAI) has reshaped business processes, decision-making, and how we interact with data. While the benefits are clear—improved efficiencies, enhanced creativity, and new growth opportunities—the risks are just as significant, particularly when it comes to security and privacy.
For C-suite leaders, the challenge lies in navigating these complex trade-offs and implementing a framework that balances innovation with responsible governance.
At the heart of the AI risk lies fundamental uncertainty about how large language models (LLMs) handle data. Proprietary information entered into AI tools may be stored in the cloud or even in foreign jurisdictions with different privacy laws, exposing companies to potential data breaches. The risk escalates when third-party vendors use AI without proper governance, increasing the chances of unintentional data leaks.
Externally, AI has empowered threat actors, from cybercriminals to nation-states, to enhance their attacks with unprecedented speed and scale. AI-generated phishing schemes and deepfakes now pose a significant threat to enterprise security.
Compounding this has been the emergence of AI tools sold on dark web marketplaces, such as black-hat versions of ChatGPT, designed explicitly to help bad actors scale their attacks.
But it’s not just about external attackers—the risk of insiders inadvertently misusing AI tools has become universal. An employee who uploads confidential information into a generative AI model might unintentionally expose the company to data leakage, particularly if the AI service provider’s terms and conditions don’t guarantee data privacy.
C-suite executives must ensure that both their internal workforce and third-party partners are aware of these evolving risks to effectively mitigate them.
As organizations embrace AI, many are finding themselves at a governance crossroads. Some of the world’s largest companies have started putting frameworks in place, but there’s still no standard approach to managing AI usage. C-suite leaders need to focus on developing robust governance structures that align with their risk appetite while ensuring compliance with data protection regulations.
For organizations such as law firms and financial services providers, governance committees are emerging as a critical tool in navigating AI use. These committees often include important stakeholders: CEOs, CISOs, chief risk officers, and legal teams tasked with overseeing how AI gets integrated into business operations.
But governance is not just about creating rules. It’s about developing a culture of responsibility around AI. What’s needed are clear policies that define how AI gets used, who has access to it, and how usage will be monitored. It’s also vital to remember that outright bans on AI tools can drive employees to use unsanctioned, unmonitored services, creating even greater risks. Instead, organizations must embrace AI responsibly and focus on visibility—ensuring that leaders know what tools are being used, by whom, and for what purpose.
Finding the right balance between privacy and security in the age of AI goes beyond just mitigating technical risks. It requires a holistic approach that considers the ethical implications of AI use. For instance, bias in AI models can lead to discriminatory outcomes, while AI-generated hallucinations—where models produce false or misleading information—can create legal liabilities or drive poor business decisions.
To address these issues, companies are turning to methods like pseudonymization to protect personally identifiable information (PII) when feeding data into AI systems. By removing or masking sensitive information, organizations can reduce the chances of AI models producing biased or harmful outputs while preserving the privacy of individuals.
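To make the idea concrete, here’s a minimal sketch of reversible pseudonymization in Python, assuming text is screened before it leaves the organization. The regex patterns, token format, and the `pseudonymize`/`restore` helpers are illustrative assumptions, not a production design; real deployments typically rely on dedicated PII-detection tooling rather than a handful of regexes.

```python
import re
import uuid

# Illustrative patterns only; a real system would use a dedicated
# PII-detection service, not a short list of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with opaque tokens; return the masked text
    plus a token map so authorized callers can re-identify results."""
    token_map: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        def _swap(match: re.Match) -> str:
            token = f"<{label}_{uuid.uuid4().hex[:8]}>"
            token_map[token] = match.group(0)  # mapping stays in-house
            return token
        text = pattern.sub(_swap, text)
    return text, token_map

def restore(text: str, token_map: dict[str, str]) -> str:
    """Re-insert original values into a model response, if permitted."""
    for token, original in token_map.items():
        text = text.replace(token, original)
    return text

masked, mapping = pseudonymize("Reach Jane at jane.doe@example.com or 555-123-4567.")
print(masked)  # PII replaced by tokens before the text leaves the organization
```

Because the token map never leaves the organization, an authorized caller can re-identify the model’s output later, while the AI provider only ever sees placeholders.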
Another critical aspect of balancing privacy and security is education. Employees need to understand the risks of using AI tools both in their professional and personal lives. C-suite leaders should frame their education initiatives in a way that resonates on a personal level—highlighting how the misuse of AI could affect their families’ privacy and security as well as the company’s.
Recommendations for proactive governance
As the threat landscape evolves, organizations must prioritize their security and resilience. Adopting a proactive stance towards AI governance can help organizations stay one step ahead.
The following steps offer a clear roadmap for getting started:
- Define clear use cases: Start by identifying the areas where AI can add value, whether through business efficiency or enhanced employee engagement. Avoid overwhelming employees with an ocean of possibilities—narrow the focus to the most relevant use cases.
- Establish risk appetite: Each organization must assess the risks and rewards associated with AI. C-suite leaders should evaluate these risks against business objectives and set clear boundaries on how the company will deploy AI.
- Align governance and monitoring policies: Develop a governance structure that matches the organization’s defined risk appetite. Put regular monitoring and security controls in place to ensure compliance with governance policies. Real-time monitoring of AI usage can help detect anomalies before they escalate into significant issues (see the sketch after this list).
- Practice vendor due diligence: When partnering with AI vendors, organizations must ask the right questions about how these tools process data. Ensure that AI vendors are transparent about their practices and have robust privacy measures in place.
- Collaborate and share information wisely: The fight against AI misuse requires collaboration. Engage with industry peers and participate in forums focused on AI risk mitigation. Stay updated on the latest AI-generated threats, such as deepfakes and AI-generated illegal content.
- Offer relevant employee education: Trust and engagement are vital. By making AI education relevant on a personal level, employees are more likely to understand the risks and act responsibly. Empower the workforce to take ownership of AI governance, not just in the workplace, but in their personal lives as well.
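To make the monitoring recommendation above a bit more concrete, here is a minimal sketch that scans hypothetical logs of outbound requests to AI services and flags two simple anomalies: use of unsanctioned tools and unusually large uploads. The log fields, service names, and threshold are assumptions for illustration; real controls would draw on the organization’s own telemetry and its defined risk appetite.

```python
from dataclasses import dataclass

# Hypothetical log record for an outbound request to an AI service.
@dataclass
class AIUsageEvent:
    user: str
    service: str     # destination AI tool, e.g. a domain name
    bytes_sent: int  # payload size of the request

# Illustrative policy values; actual thresholds and allow-lists come
# from the organization's governance decisions, not from this sketch.
SANCTIONED_SERVICES = {"chat.internal-llm.example", "api.approved-vendor.example"}
UPLOAD_THRESHOLD_BYTES = 1_000_000

def flag_anomalies(events: list[AIUsageEvent]) -> list[str]:
    """Return human-readable alerts for unsanctioned tools or bulk uploads."""
    alerts = []
    for e in events:
        if e.service not in SANCTIONED_SERVICES:
            alerts.append(f"{e.user} used unsanctioned AI service {e.service}")
        if e.bytes_sent > UPLOAD_THRESHOLD_BYTES:
            alerts.append(f"{e.user} sent {e.bytes_sent:,} bytes to {e.service}")
    return alerts

alerts = flag_anomalies([
    AIUsageEvent("alice", "chat.internal-llm.example", 2_400_000),
    AIUsageEvent("bob", "freeai.example", 1_200),
])
for alert in alerts:
    print(alert)  # feed into the SOC's existing alerting pipeline
```

Even a simple allow-list-plus-threshold check like this gives leaders the visibility the governance discussion above calls for: which tools are in use, by whom, and roughly how much data is flowing to them.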
As AI continues to transform how businesses operate, the stakes for balancing privacy and security have never been higher. C-suite leaders and CISOs must approach AI governance proactively, putting structures in place that mitigate risk while empowering employees to harness AI’s potential responsibly. In this rapidly evolving landscape, the organizations that strike the right balance between security, privacy, and innovation are well-positioned for long-term success.
Mohan Koo, co-founder and President, DTEX Systems
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.