
Seven ways to develop and deploy AI responsibly


COMMENTARY: Trust and transparency in AI aren't optional anymore—they are critical to long-term business success. With AI threats on the rise, security leaders face mounting pressure from two sides: to secure their perimeters from incoming attacks, and to ensure responsible internal use of AI technology.

According to our recent research, security leaders reported that AI malware attacks and AI identity fraud have significantly increased in prevalence. Yet only two in five organizations surveyed currently conduct regular AI risk assessments and audits, while a mere 36% have a company AI policy in place.


This lack of proactive measures creates a significant gap between awareness and action, leaving organizations vulnerable. As companies increasingly rely on AI, the need for transparency becomes more urgent—not just for security, but for building trust with customers and partners. Companies that prioritize transparency and accountability in their AI systems are better positioned for success.

Gartner reports that by 2026, AI models from organizations that operationalize AI transparency, trust and security will achieve a 50% improvement in terms of adoption, business goals and user acceptance. Similarly, a study published in the MIT Sloan Management Review found that organizations with high AI transparency scores outperformed their peers in customer satisfaction by 32%.

To address these growing concerns, both governments and regulatory bodies are stepping in to enforce stricter guidelines and accountability measures for AI usage.

Governments worldwide respond to potential AI threats

The rapid advancement of AI has prompted an unprecedented global response from governments and regulatory bodies, underscoring its real and immediate risks.

In the United States, the Biden administration made a significant move with Executive Order 14110 in October 2023. This EO marked a turning point in AI governance, mandating thorough risk assessments and setting ambitious goals for responsible AI deployment across federal agencies.

Federal organizations like the National Institute of Standards and Technology (NIST) have introduced the AI Risk Management Framework (AI RMF) to help organizations navigate AI-related threats. Similarly, the Open Worldwide Application Security Project (OWASP) has published guidance, including its Top 10 for Large Language Model Applications, to educate the industry on the security risks tied to deploying and managing large language models (LLMs).

Meanwhile, the European Union has led the charge with the AI Act, which took effect this year. This legislation applies to all 27 member states and introduces a groundbreaking approach by classifying AI systems based on their risk levels.

The coordinated global response emphasizes the urgency for organizations to adopt responsible AI practices and comply with evolving regulations. The message from governments worldwide is clear: AI development and deployment must prioritize safety, transparency, and ethical considerations to harness its full potential while mitigating risks.

A framework for compliance and innovation

As AI evolves, companies need to adapt their strategies to stay compliant and build trust with stakeholders. By following a clear framework of best practices, organizations can not only meet regulatory standards but also foster innovation and drive business value. Here’s what organizations can do now to ensure responsible AI development and deployment:

  • Prioritize risk assessment: Before launching any AI initiative, evaluate all potential risks to both the organization and customers. Take proactive steps to identify and mitigate any negative impacts from the start. For example, a financial institution developing an AI-driven credit scoring system should implement safeguards to prevent bias, ensuring fair and equitable outcomes.
  • Integrate security and privacy from the ground up: Make security and privacy foundational elements of every AI project. This includes adopting privacy-preserving techniques such as federated learning or differential privacy (a brief differential-privacy sketch follows this list). Companies should also ensure that any updates or changes to AI systems maintain these protections. For instance, a healthcare provider using AI to analyze patient data should employ robust privacy measures to safeguard individual information while still enabling valuable insights.
  • Control data access and leverage secure integrations: Organizations should define strict controls over the data AI systems can access and implement clear safeguards to manage risks. It’s advisable to avoid training AI models on customer data directly. Instead, use secure integrations through APIs and establish formal data processing agreements (DPAs) with third-party providers to prevent misuse of data. This adds an extra layer of security, ensuring data remains under the organization’s control.
  • Ensure AI transparency and accountability: Transparency is essential to maintaining stakeholder trust. Teams should understand how AI systems make decisions and communicate these processes clearly to customers and partners. Consider using interpretable AI models or developing explainable AI (XAI) tools to provide insights into complex AI-driven decisions (see the interpretable-model sketch after this list).
  • Maintain customer consent and control: Make AI usage transparent to customers. Companies can adopt an informed consent model, allowing customers to opt in or opt out of AI features (see the consent-check sketch after this list). Offering easy access to these settings lets customers remain in control of their data and how AI gets applied, ensuring the company’s AI strategy aligns with customer preferences.
  • Commit to ongoing AI monitoring and compliance: Think of AI implementation as an ongoing process that requires regular monitoring and compliance checks. Companies should conduct frequent AI risk assessments and audits, and follow best practices such as pursuing ISO 42001 certification. Integrating guidelines from frameworks like the NIST AI RMF and adhering to evolving standards like the EU AI Act can further reinforce accountability and reliability in AI systems.
  • Lead by example with internal AI testing: To ensure quality, companies should internally use and test their own AI solutions. This practice lets organizations identify and address potential issues proactively, refining AI systems before deploying them to customers. It also sets a strong example for responsible AI development and builds a culture of transparency and continuous improvement.
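
For readers who want to see the shape of the privacy recommendation in practice, here is a minimal sketch of the Laplace mechanism, one common differential privacy technique. The function name, the toy readings, and the epsilon value are illustrative placeholders rather than a reference to any particular product or the specific measures a given organization would deploy.

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0, sensitivity=1.0):
    """Release a noisy count of records above a threshold.

    The Laplace mechanism adds noise scaled to sensitivity/epsilon so that
    no single individual's record noticeably changes the published statistic.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: publish how many (synthetic) patient readings exceed a lab value
# without exposing any individual record.
readings = [98, 142, 110, 131, 155, 120, 149]
print(dp_count(readings, threshold=130, epsilon=0.5))
```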
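
On the explainability point, one low-effort starting point is an inherently interpretable model whose weights can be read directly. The sketch below assumes scikit-learn and NumPy are available; the feature names and toy data are hypothetical and only illustrate how a team might surface the factors behind an AI-driven decision.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [transactions_per_day, account_age_years]
X = np.array([[2, 5], [40, 0.2], [3, 7], [55, 0.1], [1, 9], [60, 0.3]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = flagged as risky

model = LogisticRegression().fit(X, y)

# An interpretable model: each coefficient states how a feature pushes the
# decision, which can be explained to customers, auditors, and regulators.
for name, coef in zip(["transactions_per_day", "account_age_years"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

print("P(risky) for a new applicant:", model.predict_proba([[45, 0.5]])[0, 1])
```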
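
And for consent and control, the pattern can be as simple as checking a stored preference before any customer data reaches an AI feature. This is a minimal sketch under assumed names: CustomerPreferences, ai_features_opted_in, and call_summarization_model stand in for whatever preference store and model integration an organization actually uses.

```python
from dataclasses import dataclass

@dataclass
class CustomerPreferences:
    customer_id: str
    ai_features_opted_in: bool = False  # opt-in model: AI is off by default

def call_summarization_model(text: str) -> str:
    # Placeholder for whatever model or vendor API the organization uses.
    return f"Summary ({len(text)} chars): ..."

def summarize_ticket(ticket_text: str, prefs: CustomerPreferences) -> str:
    """Route data through the AI feature only when the customer has opted in."""
    if not prefs.ai_features_opted_in:
        return "AI summary unavailable: customer has not opted in."
    return call_summarization_model(ticket_text)

# A customer who never opted in keeps full control of their data.
print(summarize_ticket("Order #1234 arrived damaged...", CustomerPreferences("c-001")))
```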

Organizations that prioritize transparency, security, and proactive risk management in their AI strategies are positioned to navigate an evolving landscape of threats and regulations. By committing to responsible AI development, companies can not only safeguard their operations but also build enduring trust with customers, partners, and regulators, ensuring sustainable success in an AI-driven future.

Iccha Sethi, vice president of engineering, Vanta

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
