
The New Frontier: Crafting Robust AI Governance in a Fast-Evolving Landscape

Michelle Lee, CEO of Obsidian Strategies, delivers a keynote at Audit+Beyond.

LAS VEGAS - As artificial intelligence (AI) continues to weave itself into every aspect of business, industry leaders are sounding the alarm: robust governance frameworks are no longer optional. Without them, companies risk repeating the chaos of past tech revolutions, from the dot-com boom to cloud sprawl and bring-your-own-device (BYOD) policies.

Today, the stakes are even higher. AI’s potential to reshape business is immense, but so are the risks. Experts warn that if we don’t learn the lessons of IT’s past, we are doomed to repeat them.

Here at AuditBoard’s Audit+Beyond, compliance experts argued it’s time for companies to get serious about setting rules about how AI can and should be used inside the enterprise.

“Every time there’s a new tech, it’s like we’ve got historical amnesia,” said Renee Murphy, Principal Analyst at Verdantix Research. “We act like we’ve never seen this movie before… Remember cloud?”

Her point: the rush to the cloud followed a familiar pattern of build it, break it, and fix it. “That’s what we did with the cloud and spent years reining it back in. Now it’s AI. ‘Let’s build it, we’ll figure out the risks later.’”


Same Story, Different Decade

In the race to harness AI’s transformative power, businesses are facing a familiar dilemma: rapid adoption without sufficient oversight. Experts warn that this could lead to serious pitfalls, including compliance issues, security vulnerabilities, and reputational damage.

During her Audit+Beyond keynote, Michelle Lee, CEO of Obsidian Strategies and former head of Amazon’s Machine Learning Solutions Lab, listed top AI risks ranging from hallucinations and bias in learning models to a lack of AI governance.

Also top of mind is a bevy of new regulations, such as the European Union’s AI Act and California’s upcoming AI Transparency Act. The pressure is on for companies to move beyond ad-hoc solutions and establish strategic, integrated governance frameworks that ensure transparency, accountability, and ethical use of AI.

Industry leaders argue that this shift is essential not only to avoid legal penalties but to build sustainable, trustworthy AI systems that can drive long-term innovation.

During an Audit+Beyond roundtable, representatives of other tech firms echoed the need for compliance readiness. Laura Thomas from Twilio stressed that while frameworks are essential, there must be room for responsible innovation. “If the regulations are too strict, they could curb innovation, which is exactly what we want to avoid. The key is finding the right balance—allowing companies to innovate while ensuring they are not cutting corners on compliance and security,” said Thomas.

Learning from Past Mistakes

Murphy’s comparison to cloud computing is apt. In the early days of cloud, companies rushed to adopt the technology without fully understanding how to manage security or cost, leading to what she called “cloud sprawl.” Organizations saw their infrastructure balloon out of control.

“One year, they had 500 servers; the next, they had 1,500, and no one knew where the budget went,” she said. “AI is following the same pattern. Businesses are deploying it across departments without thinking about how to control it. The difference now is that with AI, we’re talking about systems making decisions autonomously, sometimes without human oversight. That’s a whole new ballgame.”

Cody Scott, a senior analyst at Forrester, also likened the current wave of AI adoption to the early days of cloud technology, where companies rushed to deploy without considering the long-term implications. “We’ve seen this before—rapid adoption without the proper guardrails,” said Scott. “If we don’t establish clear governance frameworks now, we risk creating another sprawl of unmanageable systems that could compromise security and compliance.”

The challenge isn’t just technological; it’s also regulatory, said Richard Marcus, chief information security officer at AuditBoard. He said the European Union’s AI Act, which entered into force in 2024 and phases in obligations over the following years, will impose strict requirements on "high-risk" AI systems, mandating transparency, explainability, and rigorous testing.

For companies, this means more than just compliance; it means fundamentally rethinking how AI is integrated into their operations. “The EU AI Act isn’t just a compliance issue,” Marcus said. “It’s a signal that the days of ‘move fast and break things’ are over. If you’re deploying AI without guardrails, you’re setting yourself up for a big fall.”

The Reality of Regulations—and Reputational Risks

Despite the clear signs from regulators, many companies still seem intent on deploying AI first and asking questions later. Obsidian’s Lee pointed out the contradictions. “Everyone’s excited about what AI can do—predictive analytics, customer personalization, fraud detection. But no one wants to talk about what happens when the predictions are wrong, or the algorithms discriminate,” she said.

“AI is great, but if you’re in finance and your AI just told a million people they’re unqualified for a loan because of biased training data, you’re going to wish you’d thought a little more about governance,” Lee said.

Lee’s perspective isn’t just theoretical. Consider the case of Amazon’s AI hiring tool, which was scrapped in 2018 after it was discovered that the system was biased against women.

“It’s a classic example of how things can go wrong when there’s no oversight,” said Murphy. “You can’t just train a model and hope for the best. Governance means constantly checking, auditing, and adjusting. It’s not glamorous, but it saves you from disaster.”
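What that continuous checking can look like in practice is simple to sketch. The snippet below is a minimal illustration of the kind of recurring bias audit Murphy describes, not any vendor's actual tooling: it compares a model's approval rates across demographic groups and flags the model for human review when the gap crosses a chosen threshold. The group labels, the 10% threshold, and the data format are assumptions made for the example.

# A minimal bias-audit sketch (illustrative assumptions throughout):
# compare approval rates across demographic groups and flag large gaps.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs, e.g. ("group_a", True)."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions, max_gap=0.10):
    """Flag the model for human review if approval rates diverge beyond max_gap."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "needs_review": gap > max_gap}

if __name__ == "__main__":
    # Hypothetical outcomes: group_a approved 80% of the time, group_b 55%.
    sample = (
        [("group_a", True)] * 80 + [("group_a", False)] * 20
        + [("group_b", True)] * 55 + [("group_b", False)] * 45
    )
    print(audit(sample))  # gap = 0.25 -> needs_review: True

Run on a schedule against fresh production decisions rather than once at deployment, a check like this turns the "constantly checking, auditing, and adjusting" Murphy describes into a repeatable, documented control.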

Breaking Down Silos for Effective AI Governance

One of the critical hurdles in establishing effective AI governance is the issue of siloed departments, which can lead to fragmented risk management.

According to AuditBoard’s Connected Risk Report, released Wednesday, 86% of organizations admitted that fragmented data management hindered their ability to effectively govern AI.

Murphy advocates for collaboration across teams. “If the left hand doesn’t know what the right hand is doing, you’re going to have a mess,” she said. “It’s like trying to run a football team where everyone’s playing a different game of catch. IT, legal, compliance—they all need to be on the same field.”

Breaking down these silos is key to creating what Marcus referred to as “connected risk management”—a unified approach that ensures AI governance is woven into the fabric of the organization. “If you’re serious about AI, you can’t just throw it over to the IT department and hope they figure it out,” he said. “It’s got to be a team effort, from the boardroom down to the developers.”

The Balance Between Innovation and Control

“Everyone wants AI to be a magic wand, but no one wants to be the person to tell the wizard, ‘Hey, you can’t just wave that thing around.’ It’s about finding that balance,” she said. “We need frameworks that allow companies to innovate responsibly, without cutting corners. Otherwise, you’re not building a business—you’re building a house of cards.”

Karan Sangh, director of IT internal audit at Zillow Group, echoed this sentiment in an Audit+Beyond roundtable, explaining how the organization has been working to create governance frameworks that don’t stifle innovation. “We use guardrails, not roadblocks. The idea is to enable our teams to experiment while ensuring we’re meeting all the necessary ethical and compliance standards,” Sangh said. “The focus is on making sure AI systems don’t reinforce biases or operate in opaque ways that erode trust.”

Preparing for an AI-Driven Future

Businesses must move beyond reactive measures and start treating AI governance as a strategic priority. Murphy offered a note of caution. “Look, I’m bullish on AI,” she said. “But we can’t just treat it like a toddler with a sharp knife and hope for the best. It needs supervision. It needs rules. And those rules need to be written now, not after things go sideways. That’s how you avoid another ‘bring your own disaster’ type disaster.”

As companies continue to explore the capabilities of AI, the real challenge lies in balancing innovation with responsibility. Effective governance will require cross-functional collaboration, strategic planning, and a commitment to compliance and ethical standards.

“The future of AI isn’t just about what the technology can do; it’s about how we manage it,” Murphy concluded. “If we can get this right, the benefits will be tremendous. But if we get it wrong, we’re going to be cleaning up the mess for years.”

Tom Spring, Editorial Director

Tom Spring is Editorial Director for SC Media and is based in Boston, MA. For two decades he has worked at national publications in leadership roles including publisher at Threatpost, executive news editor at PCWorld/Macworld, and technical editor at CRN. He is a seasoned cybersecurity reporter, editor, and storyteller who always aims for truth and clarity.
