
CISOs push for baseline AI business rules


The responsibility for artificial intelligence in the workplace has swiftly rolled uphill to the chief information security officer’s inbox. No cybersecurity strategy drafted today can ignore the business-process promise the technology holds or the security perils it poses.

To help CISOs wrap their arms around AI’s internal impact on IT, a group of CISOs is developing a “quick guide” that aims to help their peers establish baseline parameters for integrating the tech into their IT stack.

“Some organizations have a mixed attitude toward AI. Some have blocked it until a governance structure is put in place. Others tell me they are eager to jump on the (AI) bandwagon and start using it,” said Tom Scurrah, VP, content and programs, at the Cybersecurity Collaborative.

Speaking at the InfoSec World 2023 conference, Scurrah told attendees that business use cases for AI vary widely depending on a company’s risk profile. For that reason, a first draft of a universal guide outlining the fundamental tenets of AI use for CISOs is needed, he said.

Joining Scurrah on stage were members of CyberRisk Alliance’s Cybersecurity Collaborative industry peer group that included Jason Mortensen, senior IT security architect, Lenovo; Cheryl Nifong, CISO, University of Texas at Arlington; and Greg Berkin, CIO and CISO at Lisata Therapeutics.

Panelists were members of the Collaborative’s recently formed AI Security Task Force. The initiative is focused on CISO roles tied to AI, orienting CISOs to AI fundamentals, developing governance policies around AI and creating AI-safe operating environments. 

Along with a CISO’s “quick guide” to the adoption of AI, the group’s mission is to create an AI acceptable-use policy, security controls for existing models, risk assessment/controls checklists and a boardroom template – all designed to make a CISO’s AI journey less uncertain.

Below is a summary of the perspectives on the business uses of AI from the three panelists.

Jason Mortensen, Senior IT Security Architect, Lenovo

AI represents a competitive advantage for Lenovo, Mortensen said.

“If we’re not using it, our competitors will be, and we’re going to be left behind,” said Mortensen, noting that the security team must act as an enabler and find a way to support this technology. And one way to do that in a secure manner is to keep a register of any company project that utilizes programs like ChatGPT. 

Lenovo shares a list of expectations and internally developed guidelines with employees who use AI, and asks them to acknowledge the agreement via a digital signature.

Mortensen said this allows Lenovo to understand an employee’s intent when using AI and match them with the right security personnel and legal guidance.

“It also gives us visibility across the company,” he said. “We want to ensure we don’t have a lot of different people working on the same problem… Or, we want them to collaborate together.” 

Another measure to reduce risk is to forbid use of the consumer-grade versions of tools like ChatGPT. “There’s other solutions. There’s internal solutions we brought in, there’s [the] Azure OpenAI service where you’re basically using GPT. But it’s a confined instance that’s specific to the organization,” Mortensen explained. 
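For readers who haven’t used it, the confined setup Mortensen describes follows the same call pattern as the public API, just pointed at an organization-scoped Azure endpoint. The sketch below is a minimal illustration only; the endpoint, deployment name and API version are placeholder assumptions, not Lenovo’s configuration.

```python
# Illustrative only: calling a company-confined Azure OpenAI deployment
# rather than the public ChatGPT. Endpoint, deployment name and API
# version below are placeholders, not Lenovo's actual configuration.
import os
from openai import AzureOpenAI  # openai-python v1+

client = AzureOpenAI(
    azure_endpoint="https://example-corp.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="corp-gpt4-deployment",  # the org's private deployment name (assumed)
    messages=[{"role": "user", "content": "Summarize this internal memo."}],
)
print(response.choices[0].message.content)
```

Because the deployment lives inside the organization’s own Azure tenant, prompts and completions stay within boundaries the company controls rather than flowing to a consumer service.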

One high-priority concern is data exfiltration. “I don’t want somebody going out to the free ChatGPT [tool] and saying, ‘Here’s our corporate strategy for the next eight quarters. Summarize a plan for me of how we’re going to get there’ – and now that's out there,” said Mortensen. Also on the list for Lenovo: copyright infringement, third-party risk, and determining if employees who feed data into the AI engine are actually authorized to view said data. 
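That worry is essentially a data loss prevention problem. One illustrative control, not drawn from Lenovo’s actual environment, is screening outbound prompts against a deny-list before they ever reach an external model:

```python
import re

# Hypothetical deny-list patterns suggesting sensitive corporate content.
SENSITIVE_PATTERNS = [
    r"corporate strategy",
    r"\bconfidential\b",
    r"\binternal[- ]only\b",
]

def prompt_is_safe(prompt: str) -> bool:
    """Return True if no sensitive pattern appears in the outbound prompt."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in SENSITIVE_PATTERNS)

# The exfiltration scenario Mortensen describes would be caught here:
assert not prompt_is_safe(
    "Here's our corporate strategy for the next eight quarters. Summarize a plan."
)
```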

Cheryl Nifong, CISO, University of Texas at Arlington

In a university environment there’s no going back to the pre-ChatGPT days, Nifong said. 

Nifong and university CIO Deepika Chalemela have created an AI executive work council to examine the use of AI at the university. The goal is to develop AI controls and governance.

AI policies for students, she said, will likely soon include personal conduct and plagiarism clauses tied to the use of AI. Guardrails for faculty AI usage might include limiting the types of data shared with generative AI tools.

Nifong said that OpenAI’s new Enterprise ChatGPT version is promising. But presently, faculty members’ use of AI is “pushing the need to make sure that the learning models that they’re using… are sandboxed or secured within our borders.”

Security controls need to be data-centric. “What data are you putting in a generative AI?... And does that data include IP? Does that data include confidential data from our company? That is one of our biggest concerns,” she said.

As at Lenovo, the university’s AI working group will soon distribute basic guidelines for users as it finalizes a more comprehensive strategy. Down the line, the university may also incorporate AI usage into its annual compliance and user awareness training.

Greg Berkin, CIO, CISO and DPO, Lisata Therapeutics

Pharmaceutical company Lisata Therapeutics has hit the pause button on generative AI, citing security and discretion.

“Being a publicly traded, highly regulated pharmaceutical organization, we can't take risks associated with using AI,” Berkin said. He said Lisata has blocked usage of generative AI applications. That position is not likely to change until the company has a clearer picture of the potential privacy and data loss implications. 
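In practice, a ban like the one Berkin describes is often enforced at the network layer with a secure web gateway or proxy. The deny-list check below is a rough sketch under assumed domains and rules, not Lisata’s actual tooling:

```python
from urllib.parse import urlparse

# Hypothetical deny-list; real gateways use vendor-maintained URL categories.
BLOCKED_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def request_allowed(url: str) -> bool:
    """Block the listed domains and any of their subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

assert not request_allowed("https://chatgpt.com/c/abc123")
assert request_allowed("https://learn.microsoft.com/azure")
```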

“If there’s a shadow of a doubt that there's a security risk associated with it, then there’s no doubt in my mind that we shouldn’t be using it at this point,” Berkin stated pointedly. 

“We need to worry about the safety and efficacy of our data and [our] patient health records,” said Berkin. “I understand people want to use it and help benefit the business. But I’m not taking that risk. I’m not putting my job on the line so someone can write an email faster, develop some model more quickly.” 

Lisata’s cautious approach even extends to AI assistants, like the new Microsoft 365 Copilot tool. “We figured out a way to… sort of disable or delay the release of this to the endpoints until we can better ascertain what it’s going to do from a business standpoint or safety standpoint,” said Berkin. 

Berkin expressed concern about even more software vendors introducing generative AI capabilities into their applications without users being truly informed of, or properly educated in, these features. “[The users] don’t realize that whatever application they’ve been using, it has AI built in,” said Berkin, noting that some vendors are changing terms and conditions so that they can collect, analyze and apply AI to their clients’ data. “You’ve already signed up for it and now they’re just sort of changing the rules that they play by,” he said.

Still, Lisata’s hard stance against AI might soften over time.  “I’m not saying we’re not going to open up the floodgates, so to speak, down the road,” said Berkin. “Hearing some of the attorneys [at the conference] talk the past couple days, I’m sure they’re happy to take our money and work through the framework and processes and legitimacy and legalese behind [AI]. But we’re not ready for that stuff just yet.” 

Bradley Barth

As director of multimedia content strategy at CyberRisk Alliance, Bradley Barth develops content for online conferences, webcasts, podcasts and video/multimedia projects, often serving as moderator or host. For nearly six years, he wrote and reported for SC Media as deputy editor and, before that, senior reporter. He was previously a program executive with the tech-focused PR firm Voxus. Past journalistic experience includes stints as business editor at Executive Technology, a staff writer at New York Sportscene and a freelance journalist covering travel and entertainment. In his spare time, Bradley also writes screenplays.
