
Identity security is everything when it comes to using AI


CISOs, CIOs, CTOs and CSOs around the globe are extremely upbeat about the potential benefits of artificial intelligence, with 89% seeing it as a "positive" or "very positive" force in a new survey of 125 technology-focused executives conducted on behalf of identity and access management provider Okta.

Nonetheless, the survey revealed plenty of concern over the possible adverse effects of AI, with 71% of respondents voicing worries about cybersecurity, and 74% about data privacy. Thirty-nine percent said they were "moderately" or "extremely" concerned about AI's impact on security.

Many respondents feared that generative AI would make phishing attacks less detectable and more effective. Others worried that their own employees would rush headlong into AI without the proper training to use it wisely and safely.

To that end, 79% of survey respondents viewed IAM solutions as "important" (33%) or "very important" (46%) in defending against AI-driven attacks, as well as for building guardrails for safe usage of AI.

"[IAM] is the only technical control we have to manage who has access to this new technology," said one respondent, a CSO/CISO in the technology industry in the Americas. "We are slowly opening the gates for specific justified needs — not being afraid of AI but by embracing it slowly and methodically."

Excitement and confidence

Despite the constant media chatter about AI-accelerated doom, the executives surveyed were remarkably sunny about the prospects of artificial intelligence — and remarkably confident in their own understanding of AI and their ability to manage it. Almost all said AI was already being used in their organizations.

Asked to select which of five statements best reflected their own outlooks on AI's impact on the world, 31% of respondents chose "very positive," and 58% "positive." Only 9% selected "neither positive nor negative," and 2% chose "negative." None selected "very negative." 

There was some regional variation. Respondents in Europe, the Middle East and Africa (EMEA) were the most upbeat, with 39% calling AI's impact "very positive." Executives in the Americas and the Asia-Pacific region (APAC) were a bit less sanguine, with 29% and 27%, respectively, choosing that response.

When it came to making AI-related decisions within their own organizations, 92% of all respondents described themselves as "very confident" (32%) or "confident" (60%). Sixty-four percent said that AI was used by many or nearly all of the teams within their companies, while 34% described their organizational use of AI as "limited"; only 2% reported no AI use at all.

Fifty-eight percent said their organizations had developed guidelines for employee use of AI. Another 32% said they planned to develop such guidance in the coming year.

In terms of understanding how AI works, 14% described themselves as "experts" in the field, defined by the survey as having "a graduate degree in computer science" or "research experience in AI R&D."

Thirty-seven percent of respondents said they had an "advanced" comprehension of AI, evidenced by "experience implementing AI models from scratch" or being "technically fluent in one or more subdomains of AI."

A plurality, 43%, admitted to only an "intermediate" understanding of AI. But even that category was defined as someone who is "familiar with technical aspects of AI" or who "may have implemented an AI system using API calls."

Underlying worries

Yet underneath this AI boosterism was a hint of hesitation.

"When it comes to AI becoming a bigger part of daily life, 46% of executives say they feel equal parts concern and excitement," states the survey report.

That group barely edged out the 44% who were more excited than worried, but other questions revealed some doubts about organizations' ability to fend off AI-powered attacks.

A slight majority of executives, 51%, described their workforces as "somewhat informed" about AI-driven threats. A similar proportion, 54%, called their organizations "somewhat prepared" to defend themselves against such attacks.

Meanwhile, 17% of the respondents said their companies were "somewhat unprepared" or "very unprepared" to deal with AI-driven attacks. Twenty percent felt their workforces were "somewhat uninformed" or "very uninformed" about such threats.

Concern about the impact of AI on their organization's security was nearly universal. About a third (34%) of respondents characterized themselves as only "slightly concerned" on that topic, while 28% were "moderately concerned" and 11% "very concerned," as noted above; only 3% considered themselves "not at all concerned."

Respondents also expressed worries that improper usage of AI might lead to exposure of company secrets, compliance failure, and erroneous AI-generated data.

"New tech can be dangerous if not adopted appropriately, and company secrets, drift, and hallucinations need to be high on the list of concerns," said a CSO/CISO in the technology sector in the Americas.

Regarding their priorities for AI in their organizations in the next 12 months, 70% of the executives put "improving security and threat detection" on their wish lists — and 30% chose it as their top priority.

AI can certainly assist in those efforts, but the open-ended nature of the question left room for some respondents to imply that threat detection and security need to improve because of the threats posed by AI itself.

"A focus on risk identification, mitigation, and management is needed to ensure that AI adoption works within the boundaries of acceptable risk," said a CIO in the Asia-Pacific healthcare and pharmaceutical sector. "A realistic set of expectations and appropriate metrics for measuring progress and performance are needed."

How IAM can curb misuse of AI

For many of the respondents, identity and access management is the first line of defense against misuse of AI, both inside and outside their organizations. Ninety percent of the executives surveyed deemed the use of IAM in implementing and managing AI to be "important," "moderately important" or "very important."

Asked why IAM was so vital to AI use, a plurality of responses involved IAM’s role in making certain that only authorized users could access AI tools.

"IAM is a key pillar in security," said a CSO/CISO in the education field in the Americas. "It should be at the root of all you do. Managing who has access to the proper data and systems is critical."

To create a culture of security and data safety around AI, the authors of the Okta survey make three broad recommendations.

The first is to enhance security in general by reinforcing an organization's defense infrastructure. This might include implementing or beefing up a modern IAM platform and deploying policies such as mandating multi-factor authentication (MFA) and enforcing the principle of least privilege.
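The report does not prescribe a specific implementation, but the idea of using IAM controls as AI guardrails can be illustrated with a minimal sketch. In the Python below, all names (User, AI_TOOL_POLICY, authorize_ai_request) are hypothetical and not part of Okta's or any vendor's actual API; the example simply shows an authorization check that requires completed MFA and grants each internal AI tool only to the single group entitled to it, in keeping with the principle of least privilege.

```python
# Minimal sketch (hypothetical names, not a real vendor API): gating access to
# internal generative AI tools behind an IAM-style check that requires MFA and
# enforces least privilege via per-tool group entitlements.

from dataclasses import dataclass, field


@dataclass
class User:
    username: str
    groups: set[str] = field(default_factory=set)
    mfa_verified: bool = False


# Least privilege: each AI tool lists the one group allowed to use it.
AI_TOOL_POLICY = {
    "code-assistant": "engineering",
    "marketing-copy-generator": "marketing",
}


def authorize_ai_request(user: User, tool: str) -> bool:
    """Allow the request only if the user has completed MFA and belongs
    to the group entitled to the requested AI tool."""
    required_group = AI_TOOL_POLICY.get(tool)
    if required_group is None:
        return False  # unknown tool: deny by default
    if not user.mfa_verified:
        return False  # MFA is mandatory for any AI tool access
    return required_group in user.groups


# Usage example
alice = User("alice", groups={"engineering"}, mfa_verified=True)
print(authorize_ai_request(alice, "code-assistant"))            # True
print(authorize_ai_request(alice, "marketing-copy-generator"))  # False
```

The deny-by-default behavior for unknown tools and the single-group entitlement per tool are design choices that mirror the "least privilege" guidance; a production deployment would typically delegate these checks to the IAM platform itself rather than application code.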

Second, the Okta report authors suggest that organizations engage with their peers, with industry and government regulators, and with subject-matter experts "to develop and share insights and best practices for responsible AI collaboration."

Last, and perhaps most important, is to educate and train employees in AI best practices, and to give them hands-on experience using and experimenting with generative AI tools.

Familiarity with AI, and with its potential pitfalls, will prepare a workforce for the rapid adoption and usage of AI throughout the world.

"People are always your biggest threat in security," said a CTO working in the Asia-Pacific healthcare and pharmaceutical industry. "AI is becoming more and more lifelike as time progresses, and hackers are becoming faster, adaptable, and innovative. Without educating our internal team sufficiently on the threats of AI, I believe we are opening ourselves up to greater risk of a human-enabled security threat."
