The use of AI in business across industries has exploded, driving dramatic growth in productivity and innovation. In fact, 75% of knowledge workers are currently using AI to some degree. By 2030, AI-connected global activity is expected to grow by an additional $13 trillion as workers use AI to boost creativity, automate data-intensive tasks, and help manage information overload.
However, without company-wide rules for how, when, and where AI can be used, employees are making those decisions on their own. Sometimes this means asking an online AI chatbot to do research or using AI tools built into productivity apps; in other cases, employees run AI tools without getting explicit permission.
“Shadow AI,” like other shadow IT, puts the entire organization at risk of data corruption and exfiltration while opening potential inroads for cyber threats. Data protection is a major issue for business and cybersecurity leaders; in fact, it’s a top concern for 95% of decision makers.
To move beyond these challenges, organizations are looking to AI-powered productivity tools that are built from the ground up to be secure, like Microsoft 365 Copilot, and solutions that can help identify and mitigate these risks, such as Microsoft Purview. Tools such as these are even more effective because they work together to change the conversation from “How do we limit AI?” to “How can we use AI to be more effective and more secure?”
Discover how to help every business user harness the groundbreaking power of AI securely. Download Microsoft’s latest Data Security Index Report now.
AI risks and reactions
The concerns over AI have led to resistance to AI use in the workplace, with nearly half of cybersecurity leaders expecting to continue banning all use of generative AI for the foreseeable future.
In a Microsoft survey of data security professionals, 43% of organizations said that a lack of controls to detect and mitigate risk is a top concern. The data-related concerns about AI use stem from risks that include:
- Protecting sensitive data and intellectual property from internal and external leaks
- Hallucinations and inaccuracies in the output of AI tools
- Allowing AI tools to access and share proprietary data
- Bias or other ethical concerns in AI systems’ training
While these issues must be addressed, it’s just as important to find effective ways to incorporate AI into workflows. With the right strategy and tools, both of these goals — security and productivity — can be accomplished without disrupting the flow of business.
Start with the business drivers, not just the technology
Security, governance, and ultimately trust in these systems should stem from how best to empower users; realistic limits can then be set. In other words, instead of considering AI only as a technology, look at what users need to accomplish, then ask whether AI is the best way to enable those goals. If so, governance and security guidelines can be updated to reflect how people can most effectively do business.
Here’s a mundane but realistic test case: email. Microsoft 365 users frequently deal with email overload. For perspective, 85% of emails are read in less than 15 seconds, and on average, people read four emails for every one they send, according to Microsoft’s 2024 Work Trend Index Annual Report. AI can sort messages into threads and summarize their content, enabling users to read, understand, and respond faster.
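To make the threading idea concrete, here is a minimal, assistant-agnostic sketch: messages are grouped into threads by normalized subject line so that a summarizer can process one conversation at a time. All names here (`normalize_subject`, `group_into_threads`) are illustrative, not part of any Microsoft product API, and the actual summarization step is left to whatever AI service an organization has approved.

```python
import re
from collections import defaultdict

def normalize_subject(subject: str) -> str:
    """Strip reply/forward prefixes so replies join their parent thread."""
    return re.sub(r"^\s*((re|fwd?)\s*:\s*)+", "", subject, flags=re.I).strip().lower()

def group_into_threads(messages):
    """messages: list of (subject, body) tuples -> {thread_key: [bodies]}."""
    threads = defaultdict(list)
    for subject, body in messages:
        threads[normalize_subject(subject)].append(body)
    return dict(threads)

# Illustrative inbox: three messages belong to one thread, one stands alone.
inbox = [
    ("Budget review", "Draft attached."),
    ("RE: Budget review", "Looks good, one question on line 4."),
    ("Fwd: Re: Budget review", "Forwarding for visibility."),
    ("Team lunch", "Friday at noon?"),
]
threads = group_into_threads(inbox)
print(len(threads["budget review"]))  # 3
```

Once messages are grouped this way, a summary per thread (rather than per message) is what lets a reader catch up on a whole conversation at a glance.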
Supporting productivity gains with AI is one part of the equation. The other is knowing where AI is deployed and what information it can access and share. An ecosystem that provides both makes it exponentially easier to manage, deploy, and secure AI-powered solutions across the business.
Unified solutions keep users productive and data from slipping through the cracks
Microsoft 365 and Microsoft Purview, for example, work together to create a secure yet highly productive business environment. Microsoft Copilot automatically inherits your organization’s security, compliance, and privacy policies for Microsoft 365. This is crucial because, as an organization adopts AI tools that integrate with Microsoft 365, it can be difficult to trace the full journey of data: which apps it will enter and who may be able to access it. Labeling data according to who can interact with it is key to preventing overexposure. Working in sync, Microsoft 365 and Microsoft Purview provide several layers of protection:
- Permission models within Microsoft 365 services, including Microsoft Copilot, ensure appropriate access for users. Microsoft Copilot blocks restricted data to avoid content oversharing and data privacy issues.
- Microsoft Purview Information Protection scanner helps uncover oversharing, while Microsoft Purview Data Loss Prevention keeps sensitive data from being pasted into AI prompts.
- Microsoft Purview AI Hub delivers insights into unlabeled files and Microsoft SharePoint sites referenced by Microsoft 365, enabling users to prioritize data risks.
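The label-and-permission flow described above can be illustrated with a hypothetical sketch: each content item carries a sensitivity label and an access list, and the AI assistant’s retrieval step only surfaces items the requesting user is already permitted to read. The names here (`Document`, `can_access`, `retrieve_for_prompt`) and the label hierarchy are assumptions for illustration only, not Microsoft APIs.

```python
from dataclasses import dataclass, field

# Hypothetical sensitivity labels, ordered least to most restrictive.
LABELS = ["Public", "General", "Confidential", "Highly Confidential"]

@dataclass
class Document:
    title: str
    label: str                                        # sensitivity label
    allowed_users: set = field(default_factory=set)   # explicit access list

@dataclass
class User:
    name: str
    clearance: str  # highest label this user may read

def can_access(user: User, doc: Document) -> bool:
    """Permission trimming: visible only if the user is on the access list
    AND cleared for the document's sensitivity label."""
    return (user.name in doc.allowed_users
            and LABELS.index(doc.label) <= LABELS.index(user.clearance))

def retrieve_for_prompt(user: User, docs: list) -> list:
    """Only permission-trimmed content is handed to the AI assistant."""
    return [d.title for d in docs if can_access(user, d)]

docs = [
    Document("Q3 roadmap", "Confidential", {"ada"}),
    Document("Holiday schedule", "General", {"ada", "bob"}),
    Document("M&A memo", "Highly Confidential", {"ada"}),
]
ada = User("ada", clearance="Confidential")
print(retrieve_for_prompt(ada, docs))  # ['Q3 roadmap', 'Holiday schedule']
```

The design point is that the filter runs *before* any content reaches the AI prompt, so the assistant can never summarize or repeat a document the user could not have opened themselves.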
This fully synchronized approach to security reduces the risks of implementing AI while enabling users to see results quickly. This not only protects the organization’s data, systems, and people, it helps ensure compliance — a must for highly regulated businesses, but just as important for any business looking to accelerate processes, react faster to changing conditions, and enable greater productivity, efficiency, and innovation.