
How AI powers safer video conferencing and collaboration

Video conferencing tools like Zoom grew in importance during the pandemic last year. Today’s columnist, Devin Redmond of Theta Lake, writes about how AI will transform how security teams can more effectively manage these collaboration platforms.

For millions of professionals, COVID-19 dramatically accelerated changes in the way we work. Video conferencing and collaboration tools such as Microsoft Teams, Zoom, and Cisco Webex have become a staple of our everyday work lives. Many of us spend our days switching between video conference calls, screenshares, chat channels, and real-time messaging services as we coordinate virtually with our co-workers.

The growth in the usage of these collaboration tools has been astronomical. This sudden shift to working from anywhere and the rise of modern video collaboration platforms has created efficiencies and made it possible for many people to work safely during the global pandemic. However, it has also increased data security, compliance, and legal risk for any organization using these platforms for collaboration. Those tasked with overseeing these security and compliance risks are struggling to keep up.

As our virtual work environments have become more complex, the challenge of protecting sensitive organizational data from accidental or intentional exposure and maintaining regulatory compliance has grown more difficult. Video conferencing, file-sharing, app-sharing, screen-sharing, chat, webcams, and digital whiteboards are all potential vectors for a data security or compliance breach. From Zoom-bombing of restricted meetings, to accidentally over-sharing sensitive information on screen, to embarrassing personnel incidents that can lead to HR complaints and reputational damage, the risks are unquestionably great.

With enterprise organizations logging thousands of hours of meetings on these tools, it’s nearly impossible for security and compliance officers to ensure that everything shown on screen, spoken about, shared in a file upload, or typed in a chat window maintains privacy and compliance. Traditional methods of monitoring communications, such as capturing records and then combing through them with basic word searches, miss important visual cues and lack contextual understanding, which can lead to missed high-risk incidents or, conversely, a flood of false positives. Fortunately, advancements in artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) make it possible for security professionals to effectively manage, monitor and report on complex collaboration applications at scale, while also helping protect end-users from making costly mistakes. AI will transform how organizations improve security and compliance for modern collaboration platforms. Here are the top three ways it does so:

  • Identify risk at scale.

With the amount of digital content generated across collaboration platforms, it’s humanly impossible for enterprise security and compliance officers to supervise and review everything shared, shown, typed, or said in employee communications. Modern security and compliance solutions designed specifically for these collaboration platforms leverage not only AI and deep learning, but also curated risk detections built by experts to quickly parse through all the data and identify any incident or behavior that could pose a risk to the organization. These detections can leverage and make sense of fuzzy matching of non-exact text, along with image analysis, Optical Character Recognition (OCR), transcription and more to identify and flag the incidents that concern a security or compliance officer. By leveraging AI and analyzing various pieces of content from a conversation, these technologies can take into account the context and intent of a situation to more accurately identify which situations are a risk, and which are non-issues. In essence, AI helps quickly and efficiently narrow down vast volumes of data to just those incidents that need review.
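To make the fuzzy-matching idea concrete, here is a minimal sketch of how near-miss keyword detection over transcript or OCR text might work. The watchlist, similarity threshold, and scoring below are illustrative assumptions for this example, not any vendor’s actual detection logic.

```python
# Minimal sketch: fuzzy matching of risky terms in transcript or OCR text.
# The watchlist and threshold are illustrative assumptions only.
from difflib import SequenceMatcher

WATCHLIST = ["password", "account number", "social security", "confidential"]

def fuzzy_hits(text: str, threshold: float = 0.8) -> list[tuple[str, str, float]]:
    """Return (watchlist term, matched phrase, similarity) for near matches."""
    hits = []
    words = text.lower().split()
    for term in WATCHLIST:
        n = len(term.split())
        # Slide a window the same length as the watchlist term across the text.
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            score = SequenceMatcher(None, term, phrase).ratio()
            if score >= threshold:
                hits.append((term, phrase, round(score, 2)))
    return hits

# OCR of a shared screen often contains typos that exact word searches miss.
print(fuzzy_hits("please send the acct passw0rd and the acount number"))
```

Exact keyword search would miss both "passw0rd" and "acount number" above; similarity scoring catches them, which is the kind of non-exact detection the paragraph describes.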

  • Prioritize risk levels.

Second, AI helps automate the review process by prioritizing and triaging the highest-risk incidents and escalating them to security and compliance teams for immediate review. By understanding the context and intent of the conversation, as well as what potentially harmful action took place, or what expected safeguard was missing, in the conversation, onscreen, or in the chat, AI-powered solutions can filter the signal from the noise and assign a risk score to specific actions and behaviors within a session. The flagged incidents are displayed on a visual dashboard, making it easy for reviewers to zero in on the exact moment in the recording when the incident takes place, for faster and more efficient responses. Moreover, AI and ML systems learn from previous incidents and become better trained with each event, until eventually they can pre-recommend the appropriate action to take, based on previous responses. By assigning risk scores, prioritizing incidents for immediate review and recommending the action to take, AI can help security and compliance professionals work more efficiently to identify and address high-risk situations before they become a larger issue or lead to a major data breach.
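As a simple illustration of scoring and triage, the sketch below assigns weights to hypothetical detection labels, computes a per-incident risk score, and splits incidents into escalation and routine-review queues. The labels, weights, and threshold are assumptions made for the example, not a description of any product’s scoring model.

```python
# Minimal sketch: risk scoring and triage of flagged incidents.
# Detection labels, weights, and the escalation threshold are illustrative.
from dataclasses import dataclass

WEIGHTS = {
    "credential_shared": 0.9,
    "uninvited_guest": 0.8,
    "pii_on_screen": 0.7,
    "profanity": 0.3,
}

@dataclass
class Incident:
    meeting_id: str
    timestamp: str          # offset into the recording, e.g. "00:14:32"
    detections: list[str]   # labels raised by upstream analysis

    def risk_score(self) -> float:
        # Simple additive score capped at 1.0; a real system would weigh
        # context and intent rather than use fixed per-label weights.
        return min(1.0, sum(WEIGHTS.get(d, 0.1) for d in self.detections))

def triage(incidents: list[Incident], threshold: float = 0.75):
    """Rank incidents by score and split into escalate / routine queues."""
    ranked = sorted(incidents, key=lambda i: i.risk_score(), reverse=True)
    escalate = [i for i in ranked if i.risk_score() >= threshold]
    routine = [i for i in ranked if i.risk_score() < threshold]
    return escalate, routine

incidents = [
    Incident("mtg-101", "00:14:32", ["credential_shared"]),
    Incident("mtg-101", "00:27:05", ["profanity"]),
    Incident("mtg-207", "00:03:11", ["uninvited_guest", "pii_on_screen"]),
]
escalate, routine = triage(incidents)
```

The escalation queue maps to the "immediate review" dashboard described above, while the routine queue can wait for periodic supervision.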

  • Alert end-users to risk in real-time.

The ability of AI to improve security and compliance on video collaboration platforms isn’t only about streamlining the security professional’s job. It also helps alert and educate the end-user (the organization’s employee) in real-time about risky or potentially noncompliant behavior they may engage in. AI-powered compliance advisors built into modern security and compliance tools support employees during audio, video and chat sessions with real-time alerts and reminders that reduce risk. For example, when a user shares their screen in a meeting, the system can remind them to take precautions to not “overshare,” with an in-meeting message that also notes that collaboration security and compliance monitoring takes place. With this ability to understand the context of meeting activity, AI-powered compliance assistants can automatically alert users and remind them of data security and compliance best practices when they engage in potentially risky behaviors or conversations.
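Below is a minimal sketch of how such an in-meeting reminder might be wired up. The event payload, event names, and the send_chat_message() helper are hypothetical stand-ins, since the actual webhook events and messaging APIs differ by collaboration platform and vendor.

```python
# Minimal sketch: real-time compliance reminders triggered by meeting events.
# Event names, payload shape, and send_chat_message() are hypothetical.
REMINDERS = {
    "screen_share_started": (
        "Reminder: screen sharing is monitored for security and compliance. "
        "Please close windows containing sensitive data before sharing."
    ),
    "external_guest_joined": (
        "Reminder: an external participant has joined. Avoid discussing "
        "confidential or regulated information."
    ),
}

def send_chat_message(meeting_id: str, text: str) -> None:
    # Placeholder for a platform-specific chat or bot API call.
    print(f"[{meeting_id}] {text}")

def handle_meeting_event(event: dict) -> None:
    """Post an in-meeting reminder when a risky activity is detected."""
    reminder = REMINDERS.get(event.get("type"))
    if reminder:
        send_chat_message(event["meeting_id"], reminder)

# Example hypothetical webhook payload:
handle_meeting_event({"type": "screen_share_started", "meeting_id": "mtg-101"})
```

The point is less the plumbing than the timing: the nudge reaches the employee at the moment of the risky action, rather than after a reviewer finds it in a recording.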

The shift to work from anywhere and the growth of modern video conferencing and collaboration tools will continue in the months and years ahead. A recent survey showed 87 percent of workers want the option to continue working from home even after we defeat COVID-19. As such, security and compliance officers at enterprise organizations can’t adequately monitor these complex, unified communications and collaboration platforms without the use of AI and ML. With AI-powered security and compliance solutions purpose-built for monitoring collaboration platforms, organizations can effectively identify, prioritize and respond to risk in real-time to protect their data and their employees in this new world of remote work.

Devin Redmond, co-founder and CEO, Theta Lake  
