
Four risks to consider before using ChatGPT for security operations


ChatGPT has made a name for itself; it's seemingly everywhere these days. Google and Microsoft have released their own large language models (LLMs), and a multitude of other chatbots and complementary technologies are in active development. Given how much buzz generative AI has received from the media, organizations, and the public alike, it makes sense that IT security companies also hope to benefit from these technologies.

And while this emerging technology has the potential to make software development more convenient, it's equally likely to become a source of threats and headaches for security-minded organizations.

Here are four potential threats and headaches that security teams can expect from ChatGPT:

1. ChatGPT does not replace security experts

The tool draws on a massive amount of text and other data it finds online and uses various mathematical models to generate its responses. It isn't a security expert, but it's good at surfacing what human security experts have already posted. ChatGPT cannot think for itself, however, and its output is heavily shaped by the user's prompts and decisions, which can change everything about how a security issue gets remediated. And while the code-generation features are tantalizing, ChatGPT does not code with the sophistication of a seasoned security expert.

2. ChatGPT isn't very accurate

Despite the fanfare over passing a law school exam and other college tests, ChatGPT isn't all that smart. Its training data only runs through 2021, although newer data gets added over time. That's a big issue when you need up-to-the-minute vulnerability information, for example. It also doesn't always offer up the right answers, because the quality of its responses depends on how users frame their questions and describe the context of their queries. Users have to take the time to refine their questions and experiment with the chatbots, which means developing new skills in how we formulate queries and build our own expertise.
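
To see how much framing matters, here is a minimal sketch, assuming the OpenAI Python SDK and an API key already set in the environment; the model name is an arbitrary placeholder, and any answer still has to be checked against a current vulnerability database.

```python
# Illustrative only: uses the OpenAI Python SDK; exact model names and
# parameters will vary by environment.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send a single-turn question to the chat model and return the text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague question invites a vague, possibly outdated answer.
print(ask("Is Log4j dangerous?"))

# A scoped question with explicit context tends to produce a more useful response,
# though it still needs verification against an up-to-date vulnerability feed.
print(ask(
    "We run Apache Log4j 2.14.1 on internet-facing Java services. "
    "Summarize the known remote code execution risk and the recommended upgrade path."
))
```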

3. ChatGPT can cause extra work for coders

It cannot serve as a no-code solution or bridge the talent gap, because non-experts put in charge of the technology cannot verify that the generated recommendations make sense. In the end, ChatGPT will create more technical debt, since security experts will have to vet any AI-produced code to confirm its validity and security bona fides.
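
As an illustration of why that vetting matters, the hypothetical snippet below shows the kind of flaw reviewers frequently flag in generated code: the first function builds a SQL query by string interpolation and is open to injection, while the vetted version uses a parameterized query. Table and column names are made up for the example.

```python
# Illustrative only: a pattern of flaw commonly found in unvetted,
# machine-generated code. Schema names are hypothetical.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical of unreviewed generated code: the username is interpolated
    # directly into the SQL string, which allows SQL injection.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The vetted version uses a parameterized query, so the input is treated
    # as data rather than as part of the SQL statement.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```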

4. ChatGPT could potentially expose sensitive information

By its very nature, the input to a chatbot is continuously used to retrain and improve the underlying model. That means sensitive data an organization submits to ChatGPT can end up concentrated in one place, giving hackers a single target for accessing everything the chatbot has been fed. We have already seen an early compromise in which users' chat histories were exposed.

Given these issues, what should IT security managers do to protect their organizations and mitigate the risks? Gartner has offered a few specific ways to become more familiar with the chatbots, and it recommends experimenting with the Azure-hosted version because it does not capture sensitive information. Gartner also proposes putting the right policies in place to prevent confidential data from being uploaded to the bots, such as the policy Walmart enacted earlier this year.
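
As a rough illustration of such a policy in practice, here is a minimal sketch of a pre-submission filter, assuming prompts are routed through an internal proxy before they reach any external chatbot; the patterns are placeholders, and a real deployment would rely on a dedicated data loss prevention tool.

```python
# A minimal sketch of a pre-submission guardrail. The patterns below are
# illustrative placeholders, not a complete set of sensitive-data checks.
import re

BLOCKED_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

def submit_or_reject(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Reject locally instead of forwarding the prompt to the chatbot.
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
    else:
        print("Prompt passed the filter; forwarding to the chatbot is allowed.")

submit_or_reject("Why is our build failing? Key is AKIAABCDEFGHIJKLMNOP")
```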

IT managers should also work on better, more targeted awareness and training programs. One consultant suggests using the chatbots themselves to generate sample training messages. Another technique: have the chatbot generate reports and analyses of cybersecurity threats that security experts can then rewrite for a general audience.

As ChatGPT continues to make headlines, we will need to be careful about which technologies we embrace. In the coming years, investment priorities will likely shift so that privacy and compliance teams lean on security teams even more to ensure their privacy controls comply with new regulations. ChatGPT may or may not fit into this plan. Either way, security analysts need to weigh the pros and cons of the AI interface and determine whether it's truly worth the risk of integration.

Ron Reiter, co-founder and CTO, Sentra

Ron Reiter is a co-founder and CTO at Sentra, a cloud data security company. He is an experienced entrepreneur who sold his company to Oracle in 2016 and went on to invest in over a dozen startups. After serving in Unit 8200, Ron spent 15 years in various management positions in data engineering, cybersecurity, and cloud infrastructure.
