AI Chatbots and Cybersecurity: How safe is it for your end users to use ChatGPT?

Tuesday, June 20, 2023

There were over one billion ChatGPT users as of March 2023, and that number grows when you include those using other AI tools like Bard or Jasper AI. At least some of those users are likely your employees, clients and their end users, and they could be engaging in risky cyber behaviors.

When users lack an understanding of the capabilities and risks of advanced systems like AI and large language models, they're more likely to make careless mistakes. Users have been found inadvertently exposing confidential data, relying on incorrect or fabricated information and violating compliance requirements from data privacy to copyright while using AI tools. All of these unintentional errors could lead to negative repercussions or a data breach, and most could be avoided with education.

By discussing and educating your team, clients and end users on the potential risks related to these tools, you’re helping protect your business and those you support. Follow our guide to discussing AI and large language models with the people you support to help mitigate these risks — but first, you’ll need to do a little homework.

Before you jump in … take some time to decide how you want to approach the conversation. You may consider creating a policy for how your team uses generative AI, or noting a few best practices you'd like your team to follow. Before you begin, consider who you're speaking with and their current understanding and use of ChatGPT or similar tools; varying levels of familiarity will shape how you have these conversations.

Ready? Let’s go.

Start by talking through these five areas of discussion with your team. As you work your way through, you’ll likely learn new things, generate interesting ideas and better understand the policies your team may need or tools that could help them thrive.

1. Cover the basics of how AI and large language models work

Large language models are trained to predict the next word in a sentence from context, using enormous amounts of text data. Describe how these models use that training to respond to user prompts. It's important to note that the material an LLM learns from isn't fact-checked during training, so the model can absorb information that's incorrect, biased or taken out of context.
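If it helps to make this concrete, the core loop is simpler than it sounds: score every candidate next word, turn the scores into probabilities, pick one, repeat. Here's a cartoon sketch in Python; the candidate words and scores are made up, since a real model scores tens of thousands of tokens with a neural network:

    import math

    # Made-up scores (logits) for what might follow "The cat sat on the".
    # A real LLM computes these with a neural network over its whole vocabulary.
    logits = {"mat": 4.1, "roof": 2.3, "moon": 0.7}

    # Softmax: convert raw scores into probabilities that sum to 1.
    total = sum(math.exp(score) for score in logits.values())
    probs = {word: math.exp(score) / total for word, score in logits.items()}

    # Pick the most likely word, append it to the text, and repeat.
    print(max(probs, key=probs.get))  # -> "mat"
    print(probs)  # roughly {'mat': 0.83, 'roof': 0.14, 'moon': 0.03}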

You may also want to note that the data these models learn from is typically sourced from publicly available text online or from licensed data providers. Because most large language model providers don't fully disclose their sources, we don't know exactly where a model's information comes from, and it's possible that the training data behind its responses includes private or sensitive information.

 

2. Be transparent about the risks and the rewards of these tools

Some large language models use the data users input to train and fine-tune their responses. If an LLM is taking in a lot of sensitive data, an attacker could potentially craft a prompt that gets the model to regurgitate that information. While how inputted data is used is (hopefully) disclosed in the data privacy policy or consent form you accept before using a tool, many users never read those forms, leaving them open to risk.
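Here's a toy illustration of that regurgitation risk. This is not how a production LLM works, and the planted key is fake, but it shows in miniature how a model that memorizes its training data can be coaxed into replaying it by anyone who guesses the right prefix:

    # Toy character-level model: memorize which character follows each
    # three-character context seen in its "training" data.
    training_text = "please debug this: API_KEY=sk-FAKE-1234 is failing"

    ctx = {}
    for i in range(len(training_text) - 3):
        ctx.setdefault(training_text[i:i+3], training_text[i+3])

    # An attacker-style prompt: seed with a plausible prefix and let the
    # model complete it from memory.
    def complete(seed, max_len=40):
        out = seed
        while len(out) < max_len and out[-3:] in ctx:
            out += ctx[out[-3:]]
        return out

    print(complete("API_KEY="))  # -> API_KEY=sk-FAKE-1234 is failing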

The data your users input into an LLM could put more than their personal information at risk; it could make the company vulnerable as well. If a user enters proprietary information, like a draft of a private document or client and user data, there could be real-world consequences. Alternatively, if your users receive inaccurate, copyrighted or trademarked material from their AI tools and then use it in their work for your business, other legal vulnerabilities could surface.

To avoid creating an environment of fear, you may also want to discuss the positive uses of AI. Many common uses can simplify tedious tasks, streamline processes or jumpstart creativity. Share a few examples that fit your users' work and that you're comfortable with to encourage safe AI use on your team.

 

3. Share AI best practices

After you discuss the potential risks outlined above, walk through the ways your users can use these tools more safely.

○      Be aware of common AI-tool scams and vulnerabilities. Hackers don't hesitate to try new tools. While they're capitalizing on AI for advanced social engineering and writing malware scripts, they're also using the AI hype in more traditional attacks. Chrome extensions that claimed to give quick access to the chatbot were found to be letting bad actors hijack Facebook accounts to create bots and run malicious advertising, while tweets claiming to offer free access to a paid ChatGPT Plus account were found to be installing malware and stealing credentials from those who clicked.

The release of plugins that expand ChatGPT's capabilities by giving it access to the internet and other applications could create more security risks. Ensure your users remain cautious about the tools they use, where they click and what they download, and that they stay aware of new and emerging threats through ongoing education.

○      Never enter personal or sensitive information into LLMs. Even if an AI chatbot has strict privacy rules, it's best to keep private data out of your prompts. Whether or not the company behind the chatbot keeps your information private in the long term, anything you enter is being shared with a company outside your own, and that's best avoided.
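One practical backstop, if your team uses LLMs programmatically, is a pre-send filter that redacts obvious sensitive patterns before a prompt ever leaves your network. A minimal sketch in Python follows; the patterns are illustrative, not exhaustive, and real PII detection needs far more than three regexes:

    import re

    # Illustrative patterns only; production PII detection needs much more.
    PATTERNS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
        (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
    ]

    def redact(prompt: str) -> str:
        """Replace anything that looks like PII before the prompt is sent."""
        for pattern, label in PATTERNS:
            prompt = pattern.sub(label, prompt)
        return prompt

    print(redact("Email jane.doe@example.com about SSN 123-45-6789"))
    # -> "Email [EMAIL] about SSN [SSN]"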

How to disable chat history in ChatGPT:

Turn off chat history in ChatGPT's settings to stop the chatbot from using your conversations to train and improve its models. Once disabled, the service retains new conversations for 30 days, reviews them only when necessary to monitor for abuse, and then permanently deletes them. To disable the feature:

1. Once logged in to your ChatGPT account, select the three-dot menu next to your profile name and image.
2. Select Settings > Data Controls.
3. Toggle Chat History & Training off.

Once you've turned off the feature, your chat history will disappear and you'll see the option to re-enable it in the side panel. Note that this setting does not sync across browsers or devices, so you'll need to repeat it on each browser and device you use to keep your chats out of future training data.
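It's also worth knowing that, per OpenAI's API data usage policy (linked in the sources below), data submitted through the API is not used for model training by default, unlike the consumer web interface at the time of writing. For teams building their own tooling, here's a minimal sketch using the Python SDK as it existed in mid-2023 (the openai 0.27.x-era interface):

    import os
    import openai  # pip install openai (0.27.x-era interface)

    # Load the key from the environment; never hard-code credentials.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Suggest three icebreakers for a team meeting."},
        ],
    )
    print(response.choices[0].message.content)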

○      Verify, verify, verify. Don't trust the output of an LLM without validating its information first. Users who create content from an LLM's output, or who learn and share information taken directly from one, could be spreading inaccuracies. Run a plagiarism checker on content sourced from an LLM, even if you've modified it, to make sure you're not reusing copyrighted material.

You may want to discuss how the content these models are trained on is imperfect and how that leads to the models' outputs being imperfect as well. Share the example of CNET, a media site that published 77 AI-generated stories and ended up issuing corrections on more than half of them after they were found to contain factual errors and potentially plagiarized content.

○      Add originality. The best way to use content from an LLM (after validating it) is to transform it into something unique and personal, essentially using it as a starting point for your own independent work. These guidelines should help your team mitigate legal or ethical concerns related to using AI-generated content.

  • Add a personal touch - Differentiate your content from the AI's output by modifying it to include your voice and style.
  • Make it yours - Include your perspective and experience in the content you create to transform it into a unique work.
  • Be strict about compliance - Independently verify and modify all content you use that was originally generated by an LLM to help ensure compliance with ethical and legal guidelines.

 

4. Clarify your policies around AI chatbots and LLMs

Your company may not have an AI or LLM policy in place yet. When you do, make sure it aligns with your internal and external customer privacy policies and your terms of use, so it can guide users through what data can and cannot be entered into ChatGPT plugins or other LLMs. If you're not ready to finalize and share a full policy, give your team clarity by outlining a few basics (a sketch of how you might encode these tiers follows the list below):

○      Which uses are not allowed? This could include pasting proprietary code into an AI tool to check for errors, or using an LLM to aggregate customer data.

○      Which uses require prior authorization? You may want to allow developers to use AI to create new code, or have your customer service team use it to generate common responses to customer questions.

○      Which uses are generally permitted without prior authorization? There are some uses you may not feel the need to authorize because they're internal or inconsequential. Think of things like generating icebreakers for a meeting, creating an outline or other non-public-facing uses.
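As promised above, here's a hypothetical starting point for encoding those three tiers in code, for teams that want to enforce the policy in tooling rather than documentation alone. The categories and keywords below are placeholders for your own policy, not a recommendation:

    # Placeholder policy tiers; replace the keywords with your own rules.
    POLICY = {
        "blocked":  ["customer data", "proprietary code"],
        "needs_ok": ["code generation", "customer responses"],
        "allowed":  ["icebreakers", "outlines", "brainstorming"],
    }

    def classify(use_case: str) -> str:
        """Return the policy tier for a described use of an AI tool."""
        described = use_case.lower()
        for tier, keywords in POLICY.items():
            if any(keyword in described for keyword in keywords):
                return tier
        return "needs_ok"  # when in doubt, require authorization

    print(classify("Generate icebreakers for Monday's standup"))  # -> allowed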

 

5. Be ready for questions

Your team will likely have a lot of them! Be prepared to respond to concerns about jobs, questions about specific software and AI uses you may not have thought of. Give your team a way to follow up with questions you can't answer on the spot, and invite them to revisit the topic at regular intervals.

Ensuring that your end users understand their role in safeguarding the business and their own personal data can build trust with your team and improve overall cybersecurity practices. Ongoing training and automated, AI-driven phishing simulations will help your team stay on top of modern attack trends and improve employee education. Build your culture of cybersecurity with HacWare. Visit hacware.com/msp for more.

 

Sources:

https://openai.com/policies/api-data-usage-policies

https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears

https://www.shrm.org/resourcesandtools/hr-topics/technology/pages/how-to-create-the-best-chatgpt-policies-.aspx

https://www.forbes.com/sites/bernardmarr/2023/03/01/the-best-examples-of-what-you-can-do-with-chatgpt/?sh=2d2b1201df11

https://www.theverge.com/2023/1/25/23571082/cnet-ai-written-stories-errors-corrections-red-ventures

https://www.debevoisedatablog.com/2023/02/07/does-your-company-need-a-chatgpt-policy-probably/

 

 
