Is ChatGPT safe?
How good is that feeling when you come up with a great idea for AI? You paste your text into ChatGPT (or another tool), and it returns some fantastic output. You feel like you’ve just found a cheat code for work. Right? However, you need to be very, very careful about what you provide and where you provide it, because the content you just pasted could end up in someone else’s query. Which leads to the question: “Is ChatGPT safe for your business?”
How ChatGPT learns
ChatGPT and other AI models improve over time through exposure to real-world problems and data. Sharing your content with them helps improve the models’ accuracy and problem-solving skills.
More information makes their models more powerful, so, by default, they use what you give them to build better AI.
How does ChatGPT collect and use my data?
OpenAI, the maker of ChatGPT, explains what it does with your data on its website.
Here’s a summary.
Third-party sharing: OpenAI has a select group with whom they share data. They promise not to sell your chats to those sketchy data broker companies that would love to spam you with ads.
Sharing with vendors and service providers: They do share some data with third-party vendors to make the service better.
Improving natural language processing ability: This is the key point I mentioned at the start. Unless you turn off ChatGPT’s ability to train on your data, everything you say to it is stored and used to make the AI smarter.
Data storage and retention: To keep your data safe, OpenAI anonymises your information and follows strict security rules, including the tough European GDPR standards. Note that this covers data about you, not the data you enter into your prompts. It’s a subtle but very important distinction.
Is ChatGPT safe?
So, if you trust OpenAI to protect your data and you choose not to share it voluntarily, then yes, ChatGPT should be considered safe to use: no more or less secure than any of the other big AI players (Microsoft, Google, Amazon, Facebook, to name just a few). However, our focus here is the data you put into ChatGPT itself.
If you haven’t turned off ChatGPT’s “Improve the model for everyone” setting, that data could be exposed to the world.
Does Google Gemini train on my data?
No, it doesn’t. Google’s Data Governance page states that:
Gemini doesn’t use your prompts or its responses as data to train its models
Real-world examples of pasting into ChatGPT
Samsung trains ChatGPT on software code
According to The Economist Korea, one of the first outlets to report on the data leaks, an engineer at Samsung pasted buggy source code from a semiconductor database into ChatGPT and asked the chatbot to fix the errors, which resulted in the first leak.
The second leak occurred when an employee pasted code used to identify defects in Samsung equipment into ChatGPT and asked it to optimise the code.
The third leak happened when an employee used ChatGPT to generate the minutes of an internal Samsung meeting. You can read more about it here.
Amazon data used for AI training
Amazon cautioned its employees against sharing confidential information with ChatGPT after observing that the AI chatbot’s responses closely resembled internal company data.
This incident, one of the earliest of its kind, highlighted the risk of sensitive information being inadvertently used as training data for large language models.
You can read about more real-world AI incidents in this post, but most of them involve people tricking AI bots into providing discounts or excess refunds.
These organisations didn’t do what was required to make ChatGPT safe. So how do we learn from their mistakes?
Practical tips to make ChatGPT safe for your business
Pay for AI
You can turn off the “Improve the model for everyone” setting, but it’s on by default, and you would need to make sure every employee turns it off. The good news is that it’s off by default on the Team and Enterprise plans. However, this alone won’t stop potential data leaks.
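It’s also worth knowing that OpenAI treats its API differently from the consumer chat interface: per OpenAI’s data usage policy, data sent via the API is not used for training by default. Here’s a minimal sketch using OpenAI’s official Python SDK; the model name is just an example.

```python
# A minimal sketch using OpenAI's official Python SDK (pip install openai).
# Per OpenAI's data usage policy, prompts sent via the API are not used
# for model training by default, unlike the consumer ChatGPT interface.
from openai import OpenAI

client = OpenAI()  # reads your key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whatever your plan offers
    messages=[{"role": "user", "content": "Summarise these meeting notes."}],
)
print(response.choices[0].message.content)
```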
Use bing.com Copilot (logged in with your M365 account)
If ChatGPT Team or Enterprise is cost-prohibitive for your business, there is still a free option. If you use Microsoft 365 for your business, you can use Copilot, which is part of bing.com. The key is to ensure you are logged into Bing with your business email.
If you are logged in when you visit the Copilot page, you should see this shield. 👇
This shield represents Enterprise data protection. It means that what you type into the chat will not be used to train the AI model.
Use Microsoft Edge instead of Chrome
We recommend that your business stop using Chrome and switch to Microsoft Edge. The great thing about Microsoft Edge is that it can automatically sign each person into bing.com with their work account, which means Enterprise data protection is switched on automatically. Talk to us about how we can set this up for you.
Block AI generative sites
Your business should decide which AI tools are approved and block all others. This will help prevent data leaking into AI models your company hasn’t sanctioned.
This can be a time-consuming and manual process, but Microsoft is aware of this and has a solution coming. At the time of writing, it is in Public Preview, but we hope it will become generally available in early 2025. This will be your best layer of protection against data leakage through AI.
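In the meantime, here’s a minimal sketch of what a manual block can look like, using the Windows hosts file. The domain list is purely illustrative, and in practice you would enforce this through DNS filtering, your firewall, or an MDM policy rather than a script.

```python
# Minimal illustration of a manual AI-site block via the hosts file.
# The domains listed are examples only; real deployments should use
# DNS filtering, a firewall, or an MDM policy instead of this script.
from pathlib import Path

# Windows hosts file path; on macOS/Linux use /etc/hosts instead.
HOSTS_FILE = Path(r"C:\Windows\System32\drivers\etc\hosts")

# AI sites your business has NOT sanctioned (illustrative list).
BLOCKED_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
]

def block_domains() -> None:
    """Point each unsanctioned domain at localhost so it cannot resolve."""
    existing = HOSTS_FILE.read_text()
    new_lines = [
        f"127.0.0.1 {domain}"
        for domain in BLOCKED_DOMAINS
        if domain not in existing  # skip domains already blocked
    ]
    if new_lines:
        with HOSTS_FILE.open("a") as f:
            f.write("\n# Unsanctioned AI sites\n" + "\n".join(new_lines) + "\n")

if __name__ == "__main__":
    block_domains()  # must be run as Administrator
```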
How businesses can make ChatGPT safe
As always, no technical solution works 100% of the time in isolation. To make ChatGPT safe, you need to implement the above alongside the following business-related changes.
Provide AI to your staff
If your organisation provides an AI service that meets your staff’s needs, they will be less likely to seek out unsanctioned alternatives. This may require you to pay for an AI service, as covered above.
Treat AI Like a Public Forum
Everything you share with AI could become public knowledge. This means don’t enter the following (a simple pre-submission check is sketched after this list):
- Sensitive customer information and PII
- Proprietary business data
- Internal trade secrets
- Medical records
- Software code or internal data sets
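To back the policy up with a guardrail, you could run prompts through a basic redaction pass before anything reaches an AI tool. Here’s a minimal sketch; the regex patterns are illustrative assumptions and no substitute for a proper data loss prevention (DLP) product.

```python
# Minimal sketch of a pre-submission redaction check for AI prompts.
# The patterns below are illustrative only; a real deployment should
# rely on a proper DLP product rather than hand-rolled regexes.
import re

# Hypothetical patterns for common sensitive data (emails, card numbers,
# Australian phone numbers). Extend to suit your own business.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone_au": re.compile(r"\b(?:\+?61|0)[23478](?:[ -]?\d){8}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks sensitive with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    risky = "Email jane.doe@example.com a refund to card 4111 1111 1111 1111."
    print(redact(risky))
    # -> "Email [REDACTED EMAIL] a refund to card [REDACTED CREDIT_CARD]."
```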
Update agreements to reference AI
Update your business agreements:
- Client agreements need clear AI guidelines
- Employee contracts should address AI boundaries
- Confidentiality agreements must specifically cover AI interactions
Your legal documents should reflect the reality of AI usage.
AI Training for your staff
All staff should understand how AI works, its privacy implications, and how to use it safely.
Make sure to cover:
- Actual examples of good and bad AI usage
- Clear guidelines on what constitutes sensitive information
- Straightforward explanations of potential consequences
Include AI in your company policy
Develop straightforward policies that help your team navigate AI usage:
- Simple decision frameworks
- Clear escalation paths
- Practical examples and use cases
Lean on your Managed IT provider to make ChatGPT safe
While AI holds immense business potential, it also presents significant data protection and compliance risks. Organisations must tread carefully to ensure that their use of AI aligns with legal and ethical standards.
It’s crucial to implement robust data governance practices, provide training to employees, and consider solutions like paid AI services or Microsoft’s Copilot.
Remember to take a look at our guide for implementing Generative AI in your business.
If you have any questions or concerns about AI and data privacy, we encourage you to reach out to us for a chat.
About the author
Yener is the founder and Managing Director of Intuitive IT. Prior to running his own business, Yener worked for a number of corporate organisations, where he gained invaluable experience and skills, as well as an understanding of how IT can complement and improve business outcomes.