Discussions about AI tools have become increasingly prominent in recent months. Because of their ability to boost productivity and save time, many employees have already adopted them into their daily work routines. However, before reaping the benefits of innovative AI tools, your employees should know how to engage with them securely – without jeopardizing your company’s data safety.
AI tools may help us develop ideas, summarize or rephrase pieces of text, create the basis for a business strategy, or even find a bug in code. When using AI, however, we must remember that the data we enter into these tools ceases to belong to us as soon as we press the send button.
One of the primary concerns when utilizing large language models (LLMs), such as ChatGPT, is the sharing of sensitive data with large international corporations. These models are trained on vast amounts of online text, enabling them to effectively interpret and respond to user queries. However, every time we interact with a chatbot and ask for information or assistance, we may inadvertently share data about ourselves or our company.
When we write a prompt for a chatbot, the data we enter effectively becomes public. This does not mean chatbots will immediately use this information as a basis for replies to other users. But the LLM provider or its partners may have access to these queries and could incorporate them into future versions of the technology.
OpenAI, the organization behind ChatGPT, has introduced the option to turn off chat history, which prevents user data from being used to train and improve OpenAI's AI models. That way, users get more control over their data. If the employees in your company would like to use tools such as ChatGPT, turning chat history off should be their first step.
But even with chat history turned off, all prompt data is still stored on the chatbot's servers. Because all prompts are saved on external servers, there is a potential threat of unauthorized access by hackers. Furthermore, technical bugs can occasionally enable unauthorized individuals to access data belonging to other chatbot users.
So, how do you ensure that your company's employees use platforms such as ChatGPT securely? Here are some mistakes employees often make, and ways to avoid them.
Using client data as an input
The first common mistake employees make when using LLMs is inadvertently sharing sensitive information about their company’s clients. What does that look like? Imagine, for instance, doctors submitting their patients’ names and medical records and asking the LLM tool to write letters to the patients’ insurance companies. Or marketers uploading customer data from their CRM systems and prompting the tool to compile targeted newsletters.
Teach employees to permanently anonymize their queries before entering them into chatbots. To protect customer privacy, encourage them to review and carefully redact sensitive details, such as names, addresses, or account numbers. The best practice is to avoid using personal information in the first place and to rely on general questions or queries.
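To make this concrete, here is a minimal sketch in Python of what pre-submission redaction could look like. It is purely illustrative: the patterns, placeholder labels, and the redact function are assumptions made for this example, not part of any specific tool, and real-world anonymization would need broader coverage plus a human review step.

```python
import re

# Illustrative patterns only; real redaction should also cover names,
# addresses, and any industry-specific identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b"),
    "ACCOUNT": re.compile(r"\b\d{8,}\b"),  # long digit runs, e.g. account numbers
}

def redact(prompt: str) -> str:
    """Replace likely identifiers with placeholders before the prompt leaves the company."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Write a letter to jane.doe@example.com about account 12345678."
    print(redact(raw))
    # -> "Write a letter to [EMAIL] about account [ACCOUNT]."
```

Even with a helper like this, the safest habit remains the one above: keep personal details out of prompts entirely and phrase requests in general terms.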
Uploading confidential documents into chatbots
Chatbots can be valuable tools for quickly summarizing large volumes of data and creating drafts, presentations, or reports. Still, uploading documents to tools such as ChatGPT may endanger the company or client data stored in them. While it may be tempting to copy documents and ask the tool to create summaries or suggestions for presentation slides, it is not a secure way to handle data.
This applies not only to important papers, such as development strategies, but also to less essential documents – even notes from a meeting could lead employees to expose their company’s treasured know-how.
To mitigate this risk, establish strict policies for handling sensitive documents, and limit access to such records with a "need to know" policy. Employees need to manually review the documents before requesting a summary or assistance from the chatbot. This ensures that sensitive information, such as names, contact information, sales figures, or cash flow, is deleted or appropriately anonymized.
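As an illustration of such a review step, the hypothetical helper below flags lines of a document that look like they contain sensitive details so a person can anonymize them before anything is pasted into a chatbot. The patterns and the lines_needing_review function are assumptions made for this sketch; it is not a complete data-loss-prevention solution.

```python
import re

# Illustrative hints only; extend with whatever counts as sensitive in your company.
SENSITIVE_HINTS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),          # email addresses
    re.compile(r"\b\d{8,}\b"),                        # long numbers (accounts, IDs)
    re.compile(r"(?i)\b(salary|revenue|cash flow|confidential)\b"),
]

def lines_needing_review(document: str) -> list[tuple[int, str]]:
    """Return (line number, text) pairs a human should check before upload."""
    flagged = []
    for number, line in enumerate(document.splitlines(), start=1):
        if any(pattern.search(line) for pattern in SENSITIVE_HINTS):
            flagged.append((number, line))
    return flagged

if __name__ == "__main__":
    notes = "Meeting notes\nContact: jane.doe@example.com\nQ3 revenue discussed"
    for number, line in lines_needing_review(notes):
        print(f"Review line {number}: {line}")
```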
Exposing the company's data in prompts
Imagine you are trying to improve some of your company's practices and workflows. You ask ChatGPT to help with time management or task structure and input valuable know-how and other data into the prompt to assist the chatbot in developing a solution. Just like entering sensitive documents or client data into chatbots, including sensitive company data in the prompt is a common, yet potentially damaging, practice that can lead to unauthorized access or leakage of confidential information.
To address this issue, prompt anonymization should be an essential practice. That means no names, addresses, financials, or other personal data should ever be entered into chatbot prompts. If you want to make it easier for employees to use tools such as ChatGPT securely, create standardized prompt templates that all employees can safely reuse when needed, such as: "Imagine you are [position] in [company]. Create a better weekly workflow for [position] focused mainly on [task]."
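One simple way to operationalize this is to keep approved templates in a shared snippet library and fill in only the generic placeholders. The sketch below is a hypothetical example built around the template quoted above; the build_prompt helper and its parameters are assumptions for illustration, not a prescribed tool.

```python
from string import Template

# Standardized prompt modeled on the example in the text; only generic
# placeholders are filled in - no client names, figures, or internal documents.
WORKFLOW_TEMPLATE = Template(
    "Imagine you are $position in $company. "
    "Create a better weekly workflow for $position focused mainly on $task."
)

def build_prompt(position: str, company: str, task: str) -> str:
    return WORKFLOW_TEMPLATE.substitute(position=position, company=company, task=task)

if __name__ == "__main__":
    print(build_prompt("a project manager", "a mid-sized software company", "sprint planning"))
```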
AI tools are not just the future of our work; they are already present. As progress in AI, and machine learning specifically, moves forward every day, companies inevitably need to follow the trends and adapt to them. From data security specialists to IT generalists, make sure your colleagues know how to use these technologies without risking a data leak.