Today, many people rely on neural network-based language models such as ChatGPT in their jobs. A Kaspersky survey revealed that 11% of respondents had used chatbots, with nearly 30% believing they could replace jobs in the future. Other surveys indicate that 50% of office workers in Belgium and 65% in the UK rely on ChatGPT. Moreover, the prominence of the search term ‘ChatGPT’ in Google Trends suggests pronounced weekday usage, likely tied to work-related tasks.
The growing integration of chatbots in the workplace prompts a crucial question: can they be entrusted with sensitive corporate data? Kaspersky researchers have identified four key risks associated with employing ChatGPT for business purposes.
Data leak or hack on the provider’s side
Although LLM-based chatbots are operated by major technology companies, they are not immune to hacking or accidental leaks. For example, there was an incident in which ChatGPT users could see messages from other users’ chat histories.
Data leak through chatbots

Theoretically, chats with chatbots might be used to train future models. Because LLMs are susceptible to ‘unintended memorisation’, whereby they remember unique sequences such as phone numbers that do not improve model quality but do pose privacy risks, any data that ends up in the training corpus may later be extracted from the model by other users, whether inadvertently or deliberately.
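This risk is one argument for keeping unique identifiers out of prompts in the first place. As a minimal sketch (not a technique Kaspersky describes, and with regex patterns that are illustrative assumptions rather than production-grade PII detection), sensitive strings can be scrubbed on the client side before a prompt is sent to any chatbot:

```python
import re

# Illustrative patterns only -- real PII detection needs dedicated tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Call Jane on +32 470 12 34 56 or mail jane.doe@example.com about the merger."
print(redact(prompt))
# -> Call Jane on [PHONE] or mail [EMAIL] about the merger.
```

Dedicated data loss prevention tooling is the more robust option; the point of the sketch is simply that unique identifiers never need to reach the provider’s servers at all.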
Malicious clients

In places where official services like ChatGPT are blocked, users may resort to unofficial alternatives such as programs, websites or messenger bots, and risk downloading malware disguised as a client or app that does not officially exist.
Account hacking

Attackers can break into employees’ accounts and access their data through phishing attacks or credential stuffing. Moreover, Kaspersky Digital Footprint Intelligence regularly finds posts on dark web forums offering access to chatbot accounts for sale.
To summarise, data leakage is a significant privacy concern for users and businesses that use chatbots. Responsible developers outline how data is used for model training in their privacy policies. Kaspersky’s analysis of popular chatbots, including ChatGPT, ChatGPT API, Anthropic Claude, Bing Chat, Bing Chat Enterprise, You.com, Google Bard and Genius App by Alloy Studios, shows that security and privacy standards are higher in the B2B sector, given the greater risk of corporate information exposure. Consequently, the terms and conditions for data usage, collection, storage and processing are more focused on safeguarding data than in the B2C sector. The B2B solutions in this study typically do not save chat histories by default, and in some cases no data is sent to the company’s servers at all, as the chatbot operates locally on the customer’s network.