Four privacy risks in using ChatGPT for business

Issue 6 2023 | AI & Data Analytics, Information Security

Today, many people rely on neural network-based language models like ChatGPT in their jobs. A Kaspersky survey revealed that 11% of respondents had used chatbots, and nearly 30% believed in their potential to replace jobs in the future. Other surveys indicate that 50% of office workers in Belgium and 65% in the UK rely on ChatGPT. Moreover, the prominence of the search term ‘ChatGPT’ in Google Trends points to pronounced weekday usage, most likely tied to work-related tasks.

The growing integration of chatbots in the workplace prompts a crucial question: can they be entrusted with sensitive corporate data? Kaspersky researchers have identified four key risks associated with employing ChatGPT for business purposes.

Data leak or hack on the provider’s side

Although LLM-based chatbots are operated by major technology companies, they are not immune to hacking or accidental leaks. For example, there has already been an incident in which ChatGPT users could see messages from other users’ chat histories.

Data leaks through model training

Theoretically, chats with a chatbot may be used to train future versions of the model. LLMs are susceptible to ‘unintended memorisation’: they remember unique sequences, such as phone numbers, that do not improve model quality but do create privacy risks. As a result, any data that ends up in the training corpus may be accessed, accidentally or deliberately, by other users through the model.
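One practical mitigation is to strip obvious identifiers from prompts before they reach an external chatbot. The following Python sketch is illustrative only and is not part of Kaspersky’s research; the redact_prompt helper and its regular expressions are assumptions and would need tuning for real corporate data.

import re

# Illustrative patterns only; a real deployment would need far more robust
# detection of personal and corporate identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace every match of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Draft a reply to jane.doe@example.com, tel. +44 20 7946 0958."
    print(redact_prompt(prompt))
    # Draft a reply to [EMAIL REDACTED], tel. [PHONE REDACTED].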

Malicious clients

In places where official services such as ChatGPT are blocked, users may turn to unofficial alternatives, such as programs, websites, or messenger bots, and end up downloading malware disguised as a non-existent client or app.

Account hacking

Attackers can break into employee accounts and access their data through phishing attacks or credential stuffing. Moreover, Kaspersky Digital Footprint Intelligence regularly finds posts on dark web forums offering access to chatbot accounts for sale.
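As an illustration of how the credential-stuffing risk can be reduced, the Python sketch below checks whether a password appears in known breaches using the public Have I Been Pwned range API; only the first five characters of the password’s SHA-1 hash leave the machine. This is an illustrative example under those assumptions, not a measure described in the Kaspersky research.

import hashlib
import urllib.request

def password_is_breached(password: str) -> bool:
    """Check a password against the Have I Been Pwned range API.

    Only the first five characters of the SHA-1 hash are sent; the API
    returns the suffixes of breached hashes that share this prefix.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as response:
        body = response.read().decode("utf-8")
    # Each response line has the form "<hash suffix>:<breach count>".
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if __name__ == "__main__":
    print(password_is_breached("password123"))  # True for a well-known weak password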

To summarise, data loss is a significant privacy concern for users and businesses when using chatbots. Responsible developers outline how data is used for model training in their privacy policies.

Kaspersky’s analysis of popular chatbots, including ChatGPT, ChatGPT API, Anthropic Claude, Bing Chat, Bing Chat Enterprise, You.com, Google Bard, and Genius App by Alloy Studios, shows that the B2B sector has higher security and privacy standards, given the greater risk of corporate information exposure. Consequently, the terms and conditions for data usage, collection, storage, and processing are more focused on safeguarding than in the B2C sector. The B2B solutions in this study typically do not save chat histories automatically, and in some cases no data is sent to the provider’s servers at all, because the chatbot operates locally in the customer’s network.
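To illustrate the locally operated option mentioned above, many self-hosted LLM servers expose an OpenAI-compatible HTTP endpoint inside the corporate network, so prompts never leave it. The Python sketch below assumes such a deployment; the host name, port, and model name are hypothetical.

import json
import urllib.request

# Hypothetical endpoint of a self-hosted, OpenAI-compatible LLM server
# running inside the corporate network; adjust to the real deployment.
LOCAL_ENDPOINT = "http://llm.internal.example:8000/v1/chat/completions"

def ask_local_chatbot(prompt: str) -> str:
    """Send a prompt to the locally hosted model and return its reply."""
    payload = json.dumps({
        "model": "local-model",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    request = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        data = json.loads(response.read().decode("utf-8"))
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_chatbot("Summarise this quarter's incident reports."))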



