The Security Risks of ChatGPT

Issue 2/3 2023 Editor's Choice, Information Security, News & Events, Integrated Solutions, Security Services & Risk Management

ChatGPT, developed by the artificial intelligence lab OpenAI, is a human-like chatbot causing a global sensation. It is now the fastest-growing app in history, hitting 100 million active users in just two months, far faster than the nine months it took previous record-holder TikTok to reach that mark.


Byron Horn-Botha.

This powerful, publicly accessible tool can do whatever you ask, including writing school essays, drafting legal agreements and contracts, solving complex mathematical problems, and passing the medical licensing exam. It also has the potential to revolutionise the way businesses operate. With ChatGPT, you can generate reports quickly and handle customer service requests efficiently. The tool can even write code for your next product offering, conduct market analysis and help build your company website.

Although ChatGPT offers many benefits to businesses, it also raises urgent security questions. One of the critical risks associated with this technology is the power it gives cybercriminals with no coding experience to create and deploy malicious software. With ChatGPT, anyone with bad intentions can quickly develop and unleash malware that wreaks havoc on companies.

Security firm Check Point Research reported that, within weeks of ChatGPT’s release, individuals in cybercrime forums, including those with limited coding skills, used it to create software and emails for espionage, ransomware attacks, and malicious spamming [1]. Check Point said it is still too early to tell if ChatGPT will become the go-to tool among dark web dwellers. Still, the cybercriminal community has demonstrated a strong interest in ChatGPT and is already using it to develop malicious code.

In one example reported by Check Point [2], a malware creator revealed in a cybercriminal forum that they were using ChatGPT to replicate well-known malware strains and techniques. As evidence, the individual shared the code for a Python-based information stealer they had developed with ChatGPT. The stealer searches for, copies, and exfiltrates 12 common file types from a compromised system, including Office documents, PDFs, and images.

ChatGPT increases everyone’s exposure to hacking.

Bad actors can use ChatGPT and other AI writing tools to make phishing scams more effective. Traditional phishing messages are often easy to spot because they are written in clumsy English, but ChatGPT can fix this. Mashable tested this by asking ChatGPT to edit a phishing email. Not only did it quickly improve and refine the language, it also went a step further and added an unprompted blackmail threat aimed at the hypothetical recipient.

While OpenAI says it has strict policies and technical measures in place to protect user data and privacy, the truth is that these may not be enough. ChatGPT scrapes data from the web, potentially including data from your own company, which brings security risks. For instance, data scraping can expose sensitive information, such as trade secrets and financial data, to competitors. There can also be reputational damage if the information obtained through scraping is inaccurate. Moreover, scraped data can open systems to vulnerabilities that malicious actors can exploit.

Given that the attack surface has expanded dramatically with the advent of ChatGPT, what impact does this have on your security posture? Previously, small and mid-sized businesses may have felt secure, thinking they were not worth the trouble of hacking. However, with ChatGPT making it easier to create malicious code at scale, everyone’s exposure to cybercrime has increased significantly.

ChatGPT demonstrates that while the number of security tools available to protect you keeps growing, those tools may not keep pace with emerging AI technologies that increase your vulnerability to attack. Given the spiralling threat of cybercrime, every business needs to be aware of the potential risks posed by ChatGPT and other advanced AI systems, and take steps to minimise those risks.

Measures you can take to protect yourself.

Your first step is to understand just how vulnerable you are. Penetration testing, also known as pen testing, helps protect your data by simulating a real-world attack on your company’s systems, networks, or applications. The exercise aims to identify security vulnerabilities that malicious actors could exploit so you can close them. By exposing weaknesses in a controlled environment, pen testing enables you to fix them, improve your security posture and reduce the risk of a successful data breach or other cyberattack. In the new world of ChatGPT, penetration testing can play a crucial role in helping you safeguard your data and ensure its confidentiality, integrity, and availability.
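
Penetration testing is a broad discipline, but a small illustration helps make it concrete. The Python sketch below shows one tiny reconnaissance step a tester might automate: checking which common TCP ports on an authorised target accept connections. The address and port list are placeholders chosen for this example, and a real engagement goes far beyond a simple port check.

```python
# Minimal sketch of one early pen-testing step: checking which common TCP
# ports answer on a host you are authorised to test. The address and port
# list below are placeholders; a real penetration test goes far beyond this.
import socket

COMMON_PORTS = [22, 80, 443, 445, 3389, 8080]  # SSH, HTTP, HTTPS, SMB, RDP, alt-HTTP


def scan_host(host, ports=COMMON_PORTS, timeout=1.0):
    """Return the subset of ports that accept a TCP connection on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports


if __name__ == "__main__":
    # Only scan systems you own or have written permission to test.
    print(scan_host("192.0.2.10"))  # placeholder address from the TEST-NET-1 range
```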

Companies must also double down on their data resilience strategy and have a solid data protection plan. A data resilience plan outlines the steps a business should take to protect its critical data and systems and how it will restore normal operations as quickly and efficiently as possible if a data breach occurs. It also provides a roadmap for responding to cyber threats, including detailed instructions for securing systems, backing up data, and communicating with stakeholders during and after an incident. By putting a data resilience plan in place, businesses can minimise the impact of cyber threats and reduce their risk of data loss, helping to ensure their organisation’s ongoing success and survival.
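
As a rough illustration of one building block of such a plan, the Python sketch below copies critical files to a backup location and records SHA-256 checksums so that a later restore can be verified against the manifest. The paths are placeholders, and this is no substitute for a full backup product with scheduling, off-site copies and restore drills.

```python
# Illustrative sketch of one element of a data-resilience plan: copy critical
# files to a backup location and record SHA-256 hashes so a restore can later
# be verified. Paths are placeholders; a real plan adds scheduling, off-site
# and immutable copies, and regular restore drills.
import hashlib
import json
import shutil
from pathlib import Path


def backup_with_manifest(source: Path, destination: Path) -> Path:
    """Copy every file under source into destination and write a hash manifest."""
    destination.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for file in source.rglob("*"):
        if file.is_file():
            target = destination / file.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(file, target)
            manifest[str(file.relative_to(source))] = hashlib.sha256(file.read_bytes()).hexdigest()
    manifest_path = destination / "manifest.json"
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path


# Example: backup_with_manifest(Path("/data/finance"), Path("/backups/finance-2023-06"))
```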

Another way of stopping ChatGPT-enabled script kiddies and criminals is immutable data storage. Immutability means data is converted to a write-once, read-many (WORM) format that cannot be deleted or altered. There is no way to reverse the immutability, which ensures that all your backups remain secure, accessible, and recoverable. Even if attackers gain full access to your network, they will still not be able to delete the immutable copies of your data or alter its state.
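
Immutable storage is usually a feature of the backup platform rather than something you script yourself, but one widely used implementation is Amazon S3 Object Lock. The sketch below uses it purely as an illustration; the bucket name and retention period are assumptions, the bucket must have been created with Object Lock enabled, and this is not a description of any specific vendor’s product.

```python
# Illustrative use of Amazon S3 Object Lock as a write-once-read-many (WORM)
# store. The bucket name and retention period are placeholders, and the bucket
# must have been created with Object Lock enabled.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")


def store_immutable_backup(bucket, key, data, retain_days=30):
    """Upload a backup object that cannot be deleted or altered until the retention date."""
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ObjectLockMode="COMPLIANCE",  # compliance mode: no user can shorten or remove retention
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=retain_days),
    )


# Example: store_immutable_backup("example-backup-bucket", "db/2023-06-01.dump", b"...")
```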

By putting the proper protection in place, organisations can realise the many benefits of ChatGPT, while defending themselves against those who use the tool for malicious purposes.

For more information, contact Arcserve Southern Africa, [email protected], www.arcserve.com

[1] www.securitysa.com/*gpt1

[2] www.securitysa.com/*gpt2






