AI presents people and companies with benefits and risks

SMART Access & Identity 2023 Editor's Choice, AI & Data Analytics

Artificial intelligence (AI) and all its derivatives have become a staple in the news over the past few years. Just as video analytics was oversold in its early stages, AI promoters overstated the technology's benefits, and it generally failed to deliver on its promises.

Times have changed though, and AI capabilities have advanced significantly, with even more dramatic advances in the pipeline. In the access control and identity management markets, AI has only recently been seen as a boon, with its ability to manage secure access in both the physical and digital worlds. This trend will continue to grow, although slowly, as physical access control installations tend to have a longer lifespan than other technologies. Most of the development we will see will be in integrating access and building management technologies.

In the digital access and identity management world, AI has moved much faster in areas such as facial recognition, even via a mobile device. These advances will only continue, including identification at a distance via surveillance cameras, as well as behavioural recognition and identification – technologies that have already been developed and deployed. Big Brother will be watching closer than ever in future.

So, while AI will have many benefits, enhancing not only the access and identification markets but also operational efficiency and situational awareness beyond human capabilities, we will also see these technologies used to create and deploy some of the most dangerous cyber threats.

Following are two opinions on the latest AI newsmaker, the ChatGPT chatbot. Many people have been victimised by companies claiming their chatbots provide support to customers, much to the frustration of said customers when they discover the AI is only used to keep people from talking to a real person and getting their problems or questions resolved. By all accounts, this chatbot is the real thing, using deep learning techniques to generate text and conversations often indistinguishable from those created by actual humans. It has taken the news world by storm and is gaining a lot of attention, while also feeding its own learning algorithms with all the people ‘playing’ with it.

In the world of access control, the abilities of ChatGPT present a real threat to individuals and companies already struggling to defend access and identity against phishing attacks.


Cybersecurity dangers behind impressive new technology

By Anna Collard, SVP content strategy & evangelist at KnowBe4 Africa.


It is now possible to use a publicly available artificial intelligence chatbot to generate a complete infection chain, possibly beginning with a spear-phishing email written in entirely convincing, human-like language and eventually leading to a complete takeover of a company’s computer systems.

Researchers at Check Point recently created such a plausible phishing email as a test, using only ChatGPT, a chatbot that uses deep learning techniques to generate text and conversations that can convince anyone a real person wrote them.

In reality, there are many potential cybersecurity dangers wrapped up in this impressive technology, developed by OpenAI and currently available online for free.

Here are just a few of them:

1. Social engineering: ChatGPT’s powerful language model can be used to generate realistic and convincing phishing messages, making it easier for attackers to trick victims into providing sensitive information or downloading malware.

2. Scamming: The generation of text through ChatGPT’s language models allows attackers to create fake ads, listings and many other forms of scamming material.

3. Impersonation: ChatGPT can be used to create a convincing digital copy of an individual’s writing style, allowing attackers to impersonate their target in a text-based setting, such as in an email or text message.

4. Automation of attacks: ChatGPT can also be used to automate the creation of malicious messages and phishing emails, making it possible for attackers to launch large-scale attacks more efficiently.

5. Spamming: The language model can be fine-tuned to produce large amounts of low-quality content, which can be used in a variety of contexts, including as spam comments on social media or in spam email campaigns.

All five points above are legitimate threats to companies and all Internet users that will only become more prevalent as OpenAI continues to train its model.

If the list above managed to convince you, the technology succeeded in its purpose, although in this instance not with malicious intent. ChatGPT wrote all the text from points one to five with minimal tweaks for clarity. The tool is so powerful it can convincingly identify and word its own inherent dangers to cybersecurity.

However, there are mitigating steps individuals and companies can take, including new-school security awareness training. Cybercrime is moving at light speed. A few years ago, cybercriminals used to specialise in identity theft, but now they take over your organisation’s network, hack into your bank accounts, and steal tens or hundreds of thousands of rand.

An intelligent platform like ChatGPT, created with the best intentions, only adds to the burden on Internet users to stay vigilant, trust their instincts, and know the risks involved in clicking on any link or opening an attachment.


A marvel for some, a cybersecurity threat for most

By Stephen Osler, co-founder and business development director at Nclose.


Companies are lining up to invest in ChatGPT, but what risks come with this ingenious tool for creating human-like text?

Microsoft is reportedly investing $10 billion in OpenAI, the owner of the somewhat controversial large language model chatbot ChatGPT. It uses a deep learning technique to generate text and conversations often indistinguishable from those created by actual humans.

ChatGPT has dazzled amateurs and industry experts ever since its launch at the end of November last year. Given a prompt, ChatGPT can answer complex questions, provide suggestions and even debug programming code, all while sounding extremely human.

It is very hard to believe a machine-learning algorithm created the text. Prompted with the question of how it works, ChatGPT explained it uses “a large dataset of text” so that it has “knowledge about a wide range of topics and can respond to a wide variety of questions and prompts”.

On the surface, this might sound like an amazing invention that can be used to explain complex concepts in simple terms, brainstorm creative ideas, or even automate certain tasks like customer support, writing memos or keeping minutes of meetings. However, it also poses a serious threat to cybersecurity.

Because ChatGPT can be used to write code, malicious actors have already started using it to build malware, sites on the dark web and other tools for conducting cyberattacks. Someone with no prior knowledge of coding can theoretically use ChatGPT to produce dangerous malware. OpenAI, the developer of ChatGPT, gave assurances that it has put safeguards in place to restrict use of the bot to create malware, but on various online forums, users are boasting that they can still find workarounds.

The average Internet user can do little to stop these actors from creating malicious software using ChatGPT. It just means that everyone should be extra vigilant of possible malware attacks.

The risk is especially prevalent in phishing emails, a method often used by criminals to serve malware embedded within links or attachments. Usually, poor spelling and incoherent grammar are the most telling traits of a scam email. Everyone knows to be on the lookout for dodgy wording in an email, and then to never click on links or open attachments in such emails, but ChatGPT brings with it a new threat: perfectly written, human-sounding emails that can be very, very convincing.

This means a bad actor can ask ChatGPT to write a perfectly harmless-looking email that can either contain a link that downloads a malicious file or convinces the reader to supply compromising information like login details.

Cybercriminals have become very good at convincing the untrained eye that they are trustworthy. With the advent of ChatGPT, it will become even harder to tell the fakes from the real deal. Internet users should always double, even triple, check that an email’s sender is indeed who they claim to be, and always make sure they are receiving from and sending to the correct addresses.

A legitimate email address is something that an AI language bot like ChatGPT cannot fake. At least, not yet.
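That last piece of advice can be sketched in code. The snippet below is a minimal, illustrative check of the claimed address in an email’s From: header against an allowlist of trusted domains; the `TRUSTED_DOMAINS` set and the `sender_domain_is_trusted` helper are assumptions for the example, not anything from the article, and a real deployment would also rely on protections such as SPF, DKIM and DMARC.

```python
from email.utils import parseaddr

# Illustrative allowlist of domains this organisation trusts (an assumption
# for this sketch; maintain and verify such a list carefully in practice).
TRUSTED_DOMAINS = {"example.com", "partner.example.org"}

def sender_domain_is_trusted(from_header: str) -> bool:
    """Return True only if the From: header's address uses a trusted domain.

    Note: this inspects the claimed address only. The display name is
    deliberately ignored, because attackers routinely forge friendly names
    like "IT Support" while using an unrelated address.
    """
    _display_name, address = parseaddr(from_header)
    if "@" not in address:
        return False  # malformed or missing address
    domain = address.rsplit("@", 1)[1].lower()
    return domain in TRUSTED_DOMAINS

# A convincing display name does not make the address legitimate:
print(sender_domain_is_trusted("IT Support <help@example.com>"))   # True
print(sender_domain_is_trusted("IT Support <help@examp1e.com>"))   # False (lookalike domain)
```

The point of the sketch is the one the article makes: however human the wording of a phishing email becomes, the sending address remains a signal the reader can still inspect.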



© Technews Publishing (Pty) Ltd. | All Rights Reserved.