Across the globe, organisations of all sizes are exploring how artificial intelligence (AI), and generative AI in particular, can benefit their businesses. While they are still figuring out how best to use AI, cybercriminals have fully embraced it.
Whether they are creating AI-enhanced malware that steals sensitive data more efficiently while evading anti-virus software, or using generative AI tools to roll out more sophisticated phishing campaigns at scale, the technology looks set to have a massive impact on cybercrime. As an indication of how significant that impact already is, SlashNext’s State of Phishing Report 2023 attributes a 1 265% increase in phishing emails largely to targeted business email compromise attacks built with AI tools.
For businesses, this increase in cybercrime activity carries significant risks, and those risks are not limited to the compromise of customer data. Cyberattacks also bring reputational and financial damage, and even legal liability. Organisations must therefore do everything in their power to ready themselves for the onslaught of cybercriminals using AI tools. That includes ensuring that their own AI use is safe and responsible.
Massively enhanced innovation, automation, and scalability
Before examining how organisations can do so, it is worth discussing what cybercriminals get from AI tools. For the most part, it is the same thing legitimate businesses and other entities are after: significantly enhanced innovation, automation, and scalability.
When it comes to innovation and automation, cybercriminals have built several kinds of AI-enhanced automated hacking tools. These tools allow them to, amongst other things, scan for system vulnerabilities, launch automated attacks, and exploit weaknesses without human intervention. Automation can, however, also be applied to social engineering attacks. Whilst a human-written phishing email may be more likely to be opened, an AI-written version takes a fraction of the time to put together.
All of that adds up to a situation where cybercriminals can launch far more attacks, far more often. That means more successful breaches, and more opportunities to sell stolen data or to extort businesses for its return.
Those increased breaches come at a cost. According to IBM’s 2023 Cost of a Data Breach Report, the average breach in South Africa now costs ZAR 49,45 million. That figure does not account for reputational damage and lost consumer trust, nor for the legal trouble an organisation can find itself in if, by failing to properly safeguard its customers’ data, it has violated data protection legislation or regulations such as the Protection of Personal Information Act (POPIA) or the GDPR.
Education, upskilling, and up-to-date policies
It is clear, then, that cybercriminals’ widespread adoption of AI tools has significant implications for entities of all sizes. What should organisations do in the face of this mounting threat?
A good start is for businesses to ensure that they are using cybersecurity tools capable of defending against AI-enhanced attacks. As any good cybersecurity expert will tell you, however, these tools can only take you so far.
For organisations to give themselves the best possible chance of defending themselves against cyberattacks, they must also invest heavily in education. That does not just mean ensuring that employees know about the latest threats but also inculcating good organisational digital safety habits. This would include enabling multi-factor authentication on devices and encouraging people to change passwords regularly.
It is also essential that businesses keep their policies up to date, especially in the AI arena. There is a very good chance, for example, that employees in many organisations are logging in to tools like ChatGPT with their personal email addresses and using those tools for work purposes. If such an account is then compromised in an attack, sensitive organisational data could end up in the wrong hands.
Make changes now
Ultimately, organisations must recognise that AI is not a looming cybersecurity threat but an active one, and they must start putting everything they can in place to defend against it: the right tools, education, and policies. Failure to do so comes with risks that no business should ever consider acceptable.