As cyber threats continue to escalate, it’s becoming increasingly clear that artificial intelligence (AI) and machine learning (ML) are playing a role in the arsenal of modern hackers. However, according to Martin Potgieter, CIO at Nclose, it’s important to avoid getting caught up in the hype and instead focus on separating fact from fiction.
“There’s definitely a use case for AI in cybercrime,” says Potgieter. “ChatGPT, for all its failings and limitations, can be used by attackers for smart social engineering. It’s creative enough to support that attack vector and I think that’s probably one of the first places where AI will really gain traction in this space. However, we are still some ways away from full-blown attacks powered by super-intelligence that nobody can detect.”
This is because high-end, equally intelligent managed detection and response (MDR) systems are capable of detecting sophisticated threats. AI remains the dumb cousin, unable to match the leaps of ingenuity that human hackers make. That limitation, however, does not diminish its ability to inject convincing, natural-sounding language into social engineering attacks, and the technology is more than capable of supporting truly devious attacks that slip past human defences. People still expect dodgy emails to come with equally dodgy writing. The human element is the weakest link in any security system – and it is where AI is set to truly shine.
“At this point, AI is proving itself to be another useful tool in the hacker’s arsenal,” says Potgieter. “It will introduce a whole new level of phishing and is likely to move from this foundation into more complicated and intelligent attacks. This is where security teams and companies should be looking right now – into defences that limit or inhibit the success of social engineering attacks.”
In today’s digital landscape, combatting cyber threats requires a human touch. Organisations need to make employees aware of the risks and provide constant training so that staff recognise and report potential threats rather than inadvertently clicking on malicious links. Continuous education is more important than ever: as the threats become more sophisticated, so must the training and awareness.
But it’s not just hackers who can leverage AI to their benefit. As organisations work to bolster their security posture, they should invest in tools and technologies that are equally capable of getting smarter in their detection and defence abilities.
“AI is just as capable of giving defenders intuitive and intelligent functionality, and solutions being put in place today are already using AI and ML to enhance their abilities,” says Potgieter. “It’s important not to get distracted by AI and ML buzzwords and panic. The best way forward is with security solutions and teams that have an innate understanding of how to balance the technology with reality.”
Traditional security isn’t about to be overrun by machines. AI is not going to step inside the system and tear it to the ground, leaving every organisation gasping in its wake. The reality is that AI and ML capabilities today are still limited, constrained by their own complexity, and are, at best, tools that assist hackers in their attacks.
“To protect your business, you don’t need to tear down your existing systems or spend significant sums on different types of AI defences; you need only ensure that your security service provider is aware, agile and prepared,” concludes Potgieter. “Attacks that use AI and ML are just an evolution of cyberattacks that we can defend against. All it takes is skills and expertise and a hefty dose of reality.”