We have become accustomed to passwords, fingerprint readers, and antivirus software. We use one-time PINs to authenticate transactions and activate VPN software to stay safe on public networks. We are well aware of threats like identity theft, ransomware, and phishing emails, which fuel a cybercrime scourge that cost the world just shy of $10 trillion in 2024.
However, cybersecurity is still too often pigeonholed as a purely technical topic. This is beginning to change, and three trends indicate that we are progressing towards recognising cybersecurity as a social and business imperative.
Risk, not response, will lead security
Cyberthreats continue to dominate risk registers and publications such as the Allianz Risk Barometer Report and IRMSA's Risk Report 2024. Companies and their leaders are very aware of and even proactive about cyberthreats.
However, an increasingly costly arms race between cybercrime and cybersecurity is made worse by the tendency to buy and deploy a new security solution in response to every security problem. It is a game the customer eventually loses, whether to cost or to complexity.
Instead, experts advocate risk as the departure point: determine the most mission-critical assets in an organisation, build security around those first, and spread outwards from there. Analyst firm Gartner has coined a term for this approach: continuous threat exposure management (CTEM).
This approach may seem obvious, yet it is a radical departure from established cybersecurity tactics and sales models. It works very well, however, and we will see more support for risk-prioritised cybersecurity in 2025.
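The risk-first idea above can be sketched in a few lines of code. This is a minimal, illustrative example in the spirit of CTEM, not a real implementation: the asset names and scores are hypothetical, and the multiplicative risk score stands in for the far richer models actual programmes use.

```python
# Illustrative sketch of risk-prioritised security planning: score each
# asset by business criticality and threat exposure, then address the
# highest-risk assets first. All names and scores are hypothetical.

from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    criticality: int  # 1-5: business impact if compromised
    exposure: int     # 1-5: how reachable and attacked the asset is

    @property
    def risk(self) -> int:
        # Simple multiplicative score; real programmes use richer models.
        return self.criticality * self.exposure


assets = [
    Asset("customer-database", criticality=5, exposure=4),
    Asset("marketing-site", criticality=2, exposure=5),
    Asset("payroll-system", criticality=5, exposure=2),
    Asset("dev-wiki", criticality=2, exposure=2),
]

# Work the register from the highest risk downwards, then spread from there.
for asset in sorted(assets, key=lambda a: a.risk, reverse=True):
    print(f"{asset.name}: risk={asset.risk}")
```

The point of the sketch is the ordering step: security effort follows the risk ranking rather than reacting to whichever problem or product appears next.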
AI will amplify data governance issues
After many years as a technical or fringe business practice, artificial intelligence (AI) has become the focus of most business leaders (whether they understand it or not). Data quickly became inseparable from AI, which needs it to function.
This marriage has amplified long-standing data problems. Companies used to be able to skirt them: as long as they adhered to regulations and felt satisfied with their usable data, they did not bother much with managing, cleaning, and securing the rest.
AI has upset that apple cart. For example, data leakage used to be about disgruntled employees stealing customer lists or someone irresponsibly sending financial statements via a private email address. Now, companies worry that someone will feed their year-end spreadsheet into ChatGPT. The catch is that the person does this to improve productivity – they want to do a better job faster – and who would discourage that?
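One way organisations respond to this kind of leakage is to screen text before it leaves the building. The sketch below is a deliberately minimal, hypothetical example of a pattern-based check that could flag sensitive content before it is pasted into an external AI tool; the labels and patterns are illustrative, and real data-loss-prevention products use far more sophisticated detection.

```python
# Minimal, illustrative pattern-based data-leakage check. The patterns and
# labels below are hypothetical examples, not a production rule set.

import re

SENSITIVE_PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "financial keyword": re.compile(r"\b(?:EBITDA|net profit|payroll)\b",
                                    re.IGNORECASE),
}


def flag_sensitive(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]


# Example: a line from a hypothetical year-end spreadsheet.
print(flag_sensitive("Q4 net profit was up 12%; contact cfo@example.com"))
```

A check like this does not stop a determined leaker, but it can catch exactly the well-intentioned, productivity-driven mistake described above.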
Better data management and integration are rising from technology black sheep to top executive priorities, a trend highlighted in numerous surveys, including those from the MIT Technology Review. This trend is inseparable from data governance and security. During 2025, businesses will increasingly prioritise data security and governance as a competitive strategy, rather than just a compliance exercise.
Deepfakes will make users more sceptical and aware
AI's impact will not be confined to data. I believe it will also change user behaviours and attitudes. Humans are the weakest link in security: our prejudices and distractions make it easier for criminals to goad us into harmful actions. The classic example is clicking a dangerous link in a phishing email or falling for a romance scam on social media.
Until recently, we trusted our critical thinking, regardless of its accuracy. We assumed we were above average at judging threats and opportunities, and we readily believed that the bad things that happen to others would not happen to us. This attitude has been a gift to cybercriminals: over 91% of all cyberattacks start with a phishing email. Training people to become security-aware is crucial, but it is not enough. People do not make these mistakes because they are stupid. It is much more complicated than that.
People make these mistakes because they are overconfident in their ability to spot a scam or an attack. I believe that generative AI – specifically deepfake content – will dramatically affect that. AI-generated content means we can no longer trust what we see or hear.
There are early signs of this growing scepticism, such as the majority of people who now express concern about deepfakes. I anticipate that this concern will grow in 2025 and beyond, and perhaps we can weaponise it against the scourges of fake news and cybercrime.
© Technews Publishing (Pty) Ltd. | All Rights Reserved.