According to Gartner, augmented cybersecurity leadership ties human talent to technology capabilities to balance organisational growth aspirations against cyber risk. Gartner says future security and risk management leaders will be AI-enabled, human-centric decision-makers who can steer effectively through turbulent times.
The global research house found organisations are increasingly focused on personalised engagement as an essential component of an effective security behaviour and culture program (SBCP). They list key findings, including:
• When cybersecurity efforts are harmonised with business change, the cybersecurity leader's agency improves.
• Accelerated digital transformation is now dependent on predictable operations; however, fragmented responsibility leads to higher costs, drops in quality, exposure to threat actors and non-compliance with regulations.
• Cybersecurity leaders and their teams are suffering from widespread burnout and attrition, which erodes effectiveness and increases organisational cyber risk.
• New laws and precedents expose cybersecurity leaders to personal liability, similar to that of more traditional officer roles.
AI can bring many advantages, including cutting through the noise of massive data volumes and reducing human error caused by fatigue. It also addresses skills attrition, which leaves huge gaps in companies' analytic capabilities – gaps AI can help fill. Herein lies the root of resistance: people do not understand that human intervention will always be required, even in an age where quantum computing is expanding rapidly. Google’s Sycamore quantum computer is one example; it can reportedly complete calculations in seconds that would take the supercomputer Frontier 47 years.
A Forrester report notes generative AI exploded into consumer awareness with the release of Stable Diffusion and ChatGPT, driving enterprise interest, integration, and adoption. The report details the departments most likely to adopt generative AI, their primary use cases, threats, and what security and risk teams will need to do if they are to defend against this emerging technology.
According to this report, discussions around generative AI are dominated by interest, anxiety, and confusion. The release of these platforms went viral almost immediately, garnering wide attention and speculation, along with plenty of concerns from security researchers. Forrester advises security and risk teams to adapt to how their enterprise plans to use generative AI, or they will find themselves unprepared to defend it.
Resistance to augmented AI in security software
Forrester says today’s security leaders worry about the impact on their security team first. They agree it will change how security programs operate, but it will change workflows for other enterprise functions well before that happens. They go on to note that, unfortunately, many CISOs tune out news about new technologies, considering it a distraction. The caveat with that approach is that it can lead to tomorrow’s emergency when the security program learns, for example, that the marketing team plans to use a large language model (LLM) to produce marketing copy and expects it to do so securely.
They advise us to think in terms of code, not natural language, and note that one of the interesting ways to subvert or make unauthorised use of generative AI is finding creative ways to structure questions or commands. While bypassing safety controls online is reportedly fun for hobbyists, those same bypasses could allow generative AI to leak sensitive data such as trade secrets, intellectual property, or protected data.
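To make that risk concrete, the sketch below shows one way a security team might wrap an enterprise LLM integration with simple input and output screening. It is a minimal illustration only: the patterns, the `screen_text` and `guarded_completion` names, and the `call_llm` callable are all hypothetical and not drawn from the Forrester report; a real deployment would rely on data-loss-prevention tooling and classifiers rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; placeholders for whatever an organisation
# actually classifies as sensitive (trade secrets, identifiers, etc.).
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:confidential|trade secret|internal only)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-style identifier
    re.compile(r"\b[A-Z]{2}\d{6,10}\b"),    # generic internal asset ID
]

def screen_text(text: str) -> list[str]:
    """Return any matches that suggest sensitive content is present."""
    hits = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def guarded_completion(prompt: str, call_llm) -> str:
    """Wrap an LLM call with basic input and output screening.

    `call_llm` is any callable that takes a prompt string and returns the
    model's response; it stands in for whichever generative AI service the
    enterprise actually uses.
    """
    if screen_text(prompt):
        raise ValueError("Prompt blocked: possible sensitive data in input")
    response = call_llm(prompt)
    if screen_text(response):
        return "[response withheld: possible sensitive data in output]"
    return response
```

Screening of this kind does not stop a determined prompt-injection attempt, but it illustrates where such controls sit: between the user, the model, and the data the organisation cannot afford to leak.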
It is noted that security and risk professionals know the danger and complexities inherent in managing suppliers. Emerging technologies create new supply chain security and third-party risk management problems for security teams, and introduce additional complexity given that foundation models are so large that detailed auditing of them is impossible.
It is widely acknowledged that AI success requires modern security practices. Many of the security technologies needed to secure a firm’s adoption of generative AI already exist within the cybersecurity domain; API security and privacy-preserving technologies are two examples, and both are introducing new controls to secure generative AI.
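One such control is a privacy-preserving pre-processing step placed in front of a third-party AI API. The sketch below, with hypothetical `pseudonymise` and `restore` helpers, shows the idea of swapping obvious personal identifiers for opaque tokens before a payload leaves the organisation and restoring them when the response comes back; it is an assumption-laden illustration, not a description of any particular vendor's control.

```python
import re
import uuid

# Hypothetical redaction step: here only e-mail addresses are tokenised,
# but the same pattern extends to other identifiers.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Swap e-mail addresses for opaque tokens, keeping the mapping locally."""
    mapping: dict[str, str] = {}

    def _replace(match: re.Match) -> str:
        token = f"<pii:{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_RE.sub(_replace, text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values into the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

In use, the redacted text and its token map never leave the organisation together: only the pseudonymised payload is sent to the external API, and the mapping is applied to the response locally.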
Machine-learning–assisted auditing of static application security testing (SAST) results can reproduce the contextual awareness and security expertise of human reviewers, sharply reducing the need for manual auditing. There are many positives attached to this. SAST analyses an application’s source code, bytecode, or binary code for security vulnerabilities, and the US National Institute of Standards and Technology (NIST) notes that static analysis tools are one of the last lines of defence for eliminating software security vulnerabilities during development or after deployment. A detailed discussion of SAST will be the topic of my next article on the benefits of augmented AI in security.
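As a taste of what that article will cover, here is a deliberately small sketch of the kind of check a SAST tool automates: walking a program’s syntax tree and flagging call sites that warrant review. The list of risky calls and the `flag_risky_calls` name are illustrative assumptions; production tools track data flow and context rather than matching call names.

```python
import ast
import sys

# Calls that commonly warrant a closer look in Python code.
RISKY_CALLS = {"eval", "exec", "os.system"}

def flag_risky_calls(source: str, filename: str = "<input>") -> list[str]:
    """Walk the syntax tree of Python source and report risky call sites."""
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = None
            if isinstance(node.func, ast.Name):
                name = node.func.id
            elif isinstance(node.func, ast.Attribute) and isinstance(node.func.value, ast.Name):
                name = f"{node.func.value.id}.{node.func.attr}"
            if name in RISKY_CALLS:
                findings.append(f"{filename}:{node.lineno}: suspicious call to {name}()")
    return findings

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path, encoding="utf-8") as handle:
        for finding in flag_risky_calls(handle.read(), path):
            print(finding)
```

The value of machine-learning assistance lies in triaging the long list of findings a check like this produces at scale, surfacing the ones a human expert would actually act on.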