Risk and threat analysis today is often based on a qualitative classification system, for example the use of risk matrices. Risks are identified and grouped according to criteria such as likelihood of occurrence and impact if the risk event occurs. These criteria are typically ranked in categories, for example ‘low’, ‘medium’ and ‘high’.
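As a minimal sketch of such a matrix, the snippet below combines likelihood and impact categories into an overall rating. The category names, the scoring and the example risks are illustrative assumptions, not a standard.

```python
# Minimal sketch of a qualitative risk matrix. The categories, scoring and
# example risks are illustrative assumptions, not a standard.

LEVELS = ["low", "medium", "high"]                        # ordered categories
SCORE = {level: i + 1 for i, level in enumerate(LEVELS)}  # low=1 ... high=3

def risk_rating(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact categories into an overall rating."""
    combined = SCORE[likelihood] * SCORE[impact]          # ranges from 1 to 9
    if combined >= 6:
        return "high"
    if combined >= 3:
        return "medium"
    return "low"

# Classify a few hypothetical risks identified for a site.
risks = {
    "intruder breaches perimeter": ("medium", "high"),
    "CCTV system outage": ("low", "medium"),
    "extended power failure": ("high", "high"),
}
for name, (likelihood, impact) in risks.items():
    print(f"{name}: {risk_rating(likelihood, impact)}")
```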
This is a useful starting point, particularly in requiring decision-makers to identify the key risks facing a particular entity or location.
Once this classification has been completed, a host of follow-up questions present themselves. Some examples are listed below, followed by a short numerical sketch of how two of them might be tackled:
• What are the relationships and dependencies between the risks? How strong are these relationships?
• Does the occurrence of one risk event make another more likely? How much more likely?
• What factors are associated with particular risk events happening? How strong is the influence of a particular factor on a given outcome?
• What if we can’t afford to tackle all of the high impact/high probability risks? Which risk events should be targeted first, given there is only a limited budget available?
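As a sketch of how the second and fourth questions might be answered quantitatively, consider the following. Every probability, impact, cost and mitigation effect is invented purely for illustration.

```python
# Illustrative sketch only: every figure below is invented for the example.

# Does the occurrence of risk A make risk B more likely, and by how much?
p_b = 0.10          # baseline annual probability of event B
p_b_given_a = 0.30  # probability of B in years when A also occurs
print(f"B is {p_b_given_a / p_b:.1f}x more likely when A occurs")

# With a limited budget, which risks should be tackled first?
# One option: rank mitigations by expected loss avoided per unit of cost.
risks = [
    # (name, annual probability, impact, mitigation cost, probability after mitigation)
    ("perimeter breach", 0.20, 500_000, 80_000, 0.05),
    ("server-room fire", 0.02, 2_000_000, 30_000, 0.01),
    ("vehicle theft", 0.30, 100_000, 50_000, 0.10),
]

def benefit_per_cost(risk):
    name, p, impact, cost, p_after = risk
    return (p - p_after) * impact / cost

for name, p, impact, cost, p_after in sorted(risks, key=benefit_per_cost, reverse=True):
    avoided = (p - p_after) * impact
    print(f"{name}: expected loss avoided {avoided:,.0f} for cost {cost:,.0f} "
          f"(ratio {avoided / cost:.2f})")
```

The particular numbers are not the point; the structure is. Once probabilities and impacts are estimated numerically, these questions become calculations rather than judgement calls alone.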
A more quantitative approach
To address these sorts of questions, a more quantitative approach is needed. This requires mathematical models that produce a numerical output, together with data to drive the analysis.
In the world of insurance, actuaries have been studying risk for centuries, using just such an approach to understand the cover that insurance companies can safely provide to their policyholders. In this context, ‘safely’ means that the insurer will be able to pay all future claims when these are made. The threat of insolvency has provided a strong incentive to develop a more granular understanding of risk, and regulation to protect the policyholders who rely on this cover also plays a part.
In the field of probability and statistics, the scientific study of uncertainty has become ever more sophisticated as soaring computer power and the increased availability of data have expanded what is possible. Decades of academic research have produced powerful modelling tools that have been used to analyse all sorts of situations, from determining whether a new medicine is truly effective, based on clinical trials, to identifying which economic factors have the greatest effect on the share market.
Risk models
A wide range of model types offers different ways of exploring risk. Some seek to link ‘explanatory’ factors with a particular outcome, answering questions such as: how strong is the influence of each factor? Can some factors be ignored once others have been taken into account? Other models look at how certain variables change over time and whether there are trends and patterns in how they do so. Still other models can be used to understand how risk events are distributed over a geographic area: is there clustering? Is it a coincidence, or a genuine effect?
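As a sketch of the first kind of model, the following fits a regression of simulated incident counts on two candidate factors. The data, factor names and effect sizes are invented, and the statsmodels library is assumed to be available.

```python
# Sketch of linking explanatory factors to an outcome with a regression.
# The data are simulated purely to illustrate the workflow.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Two candidate explanatory factors for monthly incident counts at a site.
staffing_level = rng.normal(10, 2, n)  # genuinely influences incidents (by construction)
day_length = rng.normal(12, 1, n)      # no real influence in this simulation
incidents = rng.poisson(np.exp(2.0 - 0.15 * staffing_level))

X = sm.add_constant(np.column_stack([staffing_level, day_length]))
model = sm.GLM(incidents, X, family=sm.families.Poisson()).fit()

# Coefficients answer "how strong is the influence of each factor?";
# p-values indicate which factors can be dropped once the others are included.
print(model.summary())
```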
All models share the feature that only those effects and relationships that can be rigorously justified by the data and evidence are accepted. This scientific process of elimination can lead to some surprising results: commonly held beliefs and ‘common sense’ conclusions do not always stand up to the test. When they do not, this suggests interesting new avenues for further analysis and sometimes a novel or powerful approach to managing a given risk that would not otherwise have been identified.
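One simple way to picture this process of elimination is a permutation test, sketched below with simulated data: a seemingly ‘commonsense’ difference between weekend and weekday incident counts is only accepted if it is unlikely to have arisen by chance. Here, by construction, there is no real difference.

```python
# Sketch of the 'process of elimination': a permutation test on simulated data.
# An apparent difference between two groups is only accepted if it is unlikely
# to have arisen by chance. All figures are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Daily incident counts: the 'belief' is that weekends are worse than weekdays,
# but both are simulated from the same distribution (no real difference).
weekday = rng.poisson(3.0, 200)
weekend = rng.poisson(3.0, 80)
observed_gap = weekend.mean() - weekday.mean()

# How often does a gap at least this large appear if the labels are irrelevant?
pooled = np.concatenate([weekday, weekend])
gaps = []
for _ in range(10_000):
    rng.shuffle(pooled)
    gaps.append(pooled[:80].mean() - pooled[80:].mean())
p_value = np.mean(np.abs(gaps) >= abs(observed_gap))

print(f"observed gap {observed_gap:.2f}, p-value {p_value:.3f}")
# A large p-value means the apparent 'weekend effect' does not stand up to the test.
```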
Of course, the use of models involves judgement and subjectivity, and it is important not to overstate their accuracy. They are tools to assist decision-makers in thinking about risk, not crystal balls that can predict the future. However, with increasing amounts of data becoming available in a form that can be processed and analysed, they offer a powerful way of exploring and understanding risk and the related uncertainties.