It has always been difficult to quantify the actual impact of CCTV on crime, whether on internal company premises or in public areas. There have been various attempts to define effective measures, but few situations and measures provide a clear indication of effectiveness.
CCTV operations will typically reply that if they were not there, the crime would be much worse. This is probably true, but it still does not establish whether the client is getting value for money, whether that client is management, the council, or the party to whom a service is contracted. Even if crime is being detected or crime levels are low, to what extent can the CCTV operation take the credit? One needs some kind of standard against which to measure delivery performance in order to know you are getting value for money.
Based on some experience, I have listed some fairly practical measures that you can use to assess whether your CCTV operation seems to be going in the right direction. If you can answer positively to most of these, then you are likely to be getting the results you want. If not, or if you do not know, then these are areas that need to be examined more closely.
1. Are your control room monitors being looked at? It sounds a simple question, but I have seen some control rooms where the screens are being looked at only about 30% of the time. This is often because of people chatting to each other, taking breaks, not focusing on the monitoring task, etc. This is often a supervisory issue – people cannot look 100% of the time, but the viewing process does need to be managed. In some cases management exacerbate this by giving operators additional functions which take them away from the viewing task. You could potentially double or triple your viewing capacity through better control room organisation.
2. Do the people viewing screens have the ability to do this? Not all personnel have good visual analysis skills. In some research studies involving operators detecting incidents in actual video footage, we have found performance ranging from 0% to about 85%. We had to leave out a relatively large number of personnel because their false alarms were so excessive that they clearly had no idea what was going on, and this was exacerbated by a failure to pick up any incident conditions – some people just could not handle viewing video material despite holding operator positions. Is there a basis to evaluate the quality of personnel being supplied and their fitness for purpose, as well as a demonstrated history of performance? If not, how did those people happen to be there?
3. Are personnel being sufficiently trained, including being able to demonstrate such skills in simulated incidents? Knowing what to look for and how to go about it is a critical component of threat awareness. The other simple question in this respect is, “do people know what they are watching?” Asking a couple of simple but pertinent questions is enough to determine this. In addition, do operators know the cameras and their locations, can they identify or track people, and can they use the software to carry out tasks effectively? Finally, are there procedures to follow for certain types of events or incidents, and do operators know them?
4. Have the important cameras been determined for the right time of day, and are these cameras given precedence in viewing? Consistent with this, to what degree are critical areas covered according to a risk analysis of where incidents are most likely to occur?
5. Are the monitor views conducive to actually seeing what is supposed to be happening? Often a large number of camera views get put on the monitors; although they look impressive and help people choose cameras, they allow you to see virtually nothing of the scene itself. Either have fewer views, use spot monitors, or have large screens which still enable you to view intelligently what each camera is showing.
6. Is anything being detected by live surveillance? If so, what proportion of incidents are detected proactively, what proportion are responded to on request, what percentage are reviewable, what percentage are only reviewed after the event, and what percentage of reported incident conditions are missed entirely by the camera system? These are the facts of performance, but they are frequently not even reported.
7. If things are being proactively detected, what proportion of subtle incidents are being picked up versus obvious ones? For example, it is easy to see people fighting in the middle of a street, but not so easy to identify subtle drug dealing on the side.
8. What proportion of incidents are prevented by appropriate intervention, and are there observable behaviours or situational conditions to demonstrate this? I think this is an undervalued way of reflecting the value of CCTV – if you spot suspicious behaviour and, by deploying a security or police officer, make the problem go away, you have delivered a very positive return on the CCTV system by stopping crime. Operators need to be recognised for this.
9. What is the quality of reporting? Can you make sense of what has been reported, and does it have all the relevant particulars that you would need? This goes for verbal, written and computer-based reporting alike.
10. What quality of evidence is collected using CCTV cameras? This is not always something that the operator can influence due to technical factors. However, where operators can control cameras, have they got the relevant evidence recorded in an effective manner?
11. What is the communication quality at the interface between security guards, response personnel and response agencies? Are the right questions asked and the right information given by your operator?
12. What kind of infrastructure is in place for reporting – i.e. are critical response procedures followed, is the information collated, made sense of and reported, and is it turned into intelligence so that proactive action can be taken against such incidents? Is action then taken in response? If none of this is happening, how do you know your CCTV is providing value?
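The proportions asked for in point 6 need very little tooling to track. As a sketch, a reporting spreadsheet or log could be tallied along these lines (the category names and counts below are hypothetical, not drawn from any actual operation):

```python
# Hypothetical incident tallies for one reporting period; the category
# names and figures are illustrative only.
incident_log = {
    "detected_live": 12,         # picked up proactively by operators
    "responded_on_request": 8,   # viewed in response to a call from the ground
    "reviewed_after_event": 15,  # found only in recorded footage afterwards
    "missed_by_cameras": 5,      # reported, but never captured on camera
}

def incident_breakdown(log):
    """Return each category's share (in %) of all reported incidents."""
    total = sum(log.values())
    return {category: 100 * count / total for category, count in log.items()}

for category, share in incident_breakdown(incident_log).items():
    print(f"{category}: {share:.1f}%")
```

Reported month on month, even a simple breakdown like this shows whether the balance is shifting from after-the-event review towards proactive live detection.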
These points represent, to some degree, increasing levels of sophistication in what should be happening in a control room. Asking yourself these questions about your CCTV operation, whether as a manager, client, contractor or installer, is a good way to see whether the scheme is providing a return on investment.
Dr Craig Donald is a human factors specialist in security and CCTV. He is a director of Leaderware which provides instruments for the selection of CCTV operators, X-ray screeners and other security personnel in major operations around the world. He also runs CCTV Surveillance Skills and Body Language, and Advanced Surveillance Body Language courses for CCTV operators, supervisors and managers internationally, and consults on CCTV management. He can be contacted on +27 (0)11 787 7811 or [email protected]