In the physical security industry, the perception is often that cybersecurity is someone else’s problem. From the end user’s perspective, the integrator is required to install and configure equipment that meets their security policy. From the integrator’s point of view, the end user is responsible for specifying their cybersecurity requirements, and if they don’t specify anything, that may be exactly what they get. For many manufacturers, cybersecurity is left up to the integrator and the end user to consider and configure during installation and maintenance of the equipment.
The truth is that everyone owns a part of delivering a cybersecure system. The news constantly reports new cyber attacks on security and Internet of Things (IoT) products, and the more IoT devices you have, the more targets you offer. As recent distributed denial of service (DDoS) attacks demonstrate, these devices are not just targets, but potential platforms from which hackers can launch cyber attacks against others. A physical security system needs to be specified, designed, configured and maintained in a manner that meets a company’s cybersecurity policy and reduces its vulnerability to attack.
As an integrator or end user, you need to configure, deploy and maintain the system in a secure manner. To do that you need to understand not only the capabilities of the manufacturer’s equipment, but also the processes they have in place to support the product’s security posture. The key areas to learn more about are the manufacturer’s processes around:
• Secure development.
• Independent testing.
• Response support.
Secure development
Does the manufacturer develop their products using a secure development lifecycle (SDLC) process? You may find that many manufacturers, if they conduct a security assessment at all, don’t do so until after the product has been developed. At that point it is hard to fix security-related issues in the product itself: the fixes tend to be workarounds or restrictions placed on how the product is installed, configured or used.
A secure development process starts with security and includes it in every step of the development process. As a result, the product is designed to:
• Include the necessary security features and capabilities.
• Reduce the attack surface.
• Minimise product vulnerabilities.
There are a number of different models that can be used, but in general, the process is something like the following:
• Requirements – include security requirements and risk assessment.
• Design – identify design requirements from a security perspective, conduct architecture and design reviews, and perform threat modelling.
• Coding – coding best practices and static analysis.
• Testing – vulnerability scanning, penetration testing and fuzzing.
• Release – open security issues are part of the release decision.
The requirements phase starts the process. In addition to the typical product requirements, the development team would include security requirements and controls that maximise the protection of the device or software in its operating environment while still providing product functionality and usability. The product team would also conduct a risk assessment, considering how someone might try to exploit the software or device and what needs to be done in the product’s design to reduce or mitigate that risk.
In the design phase, the requirements are reviewed by the product team and are clarified as necessary. Then the architecture and the overall design of the device or software is established. Threat modelling is conducted and architecture and design are iterated as necessary to reduce the attack surface and to prevent or mitigate threats.
During the coding phase, the team is careful to avoid the accidental introduction of vulnerabilities. Like other quality issues, coding defects are the cause of many commonly exploited software vulnerabilities. A team will typically use a combination of training, static analysis tools and code reviews to reduce the vulnerabilities that result from these coding defects. There should also be strict source code control: secured servers, revision tracking to log changes and appropriate access control to ensure that only authorised personnel can modify code. When found, vulnerabilities should be entered into the tracking system, like functional defects, for evaluation and resolution.
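To make the idea of a coding defect concrete, here is a minimal, purely illustrative Python sketch of the kind of issue static analysis tools and code reviews are meant to catch: a database query built by string concatenation (open to SQL injection) next to the parameterised form that removes the weakness. The table, column and function names are invented for the example and are not taken from any particular product.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Coding defect: building SQL by string concatenation allows SQL injection
    # if 'username' carries attacker-controlled input such as "' OR '1'='1".
    return conn.execute(
        "SELECT id, role FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the database driver handles quoting, closing the
    # injection path. Static analysis tools typically flag the pattern above
    # and recommend this form instead.
    return conn.execute(
        "SELECT id, role FROM users WHERE name = ?", (username,)
    ).fetchall()
```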
The testing phase would target in-depth vulnerability scanning and penetration testing to validate the security of the product. Other techniques such as fuzzing may also be used to test for problems that result from invalid, unexpected or random input, a common approach used by hackers. As in the coding phase, vulnerabilities and defects should be entered into the tracking system for evaluation and resolution.
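As a rough illustration of fuzzing, and not a substitute for dedicated coverage-guided fuzzing tools, the Python sketch below feeds random byte strings to a hypothetical parse_packet function and reports any exception other than the expected validation error. Both the parser and its packet format are invented for the example.

```python
import random

def parse_packet(data: bytes) -> dict:
    # Hypothetical parser standing in for a device's real protocol handler.
    if len(data) < 4:
        raise ValueError("packet too short")
    return {"type": data[0],
            "length": int.from_bytes(data[1:3], "big"),
            "payload": data[3:]}

def fuzz(iterations: int = 10_000) -> None:
    random.seed(0)  # fixed seed so a failing run can be reproduced
    for i in range(iterations):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(64)))
        try:
            parse_packet(blob)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:
            # Any other exception is a potential robustness or security issue.
            print(f"iteration {i}: {type(exc).__name__} on input {blob!r}")

if __name__ == "__main__":
    fuzz()
```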
Release of the device or software cannot be approved until after the assessment and evaluation of open security issues. Critical security vulnerabilities would prevent release of the software or device.
Independent testing
Does the manufacturer have the product tested by an independent party? You may find that many manufacturers don’t have anyone outside the company evaluate the product. It is good practice to have an independent third party look for and identify weaknesses and then demonstrate whether they are exploitable, leading to a compromise of the software or device. The testing is typically conducted by an independent party using a team with extensive experience as security researchers, ethical hackers and penetration testers.
There are three approaches to penetration testing:
• Black box testing where the third party has no information about the environment. They approach testing in a similar way to how an attacker would. This is perhaps more realistic, but it also means that it can take longer to find the vulnerabilities that do exist.
• White box testing requires the third party to have complete information of the environment. They typically have access to product documentation, source code and other information. The additional information often allows them to find vulnerabilities more quickly than black box testing.
• Grey box testing is a hybrid approach where some, but not all, of the information is available to the third party.
Once vulnerabilities are identified and demonstrated, the testing team may provide recommendations to help the manufacturer fix the issues. The manufacturer may then resubmit the software or device so that the fix can be validated and a final report created. The report provides transparency and an independent evaluation of the software or device for integrators and end users.
Response support
Does the manufacturer have a process to respond to critical issues with installed products? You can’t stop thinking about cybersecurity once the product is released. Cybersecurity is constantly evolving with new attack techniques being developed and new vulnerabilities being discovered and exploited all the time. The manufacturer should, on a continual basis, perform vulnerability scans on the software or device using a variety of tools, databases and security sources looking for and resolving new vulnerabilities as they are uncovered.
For example, in the United States there is a government repository of standards-based vulnerability information known as the National Vulnerability Database (NVD). The NVD is updated whenever a new vulnerability is added to the Common Vulnerabilities and Exposures (CVE) dictionary. The product should be checked against the updated database. In addition to this ongoing testing, security researchers may report suspected vulnerabilities to the manufacturer.
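As a hedged sketch of what such an ongoing check might look like, the Python snippet below queries the public NVD REST API (version 2.0) for CVE entries matching a keyword, using the third-party requests library. The search keyword is a placeholder; a real check would use the product’s own name or CPE identifier, and the exact response fields should be confirmed against the current NVD documentation.

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def search_cves(keyword: str, limit: int = 5) -> None:
    # Ask the NVD for CVE records whose descriptions mention the keyword.
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        descriptions = cve.get("descriptions", [])
        summary = descriptions[0].get("value", "") if descriptions else ""
        print(f"{cve.get('id', 'unknown')}: {summary[:120]}")

if __name__ == "__main__":
    search_cves("network camera firmware")  # placeholder keyword
```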
If a vulnerability is found, the manufacturer should use a method such as the Common Vulnerability Scoring System (CVSS), an open industry standard for assessing the severity of a vulnerability. The score is a value from 0 to 10 derived from several factors that take into account the ease and impact of an exploit. Then, depending on the score, a manufacturer should take action as specified in their vulnerability policy (a simple mapping of score to action is sketched after the list below). For example, with a CVSS score of:
• 9.0 or higher: The manufacturer may issue a patch or update for the current version of the affected product as soon as is reasonably practicable.
• 7.0 to less than 9.0: The manufacturer may include a fix for the vulnerability in the next release or firmware update.
• Less than 7.0: The manufacturer may include a fix for the vulnerability in the next major release or firmware update.
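As a sketch only, the Python function below mirrors the example policy tiers above; any real manufacturer’s thresholds and actions are defined by its own vulnerability policy and may differ.

```python
def response_action(cvss_base_score: float) -> str:
    # Map a CVSS base score (0.0-10.0) to the example policy tiers above.
    if not 0.0 <= cvss_base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if cvss_base_score >= 9.0:
        return "Patch or update the current version as soon as practicable"
    if cvss_base_score >= 7.0:
        return "Include a fix in the next release or firmware update"
    return "Include a fix in the next major release or firmware update"

if __name__ == "__main__":
    for score in (9.8, 7.5, 4.3):
        print(f"CVSS {score}: {response_action(score)}")
```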
For those vulnerabilities that are critical, a quick response is required to minimise the opportunity for hackers to exploit them. A response team made up of security, development and quality assurance engineers who are knowledgeable about the software or device should evaluate how to address these critical security issues. The team should determine what immediate resolution may be available to mitigate the vulnerability.
For example, attack vectors often rely on specific protocols, so an immediate resolution may be to temporarily disable or block that protocol until a patch can be developed and made available (a simple sketch of such a mitigation follows this paragraph). The team should create an advisory to notify integrators and end users about the vulnerability and its resolution. If a more permanent solution is required, the team should begin to develop, test and quality-assure the necessary patch or update. When it is available, the advisory is updated, letting integrators and end users know that the patch is available. This notification can be critical because some industry regulations require that end users evaluate and install critical security patches within a specified number of days of them becoming available.
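As one hedged illustration of such a temporary mitigation, the sketch below adds a firewall rule on a Linux gateway to drop inbound traffic for the affected protocol until the patch arrives. The choice of UPnP/SSDP on UDP port 1900 is purely an example; the actual protocol, port and point of enforcement would come from the manufacturer’s advisory, and the command requires root privileges.

```python
import subprocess

def block_protocol(port: int, proto: str = "udp") -> None:
    # Drop inbound traffic for the affected protocol as an interim measure
    # until the vendor's patch or update can be installed.
    subprocess.run(
        ["iptables", "-A", "INPUT", "-p", proto, "--dport", str(port), "-j", "DROP"],
        check=True,
    )

if __name__ == "__main__":
    block_protocol(1900, "udp")  # example: temporarily block UPnP/SSDP discovery
```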
Conclusion
As compared to the traditional network IT supply chain, physical security manufacturers, integrators and end users have often lagged behind in supporting, configuring and deploying cybersecure systems. No one group on its own can guarantee the cybersecurity of a system, but by working together to cover all requirements, capabilities and responsibilities, it is possible to provide a robust cybersecurity solution.
To help everyone to be successful, cybersecurity needs to start with the manufacturer. It is far more challenging to add security to a device or software that wasn’t originally designed with a security mindset. As an integrator or end user, you need to understand how the manufacturer applies cybersecurity best practices to the design, testing and support of their products. There are many detailed questions you may want to ask to understand the manufacturer’s practices, but the following three questions can help to start the conversation:
• What are your development practices and where is security included in the process? What evidence can you share?
• What are your practices around independent testing of products? Do you have any reports that you can share?
• What do you do to stay up to date with current vulnerabilities and how do you alert integrators and the end user to critical issues? Do you have some examples that you can share?
With these questions, you can get an understanding of the capabilities and processes the manufacturer has in place to develop, test and support the products you need. Then, as an integrator or end user, you will be better placed to configure, deploy and maintain the system in a secure manner.
For more information contact Tyco Security Products, +27 (0)82 566 5274, [email protected], www.tycosecurityproducts.com