This is part four in a series on the FERC summary of the results of their 2018 CIP audits. The earlier pieces are here:
- The introductory piece about why that document was worthy of your attention
- Automated inventory and why it’s the cornerstone of a strong security assurance program
- Remote Configuration Capture and Control for Serial Assets
“Consider incorporating file verification methods, such as hashing, during manual patching processes and procedures, where appropriate.”
1.2.5. Verification of software integrity and authenticity of all software and patches provided by the vendor for use in the BES Cyber System – from CIP-013.
I once chaired a NERC working group dedicated to addressing the Boreas vulnerability. It’s been on my mind recently because that work took place while the recently departed Mike Assante was the NERC CISO, and it was the best chance I had to work with him over the years.
Boreas was the software-based follow-up to the Aurora vulnerability (note that there was never a “C” in the sequence; that’s significant) and had plenty of similarities:
- It was announced with minimal industry input by a government agency.
- It was very broad and would require a whole set of practices to address properly.
- It was blindingly obvious to those who actually worked with the equipment in question.
The idea behind Boreas, in its entirety, was that if you have a device run by firmware, and someone manages to replace that firmware, they can subvert the function of that device. This is true and obvious, but it did need to be addressed.
This occurred in the aftermath of the Aurora saga. The industry had been subjected to heavy-handed Congressional oversight and had been required by FERC to produce mountains of paperwork to address a problem that boiled down to checking settings and locking out the ability to change them.
Because Mike had political skills, we quickly formed a task force and spent months working up a legitimate set of materials designed to address the described vulnerability. When we realized Boreas had failed to catch the imagination of the regulatory class, we quietly stuck those materials in a drawer in case anyone ever asked. No one ever did.
None of those materials were earth-shattering, but they did represent a reasonable approach to the overall problem of prevention of software tampering – change management, change monitoring, and pre-installation software verification – and software tampering is still a legitimate concern. None of these approaches are particularly specific to OT or firmware. They are common to all forms of software.
In the OT world, software publishers are hardware vendors at heart. They are more cavalier than their IT cousins about formal software security practices, which makes some of the recommended practices for patch verification harder to pull off. Pre-installation software verification is FERC’s focus here.
Although pre-installation software verification is a logical, if small, part of a comprehensive vulnerability management program, it’s still inconsistently applied. It relates to supply chain protection, and to most, that still feels like something out of their control. As we work on closing the last chinks in the CIP armor, the time has come to fill this one in.
Unlike the other lessons learned from the report, this one is tied to a current piece of an as-yet-unimplemented standard: the section of CIP-013 quoted above. The standard goes beyond the lesson learned in the report. It applies whether patching is manual or automated, and automated checking is not that difficult to build into your cybersecurity program once you know you want it.
There are a few ways to verify that a patch (yes, a new revision of firmware is a patch, even if the nomenclature is a bit different) is what the vendor actually released, whether you’re doing it manually or having software do it for you.
If a vendor is following best practices, check the digital signature on the software. This is where VSC helps streamline and automate the storage, distribution, and installation of vendor-signed software.
This same process could be performed manually as part of a patch management process, but the ability to procure and secure authentic updates from a vendor and drop them into your automated inventory instantly shows which systems are in scope. That is a significant increase in accuracy and a huge time savings as well. It’s also good practice for vendors to digitally sign their code, so widespread adoption of signature checking reverberates back up the supply chain and improves practices there, especially if it’s done in support of CIP-013 compliance.
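As a rough illustration, here is a minimal Python sketch of a pre-installation signature check. It assumes the vendor publishes an RSA public key and a detached SHA-256 signature alongside each patch; the file names, key format, and signing scheme are placeholders, not taken from any particular vendor or tool.

```python
# Minimal sketch: verify a vendor-supplied detached signature on a patch file.
# Assumed inputs (illustrative only): vendor_pub.pem (RSA public key),
# patch.bin (the patch itself), patch.bin.sig (detached signature).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def verify_patch_signature(patch_path, sig_path, pubkey_path):
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(patch_path, "rb") as f:
        patch_bytes = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        # PKCS#1 v1.5 over SHA-256 is a common scheme; confirm with the vendor.
        public_key.verify(signature, patch_bytes,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    ok = verify_patch_signature("patch.bin", "patch.bin.sig", "vendor_pub.pem")
    print("Signature valid: safe to stage for installation" if ok
          else "Signature check FAILED: do not install")
```

In an automated pipeline, a check like this would sit as a gate before the patch is staged into the deployment repository, with a failed verification quarantining the file rather than just logging it.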
The second method is tougher to implement because it is more computationally expensive and requires action for each patch rather than for each vendor. Many vendors provide hash values for each file they release, and these hashes can be checked just before installation to confirm that the proper software is about to be installed.
The mechanics, whether for automated or manual verification, are the same as for the digital signature approach. This approach is more labor-intensive, since it requires a database entry for each patch rather than for each vendor, but it can provide a slightly stronger signal that the software is legitimate, since signature spoofing is theoretically possible.
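A hash check is even simpler to script. The sketch below assumes the vendor publishes a SHA-256 value for each file; the expected hash and file name are placeholders.

```python
# Minimal sketch: verify a downloaded patch against a vendor-published SHA-256
# value before installation. The expected hash would normally come from the
# vendor's release notes or hashes file; the value below is a placeholder.
import hashlib


def sha256_of(path, chunk_size=65536):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_patch_hash(path, expected_hex):
    # Case-insensitive compare; vendors publish hashes in either case.
    return sha256_of(path) == expected_hex.strip().lower()


if __name__ == "__main__":
    expected = "0" * 64  # placeholder for the vendor-published SHA-256
    if verify_patch_hash("relay_firmware_v2_3.bin", expected):
        print("Hash matches: proceed with installation")
    else:
        print("Hash mismatch: quarantine the file and contact the vendor")
```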
There’s one final potential control that could be implemented as a form of verification. If you use automated change monitoring with file signatures across the board, you can periodically compare the contents of each system to a trusted, known-good installation. This would not prevent installation of a bad image, but, just as an IDS can’t trigger until bad traffic is already on the network yet still provides detection value, there’s value in knowing that a false software version hasn’t been installed by circumventing your change management controls.
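For illustration, here is a minimal sketch of that kind of baseline comparison, assuming the known-good state is stored as a JSON map of relative path to SHA-256. The directory and baseline file names are hypothetical; a real deployment would get this data from the change-monitoring tool itself.

```python
# Minimal sketch: compare the current contents of a software directory against
# a previously captured known-good baseline of file hashes.
# Assumed baseline format: JSON object mapping relative path -> SHA-256 hex.
import hashlib
import json
import os


def hash_tree(root):
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            h = hashlib.sha256()
            with open(full, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            hashes[rel] = h.hexdigest()
    return hashes


def compare_to_baseline(root, baseline_path):
    with open(baseline_path) as f:
        baseline = json.load(f)
    current = hash_tree(root)
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in current.keys() & baseline.keys()
                     if current[p] != baseline[p])
    return added, removed, changed


if __name__ == "__main__":
    # Hypothetical paths for illustration.
    added, removed, changed = compare_to_baseline("/opt/relay_sw", "baseline.json")
    for label, paths in (("ADDED", added), ("REMOVED", removed), ("CHANGED", changed)):
        for p in paths:
            print(f"{label}: {p}")
```

Run periodically, a report like this is the detection backstop: any added, removed, or changed file that didn’t go through change management becomes a flag to investigate.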