Embedded OT Vulnerabilities: An Asset Owner Perspective
What should asset owners be aware of with embedded OT systems and buried vulnerabilities, and what remediation tactics are available?
Let’s face it: supply chains have been compromised many times before. GitHub accounts were exploited to push malicious changes, certificates were stolen and used to sign malware, integration SDKs and build machines were lost, and development systems are among the dirtiest of them all. Have you ever seen a security practitioner’s home network? Or a developer’s virtual machines? Sorry…
To be optimistic, the SolarWinds event is an unfortunate cyber security incident, but it may accelerate positive change and raise awareness at multiple levels of organizations. Even prior to the hack, many experts recognized that supply chain compromises are a reality.
We intentionally waited several weeks before publishing this blog to avoid the immediate “kicking of the dog while it was down” without the proper information. Although we still do not have complete information, the dust has settled enough to begin to draw some implications and recommendations.
Briefly, here is what occurred (without the wild theories and potentially disconnected events), a bit about code signing, and what an asset owner can do – aside from blind trust.
On December 13, 2020, SolarWinds – an American company whose software businesses use to manage their networks, systems, and information technology infrastructure, and which often configures or monitors an asset owner’s environment – was revealed to have been compromised. This prompted CISA to issue an Emergency Directive. Since then, a number of hypotheses and developments have been shared, and I predict supplementary guidance will continue to be published for the foreseeable future.
The SolarWinds Orion platform product (affected versions are 2019.4 through 2020.2.1 HF1) was compromised via the vendor’s software development process: signed malicious components were inserted into the product, which was then unknowingly distributed through authorized channels. Customers who had invested in SolarWinds and trusted the product’s integrity installed the tampered software as a standard business process.
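As a quick illustration, an asset owner triaging an install base might check reported version strings against the affected range. This is a hypothetical sketch – the parsing rules are simplified assumptions about SolarWinds’ version numbering, not an official or vendor-supplied check:

```python
# Sketch: is an installed Orion build inside the affected range
# (2019.4 through 2020.2.1 HF1)? Version format is an assumption.

def parse_version(v: str) -> tuple:
    """Split a version like '2020.2.1 HF1' into comparable parts."""
    base, _, hotfix = v.partition(" HF")
    nums = tuple(int(p) for p in base.split("."))
    # Pad to (major, minor, patch) so '2019.4' compares against '2020.2.1'.
    nums = nums + (0,) * (3 - len(nums))
    return nums + (int(hotfix) if hotfix else 0,)

AFFECTED_MIN = parse_version("2019.4")
AFFECTED_MAX = parse_version("2020.2.1 HF1")

def is_affected(installed: str) -> bool:
    return AFFECTED_MIN <= parse_version(installed) <= AFFECTED_MAX

print(is_affected("2020.2.1"))      # True  -- inside the affected range
print(is_affected("2020.2.1 HF2")) # False -- past the last affected hotfix
```

Inventory data like this is only a starting point for triage, not proof of compromise or safety.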
To paraphrase the official CISA advisory notes: “Aside from the installation of compromised software, this is critical software that poses a high likelihood of use in an active attack, and it has already impacted government agencies and organizations (some of which are critical infrastructure or industrial asset owners).
The tactics used by the malicious parties permit an attacker to gain access to network traffic management systems, compromise assets beyond the initial host of compromise, and gain nearly invisible unfettered access to an unsuspecting asset owner’s network.
Given the nature of the affected software, it is highly likely other assets adjacent or connected to the SolarWinds Orion infrastructure within your environment are also gravely compromised (including their credentials). The affected software may have also compromised the integrity and trustworthiness of backups from earlier in 2020, provided an installation of the affected software occurred in your environment.”
Around the same time, other organizations such as FireEye and Microsoft were compromised as part of a related and coordinated campaign.
According to public information, this is what is most important to know:
This incident demonstrates many challenges that could easily extend to other vendors such as Cisco (e.g., when they were rumored to have had devices tampered with in transit), Microsoft, or even the FOSS community (e.g., targeting npm packages).
The easiest companies to target are the ones offering the most ROI for attackers. To me, “supply chain” is merely another word for third-party risk; it should appear on nearly every company’s risk register, assuming a reasonably developed one. The same goes for insider risks.
Code signing is a mechanism where you imprint your organization’s digital signature via cryptographic primitives to attest that a piece of software came from your organization. In theory, code signing tells the end customer that the code is exactly as the vendor built it and can be trusted (assuming the signing keys were not stolen and the process was not compromised). However, bugs may still exist in the signed artifact, and a disgruntled or malicious person may have installed a backdoor if the artifact was insecurely built or contained weaknesses before it was signed.
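To make that trust boundary concrete, here is a minimal sketch of what signature verification buys – and what it doesn’t. Real code signing uses asymmetric keys (the vendor signs with a private key, customers verify against the vendor’s certificate); since that requires a crypto library, an HMAC stands in for the signature here, and all names and artifact bytes are invented for illustration:

```python
import hashlib
import hmac

# Stand-in for the vendor's signing key. Real code signing uses an
# asymmetric key pair; a shared HMAC key plays that role here.
SIGNING_KEY = b"vendor-signing-key"

def sign(artifact: bytes) -> bytes:
    """Vendor side: produce a signature over the release artifact."""
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).digest()

def verify(artifact: bytes, signature: bytes) -> bool:
    """Customer side: do the received bytes match the signature?"""
    return hmac.compare_digest(sign(artifact), signature)

release = b"orion-installer-bytes"
sig = sign(release)

print(verify(release, sig))                # True  -- untampered artifact
print(verify(release + b"\x90", sig))      # False -- modified after signing

# The SolarWinds failure mode: the implant was inserted *before* signing,
# so the signature verifies even though the artifact is malicious.
backdoored = b"orion-installer-bytes+implant"
print(verify(backdoored, sign(backdoored)))  # True -- signing can't help here
```

The last line is the whole point: a valid signature proves the bytes are what the vendor signed, not that what the vendor signed was clean.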
Code signing is an important part of any supply chain security program. However, as this SolarWinds event shows, code signing is part of a security solution, not an infallible one. In other words, security takes getting a lot of things right, and even then it might not be enough.
But wait – didn’t this incident bypass code signing, and the malicious “thing” wound up in our environments anyway, so isn’t code signing useless? It was bypassed (apparently), but code signing remains a useful security tool with vast benefits; it was not the cause of the SolarWinds incident. The vendor did not have control of their development pipeline, and as with any technology or solution, there is always a case for why you use it and how you take active measures to implement it properly:
The case for signed firmware – An aviation PKI case study from S4x20
The above image shows (from the left) an example of multiple contributors (or feeds) winding up in a product and, ultimately, in multiple hands. At each step there are potential risks or threats, but the PKI for signing and the inputs to that subprocess of an SDLC need to be validated to ensure security makes its way all the way down to the right. If it doesn’t, the cryptographic primitives created by signing are rendered moot. Even with signing, there are still bad guys lurking with installers on the Internet and poisoning Google results, so a secure supply chain touches many aspects of an organization.
The SolarWinds software incident is evidence that even large, well-funded companies are at risk of compromise. As a result, asset owners need to build a comprehensive program of security including configuration management, patch management, network segmentation, incident response, etc. to ensure any potential compromise can be limited in effect as much as possible.
In the case of the SolarWinds attack, there is nothing the asset owner could have done to prevent the compromise. Aside from the vendor ensuring tight control of their product development process, the asset owner cannot be expected to ensure a secure product. They should still maintain a holistic security program – configuration management, patch management, network segmentation, incident response – to reduce the likelihood and impact of a successful compromise.
One of the key elements of a robust cyber security program is incident response. One of the biggest fallacies in the remediation notes from SolarWinds, Microsoft, and CISA is the implication that an “update” can remediate this type of issue once a compromised binary has been found. Compromised software is often a delivery mechanism for more “bad things” (e.g., a method for dropping additional malware), so no software patch can truly know what the original malware may have left on or added to the affected system – though it would at least stop the original compromised binaries from adding bad things through the same vector.
Therefore, a patch in this case, even if it’s the most impressively engineered solution, cannot ensure the validity and integrity of your industrial environment because the original attackers (or copycats) may have added other malicious “treats” to your systems. This means no update will absolve you of that risk. Yes – you read that right – you still need to tear down your boxes, rebuild them, harden them, reprovision accounts, configurations, and software.
This does not mean you shouldn’t patch the problem, nor is it an excuse to avoid updates in the future, but systems affected by a supply chain compromise require more than a patch: they need to be rebuilt from trusted media, rehardened, and have their accounts, configurations, and software reprovisioned.
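As one concrete piece of that rebuild-and-verify work, a rebuilt host can be compared file by file against a baseline captured from trusted installation media. This is a minimal sketch of the idea, not a hardened integrity-monitoring tool – the directory layout and comparison policy are assumptions:

```python
import hashlib
from pathlib import Path

def hash_tree(root: str) -> dict[str, str]:
    """SHA-256 every file under root, keyed by relative path."""
    root_path = Path(root)
    digests = {}
    for path in sorted(root_path.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(root_path))
            digests[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def drift(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Compare a rebuilt host's files against the known-good baseline."""
    return {
        "added":    sorted(set(current) - set(baseline)),
        "missing":  sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }
```

Anything in “added” or “modified” on a freshly rebuilt system deserves an explanation before the host goes back into service; real deployments would also cover configurations, accounts, and firmware, which a file hash alone cannot see.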
If you think this is bad, consider that an embedded device with an insecure update process can NEVER be considered secure once that process has been exploited, because you (from an asset owner perspective) can never ensure device integrity after the fact. Let us count our blessings.
Unfortunately, this tough love applies to all parties involved, and I do not wish this on any vendor or asset owner. But there were lessons learned here, and that is the silver lining. For our customers, please know Verve does not use SolarWinds, and we have well-defined security processes across the board.