Let’s face it… Supply chains have been compromised several times in the past. GitHub was exploited to push malicious changes, certificates were stolen and used to sign malware, integration SDKs/machines were lost, and development systems are among the dirtiest of them all. Have you ever seen a security practitioner’s home network? Or a developer’s virtual machines?  Sorry…

To take an optimistic view, the SolarWinds event is an unfortunate cyber security incident, but it may accelerate positive change and raise awareness at multiple levels of organizations. Even prior to the hack, many experts recognized that supply chain compromises were a reality.

We intentionally waited several weeks before publishing this blog to avoid the immediate “kicking of the dog while it was down” without the proper information. Although we still do not have complete information, the dust has settled enough to begin to draw some implications and recommendations.

Briefly, here is what occurred – without the wild theories and potentially disconnected events – a bit about code signing, and what an asset owner can do aside from blind trust.

 

What was the SolarWinds incident?

On December 13, 2020, it was disclosed that SolarWinds – an American company that develops software for businesses to manage their networks, systems, and information technology infrastructure, and whose products often configure or monitor an asset owner’s environment – had been compromised. This prompted CISA to issue an Emergency Directive (ED 21-01). Since then, a number of hypotheses and developments have been shared, and I predict supplementary guidance will continue to be published for the foreseeable future.

The SolarWinds Orion platform product (affected versions 2019.4 through 2020.2.1 HF1) was compromised via the vendor’s software development process: malicious components were inserted into the product, signed, and unknowingly distributed through authorized distribution channels. The tampered product was then installed as a standard business process by customers who had invested in SolarWinds and trusted the product’s integrity.

To paraphrase the official CISA advisory: beyond the installation of compromised software, this is critical software that poses a high likelihood of being used in an active attack, and it has already impacted government agencies and organizations (some of which are critical infrastructure or industrial asset owners).

The tactics used by the malicious parties permit an attacker to gain access to network traffic management systems, compromise assets beyond the initial host of compromise, and gain nearly invisible, unfettered access to an unsuspecting asset owner’s network.

Given the nature of the affected software, it is highly likely that other assets adjacent or connected to the SolarWinds Orion infrastructure within your environment are also gravely compromised (including their credentials). The affected software may also have compromised the integrity and trustworthiness of backups from earlier in 2020, provided an installation of the affected software occurred in your environment.

 

Around the same time, other organizations such as FireEye and Microsoft were also compromised as part of a related and coordinated campaign.

How can an asset owner avoid or mitigate risk in the future?

According to public information, this is what is most important to know:

  • The software development cycle of SolarWinds – one of the most prevalent software vendors in the United States – was compromised to insert malicious code into a signed version of the software.
  • Some organizations were actively compromised and acted upon; others were left sitting in a compromised state.
  • The attackers demonstrated that a determined attack can get through relatively sophisticated processes.

This incident demonstrates challenges that could easily extend to other vendors such as Cisco (e.g., when their devices were rumored to have been tampered with in transit), Microsoft, or even the FOSS community (e.g., attacks targeting npm packages):

  • Companies developing products must utilize a secure development lifecycle (SDLC) to securely engineer, develop, and maintain a product.
  • Even if a vendor uses an SDLC, it is never completely foolproof; a determined attacker is always a risk.

 

The easiest companies to target are the ones offering the most ROI for attackers. To me, supply chain is merely another word for third-party risk; it should be on nearly every company’s conscience, assuming a reasonably developed risk register. And the same goes for insider risks.

 

4 elements for asset owners to address supply chain risks:

  1. Assume any software installation can be used as an attack vector, and manage that supply chain with multiple layers of validation and verification. You cannot trust it implicitly.
  2. Focus on identifying and protecting your most valuable assets and ensure they have extra protection.
  3. Build a programmatic cyber security program that includes multiple levels of security and robust incident response to a supply chain incident, assuming that a software product may be compromised.
  4. Add multiple layers of security in supply chain management.
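As one concrete example of such a validation layer, an asset owner can check a downloaded installer against the digest the vendor publishes out-of-band before it is ever staged for deployment. A minimal Python sketch (the filename and "installer" bytes below are stand-ins, not real SolarWinds artifacts):

```python
import hashlib
import tempfile

def sha256_of(path, chunk_size=65536):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_installer(path, published_digest):
    """Allow staging only when the computed digest matches the published one."""
    return sha256_of(path) == published_digest.strip().lower()

# Stand-in "installer": we obviously cannot ship a real vendor binary here.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend installer bytes")
    installer = f.name

# In practice the digest comes from the vendor's advisory page or a signed
# release manifest -- a channel separate from the download server itself.
published = hashlib.sha256(b"pretend installer bytes").hexdigest()

assert verify_installer(installer, published)        # matches: proceed
assert not verify_installer(installer, "0" * 64)     # mismatch: quarantine
```

The design point is that the digest travels over a different channel than the binary, so a mirror or intermediary that can tamper with one but not the other becomes detectable.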

 

What is code signing and what is its role in the SolarWinds incident?

Code signing is a mechanism whereby you imprint your organization’s digital signature, via cryptographic primitives, to attest that a piece of software came from your organization.  In theory, code signing tells the end customer the code is exactly as the vendor built it and can be trusted (assuming the signing keys were not stolen and the process was not compromised). However, bugs may still exist in the signed artifact, and a disgruntled or malicious person may have installed backdoors IF the artifact was insecurely built or contained weaknesses before it was signed.
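Mechanically, the sign-then-verify contract can be sketched in a few lines. This example uses a raw Ed25519 keypair from the third-party pyca/cryptography package purely for illustration (an assumption on my part; commercial code signing typically rides on X.509/Authenticode certificate chains rather than bare keys):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: the private key should live in an HSM; it is generated
# in-process here only to keep the sketch self-contained.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

artifact = b"binary produced by the build pipeline"
signature = signing_key.sign(artifact)

# Customer side: verify() raises InvalidSignature on any bit-level
# tampering that happens AFTER signing.
verify_key.verify(signature, artifact)  # no exception: artifact is intact

tampered = artifact + b" plus injected code"
try:
    verify_key.verify(signature, tampered)
    verified = True
except InvalidSignature:
    verified = False
assert verified is False

# The SolarWinds limit: if malicious code is inserted BEFORE signing, the
# signature over the tampered artifact is perfectly valid -- signing proves
# origin and post-signing integrity, not that the build itself was clean.
```

The final comment is the crux of this incident: the signature verified correctly precisely because the pipeline, not the signature, was subverted.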

Code signing is an important part of any supply chain security program. However, as another SolarWinds-type event would show, code signing, while part of a security solution, is not infallible.  In other words, security takes getting a lot of things right, and even then it might not be enough – even if it includes:

  • People-Process-Technology (PPT)
  • Technological/architectural diversity
  • Secure development pipelines
  • Organizational security and training
  • Incident and sudden teardown readiness when a product/component is compromised

But wait – didn’t this incident bypass code signing, with the malicious “thing” winding up in our environments anyway, so isn’t code signing useless?  Apparently it did, but code signing remains a useful security tool with vast benefits, and it was not the cause of the SolarWinds incident. The vendor did not have control of its development pipeline, and as with any technology or solution, there is always a case for why you use it and for taking active measures to implement it properly:


The case for signed firmware – An aviation PKI case study from S4x20

The above image shows (from the left) an example of multiple contributors (or feeds) winding up in a product and, ultimately, in multiple hands. At each step there are potential risks or threats, and the PKI for signing and the inputs to that subprocess of an SDLC need to be validated to ensure security makes its way all the way down to the right. If it doesn’t, the cryptographic primitives created by signing are rendered moot. Even with signing, however, there are still bad guys lurking with malicious installers on the Internet and poisoning Google results, so a secure supply chain touches many aspects of an organization.

What does this mean if you are a product development company?

  • In this incident, malicious code made its way into the product development pipeline and out for distribution; it was very likely missed by peer review and signed anyway. Customers nonetheless expect tight control over the supply chain. This is a shift-left philosophy, but code signing is just one part of a security strategy; it is not something to rely upon solely.
  • Always identify and protect the “crown jewels” of your development infrastructure.
    • Pragmatically, this incident implies that if you have “crown jewels”, you should protect them in proportion to their scope and power. If your organization uses management platforms to control and monitor your organization’s security, treat them not as a silver bullet but as part of a holistic security program. Software or hardware in any shape or form is a risk unless consciously managed.
  • Build a programmatic security effort.
    • Train developers to have awareness and implement security, leverage best practices and code enhancement strategies, take care of keys/credentials (even those of your developers), maintain and secure the systems used in the development process (including workstations or virtual machines), and be aware your customers may make you a target.
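One small, automatable piece of that key/credential hygiene is scanning commits for credential-shaped strings before they land in the repository. A rough sketch, with made-up patterns that are illustrative rather than exhaustive (production secret scanners maintain far larger rule sets):

```python
import re

# Made-up, illustrative patterns only; real scanners cover far more formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(r"(?i)(?:password|secret|api_key)\s*=\s*['\"][^'\"]{8,}"),
}

def find_secrets(text):
    """Return (pattern_name, matched_text) pairs for credential-shaped strings."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Simulated staged diff; the key ID is assembled in pieces so this snippet
# does not flag itself.
staged_diff = 'db_password = "hunter2hunter2"\nkey = ' + "AKIA" + "ABCDEFGHIJKLMNOP"
findings = find_secrets(staged_diff)
```

Wired into a pre-commit hook, a non-empty `findings` list would block the commit, which is cheap insurance against developer credentials leaking into a build pipeline.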

 

The SolarWinds software incident is evidence that even large, well-funded companies are at risk of compromise. As a result, asset owners need to build a comprehensive program of security including configuration management, patch management, network segmentation, incident response, etc. to ensure any potential compromise can be limited in effect as much as possible.

In the case of the SolarWinds attack, there is nothing the asset owner could have done to prevent the initial compromise. Aside from the vendor ensuring tight control of its product development process, the asset owner cannot be expected to ensure a secure product. They should still maintain a holistic security program to reduce the likelihood and impact of a successful compromise by:

  • Ensuring security processes are followed and evidenced by sufficient certification and legal documentation.
  • Helping mitigate the circumstances by thinking critically of solutions being deployed or by monitoring for anomalous conditions.
  • Performing security validation testing not just at factory or site levels, but continuously and periodically.
  • Choosing solutions that are secured and designed by functional purpose (e.g., the management zone should never have communicated across zonal boundaries in this case).
  • Preparing security and management teams to act on any anomalous conditions vs. implicitly trusting (e.g., no management server should be beaconing to random DNS domains).
  • Having the means, training, and processes to quickly recover or nip an issue in the bud.
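To make the beaconing point concrete: a first-pass detection for a management server resolving "random" domains can combine an allowlist with a label-entropy check, since algorithmically generated hostnames tend to score high. A sketch with invented domains and an arbitrary threshold (in practice you would baseline what each server normally resolves):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character; DGA-style labels tend to score noticeably higher."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def suspicious(domain, allowlist, entropy_threshold=3.5):
    """Flag queries that are both off-allowlist and high-entropy.

    The allowlist and threshold are illustrative knobs, not tuned values.
    """
    if domain in allowlist:
        return False
    first_label = domain.split(".")[0]
    return shannon_entropy(first_label) > entropy_threshold

allow = {"updates.vendor.example", "ntp.pool.example"}
queries = [
    "updates.vendor.example",                 # normal update check
    "k3x9q2v7mzl0pw4r.c2-staging.example",    # invented DGA-looking label
]
flags = [d for d in queries if suspicious(d, allow)]
```

Entropy alone produces false positives (CDN hostnames look random too), so this belongs as one signal among several, not a verdict on its own.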

One of the key elements of a robust cyber security program is incident response. One of the biggest fallacies in the remediation notes from SolarWinds, Microsoft, and CISA is the suggestion that an “update” can remediate this type of issue once a compromised binary has been found.  What I mean is that the compromised software is often a delivery mechanism for more “bad things” (e.g., it is a method for dropping additional malware), so no software patch can truly know what the original malware may have left behind or added to the affected system… but it would at least stop the original compromised binaries from adding bad things through the same vector.

Therefore, a patch in this case, even if it’s the most impressively engineered solution, cannot ensure the validity and integrity of your industrial environment because the original attackers (or copycats) may have added other malicious “treats” to your systems. This means no update will absolve you of that risk.  Yes – you read that right – you still need to tear down your boxes, rebuild them, harden them, reprovision accounts, configurations, and software.

This does not mean you shouldn’t patch the problem, nor is it an excuse to avoid updates in the future, but it means systems affected by a supply chain compromise require more than a patch, and they need to be:

  • Isolated
  • Archived for forensic purposes
  • Reset of accounts/credentials for all affected assets (and/or those controlled by the compromised software)
  • Rebuilt from a backup taken before the compromise (including adjacent systems)
  • Treated as a full incident – vulnerability, credential, response, and recovery management all in one
  • Followed up on with legal and procurement for post-mortem steps
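The "archived for forensic purposes" step can be made concrete by snapshotting a hash manifest of the isolated host's files before teardown, so investigators can later prove exactly what was present. A minimal sketch (run against a throwaway directory here, standing in for the seized disk):

```python
import hashlib
import json
import os
import tempfile

def hash_tree(root):
    """Walk a directory and return {relative_path: sha256_hex} for every file."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

# Throwaway directory standing in for the isolated host's disk.
root = tempfile.mkdtemp()
with open(os.path.join(root, "service.dll"), "wb") as f:
    f.write(b"suspect binary")

manifest = hash_tree(root)
# Archive the manifest alongside the disk image; sorted JSON keeps it diffable
# against a known-good baseline or a rebuilt system.
evidence = json.dumps(manifest, indent=2, sort_keys=True)
```

Diffing such a manifest against a clean rebuild is also a quick way to spot files the original dropper left behind.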

 

If you think this is bad, consider that any embedded device with an insecure update process can NEVER be considered secure once it has been exploited, because you (from an asset owner perspective) can never ensure device integrity after the fact.  Let us count our blessings.

Unfortunately, this tough love applies to all parties involved, and I do not wish this on any vendor or asset owner.  However, there were lessons learned here, and I think that is the silver lining. For our customers, please know Verve does not use SolarWinds, and we have well-defined processes for security across the board.