Systems management is required to ensure security and reliability for operational technology (OT) systems. While IT has a long history of robust, proven policies and procedures, those practices do not typically extend well into the world of OT.
The reasons range from the diversity of OT systems (from relays and PLCs to server-class HMIs and everything in between) to the fact that IT tools and procedures are typically built for homogeneous, robust computing environments.
Systems Management: IT vs. OT
OT is neither homogeneous nor robust, so most IT practices applied to OT environments need to be tuned to those very specific systems. Even more challenging, within a single OT environment there are multiple systems (from different vendors, of different vintages, etc.) that require different handling, both in the tools used and in the manner and frequency with which actions are carried out.
Not only does OT need its own set of rules, it often needs multiple sets per specific endpoint.
Organizations interested in extending IT practices into OT environments need to agree on the desired end result, but be flexible in the chosen path or process.
Most OT environments struggle significantly when adapting IT policies (or even best-practice activities) to OT because the IT tools simply don’t fit. As a result, most OT teams cover only a small percentage of assets and practices with those tools and must rely on manual intervention and creativity for the rest.
For example, patching with an automated tool like SCCM or WSUS works reasonably well for Windows-based assets but does not cover Linux/Unix applications. Add the fact that many OEMs support only a subset of patches, and OT teams using SCCM must filter out and individually manage specific subsets of patches for specific target systems.
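To make that filtering problem concrete, here is a minimal sketch (the asset classes, KB numbers and approval lists are all hypothetical) of narrowing a vendor patch feed to the subset each OEM actually supports:

```python
# Hypothetical sketch: filter a vendor patch feed down to the updates
# each OEM has approved for its asset class. Data is illustrative only.

OEM_APPROVED = {
    "vendor_a_hmi": {"KB5005565", "KB5006670"},
    "vendor_b_historian": {"KB5005565"},
}

def approved_patches(asset_class, available_patches):
    """Return only the patches the OEM supports for this asset class."""
    allowed = OEM_APPROVED.get(asset_class, set())
    return [p for p in available_patches if p in allowed]

feed = ["KB5005565", "KB5006670", "KB5007186"]
print(approved_patches("vendor_b_historian", feed))  # ['KB5005565']
```

In practice each asset class ends up with its own approval list and cadence, which is exactly the "multiple sets of rules per endpoint" problem described above.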
The result is that functions normally associated with ITSM (asset inventory, provisioning management, patch management, configuration management, disaster recovery and incident response) are left unmanaged or applied at a local or business-unit level without the rigor, process or consistency you would see in IT.
Context is King for OT Systems Management
The great news is that more and more OT environments are embracing automated, aggregated tool sets to provide context and maximize results. By adopting agent-based and real-time agentless profiling and management tools on OT endpoints, coupled with additional contextual data (such as asset location, criticality and owner), OT security practitioners can filter and focus information with precision.
This focus allows for accurate, efficient and consistent application of not only first-pass security measures (like patching) but also the application, tracking and reporting of second-pass security measures (i.e., compensating controls).
The solution lies in building a robust, contextual, 360-degree view of OT assets, reported in a single pane of glass across all assets. Rich endpoint data, combined with metadata (e.g., operational impact) and third-party data (e.g., vulnerability, backup, patch and whitelisting data), gives OT practitioners context specific to their exact assets.
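As an illustration, merging those three data sources into one asset profile can be sketched as follows (all field names and values are hypothetical, not any particular product's schema):

```python
# Sketch: build a 360-degree asset record by merging endpoint data,
# operational metadata and third-party security data. Illustrative only.

endpoint = {"hostname": "HMI-03", "os": "Windows 10", "patches": ["KB5005565"]}
metadata = {"hostname": "HMI-03", "criticality": "critical", "owner": "Ops-North"}
third_party = {"hostname": "HMI-03", "open_vulns": 4, "last_backup_ok": True}

def merge_views(*views):
    """Fold several per-asset views into a single contextual record."""
    record = {}
    for view in views:
        record.update(view)
    return record

profile = merge_views(endpoint, metadata, third_party)
print(profile["criticality"], profile["open_vulns"])  # critical 4
```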
Knowing, for example, that a certain class of asset is critical to operations while others are considered supporting assets allows for a more reasonable and sustainable backup program. In this case, critical assets might get a daily full backup with weekly offsite storage, while supporting assets get a weekly full backup with monthly offsite storage.
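That tiering can be expressed as a simple policy lookup; the schedules below are the illustrative ones from the example above:

```python
# Sketch: assign backup schedules by asset criticality.
# The tiers and frequencies are illustrative, not prescriptive.

BACKUP_POLICY = {
    "critical":   {"full_backup": "daily",  "offsite": "weekly"},
    "supporting": {"full_backup": "weekly", "offsite": "monthly"},
}

def backup_schedule(asset):
    """Map an asset to its backup tier; anything non-critical is supporting."""
    tier = "critical" if asset.get("criticality") == "critical" else "supporting"
    return BACKUP_POLICY[tier]

hmi = {"name": "HMI-01", "criticality": "critical"}
print(backup_schedule(hmi))  # {'full_backup': 'daily', 'offsite': 'weekly'}
```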
Knowing where to draw the line between the two is where efficiencies are gained: designing the protective activity to the level of real need lets teams do only as much as they actually need to.
Similarly, once deployed, routine maintenance such as patching and software security or feature updates becomes much more manageable. For example, when the BlueKeep vulnerability (CVE-2019-0708) emerged, those with robust, real-time asset profiles could prioritize which assets needed patching immediately, which could wait and which would be used for testing, rather than following the manual, linear approach to patching that many still use today.
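A minimal sketch of that triage logic, assuming a simple asset-profile schema (the field names are illustrative, not any particular platform's):

```python
# Sketch: bucket vulnerable assets into patch-now / test-first / patch-later
# groups using profile context. Schema and data are illustrative assumptions.

def triage(assets, cve):
    groups = {"patch_now": [], "test_first": [], "patch_later": []}
    for a in assets:
        if cve not in a["vulns"]:
            continue  # not affected, nothing to schedule
        if a["test_candidate"]:
            groups["test_first"].append(a["name"])   # validate the patch here
        elif a["criticality"] == "high" and not a["compensating_controls"]:
            groups["patch_now"].append(a["name"])    # exposed and high impact
        else:
            groups["patch_later"].append(a["name"])
    return groups

fleet = [
    {"name": "HMI-01", "vulns": {"CVE-2019-0708"}, "criticality": "high",
     "compensating_controls": [], "test_candidate": False},
    {"name": "ENG-02", "vulns": {"CVE-2019-0708"}, "criticality": "low",
     "compensating_controls": [], "test_candidate": True},
    {"name": "HIST-03", "vulns": set(), "criticality": "high",
     "compensating_controls": [], "test_candidate": False},
]
print(triage(fleet, "CVE-2019-0708"))
```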
The ability to remediate the highest risks first (i.e., the assets most impactful to safe operations and those with weak or faulty compensating controls) is a huge leap in cybersecurity maturity.
This type of approach also gives you the context to make smart, informed decisions based on empirical, near-real-time data, and it provides the mechanism to make changes.
When a traditional OT environment can’t patch for BlueKeep, the next step is to accept the risk or manually tune each endpoint to disable Remote Desktop. With an agent-based approach, the Remote Desktop service can be disabled instantly (either fleet-wide or selectively), applying an effective interim protection while the patch is deployed.
In essence, an OT systems management approach with robust endpoint profiles and the ability to remediate provides the following five benefits:
- Insight into all hardware and software in the network to ensure vulnerabilities are identified quickly
- Properly updated and configured systems to reduce opportunities for cyber attacks
- Operationally efficient system updates that automate key operational tasks
- Consistent reporting and monitoring across IT and OT for simplified progress documentation
- Effective advanced security controls built with proper visibility and access to the underlying endpoints and network data
OT Systems Management: Oil & Gas Pipelines Case Study
To see how this works, let’s discuss an oil and gas pipeline client with a geographically-distributed environment (long haul SCADA). Prior to deploying their real-time asset inventory with aggregation of all assets and 360-degree context, they had little-to-no awareness or visibility into risk or how best to reduce it when identified.
With a more robust OT systems management (OTSM) platform, they can take any new threat or vulnerability and immediately filter it against very specific corporate risk and operational-constraint parameters to arrive at contextual risk.
For example, they can filter their entire list of known vulnerabilities down to critical risks on high-impact assets by location, owner and facility type. They can further filter that subset to highlight assets that failed their last backup or that don’t have whitelisting in lockdown mode.
This simple analysis explicitly identifies how many assets (and of what type, and where) are:
- Subject to a critical risk
- Operationally high-impact
- Lacking a recent backup to ‘fall back on’
- Lacking anti-malware protection
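The filter chain above can be sketched as a single pass over the asset inventory (the records and field names are illustrative):

```python
# Sketch: narrow the full asset list to the few that meet all four
# criteria above. Asset records here are illustrative examples.

assets = [
    {"name": "RTU-12", "risk": "critical", "impact": "high",
     "last_backup_ok": False, "whitelist_lockdown": False},
    {"name": "HMI-03", "risk": "critical", "impact": "high",
     "last_backup_ok": True, "whitelist_lockdown": True},
    {"name": "WS-07", "risk": "medium", "impact": "low",
     "last_backup_ok": False, "whitelist_lockdown": False},
]

actionable = [
    a for a in assets
    if a["risk"] == "critical"         # subject to a critical risk
    and a["impact"] == "high"          # high-impact operational asset
    and not a["last_backup_ok"]        # no recent backup to fall back on
    and not a["whitelist_lockdown"]    # no anti-malware lockdown in place
]
print([a["name"] for a in actionable])  # ['RTU-12']
```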
The result is a focused list of ten or twenty risks to act on, as opposed to the hundreds or thousands they typically see. Providing context, filtering and acting on it is what will make OT systems management a success for most operational entities.
Without context, OT systems management is a struggle.