This is the final piece in our series on Vulnerability Management (VM) in OT, rounding out the cycle of a vulnerability management program by examining the monitoring phase, which is especially difficult in OT cyber security. Our earlier posts in the series covered the preceding phases of that cycle.
Vulnerability and risk monitoring
In a VM program, "monitoring" can mean several things: watching the web for new threats, watching assets for changes, reporting on and demonstrating current risk rankings, or verifying compliance with expected or tolerated risk levels.
This blog focuses on how monitoring practically applies to an OT environment, and examines the challenges of current methods and tool sets relative to an agent-based approach. The intent is to show how an OT-specific tool, built by OT for OT, significantly improves on poorly adapted IT tools.
Monitoring new threats
There is no shortage of threat intelligence feeds, risk registers, or capable threat hunters to tell you which cyber risks you should worry about. The challenge for OT is taking those threat details and applying them to your specific assets.
This is a challenge on two fronts:
First, how many instances of that specific asset (by OS, software, running services, etc.) do you have, where are they, and are they important? Remember that in OT, not all assets are created equal. A significant risk on a critical asset is a big deal; on the other hand, remediation of a network-based attack vector against assets in layer two of your architecture may be safe to delay or defer.
Second, how do you monitor assets for changes? What if the changes are local to the asset and never communicated over the network? What if that asset is not monitored at all? Passive detection tools can only see what the asset transmits. Real-time reporting on specific parameters allows near-instant visibility, full asset coverage (by asset type and location), and comprehensive monitoring (all aspects of the asset, not just what happens to cross the wire).
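The first front above can be sketched in code: given an advisory describing affected platforms, find the matching assets and rank them by how urgently they need attention. This is a minimal illustration, not any vendor's implementation; the asset fields, advisory shape, and priority weighting are all assumptions chosen to reflect the point that criticality raises urgency while deep network placement can lower it for network-borne vectors.

```python
# Hypothetical inventory and advisory records; all field names and the
# priority formula are illustrative, not from any specific product.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    os: str
    software: set       # installed packages, e.g. {"VendorHMI 4.2"}
    criticality: int    # 1 (low) .. 5 (safety/production critical)
    network_layer: int  # Purdue-style layer; higher = deeper in OT

def affected_assets(inventory, advisory):
    """Return (asset, priority) pairs for assets matching an advisory.

    Priority rises with criticality; for a network-based attack vector
    it falls with network depth, reflecting that remediation on a deep
    layer-two asset may be safe to delay or defer.
    """
    hits = []
    for asset in inventory:
        if advisory["os"] == asset.os or advisory["software"] & asset.software:
            priority = asset.criticality
            if advisory["vector"] == "network":
                priority -= asset.network_layer
            hits.append((asset, priority))
    return sorted(hits, key=lambda pair: pair[1], reverse=True)
```

A critical HMI running the affected software will sort above a less important engineering workstation buried deeper in the architecture, which is exactly the triage question the threat feed alone cannot answer.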
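The second front, detecting changes that are local to the asset, is essentially a snapshot comparison run on the asset itself. The sketch below assumes an agent that records a flat key/value snapshot of local state (services, firmware version, installed software) and diffs it against the previous one; the snapshot format is a simplifying assumption for illustration.

```python
# Sketch of agent-side change detection: compare successive local
# snapshots so changes are caught even when nothing crosses the network
# for a passive tool to hear.
def diff_snapshots(previous, current):
    """Return added, removed, and changed keys between two snapshot dicts."""
    added = {k: current[k] for k in current.keys() - previous.keys()}
    removed = {k: previous[k] for k in previous.keys() - current.keys()}
    changed = {k: (previous[k], current[k])
               for k in previous.keys() & current.keys()
               if previous[k] != current[k]}
    return {"added": added, "removed": removed, "changed": changed}
```

Run on a schedule, this turns a quiet local change, such as a new service appearing or a firmware version shifting, into a reportable event rather than a blind spot.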
During the discussion about vulnerability scanners, we identified challenges with scanning, including the detail captured, the assets in scope, and the timing or frequency of scans. These restrictions produce a subset of asset data and risk profiles that begin to age the moment a scan concludes. The alternative is an agent-based approach, which provides significantly greater asset detail and can be refreshed as frequently as every ten minutes.
When you add reporting and monitoring to risk management, a real-time, comprehensive asset profile mapped against known risks is significantly more accurate, relevant, and useful. As you tune assets to remove unnecessary ports, services, and software, the closed-loop nature of an agent-based approach means each asset's risk profile updates in real time as you remediate risks through software patching or compensating controls.
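The closed-loop idea above can be made concrete with a small sketch: if an asset's risk score is always derived from its current reported profile, then closing a port or patching a package changes the score on the very next agent report, with no rescan needed. The scoring weights and profile fields here are illustrative assumptions, not a real scoring standard.

```python
# Minimal closed-loop sketch: risk is a pure function of the asset's
# latest reported profile, so remediation is reflected automatically.
# Weights (1 per open port, 5 per known-vulnerable package) are
# illustrative only.
def risk_score(profile, known_vulns):
    """Score one asset profile against a set of (software, version) vulns."""
    score = len(profile["open_ports"])
    for sw, version in profile["software"].items():
        if (sw, version) in known_vulns:
            score += 5
    return score
```

Because nothing is cached between reports, the reported risk ranking can never drift from reality the way a weeks-old scan result does.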
Vulnerability scanning tools and passive listening tools provide significant insight and intelligence into pure cyber risk. But when these tools meet the real-world constraints of OT environments, their value wanes.
An agent-based approach delivers detailed asset characteristics in real time, significantly improving vulnerability management capabilities. Closed-loop insight, granular visibility and control, and real-time asset status are just the start.
From identification to remediation to reporting, there is a better solution for vulnerability management in OT.