Quantifying Cyber Risk
Driving empirical analysis of OT cyber security risk to support effective OT asset management practices.
Put delicately, most individuals working in business and IT domains are well aware of the concept of risk, and OT operators are even more aware of direct (potentially life-threatening) risks and impacts to operations. But at the end of the day, risk is a shared endeavor across both business/IT and operations/OT. Both have their challenges, and both have differing requirements for resolving or reducing risk through a balanced approach.
Regrettably, this balance is made even harder to achieve by cyber security marketing. As CISA director Chris Krebs stated, “vendors need to stop spreading Fear-Uncertainty-and-Doubt (FUD) to asset owners”, peddling the image of a slick hacker in a hoodie unleashing mass destruction with a few keystrokes. Perhaps there is some truth that armies of dedicated and mischievous individuals exist with a desire to disrupt your OT organization, but if we take stock of how engineers built these facilities, a certain amount of risk reduction and assurance is already designed into the truly dangerous processes.
In the spirit of technological evolution and the advent of easy connectivity, the risk landscape in many traditionally manual or analog environments has been transformed by the adoption of commodity technology that makes our lives easier. Evolution or not, securing these optimized environments and addressing new risks in retrospect is not an insurmountable challenge, but mitigating risk by way of replacement is not always an option for balancing the risk equation.
This is particularly true in OT and IT environments (government or private) where legacy systems exist and cannot be ripped and replaced because:
To reduce risk and return balance to an organization’s cyber security program, we need, at a minimum, to protect against the common cyber security vectors behind the types of incidents most likely to occur. We should engineer a solution based on consequences, improving our chances as defenders and administrators to:
Acknowledging the high-level challenge of identifying and managing vulnerabilities that may be exploitable in those systems (whether through a technologically enabled vector or abetted by a human), we need to look at managing exposure. Indeed, vulnerabilities and risks both need to be managed, but neither can be managed without looking at the actual exposure of those vulnerabilities. This is the fragile game of balance: risk, cost, reduction, and vulnerability/exposure.
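That balance can be sketched as a toy scoring model (the function, factor names, and weights below are illustrative assumptions, not a standard scoring method): residual risk falls as compensating controls drive exposure down, even when the underlying vulnerability is severe.

```python
def residual_risk(likelihood: float, impact: float, exposure: float) -> float:
    """Toy model: each factor is in [0, 1]; exposure scales the raw risk.

    A fully shielded system (exposure = 0) drives residual risk to zero
    even if the vulnerability itself is severe.
    """
    for factor in (likelihood, impact, exposure):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("factors must be in [0, 1]")
    return likelihood * impact * exposure

# Severe, high-impact vulnerability, but heavily shielded: low residual risk.
print(residual_risk(0.9, 1.0, 0.1))
```

The multiplicative form is the design choice worth noting: it encodes the article's argument that exposure is not just another additive input but a gate on the whole risk calculation.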
Managing exposure is almost synonymous with applying mitigating or compensating controls: reducing the overall likelihood of a threat occurring, limiting its potential impacts, and shielding a system that carries vulnerabilities. This shielding is like blackout curtains blocking sunlight; a window is a hole in an external structure that allows sunlight to penetrate into the room, separated from the outside, when we, the owner (operator), want it to.
There are additional threats to consider, such as unauthorized access, internal error (a child falling through), degradation of ideal environmental conditions (temperature, etc.), and gaps in monitoring (a sensor/alarm that is disabled while the window is open).
Those are simple things to manage, and IT and OT controls overlap, as has been proven over and over by those who specialize in those environments. Still, the question remains: looking at the overall surface, how do I reduce the risk of exposure? Clearly, in our window example, those threats and the inherent vulnerabilities of a traditional window are amplified in a basement or ground-level suite versus the second or third story.
If we look at RDP vulnerabilities such as BlueKeep, they look absolutely terrifying on the surface, with high CVSS scores, easy exploitation, and “wormable” attributes. Fortunately, if you add compensating controls such as protecting direct access, host hardening (NLA), limiting RDP sessions to those that originate from a hardened jumpbox, using VPNs, monitoring, and multiple layers of firewalls, this vulnerability isn’t so extreme.
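Part of assessing that exposure is simply asking whether the RDP port is reachable from a given vantage point at all. A minimal sketch (the function name and defaults are illustrative; a real assessment would probe from several network segments, not just one host):

```python
import socket

def rdp_reachable(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds.

    A failed connection (refused, filtered, or timed out) suggests the
    service is shielded from this vantage point, regardless of whether
    the host itself is patched.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If a firewall or jumpbox policy makes this return False from every untrusted segment, the vulnerability's practical exposure is already dramatically reduced before patching enters the picture.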
If logic follows, the presence of a vulnerability does not necessarily mean it is exploitable, or even that the asset is vulnerable at all. This is especially true with BlueKeep (and some of the other RDP vulnerabilities). If Network Level Authentication (NLA) is enabled, or the RDP service is disabled altogether, a system carrying the vulnerability may not be vulnerable at all because of limited exposure or conditions that render it moot.
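That reasoning can be captured as a simple conjunction of conditions (a sketch, assuming BlueKeep's pre-authentication path; the function and parameter names are illustrative, not a formal assessment model):

```python
def bluekeep_exploitable(vulnerable_build: bool,
                         rdp_enabled: bool,
                         nla_required: bool,
                         network_reachable: bool) -> bool:
    """A vulnerable build alone is not enough to be exploitable.

    The RDP service must be running, the attacker must have a network
    path to it, and NLA must be absent (NLA blocks BlueKeep's
    pre-authentication exploit path by demanding credentials first).
    """
    return (vulnerable_build
            and rdp_enabled
            and network_reachable
            and not nla_required)

# Unpatched host, but NLA is on: the finding exists, the exposure does not.
print(bluekeep_exploitable(True, True, True, True))
```

Every `and` in that return statement is a compensating control an asset owner can pull, which is exactly why a raw vulnerability count overstates risk.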
This circles us back to the original statement eloquently articulated by Mr. Krebs: how do we create actionable security that actually secures systems without selling needless equipment or solutions through fear? By examining vulnerabilities, maintaining a detailed asset catalog, and leveraging heuristics that indicate best practices, while using validated system information demonstrating where a system actually sits (and its compensating controls), we arrive at something reasonable and not FUD-driven: exposure.
Exposure is a conversation the ICS and critical infrastructure community needs to have. Asset owners are becoming aware (kudos to the community for that), but the accountability and responsibility for selling concrete solutions falls upon the vendor (as it should). Asset owners are also growing frustrated with unsubstantiated product security claims, and they need solutions that help them solve problems today!
For more information, get in touch with us!