Antivirus vs. Whitelisting
Are antivirus signatures dead? Hear an ICS security professional's advice on using the right cyber security tool at the right time.
We are often asked by owner/operators about using application whitelisting tools in their environments. Many are hesitant to move beyond traditional antivirus tools to the superior approach of whitelisting. My hope is to alleviate fears around whitelisting and to outline a proven, safe, and successful approach to deploying this technology.
If you are not familiar with application whitelisting, it is an alternative approach to malware prevention. A whitelist of approved executables and applications that are allowed on your systems is created and all other executables are denied. This essentially locks down your system to a known state. It’s the opposite of how antivirus or blacklisting works, where signature files are used to detect malware on your system.
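To make the default-deny idea concrete, here is a minimal sketch of how a whitelisting check differs from signature-based blacklisting. The file contents, hashes, and application names are hypothetical; real agents use kernel-level enforcement, but the decision logic is the same: anything not on the list is denied.

```python
import hashlib


def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of a file's contents as hex."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical whitelist: hashes of the only executables allowed to run.
APPROVED_HASHES = {
    sha256_of(b"known-good-hmi-app"),
    sha256_of(b"known-good-historian"),
}


def is_allowed(executable_bytes: bytes) -> bool:
    """Default-deny: execution is permitted only if the hash is whitelisted.

    Contrast with antivirus, which default-allows and blocks only files
    matching a known-bad signature.
    """
    return sha256_of(executable_bytes) in APPROVED_HASHES
```

Note that nothing needs updating when new malware appears: an unknown file fails the lookup and is denied by default.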
Antivirus is a reactive approach that attempts to clean an infected file after it reaches the asset, provided it recognizes the file as infected. AV signatures are always one step behind the bad guys because new signature files are produced only after a known malware infection has been discovered.
This practice also requires updating signature files periodically, which means either a connection to the internet through a proxy or removable media to transfer the files. Either method introduces additional risk to your environment. Whitelisting, by contrast, is well suited to OT environments: they are mostly static, which minimizes the time spent maintaining the whitelist. Best of all, there are no signature files to update.
Creating a deployment plan for your application whitelisting journey is essential for success. The first step is to gather a robust and accurate asset inventory. Without a comprehensive inventory, you cannot create an effective plan. Once the inventory is complete, there are several factors to consider when planning your deployment. The first is deciding which computers can be protected. Most application whitelisting tools are compatible with Windows XP SP3 and newer operating systems. If you are using older systems like Windows NT or Windows 2000 Server (I know you are out there!), these systems will not work with whitelisting. If you are deploying whitelisting as part of a larger cyber security program, you should consider upgrading these antiquated systems or isolating them on the network.
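As a rough sketch of this triage step, the inventory can be split into machines that can take an agent and legacy machines that need upgrading or isolation. The hostnames and the minimum-supported version are assumptions for illustration; check your vendor's actual compatibility matrix.

```python
# Hypothetical asset inventory. "nt_version" is the Windows NT kernel
# version: NT 4.0 = (4, 0), 2000 = (5, 0), XP = (5, 1), Windows 10 = (10, 0).
inventory = [
    {"host": "HMI-01", "nt_version": (10, 0)},
    {"host": "HIST-01", "nt_version": (5, 1)},    # Windows XP
    {"host": "LEGACY-01", "nt_version": (4, 0)},  # Windows NT 4.0
]

# Assumption: XP SP3 is the oldest OS the whitelisting agent supports.
MIN_SUPPORTED = (5, 1)

# Machines that can receive an agent vs. those to upgrade or isolate.
protectable = [a for a in inventory if a["nt_version"] >= MIN_SUPPORTED]
upgrade_or_isolate = [a for a in inventory if a["nt_version"] < MIN_SUPPORTED]
```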
Once you have gathered your list of computers, it’s time to determine how to deploy the agents. Some whitelisting tools have remote agent deployment methods. In this case, you can use system management tools that may already exist in your environment, such as BigFix, SCCM, Active Directory or Verve Security Center. If you have no deployment method available, you will need to visit each computer manually to install the agent. If this is the case, it is worthwhile to schedule time with the operators to plan access to their consoles without disrupting their duties.
In most cases, OT operators also require proper change management procedures prior to deployment, so you should schedule the appropriate change management processes during the planning stage. We are often asked whether a whitelisting agent will negatively impact the operations of the HMI or server on which it is deployed. For the tools we work with, such as Carbon Black, there is minimal impact on performance. The key to effective and safe operations is ensuring the whitelist includes every application necessary to run the process.
This directly leads to the final part of planning, choosing which machines to use in your “soak” test. These machines should be a representative sample of your operations environment. They will serve as the first computers you lock down to verify the accuracy of your whitelist. Select computers that will not disrupt operations. The whitelist may not be correct on the first attempt and may require tuning.
Begin your deployment process by using a deployment tool or manually installing the agents on each endpoint. For automated remote deployment, the agent should be pushed to a limited number of computers. This will minimize the risk of network congestion – “low and slow” as they say in the OT industry.
ALWAYS DEPLOY WHITELISTING AGENTS IN DISABLED MODE!! I mention this because most whitelisting agents have several modes of protection, and even the most basic “monitor” mode (not actively blocking) will typically have some form of tamper protection enabled. I have found myself in a situation where communications were not set up correctly and the agent wasn’t able to check in with the whitelisting server. When tamper protection is enabled, it typically requires rebooting into safe mode to remove the agent or make changes to it. Reboots are bad in operations, so ALWAYS deploy in disabled mode.
Simulation mode in application whitelisting refers to the agents’ ability to simulate lockdown mode without actually blocking anything. A log is created to show all blocked files and allows you to create an effective rule set based on this information.
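The difference between simulation and lockdown can be sketched as follows. The mode names, file names, and baseline are hypothetical; the point is that simulation mode logs what would be blocked while still allowing everything to run, which is exactly the data you need to tune the whitelist.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("whitelist-agent")

APPROVED = {"hmi.exe", "historian.exe"}  # hypothetical baseline whitelist


def check_execution(filename: str, simulate: bool = True) -> bool:
    """Return True if execution proceeds.

    Simulation mode: log what *would* be blocked, but allow it to run.
    Lockdown mode: actually deny anything not on the whitelist.
    """
    if filename in APPROVED:
        return True
    if simulate:
        log.info("SIMULATION: would block %s", filename)
        return True  # nothing is actually blocked yet
    log.warning("LOCKDOWN: blocked %s", filename)
    return False
```

Reviewing the “would block” log over a few weeks of normal operation reveals the legitimate executables that still need rules before lockdown.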
After you have deployed all of your agents, start moving them into simulation mode. Again, you should follow the “low and slow” approach. The agents will go through an initialization phase in which they catalog all current executables on the system, which temporarily increases resource utilization on the endpoint.
A good rule of thumb is to move four or five computers at a time into simulation mode and never move two or more computers that serve the same function simultaneously. It is recommended to leave the computers in simulation mode for a few weeks. This will aid in creating an accurate list of file execution blocks and whitelists.
Alerts should also be configured to track blocked files which will help during the lockdown phase to verify everything is functioning normally.
When all endpoints are in simulation mode, it is time to create whitelists. There are several methods for this. The simplest is to use a current snapshot of your executable files as the baseline and build out the list from there. In this approach, each computer’s whitelist is created from the current snapshot of the files on that computer, meaning anything currently on the computer will be allowed to execute and anything new will be denied by default. This is the easiest approach, but not always the safest.
If a computer already harbors a virus infection, this approach will whitelist the malware. The preferred method is to start with publisher whitelisting and apply these whitelists globally. This means approving known, signed publishers of files in the environment. For example, approve known good publishers like Microsoft, Adobe, OEM applications like Emerson (if you are running an Emerson control system), and any other relevant software publishers.
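A minimal sketch of the publisher-based approach, assuming a hypothetical catalog of files observed during the simulation soak along with the publisher each one is signed by (real agents read this from the Authenticode signature). Signed files from trusted publishers are approved globally; everything else, including unsigned files, falls through to per-file review.

```python
# Hypothetical catalog from the simulation soak: each file and the
# publisher it is signed by (None = unsigned).
observed_files = [
    {"path": r"C:\Windows\System32\svchost.exe",
     "publisher": "Microsoft Corporation"},
    {"path": r"C:\DeltaV\bin\operate.exe", "publisher": "Emerson"},
    {"path": r"C:\Temp\dropper.exe", "publisher": None},
]

# Assumption: publishers vetted as trustworthy for this environment.
TRUSTED_PUBLISHERS = {"Microsoft Corporation", "Emerson"}


def allowed_by_publisher(f: dict) -> bool:
    """Approve any file signed by a trusted publisher; deny the rest
    pending an explicit per-file rule."""
    return f["publisher"] in TRUSTED_PUBLISHERS


# Unsigned or unknown-publisher files go to manual review.
needs_review = [f["path"] for f in observed_files
                if not allowed_by_publisher(f)]
```

Note how the pre-existing unsigned `dropper.exe` is not silently baselined, which is exactly the weakness of the snapshot-only approach.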
This is a good time to decide if you are going to create different whitelists based on computer function. It is recommended that you build a specific whitelist for each computer type. For example, operator workstations and HMIs are in one group and domain controllers in another. This will give you granularity when determining what files to approve and for which assets.
Once the publishers are approved and the groups are created, you should review the blocked files from the simulation soak test. Then, create rules for each blocked file and assign that rule to the appropriate whitelist group(s). Now create a “trusted directory” for use in the following scenario. You are preparing to patch software on a computer and the whitelist is active.
To the whitelisting agent, all the files transferred to the computer are new, unapproved files and will be blocked on execution. To get around this, you can create a trusted directory anywhere on the network, copy installer files into it beforehand, and the whitelisting server will automatically approve those files. The best part is that you can turn this on and off from the server side, so it is not active until it’s needed.
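The trusted-directory logic amounts to a path check gated by a server-side switch. The share path below is a made-up example, and real products enforce this in the agent, but the sketch shows why the feature is safe: when the toggle is off, the trusted directory confers no approval at all.

```python
from pathlib import PureWindowsPath

# Hypothetical staging location the whitelisting server auto-approves.
TRUSTED_DIR = PureWindowsPath(r"D:\approved-installers")


def is_trusted_source(path: str, enabled: bool) -> bool:
    """Files under the trusted directory are auto-approved, but only
    while the feature is switched on from the server side."""
    if not enabled:
        return False
    return PureWindowsPath(path).is_relative_to(TRUSTED_DIR)
```

Toggling `enabled` off between maintenance windows keeps the directory from becoming a standing bypass of the whitelist.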
We have made it to the final step. It’s time to put the agents in lockdown mode. During the planning phase, you designated which computers to test lockdown mode on first. After moving these computers into lockdown mode, it is recommended to let them sit for a few days or even a week to verify your whitelist is working properly. This is where configured alerting is useful. If files are blocked, you will know immediately, and can edit the whitelist to fix any issues. After successful verification of the whitelist, move the remaining computers into lockdown mode. Again, follow the “low and slow” approach here. Move computers a few at a time and let them soak. Do this until every computer is in lockdown mode.
Congratulations, you have successfully deployed application whitelisting. For the next few weeks, pay close attention to any denied executions and determine if any should be whitelisted. At this point, any new file added to the computer is considered unapproved and will be denied unless you specifically allow it.
This five-step approach works great for malware prevention without the added risk of transferring signature files in and out of a critical network. Please contact us for expertise in deploying whitelisting in OT systems.