
Vulnerability Management - Scanning System Design for On-Premise Scanning

Updated: Oct 15, 2023


The Importance of Scanner Location & Continuous Vulnerability Monitoring


In vulnerability scanning, it's a fundamental requirement for the scanner to have access to the targets it needs to scan. This may seem obvious, but it underpins several essential concepts in vulnerability scanning. Before delving into these concepts related to the scanner's location, it's beneficial to gain some historical context regarding vulnerability scanning.


In the early days of IT, vulnerability scanning wasn't a routine practice. Organizations would often conduct vulnerability scans only once a year or every few years, typically when they hired consultants to perform a "security assessment." These consultants would arrive with a laptop equipped with various security tools, including a vulnerability scanner. The results of the vulnerability scan would form a part of their final deliverable or report to the customer. In unfortunate cases, the entire deliverable would consist solely of the vulnerability scans.

I share this historical context because the original vulnerability scanning model involved physically moving a scanner around the network and periodically scanning different parts of it. Today, it is standard practice to keep scanners permanently connected to the network so the organization has a continuous vulnerability monitoring capability.


The rise of continuous monitoring and distributed scanning


Moving a scanner around the network not only requires significant labor but is also impractical for delivering prompt feedback to stakeholders. Many organizations recognized the need to build larger IT teams and realized they could handle scanning internally, eliminating the reliance on external consultants. By setting up their own scanning servers, they gained the ability to scan more frequently and maintain a continuous view of vulnerabilities within their networks. However, several challenges emerged in this context:

  1. The issue of network segmentation/zones.

  2. Geographic constraints.

  3. Managing the scan workload effectively.

These issues required attention and solutions to enhance vulnerability scanning practices.


The diagram below illustrates these challenges. On the right side, we encounter the network segmentation/zone issue. The organization logically separates its assets into different network segments/zones to enhance security. This practice is commendable, as it prevents the creation of a vulnerable, flat network. However, when it comes to scanning these assets, a problem arises. Scanners require extensive access to perform their tasks effectively. At a minimum, a hole must be opened in the firewall so the scanner can reach every system in the other zone. If you wish to conduct port enumeration, the firewall must also permit a large number of ports between the scanner and the target systems. The rules opened to facilitate this can become so broad that they effectively amount to having no firewall protection between the scanner and the scanned targets at all.


On the left side of the diagram, we face the geography problem. Many organizations have assets located in remote locations that they wish to scan. Scanners generate a considerable amount of network traffic as they initiate checks and receive responses, making the scanning process noisy and bandwidth-intensive.


Lastly, there's the issue of workload capacity. A single scanner can only handle a limited amount of work at any given time.


The optimal solution to this challenge involves strategically placing scanners within specific segments and geographic locations, with a focus on scanning the assets within their respective areas of responsibility. This essentially pertains to the design of the scanning system.

Now, the question arises: if an organization has 50 network segments/zones or geographic locations, should they add 50 scanners to their network? The answer, as always, is "it depends." When making this decision, it's advisable to revisit the three primary questions that form the basis of designing a distributed scanner system:

  • Is it an acceptable risk for a scanner located in a more secure backend zone to scan a less secure zone, which might necessitate piercing through firewalls for the two zones to reach each other?

  • Given geographic constraints, do you want all of the scanning traffic to traverse your network?

  • Even if the answers to the preceding questions are affirmative, consider the workload issue. Is it acceptable to run all of the scanning activities from a single server?

The choice will depend on the specific circumstances, including security requirements, network architecture, and the capacity of the scanning system.


Additional considerations related to distributed scanning include:

  • Vulnerability Assessment by Proxy - Some organizations assess vulnerabilities on certain network assets indirectly. For instance, a remote site might have five Windows workstations and a switch that administrators consistently maintain at the same OS version/patch level as the main site. Because assets at that same OS version/patch level are scanned regularly at the main site, security personnel can extrapolate the remote site's vulnerabilities from those results.

  • Temporary Firewall Rule Relaxation - In some cases where scanning across firewall zone boundaries is necessary, organizations temporarily relax firewall rules during scans. After scanning is complete, more restrictive rules are reinstated. This approach, however, requires additional coordination and I don't recommend it.

  • Hybrid Scanning - A hybrid approach can also be employed; there is no reason why distributed scanners cannot each handle multiple zones. For example, an organization might have four internal network segments/zones and two DMZ segments/zones. The design decision could involve two internal scanners sharing the workload of scanning the internal networks, while a separate DMZ scanner scans both DMZ segments (see the sketch after this list).
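
To make the hybrid idea concrete, here is a minimal sketch of a zone-to-scanner assignment in Python. The zone and scanner names are hypothetical, and the structure is just an illustration of the mapping, not any vendor's configuration format.

```python
# Hypothetical zone-to-scanner assignment for a hybrid design:
# two internal scanners split four internal zones, one DMZ scanner covers both DMZs.
ZONE_ASSIGNMENTS = {
    "internal-zone-1": "scanner-int-a",
    "internal-zone-2": "scanner-int-a",
    "internal-zone-3": "scanner-int-b",
    "internal-zone-4": "scanner-int-b",
    "dmz-zone-1": "scanner-dmz",
    "dmz-zone-2": "scanner-dmz",
}

def scanner_for(zone: str) -> str:
    """Return the scanner responsible for scanning a given zone."""
    return ZONE_ASSIGNMENTS[zone]
```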

The choice among these approaches should align with the organization's specific requirements, security policies, and network architecture.


Scanner agents


Another challenge arises with remote users whose devices you wish to scan but which are not consistently connected to the network, as is often the case for remote workers. Placing a separate scanner on each user's home wireless network is not a viable option, and attempting to scan those devices from a central location involves enough complexity to be impractical.


One potential solution is to scan the device when it connects to the network via a VPN, but achieving consistent results with this approach can be complex.


An alternative solution offered by many scanning vendors involves the use of an agent that resides on the remote device. This agent can effectively communicate system vulnerability information, providing a more reliable means of assessing vulnerabilities on devices used by remote users.


Scan Construction


Now, let's delve into the specifics of a scan. Each scan comprises several major components, which serve as the blueprint or instructions for how the scan will be executed (a sketch of a complete scan definition follows the list):



  • The scan's targets - These are the specific assets or systems you intend to scan.

  • The choice of scanner - You must select the scanner you want to employ for conducting the scan against the designated targets.

  • The scan schedule - You should determine when you want to initiate the scan, specifying the timing and frequency.

  • Required credentials - To ensure the scan's success, you need to provide the necessary credentials, such as usernames and passwords.

  • The scan policy - Define the policy or set of rules that you wish to apply during the scan.

  • Scan speed - Determine the speed at which the scan should be conducted.

  • Type of scanning - Specify the nature of the scanning, which could include discovering and enumerating hosts, assessing vulnerabilities, ensuring compliance, or a combination of these.

  • Plugins selection - Identify the specific plugins or checks to be employed during the scan to assess different aspects of the target systems.

  • Scan results storage or reporting - Decide where you want the scan results to be stored or how you want them to be reported and presented.
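
To tie these components together, here is a minimal sketch of what a scan definition might look like if expressed in Python. The field names, values, and `ScanDefinition` class are illustrative assumptions, not any particular vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class ScanDefinition:
    """Illustrative scan blueprint; fields mirror the components listed above."""
    name: str
    targets: list[str]                  # assets or CIDR ranges to scan
    scanner: str                        # which distributed scanner runs the job
    schedule: str                       # e.g. a cron expression for timing/frequency
    credentials: dict[str, str]         # references to stored credentials, not plaintext secrets
    policy: str                         # scan policy / rule set to apply
    speed: str = "normal"               # throttle setting for scan speed
    scan_type: str = "vulnerability"    # discovery, vulnerability, compliance, or combined
    plugins: list[str] = field(default_factory=list)  # plugin/check IDs to run
    results_destination: str = "central-repository"   # where results are stored or reported

# Example: a weekly credentialed vulnerability scan of a DMZ segment.
weekly_dmz_scan = ScanDefinition(
    name="weekly-dmz-vuln-scan",
    targets=["203.0.113.0/24"],
    scanner="scanner-dmz",
    schedule="0 2 * * SUN",
    credentials={"windows": "vault:cred-id-1234"},
    policy="full-vulnerability-audit",
    plugins=["all-enabled"],
)
```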



A solution called the Scan Orchestrator


In the preceding section, we explored the construction of scans and the various components that make up each scan. However, managing all of these settings across multiple distributed scanners in complex environments can be quite challenging. This is where a scan orchestrator comes into play. The key functions of a scan orchestrator include:

  • Pushing scanner plugins to distributed scanners - This allows for the efficient distribution of scanner plugins to all the distributed scanners.

  • Initiating scan jobs on schedules to be executed by distributed scanners - The orchestrator can automate the scheduling of scan jobs, ensuring they run on distributed scanners at the appropriate times.

  • Centralizing credentials for scanning - It streamlines the management of credentials, ensuring they are consistent and centralized for all scanning activities.

  • Centralizing scanning policies - The orchestrator simplifies the management of scanning policies by centralizing them, thus ensuring uniformity in policy enforcement across distributed scanners.

  • Centralizing scan results storage, query, and reporting - It offers a centralized repository for storing, querying, and generating reports on scan results, making it more convenient and efficient to track and assess vulnerabilities and compliance across the entire network.

One might question whether this approach contradicts the purpose of distributed scanner segmentation. The answer is no. This is because the scan orchestrator doesn't actually perform the scans; therefore, it doesn't need access to all the systems in the remote segments. Its role is to communicate with the scanner located within the remote segment. For instance, if a remote segment consists of 100 systems along with a single scanner, the scan orchestrator communicates solely with that single scanner, not the 100 individual systems. The responsibility of scanning the 100 systems within the remote segment falls upon the single scanner situated in that segment.


The network traffic flow between the scan orchestrator and the distributed scanners encompasses periodic plugin updates, instructions to initiate scans, and the return of scan results. Furthermore, these communications are typically encrypted to ensure security. Scanning vendors such as Tenable (Security Center, which manages Nessus scanners), Qualys, and Rapid7 offer scan orchestrators as paid products to their customers. Scan orchestration can also be custom-built, using scripts written in Bash or Python for scan coordination. Regardless of the approach chosen, whether a commercial solution or a custom-built one, there is a cost associated with implementing scan orchestration. This function is particularly crucial for larger environments.
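
As a rough illustration of what a custom-built orchestration loop might look like, here is a minimal Python sketch: it pushes plugin updates to each distributed scanner, dispatches due jobs to the scanner responsible for each zone, and pulls results back to a central store. The ScannerClient class and its methods are hypothetical placeholders for whatever API or remote-execution mechanism your scanners actually expose.

```python
import datetime

class ScannerClient:
    """Hypothetical client for one distributed scanner (REST API, SSH, etc.)."""
    def __init__(self, name: str, url: str):
        self.name, self.url = name, url

    def push_plugins(self, plugin_bundle: bytes) -> None:
        ...  # upload the latest plugin/check set to this scanner

    def start_scan(self, scan_definition: dict) -> str:
        ...  # launch a scan job on this scanner and return a job ID
        return "job-0001"

    def fetch_results(self, job_id: str) -> dict:
        ...  # pull completed results back to the orchestrator
        return {}

def orchestrate(scanners: dict[str, ScannerClient],
                jobs: list[dict],
                plugin_bundle: bytes,
                results_store: list[dict]) -> None:
    """One orchestration cycle: update plugins, dispatch due jobs, centralize results."""
    # 1. Push the current plugin set to every distributed scanner.
    for client in scanners.values():
        client.push_plugins(plugin_bundle)

    # 2. Dispatch each due job to the scanner assigned to its zone. Note that the
    #    orchestrator only talks to the scanners, never to the scanned targets.
    now = datetime.datetime.now()
    for job in jobs:
        if job["next_run"] <= now:
            client = scanners[job["scanner"]]
            job_id = client.start_scan(job["definition"])
            # 3. Centralize the results for storage, query, and reporting.
            results_store.append(client.fetch_results(job_id))
```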


Other ongoing scanning management considerations


When system administrators apply patches to a system and need to confirm whether the patch successfully resolved the vulnerability, they typically conduct a verification scan or a rescan. To save time for both the administrator performing the patch and the scanner, it's often more efficient to run a targeted scan on the specific system that received the patch. This approach prevents system administrators from investing additional time in patching numerous systems, only to discover that the applied patch did not effectively resolve the vulnerability.
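
A targeted verification scan can reuse an existing scan's settings while narrowing the targets to the single patched host. Continuing the hypothetical ScanDefinition sketch from earlier, one way to express that:

```python
from dataclasses import replace

# Rescan only the host that was just patched, reusing the weekly DMZ scan's settings.
# (Builds on the hypothetical ScanDefinition / weekly_dmz_scan sketch shown earlier.)
verification_scan = replace(
    weekly_dmz_scan,
    name="verify-patch-203.0.113.25",
    targets=["203.0.113.25"],   # just the patched system
    schedule="now",             # run immediately instead of on the weekly schedule
)
```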


Furthermore, because scanners possess credentialed access to many of an organization's systems, they can serve an additional cybersecurity function: conducting more precise system checks. This includes verifying a single vulnerability, identifying systems with specific ports open, assessing systems with particular settings, and examining various other system artifacts with precision and accuracy. This is important when you need to quickly determine whether your organization is exposed to a newly disclosed zero-day vulnerability, as in the sketch below.
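
For example, when a new CVE is announced, a quick query of centrally stored scan results (or a narrowly scoped scan with only the relevant plugin enabled) can show which hosts appear exposed. The finding format below is an assumption for illustration; real result schemas vary by vendor.

```python
def hosts_exposed_to(cve_id: str, scan_results: list[dict]) -> set[str]:
    """Return hosts whose stored findings reference the given CVE.

    Assumes each finding looks like {"host": "...", "cves": ["CVE-2023-XXXX", ...]};
    adapt the field names to your scanner's actual result schema.
    """
    return {
        finding["host"]
        for finding in scan_results
        if cve_id in finding.get("cves", [])
    }

# Example usage with a hypothetical CVE identifier:
# exposed = hosts_exposed_to("CVE-2023-12345", stored_findings)
```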



