Conduct Security Control Testing
Organizations must manage security control testing to ensure that all security controls are tested thoroughly by authorized individuals. The facets of security control testing that organizations must include are vulnerability assessments, penetration testing, log reviews, synthetic transactions, code review and testing, misuse case testing, test coverage analysis, and interface testing.
Vulnerability Assessment
A vulnerability assessment helps to identify the areas of weakness in a network. It can also help to determine asset prioritization within an organization. A comprehensive vulnerability assessment is part of the risk management process. But for access control, security professionals should use vulnerability assessments that specifically target the access control mechanisms.
Vulnerability assessments usually fall into one of three categories:
Personnel testing: Reviews standard practices and procedures that users follow.
Physical testing: Reviews facility and perimeter protections.
System and network testing: Reviews systems, devices, and network topology.
The security analyst who will perform a vulnerability assessment must understand the systems and devices that are on the network and the jobs they perform. The analyst needs this information to assess their vulnerabilities based on known and potential threats.
After gaining this knowledge, the analyst should carry out threat modeling to identify the threats that could negatively affect systems and devices and the attack methods that could be used. The analyst should then examine the existing controls in place and identify any threats against those controls. Using all the information gathered, the analyst can determine which automated tools to use to search for vulnerabilities. After the vulnerability analysis is complete, the analyst should verify the results to ensure that they are accurate and then report the findings to management, with suggestions for remedial action.
Vulnerability assessment applications include Nessus, Open Vulnerability Assessment System (OpenVAS), Core Impact, Nexpose, GFI LanGuard, QualysGuard, and Microsoft Baseline Security Analyzer (MBSA). Of these applications, OpenVAS and MBSA are free.
When selecting a vulnerability assessment tool, you should research the following metrics: accuracy, reliability, scalability, and reporting. Accuracy is the most important metric. A false positive generally results in time spent researching an issue that does not exist. A false negative is more serious, as it means the scanner failed to identify an issue that poses a serious security risk.
Network Discovery Scan
A network discovery scan examines a range of IP addresses to determine which ports are open. This type of scan reports only the systems found on the network and the ports in use; it does not actually check for any vulnerabilities.
Topology discovery entails determining the devices in the network, their connectivity relationships to one another, and the internal IP addressing scheme in use. Any combination of these pieces of information allows a hacker to create a “map” of the network, which aids him tremendously in evaluating and interpreting the data he gathers in other parts of the hacking process. If he is completely successful, he will end up with a diagram of the network. Your challenge as a security professional is to determine whether such a mapping process is possible, using the same tools as the attacker. Based on your findings, you should determine steps to take that make topology discovery either more difficult or, better yet, impossible.
Operating system fingerprinting is the process of using some method to determine the operating system running on a host or a server. By identifying the OS version and build number, a hacker can identify common vulnerabilities of that OS using readily available documentation from the Internet. While many of the issues will have been addressed in subsequent updates, service packs, and hotfixes, there might be zero-day weaknesses (issues that have not been widely publicized or addressed by the vendor) that the hacker can leverage in the attack. Moreover, if any of the relevant security patches have not been applied, the weaknesses the patches were intended to address will exist on the machine. Therefore, the purpose of attempting OS fingerprinting during an assessment is to assess the relative ease with which it can be done and to identify methods to make it more difficult.
Operating systems have well-known vulnerabilities, and so do common services. By determining the services that are running on a system, an attacker also discovers potential vulnerabilities in those services that he may attempt to exploit. This is typically done with a port scan, in which all “open,” or “listening,” ports are identified. Once again, the lion’s share of these issues will have been mitigated with the proper security patches, but that is not always the case; it is not uncommon for security analysts to find that systems running vulnerable services are missing the relevant security patches. Consequently, when performing service discovery, check patches on systems found to have open ports. It is also advisable to close any ports not required for the system to do its job.
Network discovery tools can perform the following types of scans:
TCP SYN scan: Sends a packet to each scanned port with the SYN flag set. If a response is received with the SYN and ACK flags set, the port is open.
TCP ACK scan: Sends a packet to each port with the ACK flag set. If no response is received, then the port is marked as filtered. If an RST response is received, then the port is marked as unfiltered.
Xmas scan: Sends a packet with the FIN, PSH, and URG flags set. If the port is open, there is no response. If the port is closed, the target responds with an RST/ACK packet.
The result of this type of scan is that security professionals can determine whether ports are open, closed, or filtered. Open ports are in use by an application on the remote system. Closed ports are reachable, but no application is accepting connections on them. Filtered ports cannot be reached, so the scanner cannot determine whether they are open or closed.
The most widely used network discovery scanning tool is Nmap.
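The SYN, ACK, and Xmas scans described above require crafting raw packets, which tools such as Nmap handle internally. As a simpler, hedged illustration of the discovery idea, the following minimal Python sketch performs a basic TCP connect() scan; the target address and port list are placeholders, and this approach only distinguishes open ports from closed or filtered ones.

```python
# Minimal sketch of a TCP connect() discovery scan against a lab host you are
# authorized to test. Real tools such as Nmap use raw packets for SYN, ACK,
# and Xmas scans; a full connection is used here for simplicity.
import socket

def connect_scan(host, ports, timeout=1.0):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect((host, port))          # completes the three-way handshake
                open_ports.append(port)
            except (socket.timeout, ConnectionRefusedError, OSError):
                pass                             # closed or filtered
    return open_ports

if __name__ == "__main__":
    # 192.0.2.10 is a documentation address used as a placeholder target.
    print(connect_scan("192.0.2.10", [22, 80, 443, 3389]))
```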
Network Vulnerability Scan
Network vulnerability scans perform a more complex scan of the network than network discovery scans. These scans probe a targeted system or network to identify vulnerabilities. The tools used in this type of scan contain a database of known vulnerabilities and identify whether specific vulnerabilities exist on each device.
There are two types of vulnerability scanners:
Passive vulnerability scanners: A passive vulnerability scanner (PVS) monitors network traffic at the packet layer to determine topology, services, and vulnerabilities. It avoids the instability that can be introduced to a system by actively scanning for vulnerabilities.
PVS tools analyze the packet stream and look for vulnerabilities through direct analysis. They are deployed in much the same way as intrusion detection systems (IDSs) or packet analyzers. A PVS can pick out a network session that targets a protected server and monitor it for as long as needed. The biggest benefit of a PVS is its ability to do its work without impacting the monitored network. Some examples of PVSs are the Nessus Network Monitor (formerly Tenable PVS) and NetScanTools Pro.
Active vulnerability scanners: Whereas passive scanners can only gather information, active vulnerability scanners (AVSs) can take action to block an attack, such as blocking a dangerous IP address. They can also be used to simulate an attack to assess readiness. They operate by sending transmissions to nodes and examining the responses. Because of this, these scanners may disrupt network traffic. Examples include Nessus and Microsoft Baseline Security Analyzer (MBSA).
Regardless of whether it’s active or passive, a vulnerability scanner cannot replace the expertise of trained security personnel. Moreover, these scanners are only as effective as the signature databases on which they depend, so the databases must be updated regularly. Finally, scanners require bandwidth and potentially slow the network.
For best performance, you can place a vulnerability scanner in a subnet that needs to be protected. You can also connect a scanner through a firewall to multiple subnets; this complicates the configuration and requires opening ports on the firewall, which could be problematic and could impact the performance of the firewall.
The most popular network vulnerability scanning tools include Qualys, Nessus, and MBSA.
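As a hedged illustration of automating such a scan, the following sketch drives Nmap's built-in "vuln" script category from Python and captures the text output. It assumes Nmap and its NSE scripts are installed and that you are authorized to scan the placeholder target; it is one way to script a scan, not a description of how the commercial products named above work internally.

```python
# Minimal sketch: driving an Nmap NSE vulnerability scan from Python and
# capturing the text output. Host is a placeholder documentation address.
import subprocess

def nse_vuln_scan(target: str) -> str:
    result = subprocess.run(
        ["nmap", "-sV", "--script", "vuln", target],
        capture_output=True, text=True, check=True
    )
    return result.stdout

if __name__ == "__main__":
    print(nse_vuln_scan("192.0.2.10"))
```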
Vulnerability scanners can use agents that are installed on the devices, or they can be agentless. While many vendors argue that using agents is always best, there are advantages and disadvantages to both, as presented in Table 6-1.
Table 6-1 Server-Based vs. Agent-Based Scanning
| Type | Technology | Characteristics |
| --- | --- | --- |
| Agent-based | Pull technology | Can get information from disconnected machines or machines in the DMZ; ideal for remote locations that have limited bandwidth; less dependent on network connectivity; based on policies defined in the central console |
| Server-based | Push technology | Good for networks with plentiful bandwidth; dependent on network connectivity; central authority does all the scanning and deployment |
Some scanners can do both agent-based and server-based scanning (also called agentless or sensor-based scanning).
Web Application Vulnerability Scan
Because web applications are so widely used today, companies must ensure that their web applications remain secure and free of vulnerabilities. Web application vulnerability scanners are special tools that examine web applications for known vulnerabilities.
Popular web application vulnerability scanners include QualysGuard and Nexpose.
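As a narrow, hedged illustration of one class of check such scanners automate, the following sketch requests a placeholder URL and reports common security response headers that are missing. Real web application scanners go far beyond this, testing for injection flaws, cross-site scripting, misconfigurations, and more.

```python
# Minimal sketch of one automated web application check: verifying that
# common security headers are present in a response. The URL is a placeholder.
import requests

EXPECTED_HEADERS = ["Strict-Transport-Security", "X-Content-Type-Options",
                    "Content-Security-Policy", "X-Frame-Options"]

def missing_security_headers(url="https://www.example.com"):
    response = requests.get(url, timeout=5)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    print(missing_security_headers())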
Penetration Testing
The goal of penetration testing, also known as ethical hacking, is to simulate an attack in order to identify any threats that could come from internal or external sources seeking to exploit the vulnerabilities of a system or device.
The steps in performing a penetration test are as follows:
Document information about the target system or device.
Gather information about attack methods against the target system or device. This includes performing port scans.
Identify the known vulnerabilities of the target system or device.
Execute attacks against the target system or device to gain user and privileged access.
Document the results of the penetration test and report the findings to management, with suggestions for remedial action.
Both internal and external tests should be performed. Internal tests occur from within the network, whereas external tests originate outside the network and target the servers and devices that are publicly visible.
Strategies for penetration testing are based on the testing objectives defined by the organization. The strategies that you should be familiar with include the following:
Blind test: The testing team is provided with limited knowledge of the network systems and devices, using publicly available information. The organization’s security team knows that an attack is coming. This test requires more effort by the testing team, which must simulate an actual attack.
Double-blind test: This test is like a blind test except the organization’s security team does not know that an attack is coming. Only a few individuals in the organization know about the attack, and they do not share this information with the security team. This test usually requires equal effort for both the testing team and the organization’s security team.
Target test: Both the testing team and the organization’s security team are given maximum information about the network and the type of attack that will occur. This is the easiest test to complete but does not provide a full picture of the organization’s security.
Penetration testing is also divided into categories based on the amount of information to be provided. The main categories that you should be familiar with include the following:
Zero-knowledge test: The testing team is provided with no knowledge regarding the organization’s network. The testing team can use any means available to obtain information about the organization’s network. This is also referred to as closed, or black-box, testing.
Partial-knowledge test: The testing team is provided with public knowledge regarding the organization’s network. Boundaries might be set for this type of test. This is also referred to as gray-box testing.
Full-knowledge test: The testing team is provided with all available knowledge regarding the organization’s network. This test is focused more on what attacks can be carried out. This is also referred to as white-box testing.
Penetration testing applications include Metasploit, Wireshark, Core Impact, Nessus, Cain & Abel, Kali Linux, and John the Ripper. When selecting a penetration testing tool, you should first determine which systems you want to test. Then research the available tools to discover which ones can run the tests you need for those systems and how each tool approaches testing. In addition, the organization needs to select the correct individual to carry out the test. Remember that penetration tests should include manual methods as well as automated methods because relying on only one of the two will not yield thorough results.
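For example, password-cracking tools such as John the Ripper automate dictionary and brute-force attacks against captured password hashes. The following toy sketch shows the underlying dictionary-attack idea using an invented SHA-256 hash and wordlist; it is an illustration of the technique, not how John the Ripper itself is implemented.

```python
# Toy illustration of a dictionary attack against a captured password hash.
# The hash and wordlist are invented; real tools support many hash formats,
# mangling rules, and performance optimizations.
import hashlib

def dictionary_attack(target_sha256: str, wordlist):
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_sha256:
            return candidate
    return None

if __name__ == "__main__":
    target = hashlib.sha256(b"winter2024").hexdigest()   # stand-in captured hash
    print(dictionary_attack(target, ["password", "letmein", "winter2024"]))
```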
Table 6-2 compares vulnerability assessments and penetration tests.
Table 6-2 Comparison of Vulnerability Assessments and Penetration Tests
|   | Vulnerability Assessment | Penetration Test |
| --- | --- | --- |
| Purpose | Identifies vulnerabilities that may result in compromise of a system. | Identifies ways to exploit vulnerabilities to circumvent the security features of systems. |
| When | After significant system changes; schedule at least quarterly thereafter. | After significant system changes; schedule at least annually thereafter. |
| How | Use automated tools with manual verification of identified issues. | Use both automated and manual methods to provide a comprehensive report. |
| Reports | Potential risks posed by known vulnerabilities, ranked using base scores associated with each vulnerability. Both internal and external reports should be provided. | Description of each issue discovered, including specific risks the issue may pose and specifically how and to what extent it may be exploited. |
| Duration | Typically several seconds to several minutes per scanned host. | Days or weeks, depending on the scope and size of the environment to be tested; tests may grow in duration if efforts uncover additional scope. |
Log Reviews
A log is a recording of events that occur on an organizational asset, including systems, networks, devices, and facilities. Each entry in a log covers a single event that occurs on the asset. In most cases, there are separate logs for different event types, including security logs, operating system logs, and application logs. Because so many logs are generated on a single device, many organizations have trouble ensuring that the logs are reviewed in a timely manner. Log review, however, is probably one of the most important steps an organization can take to ensure that issues are detected before they become major problems.
Computer security logs are particularly important because they can help an organization identify security incidents, policy violations, and fraud. Log management ensures that computer security logs are stored in sufficient detail for an appropriate period of time so that they can support auditing, forensic analysis, and investigations and so that baselines, trends, and long-term problems can be identified.
The National Institute of Standards and Technology (NIST) has provided two special publications that relate to log management: NIST SP 800-92, “Guide to Computer Security Log Management,” and NIST SP 800-137, “Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations.” While both of these special publications are primarily used by federal government agencies and organizations, other organizations may want to use them as well because of the wealth of information they provide. The following section covers NIST SP 800-92, and NIST SP 800-137 is discussed later in this chapter.
NIST SP 800-92
NIST SP 800-92 makes the following recommendations for more efficient and effective log management:
Organizations should establish policies and procedures for log management. As part of the planning process, an organization should
Define its logging requirements and goals.
Develop policies that clearly define mandatory requirements and suggested recommendations for log management activities.
Ensure that related policies and procedures incorporate and support the log management requirements and recommendations.
Management should provide the necessary support for the efforts involving log management planning, policy, and procedures development.
Organizations should prioritize log management appropriately throughout the organization.
Organizations should create and maintain a log management infrastructure.
Organizations should provide proper support for all staff with log management responsibilities.
Organizations should establish standard log management operational processes. This includes ensuring that administrators
Monitor the logging status of all log sources.
Monitor log rotation and archival processes.
Check for upgrades and patches to logging software and acquire, test, and deploy them.
Ensure that each logging host’s clock is synchronized to a common time source.
Reconfigure logging as needed based on policy changes, technology changes, and other factors.
Document and report anomalies in log settings, configurations, and processes.
According to NIST SP 800-92, common log management infrastructure components include general functions (log parsing, event filtering, and event aggregation), storage (log rotation, log archival, log reduction, log conversion, log normalization, and log file integrity checking), log analysis (event correlation, log viewing, and log reporting), and log disposal (log clearing).
Syslog provides a simple framework for log entry generation, storage, and transfer that any operating system, security software, or application could use if designed to do so. Many log sources either use syslog as their native logging format or offer features that allow their log formats to be converted to syslog format. Each syslog message has only three parts. The first part specifies the facility and severity as numerical values. The second part of the message contains a timestamp and the hostname or IP address of the source of the log. The third part is the actual log message content.
No standard fields are defined within the message content; it is intended to be human-readable and, as a result, is not easily machine-parsable. This provides very high flexibility for log generators, which can place whatever information they deem important within the content field, but it makes automated analysis of the log data very challenging. A single source may use many different formats for its log message content, so an analysis program would need to be familiar with each format and be able to extract the meaning of the data within the fields of each format. This problem becomes much more challenging when log messages are generated by many sources. It might not be feasible to understand the meaning of all log messages, so analysis might be limited to keyword and pattern searches. Some organizations design their syslog infrastructures so that similar types of messages are grouped together or assigned similar codes, which can make log analysis automation easier to perform.
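The structured parts of a classic syslog message can still be parsed even though the content field is free-form. The following minimal sketch decodes the PRI value (facility × 8 + severity), timestamp, hostname, and content of an invented, classic (RFC 3164-style) message.

```python
# Minimal sketch of decoding the three parts of a classic syslog message:
# the <PRI> value (facility * 8 + severity), the timestamp and hostname,
# and the free-form content. The sample message is invented for illustration.
import re

SAMPLE = "<34>Oct 11 22:14:15 webserver01 su: 'su root' failed for user on /dev/pts/8"

def parse_syslog(message: str) -> dict:
    match = re.match(r"<(\d+)>(\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) (\S+) (.*)", message)
    if not match:
        raise ValueError("not a recognizable syslog message")
    pri = int(match.group(1))
    return {
        "facility": pri // 8,
        "severity": pri % 8,
        "timestamp": match.group(2),
        "host": match.group(3),
        "content": match.group(4),
    }

if __name__ == "__main__":
    print(parse_syslog(SAMPLE))
```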
As log security has become a greater concern, several implementations of syslog have been created that place greater emphasis on security. Most have been based on IETF’s RFC 3195, which was designed specifically to improve the security of syslog. Implementations based on this standard can support log confidentiality, integrity, and availability through several features, including reliable log delivery, transmission confidentiality protection, and transmission integrity protection and authentication.
Security information and event management (SIEM) products allow administrators to consolidate all security information logs. This consolidation ensures that administrators can perform analysis on all logs from a single resource rather than having to analyze each log on its separate resource. Most SIEM products support two ways of collecting logs from log generators:
Agentless: The SIEM server receives data from the individual hosts without needing to have any special software installed on those hosts. Some servers pull logs from the hosts, which is usually done by having the server authenticate to each host and retrieve its logs regularly. In other cases, the hosts push their logs to the server, which usually involves each host authenticating to the server and transferring its logs regularly. Regardless of whether the logs are pushed or pulled, the server then performs event filtering and aggregation and log normalization and analysis on the collected logs.
Agent-based: An agent program is installed on the host to perform event filtering and aggregation and log normalization for a particular type of log. The host then transmits the normalized log data to the SIEM server, usually on a real-time or near-real-time basis for analysis and storage. Multiple agents may need to be installed if a host has multiple types of logs of interest. Some SIEM products also offer agents for generic formats such as syslog and Simple Network Management Protocol (SNMP). A generic agent is used primarily to get log data from a source for which a format-specific agent and an agentless method are not available. Some products also allow administrators to create custom agents to handle unsupported log sources.
There are advantages and disadvantages to each method. The primary advantage of the agentless approach is that agents do not need to be installed, configured, and maintained on each logging host. The primary disadvantage is the lack of filtering and aggregation at the individual host level, which can cause significantly larger amounts of data to be transferred over networks and increase the amount of time it takes to filter and analyze the logs. Another potential disadvantage of the agentless method is that the SIEM server may need credentials for authenticating to each logging host. In some cases, only one of the two methods is feasible; for example, there might be no way to remotely collect logs from a particular host without installing an agent onto it.
SIEM products usually include support for several dozen types of log sources, such as OSs, security software, application servers (e.g., web servers, email servers), and even physical security control devices such as badge readers. For each supported log source type, except for generic formats such as syslog, the SIEM products typically know how to categorize the most important logged fields. This significantly improves the normalization, analysis, and correlation of log data over that performed by software with a less granular understanding of specific log sources and formats. Also, the SIEM software can perform event reduction by disregarding data fields that are not significant to computer security, potentially reducing the SIEM software’s network bandwidth and data storage usage.
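A simplified sketch of the normalization step described above follows: records from two different source formats are mapped onto one common schema so they can be correlated. The source formats, field names, and sample records are invented for illustration and do not correspond to any particular SIEM product.

```python
# Minimal sketch of SIEM-style log normalization: map records from two
# invented source formats onto a common schema, then correlate by source IP.
def normalize_firewall(record: dict) -> dict:
    return {"time": record["ts"], "src_ip": record["src"],
            "action": record["disposition"], "source_type": "firewall"}

def normalize_web_server(record: dict) -> dict:
    return {"time": record["timestamp"], "src_ip": record["client"],
            "action": str(record["status"]), "source_type": "web"}

events = [
    normalize_firewall({"ts": "2024-03-01T10:02:11Z", "src": "203.0.113.7",
                        "disposition": "deny"}),
    normalize_web_server({"timestamp": "2024-03-01T10:02:12Z",
                          "client": "203.0.113.7", "status": 404}),
]

# Once normalized, simple correlation (for example, grouping by source IP)
# becomes possible across otherwise incompatible log formats.
by_source_ip = {}
for event in events:
    by_source_ip.setdefault(event["src_ip"], []).append(event)
print(by_source_ip)
```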
Typically, system, network, and security administrators are responsible for managing logging on their systems, performing regular analysis of their log data, documenting and reporting the results of their log management activities, and ensuring that log data is provided to the log management infrastructure in accordance with the organization’s policies. In addition, some of the organization’s security administrators act as log management infrastructure administrators, with responsibilities such as the following:
Contact system-level administrators to get additional information regarding an event or to request that they investigate a particular event.
Identify changes needed to system logging configurations (e.g., which entries and data fields are sent to the centralized log servers, what log format should be used) and inform system-level administrators of the necessary changes.
Initiate responses to events, including incident handling and operational problems (e.g., a failure of a log management infrastructure component).
Ensure that old log data is archived to removable media and disposed of properly once it is no longer needed.
Cooperate with requests from legal counsel, auditors, and others.
Monitor the status of the log management infrastructure (e.g., failures in logging software or log archival media, failures of local systems to transfer their log data) and initiate appropriate responses when problems occur.
Test and implement upgrades and updates to the log management infrastructure’s components.
Maintain the security of the log management infrastructure.
Organizations should develop policies that clearly define mandatory requirements and suggested recommendations for several aspects of log management, including log generation, log transmission, log storage and disposal, and log analysis. Table 6-3 gives examples of logging configuration settings that an organization can use. The types of values defined in Table 6-3 should only be applied to the hosts and host components previously specified by the organization as ones that must or should log security-related events.
Table 6-3 Examples of Logging Configuration Settings
| Category | Low-Impact Systems | Moderate-Impact Systems | High-Impact Systems |
| --- | --- | --- | --- |
| Log retention duration | 1–2 weeks | 1–3 months | 3–12 months |
| Log rotation | Optional (if performed, at least every week or every 25 MB) | Every 6–24 hours or every 2–5 MB | Every 15–60 minutes or every 0.5–1.0 MB |
| Log data transfer frequency (to SIEM) | Every 3–24 hours | Every 15–60 minutes | At least every 5 minutes |
| Local log data analysis | Every 1–7 days | Every 12–24 hours | At least 6 times a day |
| File integrity check for rotated logs? | Optional | Yes | Yes |
| Encrypt rotated logs? | Optional | Optional | Yes |
| Encrypt log data transfers to SIEM? | Optional | Yes | Yes |
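As a hedged illustration of enforcing the size-based portion of these rotation rules, the following sketch archives a log file once it exceeds a threshold, using the low-impact 25 MB value as an example. The file path is a placeholder, and the time-based rule ("at least every week") is assumed to be handled by a scheduled job that runs this check.

```python
# Minimal sketch of size-based log rotation per the Table 6-3 example values.
# The path and threshold are placeholders; a scheduler (for example, a weekly
# job) is assumed to cover the time-based rotation rule.
import os
import shutil
import time

def rotate_if_needed(path, max_bytes=25 * 1024 * 1024):
    if os.path.exists(path) and os.path.getsize(path) > max_bytes:
        archived = f"{path}.{time.strftime('%Y%m%d%H%M%S')}"
        shutil.move(path, archived)          # archive the current log
        open(path, "w").close()              # start a fresh, empty log
        return archived
    return None

if __name__ == "__main__":
    print(rotate_if_needed("/var/log/example-app.log"))  # placeholder path
```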
Synthetic Transactions
Synthetic transaction monitoring, which is a type of proactive monitoring, is often preferred for websites and applications. It provides insight into the availability and performance of an application and warns of any potential issue before users experience any degradation in application behavior. It uses external agents to run scripted transactions against an application. For example, Microsoft’s System Center Operations Manager uses synthetic transactions to monitor databases, websites, and TCP port usage.
In contrast, real user monitoring (RUM), which is a type of passive monitoring, captures and analyzes every transaction of every application or website user. Unlike synthetic monitoring, which attempts to gain performance insights by regularly testing synthetic interactions, RUM cuts through the guesswork by seeing exactly how users are interacting with the application.
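A minimal sketch of a scripted synthetic transaction follows: an external agent requests a health-check URL on a schedule and compares the status code and response time against expected values. The URL, expected status, and latency budget shown here are assumptions for illustration, not part of any particular monitoring product.

```python
# Minimal sketch of a scripted synthetic transaction: request a known URL and
# compare the result to an expected outcome within a latency budget.
import time
import requests

def synthetic_check(url="https://www.example.com/health",
                    expected_status=200, max_seconds=2.0):
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=max_seconds)
    except requests.RequestException as exc:
        return {"ok": False, "error": str(exc)}
    elapsed = time.monotonic() - start
    return {
        "ok": response.status_code == expected_status and elapsed <= max_seconds,
        "status": response.status_code,
        "seconds": round(elapsed, 3),
    }

if __name__ == "__main__":
    print(synthetic_check())
```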
Code Review and Testing
Code review and testing must occur throughout the entire system or application development life cycle. The goal of code review and testing is to identify bad programming patterns, security misconfigurations, functional bugs, and logic flaws.
In the planning and design phase, code review and testing include architecture security reviews and threat modeling. In the development phase, code review and testing include static source code analysis, manual code review, static binary code analysis, and manual binary review. Once an application is deployed, code review and testing involve penetration testing, vulnerability scanning, and fuzz testing.
Formal code review involves a careful and detailed process with multiple participants and multiple phases. In this type of code review, software developers attend meetings where each line of code is reviewed, usually using printed copies. Lightweight code review typically requires less overhead than formal code inspections, though it can be equally effective when done properly. Code review and testing methods include the following:
Over-the-shoulder: One developer looks over the author’s shoulder as the author walks through the code.
Email pass-around: Source code is emailed to reviewers automatically after the code is checked in.
Pair programming: Two authors develop code together at the same workstation.
Tool-assisted code review: Authors and reviewers use tools designed for peer code review.
Black-box testing, or zero-knowledge testing: The team is provided with no knowledge regarding the organization’s application. The team can use any means at its disposal to obtain information about the organization’s application. This is also referred to as closed testing.
White-box testing: The team goes into the process with a deep understanding of the application or system. Using this knowledge, the team builds test cases to exercise each path, input field, and processing routine.
Gray-box testing: The team is provided more information than in black-box testing but not as much as in white-box testing. Gray-box testing has the advantage of being nonintrusive while maintaining the boundary between developer and tester. On the other hand, it may not uncover some of the problems that would be discovered with white-box testing.
Table 6-4 compares black-box, gray-box, and white-box testing.
Table 6-4 Black-Box, Gray-Box, and White-Box Testing
| Black Box | Gray Box | White Box |
| --- | --- | --- |
| Internal workings of the application are not known. | Internal workings of the application are somewhat known. | Internal workings of the application are fully known. |
| Also called closed-box, data-driven, and functional testing. | Also called translucent testing, as the tester has partial knowledge. | Also known as clear-box, structural, or code-based testing. |
| Performed by end users, testers, and developers. | Performed by end users, testers, and developers. | Performed by testers and developers. |
| Least time-consuming. | More time-consuming than black-box testing but less so than white-box testing. | Most exhaustive and time-consuming. |
Other types of testing include dynamic versus static testing and manual versus automatic testing.
Code Review Process
Code review varies from organization to organization. Fagan inspections are the most formal code reviews that can occur and should adhere to the following process:
Plan
Overview
Prepare
Inspect
Rework
Follow-up
Most organizations do not strictly adhere to the Fagan inspection process. Each organization should adopt a code review process fitting for its business requirements. The more restrictive the environment, the more formal the code review process should be.
Static Testing
Static testing analyzes software security without actually running the software. This is usually accomplished by reviewing the source code or the compiled application. Automated tools are used to detect common software flaws. Static testing tools should be used throughout the software development life cycle.
Dynamic Testing
Dynamic testing analyzes software security in the runtime environment. With this testing, the tester should not have access to the application’s source code.
Dynamic testing often includes the use of synthetic transactions, which are scripted transactions that have a known result. These synthetic transactions are executed against the tested code, and the output is then compared to the expected output. Any discrepancies between the two should be investigated for possible source code flaws.
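The following minimal sketch shows the idea with an invented function under test: each scripted transaction has a known expected result, and any discrepancy between expected and actual output is flagged for investigation.

```python
# Minimal sketch of scripted transactions with known expected results run
# against code under test. The function and expected values are invented;
# the last case intentionally exposes a discrepancy worth investigating.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

known_transactions = [
    ((100.00, 10), 90.00),
    ((59.99, 0), 59.99),
    ((20.00, 150), 0.00),   # unexpected input: discount above 100 percent
]

for args, expected in known_transactions:
    actual = apply_discount(*args)
    if actual != expected:
        print(f"DISCREPANCY: apply_discount{args} returned {actual}, expected {expected}")
```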
Fuzz Testing
Fuzz testing is a dynamic testing technique that provides invalid or unexpected input to software to test its limits and discover flaws. The input can be randomly generated by the fuzzing tool or specially crafted to test for known vulnerabilities.
Fuzz testers include Untidy, Peach Fuzzer, and Microsoft SDL File/Regex Fuzzer.
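The following minimal sketch shows the random-input idea behind fuzz testing against an invented parsing routine; real fuzzers such as those named above add coverage feedback, input grammars, and crash triage.

```python
# Minimal sketch of random ("dumb") fuzzing: feed randomly generated byte
# strings to a parser and record any input that raises an unexpected exception.
# The parser and its record format are invented for illustration.
import random

def parse_record(data: bytes):
    # Invented format: 2-byte big-endian length followed by that many bytes.
    length = int.from_bytes(data[:2], "big")
    return data[2:2 + length].decode("utf-8")   # may raise on malformed input

def fuzz(iterations=1000, seed=0):
    random.seed(seed)
    failures = []
    for _ in range(iterations):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
        try:
            parse_record(blob)
        except Exception as exc:            # record every crashing input
            failures.append((blob, repr(exc)))
    return failures

if __name__ == "__main__":
    print(f"{len(fuzz())} crashing inputs found")
```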
Misuse Case Testing
Misuse case testing, also referred to as negative testing, tests an application to ensure that it can handle invalid input or unexpected behavior. This testing is performed to ensure that an application will not crash and to improve its quality by identifying its weak points. When misuse case testing is performed, organizations should expect to find issues. Misuse case testing should verify the following (a brief sketch follows the list):
Required fields must be populated.
Fields with a defined data type can only accept data that is the required data type.
Fields with character limits allow only the configured number of characters.
Fields with a defined data range accept only data within that range.
Fields accept only valid data.
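The sketch below illustrates negative testing against an invented input validator: each misuse case supplies invalid input (a missing required field, a wrong data type, an out-of-range value, an over-length string) and expects the validator to reject it.

```python
# Minimal sketch of negative (misuse case) tests against an invented validator
# that enforces the kinds of rules listed above.
def validate_order(order: dict):
    errors = []
    if not order.get("customer_id"):
        errors.append("customer_id is required")
    if not isinstance(order.get("quantity"), int):
        errors.append("quantity must be an integer")
    elif not 1 <= order["quantity"] <= 100:
        errors.append("quantity must be between 1 and 100")
    if len(order.get("notes", "")) > 250:
        errors.append("notes is limited to 250 characters")
    return errors

# Each misuse case supplies invalid input and expects at least one error back.
assert validate_order({}), "missing required field should be rejected"
assert validate_order({"customer_id": "C1", "quantity": "two"}), "wrong data type"
assert validate_order({"customer_id": "C1", "quantity": 500}), "out-of-range value"
assert validate_order({"customer_id": "C1", "quantity": 5,
                       "notes": "x" * 300}), "over the character limit"
print("all misuse cases rejected as expected")
```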
Test Coverage Analysis
Test coverage analysis uses test cases that are written against the application requirements specifications. Individuals involved in this analysis do not need to see the code to write the test cases. Once a document describing all the test cases is written, test groups report the percentage of test cases that were run, that passed, that failed, and so on. The application developer usually performs test coverage analysis as part of unit testing. Quality assurance groups use overall test coverage analysis to report test metrics and coverage according to the test plan.
Test coverage analysis creates additional test cases to increase coverage. It helps developers find areas of an application not exercised by a set of test cases. It helps in determining a quantitative measure of code coverage, which indirectly measures the quality of the application or product.
One disadvantage of code coverage measurement is that it measures only what the existing code covers; it cannot test what the code does not cover or functionality that has not yet been written. In other words, the analysis looks at structures and functions that already exist, not those that are missing.
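As a hedged illustration, the following sketch uses Python's standard-library trace module to record which lines a test actually executes; the function and test are invented, and the branch the test never exercises is exactly the kind of gap a coverage report highlights.

```python
# Minimal sketch of measuring statement coverage with the standard-library
# trace module. The function and test below are invented examples.
import trace

def classify(value):
    if value < 0:
        return "negative"
    return "non-negative"

def run_tests():
    assert classify(5) == "non-negative"   # never exercises the negative branch

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(run_tests)
results = tracer.results()

# counts maps (filename, line number) to execution count; lines of classify()
# that never appear here reveal the gap a coverage report would flag.
executed = {line for (_, line), count in results.counts.items() if count}
print(f"{len(executed)} distinct lines executed")
```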
Interface Testing
Interface testing evaluates whether an application’s systems or components correctly pass data and control to one another. It verifies whether module interactions are working properly and errors are handled correctly. Interfaces that should be tested include client interfaces, server interfaces, remote interfaces, graphical user interfaces (GUIs), application programming interfaces (APIs), external and internal interfaces, and physical interfaces.
GUI testing involves testing a product’s GUI to ensure that it meets its specifications through the use of test cases. API testing tests APIs directly in isolation and as part of the end-to-end transactions exercised during integration testing to determine whether the APIs return the correct responses.
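As a hedged illustration of testing an API in isolation, the following sketch calls a placeholder endpoint and verifies both the status code and the expected fields in the JSON response; the URL and field names are assumptions standing in for an application's real interface contract.

```python
# Minimal sketch of an API interface test: call an endpoint directly and
# verify the response code and the shape of the returned data. The base URL
# and expected fields are placeholders for illustration.
import requests

def test_get_user_api(base_url="https://api.example.com"):
    response = requests.get(f"{base_url}/users/42", timeout=5)
    assert response.status_code == 200
    body = response.json()
    for field in ("id", "name", "email"):
        assert field in body, f"missing field: {field}"

if __name__ == "__main__":
    test_get_user_api()
    print("API interface test passed")
```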