- Perimeter Security
- Internal Security
- Boundary Devices
- Enforcement Tools
- Cryptographic Devices
- What Next?
Boundary Devices
Beyond perimeter security devices and devices that provide internal security, other devices provide a myriad of additional services, such as load balancing, proxying, and wireless access, that improve network functionality. Many of these devices were developed to speed up connectivity and eliminate traffic bottlenecks; others were developed for convenience. As with all devices that touch the network, proper placement and security features are important considerations in their implementation.
Proxies
A proxy server operates on the same principle as a proxy-level firewall: It is a go-between for the network and the Internet. Proxy servers are used for security, logging, and caching. Various types of proxy servers exist, including forward, reverse, and transparent proxy servers, as well as caching, multipurpose, and application proxy servers.
When a caching proxy server receives a request for an Internet service (usually on port 80 or 443), it applies its filtering rules and then checks its local cache for previously downloaded web pages. Because web pages are stored locally, response times for web pages are faster, and traffic to the Internet is substantially reduced.
The caching proxy can also be used to block content from websites that you do not want employees to access, such as pornography, social media, or peer-to-peer networks, and it can rearrange web content to work for mobile devices. This strategy also provides better utilization of bandwidth because the server stores the results of requests for a period of time.
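To make the flow concrete, the following is a minimal sketch of caching-proxy request handling in Python. The TTL, the blocklist entries, and the decision order (filter first, then cache, then origin fetch) are illustrative assumptions rather than any specific product's behavior.

```python
# Minimal caching-proxy logic: filter, check cache, then fetch from origin.
import time
import urllib.parse
import urllib.request

CACHE_TTL = 300                      # seconds a cached page stays fresh (assumed policy)
BLOCKLIST = {"badsite.example"}      # hosts clients may not reach (illustrative)
cache = {}                           # url -> (fetch_time, body)

def handle_request(url: str) -> bytes:
    host = urllib.parse.urlsplit(url).hostname
    # 1. Apply the filtering rules before anything else.
    if host in BLOCKLIST:
        return b"403 Forbidden: blocked by policy"
    # 2. Serve from the local cache while the copy is still fresh;
    #    a cache hit means no traffic leaves for the Internet.
    if url in cache:
        fetched_at, body = cache[url]
        if time.time() - fetched_at < CACHE_TTL:
            return body
    # 3. Cache miss: fetch from the origin server and store the result.
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    cache[url] = (time.time(), body)
    return body
```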
A caching proxy server that does not require any client-side configuration is called a transparent proxy server; the client is unaware that a proxy is in use. Transparent proxies are also called inline, intercepting, or forced proxies. The proxy redirects client requests without modifying them. Transparent proxy servers are implemented primarily to reduce bandwidth usage and client configuration overhead in large networks, so they are found in large enterprise organizations and ISPs. Because transparent proxies impose no client overhead and can filter content, they are also ideal for use in schools and libraries.
Most proxy servers today are web application proxies that support protocols such as HTTP and HTTPS. An application proxy server is used when clients and the server cannot directly connect because of some type of incompatibility issue, such as security authentication. Application proxies must support the application for which they are performing the proxy function and do not typically encrypt data. Multipurpose proxy servers, on the other hand, also known as universal application-level gateways, run on various operating systems (such as UNIX, Windows, and Macintosh), allow multiple protocols to pass through (such as HTTP, FTP, NNTP, SMTP, IMAP, LDAP, and DNS), and can convert between IPv4 and IPv6 addresses. These proxies can be used for caching, converting pass-through traffic, and handling access control; they are not restricted to a certain application or protocol.
Depending on the network size and content requirements, either a forward or reverse proxy is used. Forward and reverse proxies add a layer of security to the network by controlling traffic to and from the Internet. Both types of proxy servers are used as an intermediary for requests between source and destination hosts. A forward proxy controls traffic originating from clients on the internal network that is destined for hosts on the Internet. Because client requests are required to pass through the proxy before they are permitted to access Internet resources, forward proxy servers are primarily used to enforce security on internal client computers and are often used in conjunction with a firewall. Forward proxies can also be implemented for anonymity because they do not allow direct client access to the Internet.
Reverse proxy servers do just the opposite. A reverse proxy is a server-side concept for caching static HTTP content when the server accepts requests from external Internet clients. The primary purpose of a reverse proxy is to increase the efficiency and scalability of the web server by providing load balancing services. Full reverse proxies are capable of deep content inspection and often are implemented as a method for enforcing web application security and mitigating data leaks.
Proxy servers are used for a variety of reasons, so their placement depends on usage. You can place proxy servers between the private network and the Internet for Internet connectivity or internally for web content caching. If the organization is using the proxy server for both Internet connectivity and web content caching, you should place the proxy server between the internal network and the Internet, with access for users who are requesting the web content. In some proxy server designs, the proxy server is placed in parallel with IP routers. This design allows for network load balancing by forwarding all HTTP and FTP traffic through the proxy server and all other IP traffic through the router.
Every proxy server in your network must have at least one network interface. Proxy servers with a single network interface can provide web content caching and IP gateway services. To provide Internet connectivity, you must specify two or more network interfaces for the proxy server.
Load Balancers
Network load balancers are reverse proxy servers configured in a cluster to provide scalability and high availability.
Load balancing distributes IP traffic across multiple copies of a TCP/IP service, such as a web server, each running on a host within the cluster. It is used for enterprise-wide services with high traffic requirements, such as websites, FTP and media-streaming services, and content delivery networks, as well as for hosted applications that use thin-client architectures, such as Windows Terminal Services or Remote Desktop Services.
As enterprise traffic increases, network administrators can simply plug another server into the cluster. If server or application failure occurs, a load balancer can provide automatic failover to ensure continuous availability.
Load balancers distribute traffic by means of scheduling algorithms. A scheduling strategy determines which tasks can be executed in parallel and where to execute them. The following algorithms are common (a sketch of three of them follows this list):
- Round-robin: Traffic is sent in a sequential, circular pattern to each node of a load balancer.
- Random: Traffic is sent to randomly selected nodes.
- Least connections: Traffic is sent to the node with the fewest open connections.
- Weighted round-robin: Traffic is sent in a circular pattern to each node of a load balancer, based on the assigned weight number.
- Weighted least connections: Traffic is sent to the node with the fewest open connections, based on the assigned weight number.
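Here is a minimal Python sketch of the round-robin, weighted round-robin, and least connections algorithms. The node addresses and weight numbers are illustrative assumptions; a production load balancer would also track node health and connection teardown.

```python
# Three common load-balancing scheduling algorithms.
import itertools

nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
weights = {"10.0.0.1": 3, "10.0.0.2": 1, "10.0.0.3": 1}  # assumed capacities
open_connections = {n: 0 for n in nodes}

_rr = itertools.cycle(nodes)

def round_robin() -> str:
    # Sequential, circular selection: 1, 2, 3, 1, 2, 3, ...
    return next(_rr)

# A node appears in the cycle once per unit of weight, so heavier
# (more capable) nodes receive proportionally more traffic.
_wrr = itertools.cycle([n for n in nodes for _ in range(weights[n])])

def weighted_round_robin() -> str:
    return next(_wrr)

def least_connections() -> str:
    # Send traffic to the node currently serving the fewest connections.
    return min(nodes, key=lambda n: open_connections[n])

# Usage: pick a node for each new session and record the connection.
for _ in range(4):
    node = least_connections()
    open_connections[node] += 1
print(open_connections)  # {'10.0.0.1': 2, '10.0.0.2': 1, '10.0.0.3': 1}
```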
Each method works best in different situations. When the servers in the pool have identical equipment and capacity, the round-robin, random, and least connections algorithms work well. When the servers have disproportionate resources, such as processing power, storage, or RAM, a weighted algorithm lets the servers with the most resources carry a proportionally larger share of the load.
Session affinity is a method in which all requests in a session are sent to a specific application server by overriding the load balancing algorithm. Session affinity is also called a sticky session. This ensures that all requests from the user during the session are sent to the same instance. Session affinity enhances application performance by using in-memory caching and cookies to track session information.
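As a rough illustration, the sketch below pins a session to a node with a cookie; the cookie name LB_NODE and the node addresses are assumptions made for the example.

```python
# Minimal session-affinity (sticky session) logic.
import random

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def pick_node(cookies: dict) -> str:
    # If the client already carries an affinity cookie, honor it and
    # bypass the scheduling algorithm entirely.
    node = cookies.get("LB_NODE")
    if node in NODES:
        return node
    # First request of the session: schedule normally, then pin the
    # session to the chosen node via a cookie (returned as Set-Cookie).
    node = random.choice(NODES)
    cookies["LB_NODE"] = node
    return node

# Usage: every request carrying the same cookie jar reaches the same node.
jar = {}
assert pick_node(jar) == pick_node(jar)
```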
Some load balancers integrate IP load balancing and network intrusion prevention into one appliance. This provides failover capabilities in case of server failure, distribution of traffic across multiple servers, and integrated protection from network intrusions. Performance is also optimized for other IP services, such as Simple Mail Transfer Protocol (SMTP), Domain Name Service (DNS), Remote Authentication Dial-In User Service (RADIUS), and Trivial File Transfer Protocol (TFTP).
To mitigate risks associated with failure of the load balancers themselves, you can deploy two of them in what is called an active/passive or active/active configuration. In an active/passive configuration, all traffic is sent to the active server; the passive server is promoted to active if the active server fails or is taken down for maintenance. In an active/active configuration, two or more load balancers work together to distribute the load to the network servers. Because all the load balancers are active, each runs at close to full capacity; if one fails, the survivors must absorb its traffic, so the network slows and user sessions can time out. Virtual IPs (VIPs) are often implemented in the active/active configuration. A VIP has at least one physical server assigned to it, and a physical server can be assigned more than one VIP, usually differentiated by TCP or UDP port number. Using VIPs spreads traffic among the load balancing servers. VIPs are a connection-based workload balancing solution, so if an interface cannot handle the load, traffic bottlenecks and slows.
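The promotion decision in an active/passive pair reduces to a health probe, as in the sketch below. The addresses are illustrative, and a plain TCP connect stands in for the dedicated heartbeat links that real appliances use.

```python
# Minimal active/passive failover decision, using a TCP connect as the probe.
import socket

ACTIVE, PASSIVE = "10.0.1.10", "10.0.1.11"  # illustrative balancer pair

def is_healthy(host: str, port: int = 80, timeout: float = 1.0) -> bool:
    # Treat the unit as healthy if its service port accepts a connection.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def current_balancer() -> str:
    # All traffic goes to the active unit; the passive unit is promoted
    # only when the active one stops answering.
    return ACTIVE if is_healthy(ACTIVE) else PASSIVE
```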
Access Points
No network is complete without wireless access points. Most businesses provide wireless access for employees and guests alike. With this expected convenience comes security implications that must be addressed to keep the network safe from the vulnerabilities and attacks described in Chapter 2, “Attack Types.” This section covers basic access point types, configurations, and preventative measures an organization can implement to mitigate risk and reduce the attack surface.
Access Point Types
Wireless local-area network (WLAN) controllers are physical devices that communicate with each access point (AP) simultaneously. A centralized access controller (AC) is capable of providing management, configuration, encryption, and policy settings for WLAN access points. A controller-based WLAN design acts as a switch for wireless traffic and provides thin APs with configuration settings. Some ACs perform firewall, VPN, IDS/IPS, and monitoring functions.
The level of control and management options an AC needs to provide depends on the type of access points the organization implements. Three main types of wireless access points exist: fat, fit, and thin. Fat wireless access points are also sometimes called intelligent access points because they are all-inclusive: They contain everything needed to manage wireless clients, such as ACLs, quality of service (QoS) functions, VLAN support, and band steering. Fat APs can be used as standalone access points and do not need an AC. However, this capability makes them costly because they are built on powerful hardware and require complex software. A fit AP is a scaled-down version of a fat AP and uses an AC for control and management functions. A thin access point is nothing more than a radio and antenna controlled by a wireless switch. Thin access points are sometimes called intelligent antennas. In some instances, APs do not perform WLAN encryption; they merely transmit or receive the encrypted wireless frames. A thin AP has minimal functionality, so a controller is required. Thin APs are simple and do not require complex hardware or software.
Antenna Types, Placement, and Power
When designing wireless networks, configure antenna types, placement, and power output for maximum coverage and minimum interference. Four basic types of antennas are commonly used in 802.11 wireless networking applications: parabolic grid, Yagi, dipole, and vertical.
Wireless antennas are either omnidirectional or directional. Omnidirectional antennas radiate in a 360-degree pattern to provide the widest possible signal coverage; the antennas commonly found on APs are omnidirectional. Directional antennas concentrate the wireless signal in a specific direction, limiting the coverage area; the Yagi antenna is a common example.
The intended use determines the type of antenna required. When an organization wants to connect one building to another, a directional antenna is used. When an organization is adding Wi-Fi inside an office building or a warehouse, an omnidirectional antenna is used. When an organization wants to install Wi-Fi in an outdoor campus environment, a combination of both is used.
APs with factory-default omnidirectional antennas cover an area that is roughly circular and is affected by RF obstacles such as walls. When using this type of antenna, common practice is to place APs in central locations or divide an office into quadrants. Many APs use multiple-input, multiple-output (MIMO) or multiuser MIMO (MU-MIMO) antennas, which take advantage of multipath signal reflections. Ideally, locate the AP as close as possible to its antennas: The farther the signal has to travel across the cabling between the AP and the antenna, the more signal loss occurs. This loss, incurred as the signal travels between the wireless base unit and the antenna, is an important factor when deploying a wireless network, especially at higher power levels.
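The arithmetic behind these placement decisions is straightforward, as the following sketch shows. The 3-dB cable-loss figure is an assumed value (check the datasheet for the actual cable in use); the path-loss calculation is the standard free-space model.

```python
# Link-budget arithmetic for an AP with an external antenna.
import math

def eirp_dbm(tx_power_dbm: float, cable_loss_db: float,
             antenna_gain_dbi: float) -> float:
    # Cable between the AP and the antenna subtracts signal; keeping the
    # AP close to the antenna keeps cable_loss_db small.
    return tx_power_dbm - cable_loss_db + antenna_gain_dbi

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    # Standard free-space path loss (distance in km, frequency in MHz).
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Example: 20 dBm radio, 3 dB assumed coax loss, 5 dBi omni antenna,
# client 100 m away on 2.4-GHz channel 6 (2437 MHz).
signal = eirp_dbm(20, 3.0, 5.0) - free_space_path_loss_db(0.1, 2437)
print(f"received signal = {signal:.1f} dBm")  # about -58 dBm
```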
APs that require external antennas need additional consideration. You need to configure the antennas properly, consider what role the AP serves (AP or bridge), and consider where the antennas are placed. Mounting an antenna on the outside of a building, or placing the interface between the wired network and the transceiver in a corner, puts the network signal in an area where it can easily be intercepted. Even so, antenna placement should not be relied on as a security mechanism.
Professional site surveys for wireless network installations and proper AP placement are sometimes used to ensure coverage area and security concerns. Up-front planning takes more time and effort but can pay off in the long run, especially for large WLANs.
One of the principal requirements for wireless communication is that the transmitted wave reach the receiver with ample power to allow the receiver to distinguish the wave from background noise. An antenna that transmits too strongly raises security concerns: Strong omnidirectional Wi-Fi signals radiate far into neighboring areas, where they can be readily detected and intercepted. Minimizing transmission power reduces the chance that your data will leak. Companies such as Cisco and Nortel have implemented dynamic power controls in their products; these systems dynamically adjust the power output of individual access points to accommodate changing network conditions, helping ensure predictable wireless performance and availability.
Transmit power control is a mechanism used to prevent too much unwanted interference between different wireless networks. Adaptive transmit power control in 802.11 WLANs on a per-link basis helps increase network capacity and improves the battery life of Wi-Fi-enabled mobile devices.
Band direction and selection are also important parts of wireless access control management. The 2.4-GHz band used by older standards such as 802.11b/g is crowded and subject to both interference from other wireless devices and co-channel interference from other access points because it has only three nonoverlapping channels. The 5-GHz band, used by 802.11a and by newer standards such as 802.11n and 802.11ac, offers 23 nonoverlapping 20-MHz channels.
Cisco wireless LAN controllers can be configured for load balancing through band direction and band selection. Band direction allows client radios capable of operating on both the 2.4-GHz and 5-GHz bands to move to a 5-GHz access point for faster network transfers. In a Cisco AP, clients normally receive a 2.4-GHz probe response and attempt to associate with the AP before receiving a 5-GHz probe response. Band selection works by delaying the client's 2.4-GHz probe responses, steering the client toward the 5-GHz channels. Because 802.11n can use either 2.4 GHz or 5 GHz, the main purpose of band selection is to help 802.11n-capable dual-band clients select 5-GHz access points. Band selection can cause roaming delays and dropped calls, so it is not recommended on voice-enabled WLANs.
MAC Filter
Most wireless network routers and access points can filter devices based on their MAC address. The MAC address is a unique identifier for network adapters. MAC filtering is a security access control method in which the MAC address is used to determine access to the network. When MAC address filtering is used, only the devices with MAC addresses configured in the wireless router or access point are allowed to connect. MAC filtering permits and denies network access through the use of blacklists and whitelists. A blacklist is a list of MAC addresses that are denied access. A whitelist is a list of MAC addresses that are allowed access. Chapter 10, “Security Technologies,” discusses blacklisting and whitelisting in further detail.
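The filtering decision itself is simple list membership, as this sketch shows; the addresses are illustrative, and real APs store these lists in their firmware configuration.

```python
# Minimal MAC-filtering decision using a whitelist and a blacklist.
WHITELIST = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}  # allowed devices
BLACKLIST = {"de:ad:be:ef:00:01"}                       # denied devices

def may_associate(mac: str) -> bool:
    mac = mac.lower()          # normalize before comparing
    if mac in BLACKLIST:
        return False           # explicitly denied
    return mac in WHITELIST    # anything not whitelisted is denied

print(may_associate("00:1A:2B:3C:4D:5E"))  # True
print(may_associate("aa:bb:cc:dd:ee:ff"))  # False
```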
MAC filtering gives a wireless network some additional protection, but MAC addresses can be spoofed. An attacker can capture a valid MAC address from network traffic with a tool such as airodump-ng (part of the Aircrack-ng suite) and then spoof his or her own MAC address to match it. Once an attacker knows a MAC address that is outside the blacklist or on the whitelist, MAC filtering is almost useless.
Disable SSID Broadcast
A service set identifier (SSID) is used to identify wireless access points on a network. The SSID is transmitted so that wireless stations searching for a network connection can find the AP. By default, SSID broadcast is enabled, which essentially makes your AP visible to any device searching for a wireless connection. When you disable this feature, the SSID configured on the client must match the SSID of the AP; otherwise, the client cannot connect.
To improve the security of your network, change the SSIDs on your APs. Using the default SSID poses a security risk even if the AP is not broadcasting it. When changing default SSIDs, do not set the SSID to reflect your company's name, divisions, products, or address; doing so just makes you an easy target for attacks such as war driving and war chalking. War driving is the act of searching for Wi-Fi wireless networks from a moving vehicle using a portable computer or other mobile device. War chalking involves drawing symbols in public places to advertise an open Wi-Fi network. Keep in mind that if an SSID name is enticing enough, it might attract hackers.
Turning off SSID broadcast does not effectively protect the network from attacks. Tools such as Kismet enable nonbroadcasting networks to be discovered almost as easily as broadcasting networks. From a security standpoint, securing a wireless network using protocols that are designed specifically to address wireless network threats is better than disabling SSID broadcast.