Networking Architecture
Explain basic corporate and datacenter network architecture.
The networking devices discussed previously in this chapter are used to build networks. For this particular objective, CompTIA wants you to be aware of some of the architecture and design elements of the network. Whether you’re putting together a datacenter or a corporate office, planning should be involved, and no network should be allowed to haphazardly sprout without management and oversight.
Three-Tiered Architecture
To improve system performance, as well as to improve security, it is possible to implement a tiered systems model. This is often referred to as an n-tiered model, where n represents the number of tiers involved.
If we were looking at a database, for example, with a one-tier model, or single-tier environment, the database and the application exist on a single system. This is common on desktop systems running a standalone database. Early UNIX implementations also worked in this manner; each user would sign on to a terminal and run a dedicated application that accessed the data.

With a two-tier architecture, the client workstation or system runs an application that communicates with the database running on a different server. This common implementation works well for many applications.

With a three-tier architecture, security is enhanced. In this model, the end user is effectively isolated from the database by the introduction of a middle-tier server. This server accepts requests from clients, evaluates them, and then sends them on to the database server for processing. The database server sends the data back to the middle-tier server, which then sends it to the client system. This approach, now common in business, adds both capability and complexity.
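To make the three-tier model concrete, here is a minimal Python sketch of the database example; the table, the validation rule, and all names are invented for illustration, not taken from any particular product:

```python
import sqlite3

# --- Data tier: a standalone database (in-memory for this sketch) ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
db.execute("INSERT INTO accounts VALUES (1, 500.0)")

# --- Middle tier: accepts client requests, evaluates them, and only
# --- then forwards approved requests to the database server.
def get_balance(client_request):
    account_id = client_request.get("account_id")
    # The middle tier enforces policy; the client never talks to the DB.
    if not isinstance(account_id, int):
        raise ValueError("rejected at middle tier: bad account id")
    row = db.execute(
        "SELECT balance FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    return {"balance": row[0] if row else None}

# --- Presentation tier: the client sees only the middle tier's answer.
print(get_balance({"account_id": 1}))  # {'balance': 500.0}
```

The client (presentation tier) never opens a database connection of its own; every request is evaluated by the middle tier first, which is precisely where the security benefit comes from.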
While the examples given are of database tiering, this same approach can be taken with devices such as routers, switches, and other servers. In a three-tiered model of routing and switching, the three tiers are the core, the distribution/aggregation layer, and the access/edge layer. We'll walk through each of these layers in turn.
Core Layer
The core layer is the backbone: the place where switching and routing meet (switching ends, routing begins). It provides high-speed, highly redundant forwarding services to move packets between distribution-layer devices in different regions of the network. The core switches and routers would be the most powerful in the enterprise (in terms of their raw forwarding power) and would be used to manage the highest-speed connections (such as 100 Gigabit Ethernet). Core switches also incorporate internal firewall capability as part of their feature set, helping with segmentation and control of traffic moving from one part of the network to another.
Distribution/Aggregation Layer
The distribution layer, or aggregation layer (sometimes called the workgroup layer), is the layer in which management takes place. This is the place where QoS policies are managed, filtering is done, and routing takes place. Distribution layer devices can be used to manage individual branch-office WAN connections, and they are considered smart devices (usually offering a larger feature set than the switches used at the access/edge layer). Low latency and large MAC address tables are important features for switches used at this level because they aggregate traffic from thousands of users rather than hundreds (as access/edge switches do).
Access/Edge Layer
Switches that allow end users and servers to connect to the enterprise are called access switches or edge switches, and the layer where they operate in the three-tiered model is known as the access layer, or edge layer. Devices at this layer may or may not provide Layer 3 switching services; the traditional focus is on minimizing the cost of each provisioned Ethernet port (known as “cost-per-port”) and providing high port density. Because the focus is on connecting client nodes, such as workstations, to the network, this is sometimes called the desktop layer.
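As a rough illustration of how the three tiers fit together, the following sketch (with hypothetical switch names and uplinks) models a frame's path from one access switch up toward the core and back down; note that traffic between neighbors on the same distribution switch never needs to touch the core:

```python
# Hypothetical three-tier topology: each access switch uplinks to a
# distribution switch, and each distribution switch uplinks to the core.
uplink = {
    "access-1": "dist-1", "access-2": "dist-1",
    "access-3": "dist-2", "access-4": "dist-2",
    "dist-1": "core-1", "dist-2": "core-1",
}

def path(src_access, dst_access):
    """Walk up from the source to the first common ancestor, then down."""
    up = [src_access]
    while up[-1] in uplink:
        up.append(uplink[up[-1]])
    down = [dst_access]
    while down[-1] in uplink and down[-1] not in up:
        down.append(uplink[down[-1]])
    junction = down[-1]  # first switch the two branches share
    return up[: up.index(junction) + 1] + list(reversed(down[:-1]))

print(path("access-1", "access-2"))  # stays at the distribution layer
print(path("access-1", "access-4"))  # must cross the core
```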
Software-Defined Networking
Software-defined networking (SDN) is a dynamic approach to computer networking intended to allow administrators to get around the static limitations of physical architecture associated with traditional networks. They can do so through the implementation of technologies such as the Cisco Systems Open Network Environment.
The goal of SDN is not only to add dynamic capabilities to the network but also to reduce IT costs through implementation of cloud architectures. SDN combines network and application services into centralized platforms that can automate provisioning and configuration of the entire infrastructure.
The SDN architecture, from the top down, consists of the application layer, control layer, and infrastructure layer. CompTIA also adds the management plane as an objective, and a discussion of each of these components follows.
Application Layer
The application layer is the top of the SDN stack, and this is where load balancers, firewalls, intrusion detection, and other standard network applications are located. While a standard (non-SDN) network would use a specialized appliance for each of these functions, with an SDN network, an application is used in place of a physical appliance.
Control Layer
The control layer is the place where the SDN controller resides; the controller is software that manages policies and the flow of traffic throughout the network. This controller can be thought of as the brains behind SDN, making it all possible. Applications communicate with the controller through a northbound interface, and the controller communicates with the switches through southbound interfaces.
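A highly simplified sketch of that arrangement might look like the following; the class and method names are invented for illustration and do not correspond to any real controller's API:

```python
class SdnController:
    """Toy controller: policy comes in northbound, flow rules go out southbound."""

    def __init__(self, switches):
        self.switches = switches                     # southbound-managed devices
        self.flow_tables = {sw: [] for sw in switches}

    # Northbound interface: applications express intent, not device detail.
    def request_policy(self, match, action):
        rule = {"match": match, "action": action}
        self._push_southbound(rule)

    # Southbound interface: the controller programs every switch.
    def _push_southbound(self, rule):
        for sw in self.switches:
            self.flow_tables[sw].append(rule)

controller = SdnController(["edge-sw-1", "edge-sw-2"])
# A firewall application asks (northbound) to drop telnet everywhere.
controller.request_policy(match={"dst_port": 23}, action="drop")
print(controller.flow_tables["edge-sw-1"])
```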
Infrastructure Layer
The physical switch devices themselves reside at the infrastructure layer. This is also known as the data plane (or forwarding plane) when breaking the architecture into “planes” because these are the devices that actually forward the traffic, following the routing and topology decisions handed down from the control layer.
Management Plane
With SDN, the management plane allows administrators to see their devices and traffic flows and react as needed to manage data plane behavior. This can be done automatically through configuration apps that can, for example, add more bandwidth if it looks as if edge components are getting congested. The management plane manages and monitors processes across all layers of the network stack.
Spine and Leaf
In an earlier section, we discussed the possibility of tiered models. A two-tier model that Cisco promotes for switches is the spine and leaf model. In this model, the spine is the backbone of the network, just as it would be in a skeleton, and it is responsible for interconnecting all the leaf switches in a full-mesh topology. Thanks to the mesh, every leaf is connected to every spine, and the path is chosen at random so that the traffic load is evenly distributed among the top-tier switches. If one of the switches at the top tier were to fail, there would be only a slight degradation in performance throughout the datacenter.
Because of the design of this model, no matter which leaf switch is connected to a server, the traffic always has to cross the same number of devices to get to another server. This keeps latency at a steady level.
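Both properties (random spine selection spreading the load, and a constant hop count between any pair of leaves) can be seen in a small sketch with made-up switch names:

```python
import random

spines = ["spine-1", "spine-2"]
leaves = ["leaf-1", "leaf-2", "leaf-3", "leaf-4"]

def leaf_to_leaf_path(src_leaf, dst_leaf):
    # Full mesh: every leaf reaches every spine, so the spine is chosen
    # at random to spread traffic across the top-tier switches.
    spine = random.choice(spines)
    return [src_leaf, spine, dst_leaf]

# No matter which two leaves we pick, the path length is identical --
# this is what keeps latency at a steady level.
for a, b in [("leaf-1", "leaf-2"), ("leaf-1", "leaf-4")]:
    p = leaf_to_leaf_path(a, b)
    print(p, "hops:", len(p) - 1)
```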
When top-of-rack (ToR) switching is incorporated into the network architecture, the servers located within the same rack are connected to an in-rack network switch, which is in turn connected to aggregation switches (usually via fiber cabling). The big advantage of this setup is that the devices within each rack can be connected with cheaper copper cabling, and only the cables running to each rack need to be fiber.
Traffic Flows
Traffic flows within a datacenter typically occur within the framework of one of two models: East-West or North-South. The names may not be the most intuitive, but East-West traffic means that data is flowing among devices within a specific datacenter, while North-South traffic means that data is flowing into the datacenter (from a system physically outside it) or out of the datacenter (to a system physically outside it).
The naming convention comes from the way diagrams are drawn: data staying within the datacenter is traditionally drawn on the same horizontal line (East-to-West), while data leaving or entering is typically drawn on a vertical line (North-to-South). With the increase in virtualization being implemented at so many levels, the East-West traffic has increased in recent years.
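In practice, classifying a flow comes down to checking whether both endpoints sit inside the datacenter's address space. Here is a quick sketch, assuming (purely for illustration) that the datacenter owns the 10.0.0.0/8 prefix:

```python
from ipaddress import ip_address, ip_network

# Assume the datacenter owns this prefix (illustrative only).
DATACENTER_NET = ip_network("10.0.0.0/8")

def classify_flow(src_ip, dst_ip):
    src_inside = ip_address(src_ip) in DATACENTER_NET
    dst_inside = ip_address(dst_ip) in DATACENTER_NET
    if src_inside and dst_inside:
        return "East-West"     # both endpoints within the datacenter
    return "North-South"       # traffic entering or leaving it

print(classify_flow("10.1.1.5", "10.2.9.7"))     # East-West
print(classify_flow("10.1.1.5", "203.0.113.9"))  # North-South
```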
Datacenter Location Types
One of the biggest questions a network administrator today can face is where to store the data. At one point in time, this question was a no-brainer: servers were kept close at hand so they could be rebooted and serviced regularly. Today, however, that choice is not such an easy one. The cloud, virtualization, software-defined networking, and many other factors have combined to offer several options, and cost is often one of the biggest factors in deciding among them.
An on-premises datacenter can be thought of as the old, traditional approach: the data and the servers are kept in house. One alternative to this is colocation. In this arrangement, several companies put their “servers” in a shared space. The advantage of this approach is that by renting space in a third-party facility, it is often possible to gain advantages in connectivity speed and, possibly, technical support. When describing this approach, we placed “servers” in quotation marks because the provider will often offer virtual servers rather than dedicated machines for each client, thus enabling companies to grow without a reliance on physical hardware.
Incidentally, any remote and autonomous office, regardless of the number of users who may work from it, is known as a branch office. This point is important because it may be an easy decision to keep the datacenter on-premises at headquarters, but network administrators need to factor in how to best support branch offices as well. The situation could easily be that while on-premises works best at headquarters, all branch offices are supported by colocation sites.
Storage-Area Networks
When it comes to data storage in the cloud, encryption is one of the best ways to protect it (keeping it from being of value to unauthorized parties), and VPN routing and forwarding can help. Backups should be performed regularly (and encrypted and stored in safe locations), and access control should be a priority.
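As a minimal sketch of the encryption step, the following uses the third-party Python cryptography package (its Fernet recipe) to encrypt a backup before it leaves the premises; key management, which any real deployment needs, is deliberately omitted:

```python
# Minimal sketch of encrypting a backup before it leaves the premises.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store this key securely, NOT with the data
fernet = Fernet(key)

backup = b"customer records ..."   # placeholder backup payload
ciphertext = fernet.encrypt(backup)

# Only the ciphertext is uploaded to the cloud provider; without the key,
# the stored data is of no value to unauthorized parties.
assert fernet.decrypt(ciphertext) == backup
```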
The consumer retains the ultimate responsibility for compliance. Per NIST SP 800-144,
The main issue centers on the risks associated with moving important applications or data from within the confines of the organization’s computing center to that of another organization (i.e., a public cloud), which is readily available for use by the general public. The responsibilities of both the organization and the cloud provider vary depending on the service model. Reducing cost and increasing efficiency are primary motivations for moving towards a public cloud, but relinquishing responsibility for security should not be. Ultimately, the organization is accountable for the choice of public cloud and the security and privacy of the outsourced service.
For more information, see http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-144.pdf.
Shared storage can be done on storage-area networks (SANs), network-attached storage (NAS), and so on; the virtual machine sees only a “physical disk.” With clustered storage, you can use multiple devices to increase performance. A handful of technologies exist in this realm, and the following are those that you need to know for the Network+ exam.
iSCSI
The Small Computer Systems Interface (SCSI) standard has long been the language of storage. Internet Small Computer Systems Interface (iSCSI) extends it across Ethernet, allowing SCSI commands to be carried over IP.
Logical unit numbers (LUNs) came from the SCSI world and carry over, acting as unique identifiers for devices. Both NAS and SAN use “targets” that hold up to eight devices.
Using iSCSI in a virtual environment gives users the benefits of a file system without the difficulty of setting up Fibre Channel. Because iSCSI works both at the hypervisor level and in the guest operating system, the partition-size rules of the guest OS apply rather than those of the virtualization layer (which are usually more restrictive).
The disadvantage of iSCSI is that users can run into IP-related problems if configuration is not carefully monitored.
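To make targets and LUNs a bit more concrete, the toy model below enforces the eight-device limit mentioned earlier; the IQN shown follows the standard iSCSI naming format, but the specific name is invented:

```python
class IscsiTarget:
    """Toy model of an iSCSI target addressing devices by LUN."""
    MAX_LUNS = 8  # classic SCSI targets hold up to eight devices

    def __init__(self, iqn):
        self.iqn = iqn     # format: iqn.<yyyy-mm>.<reversed-domain>:<name>
        self.luns = {}

    def attach(self, lun, device):
        if lun in self.luns or len(self.luns) >= self.MAX_LUNS:
            raise ValueError("LUN already in use or target full")
        self.luns[lun] = device

target = IscsiTarget("iqn.2024-01.com.example:storage.target01")
target.attach(0, "disk-a")
target.attach(1, "disk-b")
print(target.iqn, sorted(target.luns))
```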
Fibre Channel and FCoE
Instead of using an older technology and trying to adhere to legacy standards, Fibre Channel (FC) is an option that provides a higher level of performance than anything else. It uses the Fibre Channel Protocol (FCP) to transport SCSI commands over the FC network, and Fibre Channel over Ethernet (FCoE) can be used in high-speed (10 Gbps and higher) implementations.
The big advantage of Fibre Channel is its scalability. FCoE encapsulates FC frames within the Ethernet portions of connectivity, making it easy to add to an existing network. In effect, FCoE extends the scalability and efficiency of Fibre Channel onto Ethernet.
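The encapsulation itself is easy to picture: an entire FC frame rides as the payload of an ordinary Ethernet frame, identified by the registered FCoE EtherType (0x8906). The sketch below models just that layering, with the frame fields heavily simplified:

```python
from dataclasses import dataclass

FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE

@dataclass
class FcFrame:
    source_id: str       # FC source address (S_ID)
    dest_id: str         # FC destination address (D_ID)
    payload: bytes       # SCSI command/data carried by FCP

@dataclass
class EthernetFrame:
    src_mac: str
    dst_mac: str
    ethertype: int
    payload: FcFrame     # the entire FC frame is encapsulated

def encapsulate(fc_frame, src_mac, dst_mac):
    """Wrap an FC frame for transport over the existing Ethernet network."""
    return EthernetFrame(src_mac, dst_mac, FCOE_ETHERTYPE, fc_frame)

fc = FcFrame("0x010203", "0x040506", b"SCSI READ ...")
print(encapsulate(fc, "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"))
```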
Network-Attached Storage
Storage is always a big issue, and the best answer is often a storage-area network. Unfortunately, a SAN can be costly and difficult to implement and maintain. That is where network-attached storage (NAS) comes in. NAS is easier to implement than a SAN and uses TCP/IP. It offers file-level access, so a client sees the shared storage as a file server.