VM Storage Policies on VSAN
- Introducing Storage Policy-Based Management in a VSAN Environment
- VASA Vendor Provider
- VSAN Storage Providers: Highly Available
- VM Storage Policies
- Summary
VMware architects explain the concept of storage policy-based management.
In vSphere 5.0, VMware introduced a feature called profile-driven storage, which allows vSphere administrators to easily select the correct datastore on which to deploy virtual machines (VMs). The selection of the datastore is based on the capabilities of that datastore or, to be more specific, on the underlying capabilities of the storage array that have been assigned to it. Examples of these capabilities are RAID level, thin provisioning, deduplication, encryption, and replication. The capabilities are completely dependent on the storage array.
Throughout the life cycle of the VM, profile-driven storage allows the administrator to check whether its underlying storage is still compatible. In other words, does the datastore on which the VM resides still have the correct capabilities for this VM? This is useful because if the VM is migrated to a different datastore for whatever reason, the administrator can verify that it has moved to a datastore that continues to meet its requirements. Even if the VM is migrated without any attention being paid to the capabilities of the destination storage, the administrator can check the compliance of the VM storage from the vSphere client at any time and take corrective action if the VM no longer resides on a datastore that meets its storage requirements (in other words, move it back to a compliant datastore).
However, VM storage policies and storage policy-based management (SPBM) have taken this a step further. In the previous paragraph, we described a sort of storage quality of service driven by the storage. All VMs residing on the same datastore would inherit the capabilities of the datastore. With VSAN, the storage quality of service no longer resides with the datastore; instead, it resides with the VM and is enforced by the VM storage policy associated with the VM and the VM disks (VMDKs). Once the policy is pushed down to the storage layer, in this case VSAN, the underlying storage is then responsible for creating storage for the VM that meets the requirements placed in the policy.
Introducing Storage Policy-Based Management in a VSAN Environment
VSAN leverages this approach to VM deployment, using an updated method called storage policy-based management (SPBM). All VMs deployed to a VSAN datastore must use a VM storage policy; if one is not specifically created, a default policy associated with the datastore is assigned to the VM. The VM storage policy contains one or more VSAN capabilities, which this chapter describes. After the VSAN cluster has been configured and the VSAN datastore has been created, VSAN surfaces a set of capabilities to vCenter Server. These capabilities, which are surfaced by the vSphere APIs for Storage Awareness (VASA) storage provider (more on this shortly) when the cluster is configured successfully, are used to set the availability, capacity, and performance policies on a per-VM (and per-VMDK) basis when that VM is deployed on the VSAN datastore.
As previously mentioned, this differs significantly from the VM storage profile mechanism that we had in vSphere in the past. With the VM storage profile feature, the capabilities were associated with datastores and were used for VM placement decisions. Now, through SPBM, administrators create a policy defining the storage requirements for the VM, and this policy is pushed out to the storage, which in turn instantiates per-VM (and per-VMDK) storage for virtual machines. In vSphere 6.0, VMware introduced Virtual Volumes (VVols). Storage policy-based management for VMs using VVols is very similar to storage policy-based management for VMs deployed on VSAN. In other words, administrators no longer need to carve up logical unit numbers (LUNs) or volumes for virtual machine storage. Instead, the underlying storage infrastructure instantiates the virtual machine storage based on the contents of the policy. What we have now with SPBM is a mechanism whereby we can specify the requirements of the VM and its VMDKs. These requirements are then used to create a policy, which is sent to the storage layer [in the case of VVols, a SAN or network-attached storage (NAS) array], asking it to build a storage object for this VM that meets the policy requirements. In fact, a VM can have multiple policies associated with it, with different policies for different VMDKs.
To explain capabilities, policies, and profiles: capabilities are what the underlying storage can provide in terms of availability, performance, and reliability, and they are visible in vCenter Server. The capabilities are then used to create a VM storage policy (or just policy for short). A policy may contain one or more capabilities, and these capabilities reflect the requirements of your VM or the application running in the VM. Previous versions of vSphere used the term profiles, but these are now known as policies.
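To make this relationship concrete, the short Python sketch below models a policy as a named collection of capability requirements. It is purely illustrative: the capability names are our own shorthand, not the actual rule identifiers that the SPBM APIs expose in vCenter Server.

    # A minimal sketch of the capability/policy relationship described above.
    # The capability names are illustrative shorthand only, not the actual
    # rule identifiers exposed by the SPBM APIs in vCenter Server.

    from dataclasses import dataclass, field

    @dataclass
    class StoragePolicy:
        name: str
        # Each entry is a requirement placed on the storage layer.
        capabilities: dict = field(default_factory=dict)

    # A policy reflecting the requirements of a VM (or an individual VMDK).
    gold = StoragePolicy(
        name="Gold",
        capabilities={
            "numberOfFailuresToTolerate": 1,  # availability requirement
            "stripeWidth": 2,                 # performance requirement
            "objectSpaceReservation": 0,      # thin provisioned (default)
        },
    )

    # A VM can be associated with multiple policies: one per VMDK if desired.
    print(gold.name, gold.capabilities)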
Deploying VMs on a VSAN datastore is very different from previous approaches in vSphere. In the past, an administrator would present a LUN or volume to a group of ESXi hosts and, in the case of block storage, partition, format, and build a VMFS file system on it to create a datastore for storing VM files. In the case of network-attached storage (NAS), a network file system (NFS) volume is mounted to the ESXi hosts, and once again VMs are created on the datastore. In neither case is there a way to specify a RAID-0 stripe width for these VMDKs, nor any way to specify a RAID-1 replica for the VMDK.
In the case of VSAN (and now VVols), the approach to deploying VMs is quite different. Consideration must be given to the availability, performance, and reliability factors of the application running in the VM. Based on these requirements, an appropriate VM storage policy must be created and associated with the VM during deployment.
There were five capabilities in the initial release of VSAN, as illustrated in Figure 4.1.
Figure 4.1 VSAN capabilities that can be used for VM storage policies
In VSAN 6.2, the number of capabilities was increased to support a number of new features. These include the ability to implement RAID-5 and RAID-6 configurations for virtual machine objects deployed on an all-flash VSAN configuration, alongside the existing RAID-0 and RAID-1 configurations. RAID-5 and RAID-6 still allow VMs to tolerate one or two failures, respectively, while consuming considerably less space than a RAID-1 configuration tolerating the same number of failures. There is also a new policy setting for software checksum. Checksum is enabled by default, but it can be disabled through policies if an administrator wishes to do so. The last capability relates to quality of service and provides the ability to limit the number of input/output operations per second (IOPS) for a particular object.
You can select the capabilities when a VM storage policy is created. Note that certain capabilities are applicable only to hybrid VSAN configurations (e.g., flash read cache reservation), while others are applicable only to all-flash VSAN configurations (e.g., failure tolerance method set to capacity, which implements RAID-5/6).
VM storage policies are essential in VSAN deployments because they define how a VM is deployed on a VSAN datastore. Using VM storage policies, you can define capabilities such as the number of RAID-0 stripe components per VMDK or the number of RAID-1 mirror copies of a VMDK. If an administrator wants a VM to tolerate one failure but does not want to consume as much capacity as a RAID-1 mirror, a RAID-5 configuration can be used. This requires a minimum of four hosts in the cluster and implements a distributed parity mechanism across the storage of all four hosts. If this requirement were implemented with RAID-1, the amount of capacity consumed would be 200% of the size of the VMDK; implemented with RAID-5, it is 133% of the size of the VMDK.
Similarly, if an administrator wants a VM to tolerate two failures using a RAID-1 mirroring configuration, three copies of the VMDK are needed, meaning the amount of capacity consumed would be 300% of the size of the VMDK. With a RAID-6 implementation, a double parity is implemented, which is also distributed across the hosts; this requires a minimum of six hosts in the cluster. RAID-6 also allows a VM to tolerate two failures, but consumes capacity equivalent to only 150% of the size of the VMDK.
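The space overheads quoted above can be expressed as a simple calculation. The following Python sketch (our own helper, not any VMware tooling) reproduces the 200%/133% and 300%/150% figures for a given VMDK size.

    # Capacity consumed by a VMDK under the protection schemes described above.
    # RAID-1 with FTT=n keeps n+1 full copies; RAID-5 is a 3+1 (data+parity)
    # layout across four hosts; RAID-6 is 4+2 across six hosts.

    def capacity_consumed_gb(vmdk_size_gb, ftt, method="RAID-1"):
        if method == "RAID-1":
            return vmdk_size_gb * (ftt + 1)      # 200% for FTT=1, 300% for FTT=2
        if method == "RAID-5/6" and ftt == 1:
            return vmdk_size_gb * 4 / 3          # RAID-5: 133%
        if method == "RAID-5/6" and ftt == 2:
            return vmdk_size_gb * 6 / 4          # RAID-6: 150%
        raise ValueError("RAID-5/6 requires FTT of 1 or 2")

    for ftt, method in [(1, "RAID-1"), (1, "RAID-5/6"), (2, "RAID-1"), (2, "RAID-5/6")]:
        print(ftt, method, round(capacity_consumed_gb(100, ftt, method)))
    # 1 RAID-1   200 GB (200% of a 100 GB VMDK)
    # 1 RAID-5/6 133 GB (133%)
    # 2 RAID-1   300 GB (300%)
    # 2 RAID-5/6 150 GB (150%)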
Figure 4.2 shows the new policies introduced in VSAN 6.2.
Figure 4.2 New VSAN capabilities
The sections that follow highlight where you should use these capabilities when creating a VM storage policy and when to tune these values to something other than the default. Remember that a VM storage policy will contain one or more capabilities.
In the initial release of VSAN, five capabilities were available for selection as part of the VM storage policy. In VSAN 6.2, as previously highlighted, additional capabilities were introduced. As an administrator, you can decide which of these capabilities are added to the policy, but this is, of course, dependent on the requirements of your VM; for example, what performance and availability requirements does the VM have? The capabilities are as follows:
- Number of failures to tolerate
- Number of disk stripes per object
- Failure tolerance method
- IOPS limit for object
- Disable object checksum
- Flash read cache reservation (hybrid configurations only)
- Object space reservation
- Force provisioning
The sections that follow describe the VSAN capabilities in detail.
Number of Failures to Tolerate
In this section, number of failures to tolerate is described with failure tolerance method left at its default value, RAID-1 (performance). Later, we describe the different behavior when failure tolerance method is set to RAID-5/6 (capacity).
This capability sets a requirement on the storage object to tolerate at least n number of failures in the cluster. This is the number of concurrent host, network, or disk failures that may occur in the cluster and still ensure the availability of the object. When the failure tolerance method is set to its default value of RAID-1, the VM’s storage objects are mirrored; however, the mirroring is done across ESXi hosts, as shown in Figure 4.3.
Figure 4.3 Number of failures to tolerate results in a RAID-1 configuration
When this capability is set to a value of n, it specifies that the VSAN configuration must contain at least n + 1 replicas (copies of the data); this also implies that at least 2n + 1 hosts are required in the cluster.
Note that this requirement will create a configuration for the VM objects that may also contain an additional number of witness components being instantiated to ensure that the VM remains available even in the presence of up to number of failures to tolerate concurrent failures (see Table 4.1). Witnesses provide a quorum when failures occur in the cluster or a decision has to be made when a split-brain situation arises. These witnesses will be discussed in much greater detail later in the book, but suffice it to say that witness components play an integral part in maintaining VM availability during failures and maintenance tasks.
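The replica and host counts follow directly from the 2n + 1 rule described above; the small Python helper below, which is ours and purely illustrative, makes the relationship explicit for the RAID-1 case.

    # For RAID-1 (mirroring), tolerating n failures requires n+1 replicas of the
    # data plus witness components, which implies at least 2n+1 hosts in the cluster.

    def raid1_requirements(failures_to_tolerate):
        replicas = failures_to_tolerate + 1
        minimum_hosts = 2 * failures_to_tolerate + 1
        return replicas, minimum_hosts

    for ftt in (1, 2, 3):
        replicas, hosts = raid1_requirements(ftt)
        print(f"FTT={ftt}: {replicas} replicas, at least {hosts} hosts")
    # FTT=1: 2 replicas, at least 3 hosts
    # FTT=2: 3 replicas, at least 5 hosts
    # FTT=3: 4 replicas, at least 7 hosts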
Figure 4.4 RAID-5 configuration, a result of failure tolerance method RAID5/6 and number of failures to tolerate set to 1
The RAID-5 and RAID-6 configurations also work with number of disk stripes per object. If a stripe width is specified as part of the policy along with failure tolerance method set to RAID5/6, each of the components on each host is striped in a RAID-0 configuration, and these are in turn placed in either a RAID-5 or RAID-6 configuration.
One final note relates to a number of failures to tolerate setting of zero or three. If you deploy a VM with such a policy setting combined with a failure tolerance method of RAID5/6, the VM provisioning wizard will display a warning stating that this policy setting is only effective when the number of failures to tolerate is set to either one or two. You can still proceed with the deployment, but the object is deployed as a single RAID-0 object.
Number of Disk Stripes Per Object
This capability defines the number of physical disks across which each replica of a storage object (e.g., VMDK) is striped. When failure tolerance method is set to performance, this policy setting can be considered in the context of a RAID-0 configuration on each RAID-1 mirror/replica where I/O traverses a number of physical disk spindles. When failure tolerance method is set to capacity, each component of the RAID-5 or RAID-6 stripe may also be configured as a RAID-0 stripe. Typically, when the number of disk stripes per object is defined, the number of failures to tolerate is also defined. Figure 4.5 shows what a combination of these two capabilities could result in, once again assuming that the new VSAN 6.2 policy setting of failure tolerance method is set to its default value RAID-1.
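To illustrate, the sketch below enumerates the components such a combination produces when failure tolerance method is left at RAID-1: each mirror replica is itself a RAID-0 stripe. The helper is our own and deliberately omits witness components, whose number depends on placement.

    # Each RAID-1 replica of the object is itself a RAID-0 stripe across
    # `stripe_width` components (witness components are omitted here, since
    # their number depends on how components are placed).

    def component_layout(failures_to_tolerate, stripe_width):
        replicas = failures_to_tolerate + 1
        return [[f"replica{r}-stripe{s}" for s in range(stripe_width)]
                for r in range(replicas)]

    print(component_layout(failures_to_tolerate=1, stripe_width=2))
    # [['replica0-stripe0', 'replica0-stripe1'],
    #  ['replica1-stripe0', 'replica1-stripe1']]  -> four components in total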
Figure 4.5 Storage object configuration when stripe width is set to 2, failures to tolerate is set to 1, and failure tolerance method is not explicitly set
To understand the impact of stripe width, let’s examine it first in the context of write operations and then in the context of read operations.
Because all writes go to the cache device write buffer, an increased stripe width may or may not improve performance. This is because there is no guarantee that the new stripe will use a different cache device; the new stripe may be placed on a capacity device in the same disk group, and thus use the same cache device. If the new stripe is placed in a different disk group, either on the same host or on a different host, and thus leverages a different cache device, performance might improve. However, you as the vSphere administrator have no control over this behavior. The one occasion where an increased stripe width definitely adds value is when there is a large amount of data to destage from the cache tier to the capacity tier. In this case, striping across devices can improve destage performance.
From a read perspective, an increased stripe width helps when you are experiencing many read cache misses, but note that this is a consideration for hybrid configurations only; all-flash VSAN configurations do not have a read cache. Consider the example of a VM deployed on a hybrid VSAN consuming 2,000 read operations per second and experiencing a hit rate of 90%. In this case, 200 read operations still need to be serviced from magnetic disk in the capacity tier. If we assume that a single magnetic disk can provide 150 input/output operations per second (IOPS), it is clear that one disk cannot service all of those read operations, so an increase in stripe width would help in this case to meet the VM's I/O requirements. In an all-flash VSAN running an extremely read-intensive workload, striping across multiple capacity flash devices can also improve performance.
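Restating the worked example above as a quick calculation, the helper below (ours, and applicable to hybrid configurations only) estimates how many magnetic-disk spindles, and therefore roughly what stripe width, would be needed to absorb the cache misses.

    # Hybrid example from the text: 2,000 read IOPS at a 90% cache hit rate
    # leaves 200 IOPS to be serviced by magnetic disks of roughly 150 IOPS each.

    import math

    def spindles_needed(read_iops, hit_rate_pct, iops_per_disk=150):
        miss_iops = read_iops * (100 - hit_rate_pct) / 100
        return miss_iops, math.ceil(miss_iops / iops_per_disk)

    misses, spindles = spindles_needed(2000, 90)
    print(misses, spindles)  # 200.0 IOPS of misses -> at least 2 spindles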
In general, the default stripe width of 1 should meet most, if not all, VM workloads. Stripe width is a capability that should be changed only when write destaging or read cache misses have been identified as a performance constraint.
IOPS Limit for Object
IOPS limit for object is a new quality of service (QoS) capability introduced with VSAN 6.2. It allows administrators to ensure that an object, such as a VMDK, does not generate more than a predefined number of I/O operations per second. This is a great way of ensuring that a "noisy neighbor" virtual machine does not impact other virtual machine components in the same disk group by consuming more than its fair share of resources. By default, VSAN uses an I/O size of 32 KB as a base. This means that a 64 KB I/O represents two I/O operations in the limit calculation, while I/Os of 32 KB or less count as single I/O operations; for example, two 4 KB I/Os are counted as two distinct operations. Note also that read and write IOPS are treated as equivalent, and neither cache hit rate nor sequential I/O is taken into account. If the IOPS limit threshold is exceeded, I/O is throttled to bring the IOPS value back under the threshold. The default value for this capability is 0, meaning that there is no IOPS limit threshold and VMs can consume as many IOPS as they want, subject to available resources.
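The 32 KB normalization can be illustrated with a short sketch. The helper below is our own approximation of the accounting rule described above, not VSAN's internal implementation.

    # Approximate the 32 KB normalization described above: I/Os of 32 KB or
    # less count as one operation; larger I/Os count as ceil(size / 32 KB).
    # Reads and writes are weighted equally.

    import math

    NORMALIZATION_KB = 32

    def normalized_ops(io_size_kb):
        return max(1, math.ceil(io_size_kb / NORMALIZATION_KB))

    print(normalized_ops(4))   # 1 (so two 4 KB I/Os count as two operations)
    print(normalized_ops(32))  # 1
    print(normalized_ops(64))  # 2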
Flash Read Cache Reservation
This capability is applicable to hybrid VSAN configurations only. It is the amount of flash capacity reserved on the cache tier device as read cache for the storage object, specified as a percentage of the logical size of the storage object (i.e., the VMDK), with up to four decimal places. This fine-grained unit is needed so that administrators can express reservations of less than 1%. Take the example of a 1 TB VMDK: if the read cache reservation were limited to 1% increments, reservations could only be made in increments of 10 GB, which in most cases is far too much for a single VM.
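The granularity argument is easy to verify with a quick calculation; the helper below is our own and simply converts a reservation percentage into gigabytes for a given VMDK size.

    # Flash read cache reservation is expressed as a percentage of the VMDK's
    # logical size, with up to four decimal places (hybrid configurations only).

    def cache_reservation_gb(vmdk_size_gb, reservation_pct):
        return vmdk_size_gb * reservation_pct / 100

    print(cache_reservation_gb(1024, 1))     # 10.24 GB (1% of a 1 TB VMDK)
    print(cache_reservation_gb(1024, 0.05))  # 0.512 GB (a sub-1% reservation)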
Note that you do not have to set a reservation for a storage object to use cache; all VMs share the read cache of the cache devices equally. The reservation should be left unset (the default) unless you are trying to solve a real performance problem and you believe that dedicating read cache is the solution. Be aware, however, that if you add this capability to the VM storage policy and set it to a value of 0 (zero), no read cache is reserved for the VM that uses this policy. In the current version of VSAN, there is no proportional share mechanism for this resource when multiple VMs are consuming read cache, so every VM consuming read cache shares it equally.
Object Space Reservation
All objects deployed on VSAN are thinly provisioned. This means that no space is reserved at VM deployment time; rather, space is consumed as the VM uses storage. The object space reservation capability defines the percentage of the logical size of the VM storage object (i.e., of the total object address space) that is reserved, or thick provisioned, when the object is created. If object space reservation is set to 100%, all of the storage capacity requirements of the VM storage are reserved up front (thick). This will be lazy zeroed thick (LZT) format and not eager zeroed thick (EZT). The difference between LZT and EZT is that EZT virtual disks are zeroed out at creation time, whereas LZT virtual disks are zeroed out gradually at first write.
One thing to bring to the readers’ attention is the special case of using object space reservation when deduplication and compression are enabled on the VSAN cluster. When deduplication and compression are enabled, any objects that wish to use object space reservation in a policy must have it set to either 0% (no space reservation) or 100% (fully reserved). Values between 1% and 99% are not allowed. Any existing objects that have object space reservation between 1% and 99% will need to be reconfigured with 0% or 100% prior to enabling deduplication and compression on the cluster.
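The reservation rule, including the deduplication/compression restriction just described, can be sketched as follows. The validation logic is ours and is intended only to illustrate the constraint, not to mirror VSAN's own checks.

    # Object space reservation: the percentage of the object's logical size that
    # is reserved (thick) up front. With deduplication and compression enabled
    # on the cluster, only 0% and 100% are valid values.

    def reserved_capacity_gb(logical_size_gb, osr_pct, dedup_enabled=False):
        if dedup_enabled and osr_pct not in (0, 100):
            raise ValueError("With dedup/compression enabled, OSR must be 0 or 100")
        return logical_size_gb * osr_pct / 100

    print(reserved_capacity_gb(100, 0))    # 0.0   thin provisioned (the default)
    print(reserved_capacity_gb(100, 100))  # 100.0 fully reserved (LZT thick)
    print(reserved_capacity_gb(100, 50))   # 50.0  allowed only without dedup/compression
    # reserved_capacity_gb(100, 50, dedup_enabled=True) would raise ValueError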
Force Provisioning
If the force provisioning parameter is set to a nonzero value, the object that has this setting in its policy will be provisioned even if the requirements specified in the VM storage policy cannot be satisfied by the VSAN datastore. The VM will be shown as noncompliant in the VM summary tab and relevant VM storage policy views in the vSphere client. If there is not enough space in the cluster to satisfy the reservation requirements of at least one replica, however, the provisioning will fail even if force provisioning is turned on. When additional resources become available in the cluster, VSAN will bring this object to a compliant state.
One thing that might not be well understood about force provisioning is that if a policy cannot be met, VSAN attempts a much simpler placement, reducing the requirements to number of failures to tolerate = 0, number of disk stripes per object = 1, and flash read cache reservation = 0 (on hybrid configurations). This means VSAN will attempt to create an object with just a single copy of the data. Any object space reservation (OSR) policy setting is still honored. There is therefore no gradual reduction in capabilities as VSAN tries to find a placement for an object. For example, if a policy contains number of failures to tolerate = 2, VSAN will not attempt an object placement using number of failures to tolerate = 1; instead, it immediately looks to implement number of failures to tolerate = 0.
Similarly, if the requirement was number of failures to tolerate = 1 and number of disk stripes per object = 4, but VSAN does not have enough capacity devices to accommodate a stripe width of 4, it will fall back to number of failures to tolerate = 0 and number of disk stripes per object = 1, even though a policy of number of failures to tolerate = 1 with a stripe width of 2 or 3 might have succeeded.
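The fallback behavior described in the last two paragraphs can be summarized as a simple rule. The sketch below is a paraphrase of that documented behavior in Python, not VSAN code.

    # Force provisioning does not degrade a policy gradually. If the requested
    # policy cannot be satisfied, placement falls back directly to FTT=0,
    # stripe width 1, and no cache reservation; object space reservation is
    # still honored.

    def effective_policy(requested, can_satisfy, force_provisioning):
        if can_satisfy:
            return requested
        if not force_provisioning:
            raise RuntimeError("Provisioning fails: policy cannot be satisfied")
        return {
            "numberOfFailuresToTolerate": 0,
            "stripeWidth": 1,
            "flashReadCacheReservation": 0,
            "objectSpaceReservation": requested.get("objectSpaceReservation", 0),
        }

    requested = {"numberOfFailuresToTolerate": 2, "stripeWidth": 4,
                 "objectSpaceReservation": 100}
    print(effective_policy(requested, can_satisfy=False, force_provisioning=True))
    # {'numberOfFailuresToTolerate': 0, 'stripeWidth': 1,
    #  'flashReadCacheReservation': 0, 'objectSpaceReservation': 100}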
Caution should be exercised when using this policy setting. Because it allows VMs to be provisioned with no protection, it can lead to scenarios where VMs and their data are at risk.
Administrators who use this option to force provision virtual machines need to be aware that although virtual machine objects may be provisioned with only one replica copy (perhaps due to lack of space), once additional resources become available in the cluster, VSAN may immediately consume these resources to try to satisfy the policy settings of virtual machines.
Common cases where force provisioning is used are (a) bootstrapping a VSAN management cluster, starting with a single node that will host the vCenter Server, which is then used to configure a larger VSAN cluster, and (b) allowing the provisioning of virtual machines/desktops while a cluster is under maintenance, such as in a virtual desktop infrastructure (VDI) running on VSAN.
Remember that this parameter should be used only when absolutely needed and as an exception. If used by default, it could easily lead to scenarios where VMs, and all data associated with them, are at risk. Use with caution!
Disable Object Checksum
VSAN 6.2 introduced this new capability. The feature, which is enabled by default, detects data corruption (bit rot) and, when corruption is found, automatically corrects it. Checksums are validated along the complete I/O path: when data is written, the checksum is calculated and stored automatically; when data is read, the checksum is validated, and if there is a mismatch, the data is repaired. VSAN 6.2 also includes a scrubber mechanism, which by default runs once a year to check all data on the VSAN datastore. This frequency can be changed through an advanced host setting, but we recommend leaving it at the default of once a year. In some cases, you may want to disable checksums completely. One reason could be performance, although the overhead is negligible and most customers prefer data integrity over a 1% to 3% performance increase; nevertheless, in some cases that performance increase may be desired. Another reason is that the application already provides its own checksum mechanism, or the workload simply does not require checksums. In those cases, checksums can be disabled through the disable object checksum capability, which should be set to "Yes" to disable them.
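The end-to-end flow described above (calculate on write, validate on read, repair on mismatch) is sketched below. CRC32 is used purely as a stand-in; VSAN's actual checksum algorithm and repair path are internal to the product.

    # Illustrative end-to-end checksum flow: store a checksum with each block on
    # write, verify it on read, and repair from a healthy copy on mismatch.
    # zlib.crc32 is used purely as a stand-in for VSAN's internal checksum.

    import zlib

    store = {}  # block_id -> (data, checksum)

    def write_block(block_id, data):
        store[block_id] = (data, zlib.crc32(data))

    def read_block(block_id, fetch_good_copy=None):
        data, checksum = store[block_id]
        if zlib.crc32(data) != checksum:       # bit rot detected
            if fetch_good_copy is None:
                raise IOError("Checksum mismatch and no replica to repair from")
            data = fetch_good_copy(block_id)   # fetch an intact copy
            write_block(block_id, data)        # repair in place
        return data

    write_block(1, b"hello vsan")
    print(read_block(1))  # b'hello vsan'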
That completes the capabilities overview. Let’s now look at some other aspects of the storage policy-based management mechanism.