VMware Horizon Suite: How View, Mirage, and Workspace Form the New End-User Model
- The Evolution of the End User
- An Introduction to VMware View, Mirage, and Workspace
- Summary
The Evolution of the End User
VMware’s product line has rapidly evolved to become a more complete end-user suite. VMware acquired Wanova and its flagship product Mirage. The founders of Wanova had extensive experience in wide-area networking (WAN) and had already developed and sold a WAN services company, Actona, which was acquired by Cisco. The experience in WAN optimization was the basis of the company name Wanova.
Mirage was designed to centralize and optimize desktops by providing layered image management over both local-area network (LAN) and WAN links. Endpoints are synced to a centralized virtual desktop (CVD) in the datacenter. The CVD is built using OS and application layers, in addition to a driver library that can be managed and changed. Any changes are synchronized back to the endpoint, enabling centralized management without sacrificing the decentralized execution of a traditional desktop environment.
VMware View, VMware’s vSphere-based virtual desktop solution, provides online and offline access to desktops in the datacenter. Users can connect to virtual desktops running in the datacenter from various types of View clients. If the virtual desktop needs to run in a disconnected/offline mode, users running Windows clients can check out the virtual machine (VM) so that it runs as a local mode desktop; changes to the desktop are synchronized to the datacenter when they reconnect. Unlike local mode desktops, Mirage does not make use of a hypervisor because it runs as an agent on a traditional Windows XP or Windows 7 operating system. VMware announced that View would be part of the Horizon Suite, which thus consists of VMware View, VMware Mirage, and VMware Workspace.
VMware Workspace is targeted at delivering services in a way that is more tablet-, mobile-, and Cloud-friendly. It was included in the Horizon Suite to deliver a virtual workspace that provides a single portal for the user to access enterprise and Cloud-based applications, data services, and View desktops.
If you are currently considering deploying end-user computing (EUC), it is likely you will have requirements for all three: virtual desktops delivered with View, mobile and tablet users accessing services through Workspace, and traditional desktops managed through Mirage.
You’ll see several terms in this book used interchangeably. When describing a virtual desktop, the text may refer to a virtual instance, View desktop, or (as previously mentioned) virtual desktop. When the text describes the larger View environment, you might see View infrastructure, virtual desktop infrastructure (VDI), or the abbreviation View. For VMware Mirage and Workspace, both Mirage and Workspace are used to reference the products. This book focuses on the core products and their related architecture. It is assumed that you have an understanding of vSphere and its related components, so this book does not cover these items.
This book covers VMware View, Mirage, and Workspace: their architectures, planning considerations, and how to properly install and configure each environment.
An End-User Service Catalog
Inevitably, the understanding of Cloud technology will become as fundamental as the understanding of virtualization is in IT today. One of the key concepts behind Cloud computing is the idea of a service catalog. A service catalog is not a VM or a collection of VMs, but rather a complete solution that the end user can consume. In making the transition from supplying virtual desktops to delivering services to end users, this is an important concept to understand.
How can you deploy virtual desktops, provide end-user-driven storage repositories, and integrate software-as-a-service (SaaS) applications and traditional Windows applications in a service? This is the value message of Workspace, which integrates all of these services.
You will see the idea of a service catalog used heavily in VMware Workspace, where a View desktop is one of the services you can offer. Entitlement is the term used across the end-user product line to enable a service for a user. In View, you entitle desktops to Active Directory users or groups. In Workspace, you entitle users to access different services to build a service catalog, as shown in Figure 1.1.
Figure 1.1 Service catalog
The post-PC era refers to the replacement of traditional desktops with mobile devices such as tablets and smartphones. Although businesses still largely use Windows-based computers, tablets are now out-shipping desktops and laptops, even though they have been around for only three years. To make the transition to the post-PC era, you must consider entitling users to service catalogs, of which a View desktop is an important component.
By focusing on a service catalog, you can ensure that the planning you are doing now will enable you to service any type of end-user device without reengineering the entire environment. Even though a lot of Cloud-based apps are being developed, desktops will be around for some time because a large percentage of day-to-day applications are still traditional enterprise client-server applications. The goal of this book is to enable you to deploy a single framework that leverages VMware View, Workspace, and Mirage to deal with a multitude of end-user requirements.
How Do View, Mirage, and Horizon Workspace Deliver a Service Catalog?
Delivering a service catalog involves understanding what is delivered by each of the products within the Horizon Suite. When the product line consisted of View alone, this was pretty straightforward. With the Horizon Suite, you have View, Mirage, and Workspace to consider.
VMware View
Each component of the Horizon Suite delivers different business value and meets different end-user requirements. View delivers virtual desktops running in the datacenter from a centralized vSphere environment. It provides several technologies designed to manage key components of the end-user experience: Persona for end-user data management, Composer for deployment and image management, and ThinApp to enable application delivery without interoperability problems. These technology components are managed through the View Connection Server to deliver a consistent and robust end-user experience, enabling the desktop to be delivered as a service.
VMware Mirage
Mirage provides centralized image management, which makes it similar in purpose to Composer-created View desktops. However, it is installed as software in the operating system of the endpoint. Mirage runs an agent that synchronizes changes made on the endpoint to the CVD, essentially providing uniformity to a distributed desktop environment. This enables desktop administrators to manage the CVD in the datacenter while the agent ensures consistency on all endpoints. Mirage thus extends your service catalog by delivering centralized management to a decentralized desktop environment. Mirage can support desktops and laptops that might be online or offline because a full operating system runs on the endpoint. In addition, VMware now supports running Mirage in a virtual desktop environment.
VMware Workspace
Workspace provides an integration with View that enables you to present desktop-as-a-service (DaaS). In addition to presentation, it integrates other Cloud-like services, unifying web and Cloud applications and traditional Windows applications (delivered using ThinApp) through a single portal. It also provides users an easy way to exchange files and documents by integrating data services. The integration of Workspace enables you to provide a complete suite of services with the flexibility to deliver to any device form factor. If the ideal end-user experience is a View desktop, you integrate and deliver it through Workspace. If the ideal end-user experience is direct access to web or Cloud applications, you can entitle these services in Workspace.
Considerations for Deploying View, Mirage, and Workspace
Each component of the suite is deployed very differently. Workspace comes as a vApp made up of five virtual appliances, whereas View and Mirage are more traditional installations. It is important to understand some of the considerations when deploying each.
VMware View
When deploying View, you have many things to consider. One of the first things to understand before getting into the design is what the business strategy is for the virtual desktops, as shown in Figure 1.2. Is it to replace physical desktops? Is it part of a bring-your-own-device (BYOD) initiative? Understanding the intended use enables you to properly plan the integration of virtual desktops in your environment. When you understand the use case, it is important to understand what services end users require; this factor will significantly influence your design.
Figure 1.2 Considerations
How the use case can influence design is perhaps best illustrated with an example. In the first example, company XYZ runs several manufacturing plants in which computers are deployed throughout the shop floor and are used by various personnel throughout the day in 24-hour shifts. The primary application is used for text input to control various aspects of the manufacturing process. No information is stored on the computer.
The company has decided to use tablets to remove the need for fixed computer stations and to enable the operators to check the adjustments they are making within the computer program against what is happening on the line. A VMware View environment will be used to centralize the desktops and enable access through the tablets.
In the second example, company ABC is an engineering design company that designs large industrial flow and control pump replacement parts. The company would like to use VMware View to provide more flexibility for its engineers to design, regardless of where they are located. In addition, it wants to enable access to the engineer’s desktops while they are consulting and reviewing aspects of the design with ABC’s customers.
You can see clearly in these examples why knowing the use case upfront can have a large impact on the architecture and the design. For company XYZ, the end user would likely be classified as a light user, so resources might not be the primary concern, but View compatibility with the tablet device likely would be. For company ABC, 3D rendering can often pose problems if not considered carefully in the design. In addition, engineers are likely to require a significant amount of CPU and memory allocated to their View desktops.
VMware has significantly improved graphics capabilities in the platform and now offers three distinct types of video support:
- Soft 3D and SVGA: Software 3D Renderer and Super Video Graphics Array (SVGA) support are provided through the VMware display driver, which is installed when you install the VMware Tools within the VM. Because this approach has no hardware dependencies, services such as VMotion and the Distributed Resource Scheduler (DRS), including DRS-automated VMotion, are fully supported.
- vDGA: Virtual Direct Graphics Acceleration (vDGA) is a PCI pass-through to an underlying graphics card. It enables you to pass through the graphics processing of a VM to an underlying physical graphics card and dedicates a graphics processing unit (GPU) to the VM. The relationship between the VM and GPU is one to one. Because it is a one-to-one relationship, VMotion and DRS are not supported. It is a property of the VM, however, so to VMotion a VM, you can disable vDGA temporarily.
- vSGA: Virtual Shared Graphics Acceleration (vSGA) enables you to pass through the graphics processing of a VM to an underlying physical graphics card whose GPUs are shared among VMs. The amount of video memory that can be assigned is restricted to 512 MB, but vSGA does enable you to address the engineering requirements of ABC from within a virtual desktop. The configuration of vSGA is very flexible and can be set to Software, Hardware, or Automatic. The Software setting uses only vSphere software 3D rendering, even if physical GPUs are available in the host; VMotion and DRS are supported with this setting. Hardware is the opposite and requires a physical GPU in the vSphere host; if one is not present, VMotion is restricted and the VM may not be able to be powered on. Automatic falls in the middle and switches between vSphere software 3D rendering and physical GPUs based on availability.
When you know the business strategy, it is easier to deduce which features are required for the end users. As alluded to in the preceding paragraph, the business case drives the use case, and the use case determines which features of View will be most critical to the users. An understanding of the features required also helps influence the design. For example, in company XYZ, stateless desktops (desktops in which customizations or writes are not preserved between user logons) are likely ideal, which means that View Composer will likely be a key component of the architecture. View Composer enables a single image to be represented to many different users without requiring a full clone of the parent image for each virtual desktop. Through the use of a linked clone tree, a single 20-GB desktop image can be shared among multiple VMs while appearing to each user as an independent desktop OS.
In addition, it is likely that View Blast would meet the requirement. View Blast is the integration of HTML5 support that was added in View 5.2 SP1. It enables the desktop to be delivered over a web browser that supports HTML5 without requiring the installation of a client. VMware recommends it for users who do not spend a significant amount of time interacting with the View desktop. For users who do spend hours working on their desktop, PCoIP (PC over IP) will provide the most robust high-fidelity experience.
For company ABC, if each engineer and designer has a specific set of tools, a dedicated full-clone desktop (the relationship between the end user and assigned View desktop is preserved along with any user changes between logons) that is associated with each individual user is more appropriate. Having a large number of stateful desktops also heavily influences the design and architecture.
Design and architecture are influenced by the business case, end-user requirements, and the scale of the environment. If you are designing an environment that will scale to deliver thousands of View instances, how you structure the View environment is very important.
When scaling the environment, should you just create a single vSphere environment with one large cluster and keep adding capacity? Should you intermix the View management and infrastructure servers with the virtual desktops? Should you extend your server virtualization environment to add View desktops? VMware recommends a specific way of scaling a View environment, and it focuses on a modular approach rather than a single vSphere environment with a single large cluster, as shown in Figure 1.3.
Figure 1.3 Scale out
VMware does not recommend the single large cluster approach, of course; instead, it recommends separating management and compute clusters. A management cluster should have a minimum of three vSphere hosts so that redundancy is maintained even when a host is lost, rather than relying on a single failover host. The management cluster is referred to by VMware as the management block and provides an environment where the View and vSphere management servers run. This may include Connection and Replica Servers, as well as View Security Servers, Composer, and Transfer Servers. It may also run vCenter, VMware Update Manager, and perhaps vCenter Operations Manager for monitoring the View environment.
View Connection Servers replicate metadata through the Active Directory Lightweight Directory Services (AD LDS), and the Connection Servers in a replicated group must be connected over a LAN. This collection of Connection and Replica Servers is referred to as a pod, and a maximum of seven Connection Servers is supported within a pod. Seven allows for five active Connection Servers with up to two failures. Each View Connection Server supports 2000 concurrent connections, for a theoretical maximum of 14,000; however, it is recommended that you do not exceed 10,000 concurrent connections per pod. The operational maximum set by VMware is 10,000 desktops managed by a single pod.
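To make the pod math concrete, the following is a minimal sizing sketch in Python. It assumes only the figures stated above (2000 connections per Connection Server, a seven-server pod limit, and the 10,000-connection operational maximum); the example user count is purely illustrative.

```python
import math

# Sketch of the pod sizing rules described above. The 2,000-connection
# per-server figure, the seven-server pod limit, and the 10,000-connection
# operational maximum come from the text; the user count is illustrative.
CONNECTIONS_PER_SERVER = 2000
MAX_SERVERS_PER_POD = 7
POD_OPERATIONAL_MAX = 10000

def connection_servers_needed(concurrent_users, spare_servers=2):
    """Return the total Connection Servers for one pod, including spares."""
    if concurrent_users > POD_OPERATIONAL_MAX:
        raise ValueError("Exceeds 10,000 connections per pod; plan an additional pod.")
    active = math.ceil(concurrent_users / CONNECTIONS_PER_SERVER)
    total = active + spare_servers
    if total > MAX_SERVERS_PER_POD:
        raise ValueError("A pod supports a maximum of seven Connection Servers.")
    return total

# Example: 8,500 concurrent users -> 5 active servers + 2 spares = 7.
print(connection_servers_needed(8500))
```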
In addition to a management block and Connection Server pod, you will also have compute clusters, which are clusters dedicated to running View desktops. These compute clusters are known as View blocks and are controlled by the management block. VMware recommends that no more than 2500 View desktops be deployed in a View block. This means that you will have a single management block running a pod of Connection Servers to manage several View blocks, as shown logically in Figure 1.4.
Figure 1.4 A Management block controlling View blocks
This modular approach enables you to scale in a predictive fashion. Each management block is designed to control 10,000 concurrent connections. So, if each View block is designed to support 2500 users, you know that you will need one management block for every four View blocks.
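The same building-block arithmetic can be expressed as a quick sketch; a minimal example, assuming the 2500-desktops-per-View-block and 10,000-connections-per-management-block figures given above, with the desktop count as an illustrative input.

```python
import math

# Sketch of the modular block math: 2,500 desktops per View block and 10,000
# concurrent connections per management block (figures from the text). The
# desktop count below is illustrative.
DESKTOPS_PER_VIEW_BLOCK = 2500
DESKTOPS_PER_MANAGEMENT_BLOCK = 10000

def block_layout(total_desktops):
    """Return (View blocks, management blocks) needed for a desktop count."""
    view_blocks = math.ceil(total_desktops / DESKTOPS_PER_VIEW_BLOCK)
    management_blocks = math.ceil(total_desktops / DESKTOPS_PER_MANAGEMENT_BLOCK)
    return view_blocks, management_blocks

# Example: 10,000 desktops -> 4 View blocks controlled by 1 management block.
print(block_layout(10000))  # (4, 1)
```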
When scaling to 10,000 View desktops, multiple vCenter servers are recommended. Using multiple vCenters enables you to redistribute operations that can be resource intensive, such as redeploying the View desktop system drives to bring the OS back to a clean state. (Operationally this is known as a View refresh; for additional details, see Chapter 7, “View Operations and Management.”)
Sizing and performance go hand in hand with your design and architecture. There are several aspects of sizing and performance from the CPU and memory required per View desktop to the calculation of disk and network I/O. Assuming users are migrating from physical desktops, there is no better way to calculate resource requirements for a virtual desktop than running a proper performance and capacity assessment tool against the current physical desktops. This is critical if you are designing an environment that will scale to thousands of View desktops.
You can apply some general guidelines in sizing View desktops. Common considerations include how many vCPUs to assign, how much memory to allocate, and whether to use a 32-bit or 64-bit OS. Multiple vCPUs benefit only multithreaded applications; the reality, however, is that most modern Windows desktop operating systems and applications support multithreading (Microsoft Office 2010, for example). Because multiple vCPUs add overhead, you generally should not assign more than one unless utilization of a single vCPU exceeds 60%.
Unlike in physical environments, if you overallocate CPU by configuring a View desktop with multiple vCPUs, you are not just wasting CPU cycles; you might also be introducing delays. Assigning multiple vCPUs means that the hypervisor must wait for multiple physical cores to become available before it can schedule execution. If you mix single-vCPU and multi-vCPU desktops, you can force wait states that are not required. Although this is less common now that CPUs have many cores, it can still be a factor because a virtual desktop environment typically has a much higher VM-to-core ratio. This is where understanding your target workload through proper analysis, using tools such as SysTrack from Lakeside Software, can ensure you allocate resources properly instead of over- or underallocating them.
The question of whether to deploy a 32-bit (x86) or 64-bit desktop OS is tied to the memory allocation. If your View desktop does not require more than 4 GB of memory, a 32-bit desktop OS is likely to meet your requirement. If it requires more than 4 GB, a 64-bit desktop OS is required. For example, according to Microsoft, the Windows 7 64-bit OS supports up to 192 GB of memory. The other consideration is whether the application itself is 64-bit only; if this is the case, you will need to deploy the 64-bit OS and adjust your memory requirements accordingly.
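These per-desktop guidelines can be summarized in a short helper; a minimal sketch that encodes only the 60% single-vCPU utilization threshold and the 4-GB memory boundary discussed above, with the sample workload values being illustrative.

```python
# Sketch of the per-desktop sizing guidelines: add a second vCPU only when a
# single vCPU runs above roughly 60% utilization, and choose a 64-bit OS only
# when more than 4 GB of memory (or a 64-bit-only application) is required.
# The sample workload values are illustrative.
def recommended_vcpus(single_vcpu_utilization_pct):
    return 2 if single_vcpu_utilization_pct > 60 else 1

def recommended_os(memory_gb, requires_64bit_app=False):
    return "64-bit" if memory_gb > 4 or requires_64bit_app else "32-bit"

# Example: a desktop averaging 45% on one vCPU and needing 3 GB of RAM.
print(recommended_vcpus(45), recommended_os(3))  # 1 32-bit
```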
Network requirements vary considerably depending on what services are enabled within the View desktop. It is therefore important to understand the user requirements so that you can estimate your bandwidth requirements. For example, the basic PCoIP protocol requirements are approximately 250 Kbps per session. Offering high-resolution video can require an additional 4096 Kbps per session. Table 1.1 lists a sampling of the PCoIP bandwidth requirements.
Table 1.1 Sample PCoIP Bandwidth Requirements
| Requirement | Bandwidth |
| --- | --- |
| PCoIP base requirements | 250 Kbps |
| Multimedia video | 1024 Kbps |
| 3D graphics | 10,240 Kbps |
| 480p video | 1024 Kbps |
| 1080p video | 4096 Kbps |
| Bidirectional audio | 500 Kbps |
| USB peripherals | 500 Kbps |
| Stereo audio | 500 Kbps |
| CD-quality audio | 2048 Kbps |
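As a rough illustration of how the figures in Table 1.1 translate into a per-pool estimate, the following sketch sums the base protocol requirement and whichever optional features a session consumes. The feature mix and session count are assumptions you would replace with data from your own assessment.

```python
# Sketch of a per-pool PCoIP bandwidth estimate using the sample figures in
# Table 1.1. The feature mix per user and the session count are assumptions
# to be replaced with your own assessment data.
PCOIP_KBPS = {
    "base": 250,
    "multimedia_video": 1024,
    "3d_graphics": 10240,
    "480p_video": 1024,
    "1080p_video": 4096,
    "bidirectional_audio": 500,
    "usb_peripherals": 500,
    "stereo_audio": 500,
    "cd_quality_audio": 2048,
}

def session_kbps(features):
    """Estimate one session's bandwidth: the base requirement plus selected features."""
    return PCOIP_KBPS["base"] + sum(PCOIP_KBPS[f] for f in features)

# Example: 100 task workers using stereo audio and occasional 480p video.
per_session = session_kbps(["stereo_audio", "480p_video"])
print(per_session, "Kbps per session;", per_session * 100 / 1000, "Mbps for 100 sessions")
```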
Storage sizing is a bit of a science in virtual desktop environments because these environments have different I/O characteristics depending on the operational activity or state of the desktop. For example, if 100 virtual desktops are simultaneously powering on, a burst of I/O activity occurs. If the virtual desktops are powered on and users are logged in, this is considered normal or operational I/O activity. To properly size the environment from a storage perspective, you need to understand both of these I/O properties (burst and operational) as well as the capacity of storage required. Although this might seem straightforward, capacity and I/O requirements can be in stark contrast to one another. For example, View Composer thinly provisions storage, so from a capacity perspective it requires less space; however, because View Composer uses a linked clone tree, the storage I/O requirements can be very high.
Storage vendors treat burst I/O and capacity or operational requirements distinctly from a sizing perspective because they are typically pinned to different storage tiers. Burst I/O requirements are often placed on solid-state drives (SSDs) or flash drives because these technologies can deliver a tremendous number of input/output operations per second (IOPS). Because SSDs tend to cost more, general storage or capacity requirements are usually placed on SATA or SAS disks. To determine the burst I/O, it is common to follow some general guidelines:
- Determine the high-water mark for IOPS per View desktop.
- Take the total IOPS and separate them as a percentage of reads versus writes.
- Factor the performance penalty based on the RAID configuration, as shown in Table 1.2.
Table 1.2 RAID Penalties
| RAID Type | IOPS Write Penalty |
| --- | --- |
| RAID 1 | 2x |
| RAID 5 | 4x |
| RAID 6 | 6x |
| RAID 10 (1+0) | 2x |
For example, consider Figure 1.5. We have estimated 25 IOPS per View desktop, with 60% reads and 40% writes, and we have 100 View desktops to deploy. We calculate the reads as 15 (25 × 60%) × 100 desktops, giving us 1500 IOPS. We then add the write IOPS with the penalty applied: (10 × 100) × 4 due to the RAID 5 write penalty, giving us 4000 IOPS. The expected burst I/O is therefore estimated at 5500 IOPS.
Figure 1.5 IOPS example
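The burst I/O estimate from Figure 1.5 can be reproduced with a few lines of Python; a minimal sketch that uses the 25 IOPS per desktop, the 60/40 read/write split, and the RAID 5 write penalty from Table 1.2. Any other inputs would come from your own assessment data.

```python
# Sketch of the burst IOPS estimate worked through above: 25 IOPS per desktop,
# a 60/40 read/write split, and the RAID write penalties from Table 1.2.
RAID_WRITE_PENALTY = {"RAID 1": 2, "RAID 5": 4, "RAID 6": 6, "RAID 10": 2}

def burst_iops(desktops, iops_per_desktop, read_pct, raid_type):
    """Estimate burst IOPS with the write penalty applied to the write portion."""
    reads = desktops * iops_per_desktop * read_pct
    writes = desktops * iops_per_desktop * (1 - read_pct)
    return reads + writes * RAID_WRITE_PENALTY[raid_type]

# Example from the text: 100 desktops, 25 IOPS each, 60% reads, RAID 5.
print(burst_iops(100, 25, 0.60, "RAID 5"))  # 1500 + (1000 * 4) = 5500.0
```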
Virtual desktops have traditionally been expensive to deploy from a storage perspective. With the advancements in deduplication technology, the footprint of a desktop deployment on storage has been dramatically reduced, so the issue is no longer the cost of “space” alone. Deduplication removes “like” blocks, stores just one copy, and provides references to the dependent data. In a virtual desktop environment that consists of hundreds if not thousands of copies of a Windows desktop OS, the possible consolidation of storage is very high. However, the I/O required for a single VM can also be high; in production, VMs have been observed to require 25 to 100 IOPS per virtual desktop. Although the storage footprint is small, the performance requirement is extremely high, as shown in the example in Figure 1.5. The performance footprint of a virtual desktop environment can vastly exceed the performance demanded of all but a very few high-performance, high-demand enterprise software solutions such as Oracle, SQL Server, and large Microsoft Exchange environments.
Virtual Storage Accelerator
VMware has introduced several technologies that increase performance while reducing cost. VMware implemented virtual storage acceleration (VSA), which is a form of local host caching. When a VM is deployed, a digest file is created that references the most common blocks of the VM’s OS. In operation, the digest file is used to pull the requested blocks into memory on the ESXi host. This reduces the read requests that are serviced by the storage system by introducing a host-based cache.
View Composer, Stateless Desktops, and Storage Reclaim
VMware View enables the deployment of a stateless desktop. A stateless desktop essentially redirects the writes so that the majority of the desktop is read-only. A stateless desktop is a much cheaper desktop to deliver and manage operationally because it is not customized to an individual user and makes use of View Composer linked clone technology. This enables a large number of desktops to use very little space. The decoupling between the user and desktop and the use of Composer allows more flexible deployment options. Stateless desktops can make use of local SSDs on the ESXi host to deploy the OS disk of the VM. The benefits have been somewhat difficult to realize, though, because operationally the OS disk must be re-created to reclaim unused space and reduce the size of the tree.
VMware View 5.3 introduced a reclaim process that enables this to be done automatically and to take place outside production times. This enables a linked clone tree to be deployed for an extensive period of time and reduces the manual operational process of reclaiming space.
vSphere Flash Read Cache
vSphere 5.5 introduced the capability of pooling locally installed SSDs on the vSphere hosts to a logical cache accelerator that all read-intensive VMs can benefit from. The vSphere Flash Read Cache aggregates all read requests so that they are cached locally on an SSD drive versus in memory (as is the case with VSA). This creates a separate caching layer across all hosts (provided they have local SSDs installed) to accelerate performance. Although not specifically designed for virtual desktop environments, it will enhance any read activity across the virtual desktop compute cluster.
Virtual SAN
VMware has entered the storage virtualization market with the release of VMware Virtual SAN. Virtual SAN enables the customer to completely segregate the virtual desktop environment onto a storage-area network (SAN) that is built using local SSDs and hard disk drives (HDDs), which are collected and presented logically as a single, shared storage environment. Segregation of the virtual desktop environment provides the benefit of isolating the View requirements on a distinct set of physical resources so that there is no overlap at the hypervisor or storage levels between View desktops and production enterprise workloads. The benefits of Virtual SAN are many:
- Predictable hardware performance
- No risk of virtual desktop performance impacting general storage performance
- Scalable, building block approach to deployment
- Centralized storage through logical SAN presentation
- Native support for vSphere High Availability (HA), VMotion, and DRS
- Reduced storage cost while still providing all the benefits of a SAN
The solution is based on a converged infrastructure in which a physical server with local SSDs and HDDs runs vSphere ESXi and a storage controller, as shown in Figure 1.6. For storage controllers that support pass-through, Virtual SAN is given complete control of the SSDs and HDDs attached to the storage controller. For storage controllers that do not support pass-through, RAID 0 mode is used; this essentially creates single-drive RAID 0 sets on the storage controller, which requires you to manually mark the SSDs within vSphere.
Figure 1.6 Virtual SAN logical diagram
VMware Mirage
VMware Mirage is based on a Distributed Desktop Virtualization (DDV) architecture that makes use of physical endpoints and a Mirage agent. Although it was initially designed to address a physical desktop environment and remote and mobile workers, it now works with VMware View. In the simple architecture shown in Figure 1.7, the desktop image is stored in the datacenter on a Mirage Server; this desktop image is referred to as a CVD. The endpoint runs the Mirage agent, which synchronizes changes made to the endpoint OS back to the CVD.
Figure 1.7 Simple Mirage architecture
A Mirage desktop is not a single layer, but is actually made up of five layers that are treated distinctly. What makes Mirage so powerful is that it synchronizes to the datacenter only the changes made within a layer, not the entire image, making it extremely efficient on the network. In addition, it is possible to update individual layers centrally and have those changes pushed down to the endpoint, which also makes Mirage a great migration tool. When you change a layer, a reboot is required; in upcoming releases, however, VMware is looking at which layers can be changed dynamically without rebooting. Currently, Mirage supports migrations from Windows XP to Windows 7 only, although Windows 8 is on the roadmap.
The use cases for Mirage are diverse and include the following:
- Centralized image management
- Centralized desktop data backup
- Migration from Windows XP to Windows 7
- As an enhancement to disaster recovery for endpoints
- System provisioning of desktops
When deploying Mirage, you want to consider several things. Mirage supports upload and download operations of the base and application layers. Unlike View, which streams the display of the desktop using PCoIP, Mirage delivers layer changes over the network. Although Mirage is designed to be highly efficient on the network, there are some key considerations when deploying it in large environments. To avoid each endpoint downloading the CVD across the WAN, you can make use of a branch reflector, as shown in Figure 1.8. A branch reflector acts as a proxy for the downloading and synchronization of CVDs to endpoints. Deployed on the remote side of the WAN, it downloads image changes locally once, enabling local Mirage agents to download from it rather than from the datacenter. Using a branch reflector can significantly reduce the bandwidth requirements during mass deployments.
Figure 1.8 Branch reflector
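To illustrate why a branch reflector matters for mass deployments, the following sketch compares the WAN traffic for a layer update with and without one. The endpoint count and layer-delta size are purely hypothetical; only the proxy behavior described above is taken from the text.

```python
# Sketch comparing WAN transfer for a mass layer update with and without a
# branch reflector. The endpoint count and layer-delta size are hypothetical;
# only the proxy behavior is taken from the text.
def wan_transfer_gb(endpoints, layer_delta_gb, branch_reflector=False):
    """With a reflector the delta crosses the WAN once; without it, once per endpoint."""
    return layer_delta_gb if branch_reflector else endpoints * layer_delta_gb

# Example: 200 endpoints at a branch site receiving a 2-GB base layer update.
print(wan_transfer_gb(200, 2.0))                         # 400.0 GB without a reflector
print(wan_transfer_gb(200, 2.0, branch_reflector=True))  # 2.0 GB with a reflector
```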
Another key consideration is the availability of your Mirage Servers. It is recommended that Mirage Servers be load balanced to ensure that one is always available to service the environment.
The deployment considerations for Mirage require a proper understanding of how much bandwidth is required for downloading and uploading between the Mirage clients and servers. You can mitigate the download requirements by properly understanding the environment and placing branch reflectors to reduce the number of direct downloads by Mirage agents.
VMware Workspace
Workspace is like a universal aggregator for a variety of end-user services. Workspace 1.0 was released on March 4, 2013. VMware acquired a virtualization technology for Android- and iOS-based phones from Trango in October 2008. Trango had developed a Mobile Virtual Platform (MVP) for phones, which VMware eventually released as Horizon Mobile. Horizon Mobile is designed to deliver mobile application management (MAM) for business applications on smartphones.
In fact, MVP is used only for Android phones, and app wrapping was removed in Workspace 1.5 because Apple provides native capabilities in iOS. VMware Workspace 1.5 and 1.8 consolidate Workspace 1.0 and Horizon Mobile into a single universal broker for delivering applications to PCs, tablets, and smartphones. In addition, the Horizon Mobile API is likely to take advantage of the new Mobile Device Management features of Apple iOS 7. Workspace’s mobile device management capabilities are likely to change again with VMware’s recent acquisition of a leading software provider in this space, AirWatch. Because a lot of changes are taking place around this particular feature set, it is not something we will focus on in this book.
You’ll learn more about VMware Workspace deployment in Chapter 6, “Integrating VMware View and Workspace,” but for now, one of the key considerations is which services you will aggregate initially. With Workspace, you can integrate View. You can also deploy the Data Service, which was formerly known as Project Octopus and is similar to Dropbox or Box.net but designed from the start for enterprise customers. The Data Service runs entirely on premises, which distinguishes it from other solutions on the market. You can also provide access to any number of third-party web or Cloud-based applications and extend access to smartphones. In addition, for Windows clients, you can stream application virtualization packages using VMware ThinApp. All these options are available as different modules in Workspace, as shown in Figure 1.9.
Figure 1.9 VMware Workspace modules
To deploy modules that will meet a specific business requirement, an important consideration is understanding your end-user requirements. You must also verify with the phone manufacturer whether the smartphone model is VMware enabled for the MAM component of Horizon; this applies only to Android-based phones. VMware has several agreements in place with Verizon and Samsung in the United States, but you should verify that the make and model of the Android phone is on the hardware compatibility list (HCL) as part of your deployment planning.
Each module within VMware Workspace has its own deployment and architecture considerations. For example, with data services, you must understand how much storage each user requires in order to determine how much capacity to allocate to the data services module.
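As a simple illustration of that data services sizing consideration, the following sketch multiplies the entitled user count by a per-user quota and adds headroom for growth; all of the figures are hypothetical.

```python
# Sketch of a data services capacity estimate: entitled users multiplied by a
# per-user quota, plus headroom for growth. All figures are hypothetical.
def data_services_capacity_gb(users, quota_gb_per_user, growth_factor=1.25):
    return users * quota_gb_per_user * growth_factor

# Example: 1,000 entitled users with a 10-GB quota and 25% headroom.
print(data_services_capacity_gb(1000, 10))  # 12500.0 GB
```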
It is better to have a short list of modules that you plan to enable initially and then bring additional modules or services online as required. This is generally true of VMware View, too. Many virtual desktop projects have floundered because too much emphasis was placed on deploying features rather than on deploying the aspects of the technology that address core business requirements. The nice thing about View and Workspace is that it is easy to extend the architecture as additional business requirements are identified.