VMware View Security Essentials

vSphere considerations

Before we start with the View considerations, let's step back a second and understand what basic security concepts have to be implemented on the vSphere level in order to secure the whole virtualization stack that VMware View depends on.

Using View means that you are using vSphere. Desktop virtualization centralizes the desktop infrastructure onto the core virtualization stack. Therefore, if the core virtualization layer (vSphere) is not available, View will not be available either, meaning that anybody who uses a virtualized desktop will not be able to work. The cost implications are clear.

When we are talking about vSphere security, we have to understand that this encompasses a multitude of topics. As this book focuses mainly on View, I will only touch on this topic briefly.

The vSphere stack is built from a minimum of one VM/appliance, but in most cases we are talking about two to three VMs. Best practices for scaling and security dictate that vSphere uses a dedicated database server. For scaling purposes, it is a very good idea to split the vSphere 5.1 services: Single Sign-On (SSO), Inventory Service, Virtual Center (vCenter), and WebClient into at least two VMs: one VM that runs SSO and the WebClient, and one that runs vCenter and the Infrastructure Client. SSO and WebClient have a one-to-many (1:n) relationship with vCenter, which allows for easy scalability.

In addition to this, we need at least one VM for the View Connection Server, but ideally we would want more, as we would want additional View Connection Servers, Security Servers, Composers, and Transfer Servers. This ends up being quite a lot of VMs. If these VMs fail or run out of resources, they start impacting the vSphere and View environment. Therefore, considering the vSphere Cluster settings is very important.

A rather important fact to know is that View 5.1 can have a maximum of 32 ESXi hosts per cluster if NFS is used as the filesystem.

VMware High Availability (HA)

When we are talking about vSphere Cluster settings, it is important to understand that we need to separate workloads. Best practice from VMware states that all management VMs are protected by High Availability (HA) and the Distributed Resource Scheduler (DRS), HA being responsible for restarting failed VMs and DRS for relocating VMs to load-balance the cluster.

All essential vSphere Management and View Management VMs should be located in their own cluster and have the following cluster HA settings:

  • HA VM Restart Priority: HIGH
  • HA VM Monitoring: HIGH

DRS should be enabled and automated. I personally use a DRS group that makes sure all essential vSphere VMs (DB, SSO, and VC) are kept on the first host in the cluster, using a "should" rule rather than a "must" rule. This allows me, in a failure situation, to access the essential VMs directly without searching all the hosts for them.

Workload VMs (the Virtual desktop VMs) are best kept on a dedicated cluster. If this is the case, DRS should be switched on for them. Switching on HA, however, will reduce the resources available for the desktops: activating HA means reserving failover capacity, either a whole ESXi host (the n+1 setting) or a given percentage of the total resources available (the percentage setting). If an ESXi host fails, the View desktops currently running on it will also fail. Depending on the type of View desktop pool used, this isn't such a big deal, as a user would just reconnect to the portal and choose a new desktop. HA might make sense for persistent desktops that are critical to the business, such as Admin desktops.
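The capacity trade-off between the two admission-control policies can be sketched with a quick calculation. The cluster size (8 hosts with 256 GB each) and the 25 percent reservation below are illustrative assumptions, not VMware defaults or recommendations:

```python
# Sketch of the failover-capacity trade-off for the two HA
# admission-control policies mentioned above. All numbers are made up
# for illustration.

def usable_gb_host_failures(hosts, gb_per_host, failures_tolerated=1):
    """n+1 style policy: the capacity of whole hosts is held back."""
    return (hosts - failures_tolerated) * gb_per_host

def usable_gb_percentage(hosts, gb_per_host, reserved_pct):
    """Percentage policy: a share of total cluster capacity is held back."""
    return hosts * gb_per_host * (1 - reserved_pct / 100)

print(usable_gb_host_failures(8, 256))   # n+1 leaves 1792 GB for desktops
print(usable_gb_percentage(8, 256, 25))  # 25 % reserve leaves 1536.0 GB
```

Either way, the reserved capacity sits idle until a host actually fails, which is exactly the "extra money on hardware" mentioned below.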

So, the choice is basically spending extra money on hardware (for HA redundancy) or accepting limited blackouts.

If the Virtual desktops share a cluster with other VMs, it might be good practice to exclude them from HA (possible with scripting, but rather complicated), or at least to reduce their default HA restart priority. The reason behind this is that in case of a host failure, the production VMs should recover first and foremost. In general, a production database, e-mail, or CRM system is more important and causes more interruption to the business than a couple of desktops. In the worst case, if HA isn't configured correctly, the result could be that the Virtual desktops start up but no resources are left to start the production systems.

Fault Tolerance (FT)

FT can be used with View desktops; however, I cannot imagine any desktop that would need it. Firstly, FT only supports one vCPU; secondly, the resource cost of an FT desktop is rather large, increasing the cost (in $) of a given desktop. Any FT-enabled VM requires double the amount of resources (essentially one live copy of the VM on each of two different ESXi hosts).
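The doubling effect is easy to quantify. A rough sketch, assuming a purely memory-bound cluster and made-up numbers (1024 GB of cluster memory, 4 GB per desktop):

```python
# Rough consolidation estimate showing why FT doubles the per-desktop
# cost: an FT-protected VM keeps a live copy on a second host, so it
# consumes twice the resources. The figures are illustrative
# assumptions, not sizing guidance.

def desktops_that_fit(cluster_gb, gb_per_desktop, ft_enabled=False):
    footprint = gb_per_desktop * (2 if ft_enabled else 1)
    return cluster_gb // footprint

print(desktops_that_fit(1024, 4))                   # 256 desktops without FT
print(desktops_that_fit(1024, 4, ft_enabled=True))  # 128 desktops with FT
```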

Personally, I find that FT for View Servers doesn't make sense due to the costs involved.

DRS and resource pools

If you have to share a production environment with a View environment, it is worth considering some basic ground rules. The VMware recommendation is to separate View and production workloads. Please also keep in mind that the licensing agreement doesn't allow you to run non-View VMs on a View-licensed cluster.

As Virtual desktops have a smaller CPU and memory footprint than Servers, DRS is essential for any View environment. This is especially the case in shared environments. Virtual desktop workload in most business cases is more time-dependent than server workload. Virtual desktop workload follows the office hours, meaning that desktops are most active between 9-12 and 13-18 on workdays. A typical problem with Virtual desktop environments is the morning: as everyone more or less starts around 9:00 a.m., the demand on the underlying vSphere environment peaks sharply at this time. This can pose a problem in shared environments when a lot of desktops start consuming memory and CPU at the same time, as the underlying systems are impacted. A design with no resource pools can lead to starved production servers. Situations like these are called Boot Storms; we will look into this a bit more from the client's perspective in Chapter 4, Securing the Client.
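One common mitigation is to throttle concurrent power-on operations so that the morning peak is spread over several waves. The arithmetic can be sketched as follows; the numbers (500 desktops, 50 concurrent power-ons, 90 seconds per boot) are made-up assumptions, not View defaults:

```python
import math

# Back-of-the-envelope view of a boot storm: if power-on operations are
# throttled (similar in spirit to vCenter/View concurrent-operation
# limits), the peak load drops while the total boot window grows.

def boot_storm(desktops, concurrent_cap, boot_secs):
    """Return (peak concurrent boots, total seconds to boot them all)."""
    waves = math.ceil(desktops / concurrent_cap)
    return min(desktops, concurrent_cap), waves * boot_secs

print(boot_storm(500, 500, 90))  # unthrottled: (500, 90) - one huge spike
print(boot_storm(500, 50, 90))   # throttled: (50, 900) - a 15-minute window
```

The trade-off is the same one resource pools address: a lower, longer load curve instead of a short spike that starves everything else on the cluster.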

The best practice in a shared environment is to create resource pools for the different types of workloads and adjust the memory and CPU shares to suit the importance of the systems. This is mostly done by understanding the monetary value that these systems represent: for example, the wage of one worker for one hour against the significant business impact of a failed e-mail or web server. In addition to adjusting the shares, it is essential to use reservations and limits on these resource pools. If there are no resource pools, all VMs share memory and CPU at the same level. In general, we should have a minimum of three resource pools: one for the production workload, one for the management workload, and one for the View workload. As the management workload is rather important, maybe even as important as the production servers, we should put a reservation on it that supplies the absolute minimum CPU and memory for it to function. If the number of Virtual desktops is known and doesn't change, a limit on the View resource pool should be considered. Last but not least, I would recommend that the shares for the production pool are set to HIGH. However, determining the right settings is not a straightforward task and requires knowledge of your environment and the business impact of the various workloads.
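Under contention, shares translate into proportional slices of the cluster. The sketch below assumes resource-pool share presets of HIGH=8000, NORMAL=4000, LOW=2000 and a made-up 140,000 MHz cluster; verify the actual share values in your own vSphere client:

```python
# Proportional CPU allocation under contention for the three resource
# pools suggested above. The share presets (8000/4000/2000) and the
# 140,000 MHz cluster capacity are illustrative assumptions.

def mhz_per_pool(cluster_mhz, pool_shares):
    """Each pool's entitlement is its fraction of the total shares."""
    total = sum(pool_shares.values())
    return {name: cluster_mhz * s // total for name, s in pool_shares.items()}

pools = {"production": 8000, "management": 4000, "view": 2000}
print(mhz_per_pool(140_000, pools))
# {'production': 80000, 'management': 40000, 'view': 20000}
```

Note that shares only take effect when the cluster is under contention; while there is free capacity, every pool can consume what it needs.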

Resource pools for dedicated View clusters follow a slightly different approach. In a dedicated cluster, the consideration for resource pools is how many different desktop pools are using the cluster. If the cluster is used for only one desktop pool, no resource pool is needed. If there are different types of View desktop pools for different functions and purposes, resource pools can be used to prioritize resource allocation. Following the same logic discussed earlier, we have to determine which pools are more important than the others and then assign resources to them appropriately.