vCenter Server
vCenter Server is the management server for
vSphere 5. It is the one-stop shop for managing everything in your
virtual infrastructure. It runs on a Windows Server and requires a
database for storing critical data. With vSphere 5, the vCenter Server
is now a 64-bit application requiring Windows 2008 R2. You also have the
option of deploying a Linux-based vCenter appliance, but that appliance
is generally considered best suited to smaller environments. Because
vCenter Server becomes increasingly important as you scale out your
Virtual Desktop Infrastructure, it is important to plan for this
service to be highly available.
The deployment of vCenter Server enables some
key features such as vMotion and Storage vMotion. vMotion does not
move the files associated with the virtual machine, whereas Storage
vMotion does. These files include the configuration, logs,
swap, snapshots, and the virtual machine disks, which are commonly
referred to as the virtual machine’s home files. Logically, vMotion
moves the running state of the VM, but not the associated virtual hard
drive, as shown in Figure 2.
Figure 2. vMotion copies the running state.
vMotion copies the running state of the
virtual machine from one ESXi host to another but does not copy the
home files. A vMotion migration, or copy activity, has a defined start
and end. This is in contrast to VMware FT, which uses the same
mechanisms but keeps the copy running continuously, providing a
mirrored copy that can take over in the event that the original
source becomes unavailable.
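To make this concrete, the following sketch triggers a vMotion programmatically through the vSphere API, using the open source pyVmomi Python bindings. pyVmomi is not part of the product discussed here, and the vCenter address, credentials, and inventory names are all hypothetical; treat this as a minimal illustration rather than a recommended procedure.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def get_obj(content, vimtype, name):
    """Find a managed object (VM, host, datastore, ...) by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

# Connect to vCenter. Certificate checking is disabled for a lab setup only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vm = get_obj(content, vim.VirtualMachine, "desktop-001")
target_host = get_obj(content, vim.HostSystem, "esxi02.example.com")

# Specifying only a destination host moves the running state (CPU, memory,
# device state) to that host; the home files never leave shared storage.
WaitForTask(vm.MigrateVM_Task(
    host=target_host,
    priority=vim.VirtualMachine.MovePriority.defaultPriority))

Disconnect(si)
```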
Storage vMotion can “hot migrate” a running
virtual machine’s files across different datastores while the virtual
machine continues to run. This capability is different from that of
vMotion, which enables a virtual machine to hot migrate from one ESXi
host to another within the same VMFS datastore, as shown logically in Figure 3.
Figure 3. Storage vMotion.
Storage vMotion copies both the running state and the virtual machine’s home files from one VMFS datastore to another.
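At the API level, a Storage vMotion is requested by relocating the virtual machine to a new datastore. A minimal sketch follows, reusing the connection (si, content) and the get_obj helper from the vMotion example above; the VM and datastore names are hypothetical.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

# si, content, and get_obj come from the vMotion sketch above.
vm = get_obj(content, vim.VirtualMachine, "desktop-001")
target_ds = get_obj(content, vim.Datastore, "tier2-datastore")

# A relocate spec naming only a datastore moves the home files and virtual
# disks to that datastore while the VM keeps running: a Storage vMotion.
spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec=spec))
```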
It is important that you understand these
fundamental technologies within vSphere 5 so that you can use them to
build a better VMware View design. vMotion and Storage vMotion have
enhancements that enable you to automate both and take full advantage of them.
vMotion can be automated through the Distributed
Resource Scheduler (DRS). DRS allows you to throttle the level of
automatic vMotion within a virtual cluster, from manual through fully
automated.
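As a sketch of what that throttling looks like through the API, the following pyVmomi snippet (reusing the earlier connection and helper; the cluster name is hypothetical) enables DRS on a cluster and sets the automation level.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

cluster = get_obj(content, vim.ClusterComputeResource, "view-cluster")

# Enable DRS cluster-wide. defaultVmBehavior accepts manual,
# partiallyAutomated, or fullyAutomated; vmotionRate (1-5) corresponds
# to the migration threshold slider in the client.
drs_config = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
    vmotionRate=3)
spec = vim.cluster.ConfigSpecEx(drsConfig=drs_config)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
```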
In a virtual desktop environment, DRS allows
you to level the load across all the hosts so that utilization is
evenly distributed. Consider the uneven workload distribution shown in Figure 4.
Figure 4. Uneven workload distribution.
In this figure, four ESXi host servers are
running VMware View. Without DRS, the virtual desktops are not
evenly distributed across the host servers, leading to very uneven
utilization of resources across the virtual cluster. One host may be
running at 70%–80% utilization, whereas another may be running at
10%–20%. This difference translates to very inconsistent performance
across the virtual desktop environment. When you design the vSphere
platform with DRS, you get an environment with even distribution. Look
at the effect in the logical picture shown in Figure 5.
Figure 5. DRS ensures even distribution.
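If you want to verify the kind of distribution DRS maintains, a short pyVmomi loop over the cluster's hosts (again reusing the earlier connection and helper; the cluster name is hypothetical) prints per-host utilization from the summary statistics.

```python
from pyVmomi import vim

cluster = get_obj(content, vim.ClusterComputeResource, "view-cluster")

# Print each host's CPU and memory consumption so imbalances like the
# 70%-80% versus 10%-20% split described above are easy to spot.
for host in cluster.host:
    hw = host.summary.hardware
    stats = host.summary.quickStats
    cpu_total = hw.cpuMhz * hw.numCpuCores
    print("%s: CPU %d of %d MHz, memory %d MB in use"
          % (host.name, stats.overallCpuUsage, cpu_total,
             stats.overallMemoryUsage))
```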
Storage vMotion enables you to
do the same with storage utilization. vSphere 5 provides a feature
called Storage DRS that monitors storage I/O (disk reads and writes)
and moves running virtual desktops to the appropriate storage tier by
making use of Storage vMotion. In addition to Storage DRS from
VMware, your storage vendor might offer dynamic tiering capabilities.
Dynamic tiering is the capability of the storage system to move
in-demand data from slower disks to faster disks. Each storage vendor has
a slightly different acronym to describe the feature, but in general,
you should find out the level of automation, how quickly data is moved,
how granular the blocks are in which the data is moved, and whether
other storage systems are supported.
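For illustration, Storage DRS is configured on a datastore cluster (a StoragePod in the API). The sketch below, with a hypothetical pod name and the same connection as before, enables I/O load balancing so Storage vMotion is used automatically.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

pod = get_obj(content, vim.StoragePod, "view-datastore-cluster")

# Enable Storage DRS on the pod with I/O load balancing turned on, so
# latency imbalances trigger Storage vMotion automatically. Use "manual"
# to get recommendations without automatic moves.
pod_spec = vim.storageDrs.PodConfigSpec(
    enabled=True,
    defaultVmBehavior="automated",
    ioLoadBalanceEnabled=True)
spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_spec)
WaitForTask(content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True))
```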
These technologies are designed to reduce
storage hotspots: areas of the storage system where high activity
causes long wait times for I/O requests. The resulting latency slows
down the performance of your virtual desktops.
One of the technical challenges for scaling
VDI has been the demand on storage I/O. Storage DRS allows you to take
advantage of storage tiering. Storage tiering allows you to put the
most needed data on the fastest (most expensive) storage disks and data
that is not in demand on the slower (cheaper) storage disks. Storage
tiering is key in a virtual desktop environment because of the many
types of data used to build out a desktop environment. For example, you
have user profile information, Windows OS data, and application data,
as well as different types of end users with different usage patterns.
Within one organization, you may have
developers who are quite demanding of desktop resources and others who
are not. Storage DRS builds on Storage I/O Control (SIOC), which
monitors latency at the datastore level. SIOC controls access to I/O
on the datastore using a series of I/O queues and the number of shares
assigned to each virtual machine. If latency is detected, SIOC assigns
fewer I/O queue slots to virtual machines with fewer shares and more
queue slots to virtual machines with more shares. This assignment is
done at the volume level rather than the host level, so prioritization
happens across all hosts and virtual machines rather than within a
single host and its running virtual machines.
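To see how shares come into play, the following sketch (hypothetical VM name, same connection and helper as before) raises the disk shares on a demanding desktop so SIOC favors it under contention.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

vm = get_obj(content, vim.VirtualMachine, "developer-desktop-001")

# Give every virtual disk on this VM a custom, above-default share count.
# Under contention, SIOC grants more device-queue slots to disks with
# more shares, at the datastore (volume) level across all hosts.
changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
            shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom,
                                  shares=2000))
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=dev))

WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes)))
```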
Storage DRS has the capability to move
running virtual machines to datastores that are experiencing lower or
no latency, ensuring consistent I/O. From a VDI
perspective, this feature allows you to combine faster (more expensive)
and slower (less expensive) storage drives and have Storage DRS level
I/O utilization across the environment. In Figure 6, the environment
is made up of multiple tiers, with Storage vMotion and Storage DRS
moving virtual machine disks (VMDKs) to ensure a more even distribution
of utilization.
Figure 6. Storage DRS.
Host profiles are designed to ensure that
configuration is consistent across a collection of ESXi hosts or a
group of hosts in a virtual cluster. The availability of this
technology in vCenter should influence how you deploy the environment.
Your strategy may be to use host profiles and Auto Deploy and to run
your vCenter as a virtual machine. If you are considering Auto Deploy,
vCenter should run on a different cluster than the one it is managing
to ensure availability.
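As an illustration of checking that consistency, the sketch below (hypothetical profile and cluster names, same connection and helper as the earlier examples) runs a compliance check of a cluster's hosts against an existing host profile.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

profile = next(p for p in content.hostProfileManager.profile
               if p.name == "view-host-profile")
cluster = get_obj(content, vim.ClusterComputeResource, "view-cluster")

# Check every host in the cluster against the profile; the task result
# lists each host's compliance status so drift can be remediated.
task = profile.CheckProfileCompliance_Task(entity=list(cluster.host))
WaitForTask(task)
for result in task.info.result:
    print("%s: %s" % (result.entity.name, result.complianceStatus))
```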
To ensure availability, you should deploy two
ESXi servers for redundancy, then deploy the vCenter VM, and finally
deploy the remaining physical host servers using Auto Deploy with host
profiles in a separate cluster. Auto Deploy allows you to customize the
ESXi installable image and then deploy it consistently across all ESXi
hosts. It is important that you think through your design and
architecture and understand which features you will use and why. It is
also important to choose a deployment method that reduces the time and
complexity of getting things up and running; depending on the size of
the environment, this could otherwise take considerable time and
effort.