1. Planning Hyper-V deployment
Successful deployment requires careful planning ahead of time to
ensure problems don’t arise during or after the deployment process.
The following issues should be considered when planning the deployment
of Hyper-V hosts within your datacenter:
- Hardware
- Editions
- Networking
- Storage
- Management
- Security
- Scalability
- Availability
- Mobility
- Disaster recovery
In addition, each of these issues should be considered from both
the host and virtual machine perspective before you begin to deploy
Hyper-V hosts within your datacenter. While the sections that follow
focus mainly on considerations relating to planning host machines,
some mention of planning for virtual machines is also included where
appropriate, especially when it directly relates to host-planning
issues.
A key hardware requirement for a Hyper-V host is that the
underlying host system support hardware-assisted virtualization such
as Intel Virtualization Technology (Intel VT) or AMD Virtualization
(AMD-V) technologies. In addition, hardware-enforced Data Execution
Prevention (DEP) must be available and enabled on the host system.
Specifically, this means that the Intel XD bit (the execute disable
bit) or AMD NX bit (the no execute bit) must be enabled.
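One quick way to verify these prerequisites on a candidate host is to scan the Hyper-V requirements information that the systeminfo command reports on Windows 8 and Windows Server 2012 or later. The following Python sketch is illustrative only; the exact label text can vary by OS version and locale, so treat the matched strings as assumptions to adapt:

import subprocess

# Run systeminfo and capture its text output (Windows only).
output = subprocess.run(
    ["systeminfo"], capture_output=True, text=True, check=True
).stdout

for line in output.splitlines():
    # Lines such as "Virtualization Enabled In Firmware: Yes" and
    # "Data Execution Prevention Available: Yes" appear under the
    # "Hyper-V Requirements" section on supported Windows versions.
    if "Virtualization" in line or "Data Execution Prevention" in line:
        print(line.strip())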
Although you can install the Hyper-V role on a Windows Server
2012 server that meets the minimum system requirements of a
single-core, 1.4-GHz CPU and 512 MB of RAM, you probably won’t be
able to run any virtual machines on that host machine. This is
mainly because each virtual machine you run on a host requires a
minimum amount of RAM that depends on the guest operating system
installed in the virtual machine. In other words, the number of
virtual machines and types of virtualized workloads you can run on
Hyper-V hosts directly relate to the available hardware resources of
the host.
To plan your host hardware, you therefore should start with
the maximum supported processor and memory capabilities of Windows
Server 2012, which are as follows:
- Up to 64 physical processors (sockets) per host
- Up to 320 logical processors (cores) per host
- Up to 4 TB of physical memory per host
Next, you should consider the maximum supported processor and
memory capabilities for virtual machines running on Windows Server
2012 Hyper-V hosts. These are as follows:
- Up to 64 virtual processors per virtual machine (up to a total of 2048 virtual processors per host)
- Up to 1 TB of memory per virtual machine
- Up to 1024 active virtual machines running on the host
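Taken together, these host and virtual machine maximums define the envelope any deployment plan must fit within. The following Python sketch is a minimal, illustrative check of a planned configuration against those limits; the constants simply restate the figures above, and the function name is hypothetical:

HOST_MAX_SOCKETS = 64         # physical processors (sockets) per host
HOST_MAX_LOGICAL_PROCS = 320  # logical processors (cores) per host
HOST_MAX_MEMORY_GB = 4096     # 4 TB of physical memory per host
VM_MAX_VPROCS = 64            # virtual processors per virtual machine
VM_MAX_MEMORY_GB = 1024       # 1 TB of memory per virtual machine
HOST_MAX_TOTAL_VPROCS = 2048  # total virtual processors per host
HOST_MAX_ACTIVE_VMS = 1024    # active virtual machines per host

def check_plan(sockets, cores, host_memory_gb, vms):
    """vms is a list of (virtual_processors, memory_gb) tuples."""
    problems = []
    if sockets > HOST_MAX_SOCKETS:
        problems.append("too many physical processors")
    if cores > HOST_MAX_LOGICAL_PROCS:
        problems.append("too many logical processors")
    if host_memory_gb > HOST_MAX_MEMORY_GB:
        problems.append("too much physical memory")
    if len(vms) > HOST_MAX_ACTIVE_VMS:
        problems.append("too many active virtual machines")
    if sum(v for v, _ in vms) > HOST_MAX_TOTAL_VPROCS:
        problems.append("too many virtual processors in total")
    for vprocs, mem_gb in vms:
        if vprocs > VM_MAX_VPROCS or mem_gb > VM_MAX_MEMORY_GB:
            problems.append("a virtual machine exceeds the per-VM limits")
    return problems or ["plan fits within the supported maximums"]

# Example: a two-socket, 12-core host with 24 GB of RAM running
# two file servers and one database server (see the note below).
print(check_plan(sockets=2, cores=12, host_memory_gb=24,
                 vms=[(2, 4), (2, 4), (4, 12)]))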
Finally, you must decide how many virtual machines you want to
run on each host. In deciding this, you must consider the
following:
- How many cores you can afford when you purchase your host systems
- How much physical memory you can afford for your host systems
- How much processing power and physical memory your virtualized workloads will need for them to meet the performance requirements of your service level agreement (SLA)
Note: Planning the host processor and memory
As an example, let’s say you wanted to run two file servers and a Microsoft SQL Server database server on a single Hyper-V host in your datacenter. You’ve determined that the file servers will each require 2 virtual processors and 4 GB of RAM to perform as intended, while the database server will require 4 virtual processors and 12 GB of RAM for optimal performance. The total processor and memory requirements of your virtual machines will therefore be

(2 x 2) + 4 = 8 virtual processors
(2 x 4) + 12 = 20 GB of RAM

By including the minimum processor and memory requirements of the underlying host operating system plus room for growth, you might decide that a rack-mounted system with dual Intel Xeon E5-2430 processors and 24 GB of RAM can meet your needs. The Xeon E5-2430 is a 6-core processor, so two of them give you 12 cores, which easily meets the requirement of 8 dedicated virtual processors needed by the virtual machines. And the 24 GB of RAM provides several GB of overhead on the host in case extra memory needs to be assigned to the database server workload.
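The arithmetic in this note is easy to script as part of a capacity-planning worksheet. Here is a minimal Python sketch of the same calculation; the workload names and figures simply restate the example above:

workloads = [
    {"name": "file-server-1", "vprocs": 2, "ram_gb": 4},
    {"name": "file-server-2", "vprocs": 2, "ram_gb": 4},
    {"name": "sql-server",    "vprocs": 4, "ram_gb": 12},
]

total_vprocs = sum(w["vprocs"] for w in workloads)  # (2 x 2) + 4 = 8
total_ram_gb = sum(w["ram_gb"] for w in workloads)  # (2 x 4) + 12 = 20

print(f"Virtual processors required: {total_vprocs}")
print(f"Guest RAM required: {total_ram_gb} GB")
# Add the host operating system's own requirements plus growth headroom
# before selecting hardware, as in the dual Xeon E5-2430 / 24 GB example.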
Your decision concerning how many virtualized workloads to run
on a host might also influence your decision about which edition of
Windows Server 2012 to purchase. There are no technical differences
between the capabilities of the Standard and Datacenter editions of
Windows Server 2012. Both editions support up to 64 physical
processors and 4 TB of physical memory. Both editions also support
installing the same set of roles and features. The only differences
between these editions are the virtualization rights included in
their licensing and the price of the editions.
The virtualization rights included with each edition are as follows:
- Standard edition Includes rights to run up to two virtual instances of Windows Server 2012 on the licensed server
- Datacenter edition Includes rights to run an unlimited number of virtual instances of Windows Server 2012 on the licensed server
As a result, you should choose the Standard edition if you
need to deploy Windows Server 2012 as a workload on bare metal in a
nonvirtualized environment, and choose the Datacenter edition if you
need to deploy Windows Server 2012 Hyper-V hosts for a virtualized
datacenter or private-cloud scenario.
The licensing model for Windows Server 2012 has also been
simplified to make it easier for you to plan the budget for your IT
department. Specifically, the Windows Server 2012 Datacenter edition
is now licensed in increments of two physical processors. This
means, for example, that if you want to deploy the Datacenter
edition onto a system that has eight processors, you need to
purchase only four licenses of the product.
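In other words, the number of Datacenter licenses needed is the physical processor count divided by two, rounded up. A minimal Python sketch of that arithmetic (the function name is illustrative):

import math

def datacenter_licenses_needed(physical_processors: int) -> int:
    # Windows Server 2012 Datacenter is licensed in increments
    # of two physical processors.
    return math.ceil(physical_processors / 2)

print(datacenter_licenses_needed(8))  # eight processors -> 4 licenses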
Hyper-V networking requires careful planning to ensure reliable and secure network connectivity and management of both hosts and virtual machines. At a minimum, your host machines should have two physical network adapters, configured as follows:
- One network adapter to allow virtualized workloads to communicate with other systems on your production network
- One network adapter dedicated to the management of your Hyper-V hosts and connected to a dedicated network used by your systems management platform
More physical network adapters might be needed if you have
additional services or special requirements. For example, you might
need additional network adapters for the following:
- Providing connectivity between hosts and Internet SCSI (iSCSI) storage
- Deploying a failover cluster
- Using Cluster Shared Volumes (CSV) shared storage
- Performing live migrations of running virtual machines
- Increasing available bandwidth using Windows NIC Teaming
Note: Planning host networking
As an example, let’s say you want to deploy Hyper-V to run a
number of mission-critical server workloads for your organization.
You decide that your hosts should be clustered and use CSV for
performing live migration. You also decide that a single 1-gigabit
Ethernet (GbE) network adapter will have insufficient bandwidth to
allow clients to access the workloads. So you decide to use
Windows NIC Teaming, a new feature of Windows Server 2012, to
allow two network adapters to provide 2 gigabits per second (Gbps)
of network connectivity between your host cluster and the 10-GbE
backbone of your production network. Finally, you plan on using
your Fibre Channel storage area network (SAN) to provide storage
for your host machines. How many physical network adapters will
each host machine need?
- One NIC to provide dedicated connectivity to your management network
- Two NICs teamed together to provide connectivity between the virtualized workloads and your production network
- One NIC dedicated to the private network needed for failover clustering
- One NIC dedicated for use by CSV shared storage
- One NIC dedicated to live migration traffic
That’s six network adapters in total for each host. Note that no network adapter is required for SAN connectivity because you’re using Fibre Channel, not iSCSI.
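A simple tally like the following Python sketch can keep this kind of adapter inventory explicit in your planning documents; the role names are illustrative and should be adapted to your own design:

nic_plan = {
    "management": 1,           # dedicated management network
    "production (teamed)": 2,  # two NICs in a Windows NIC team
    "cluster private": 1,      # failover cluster private network
    "csv": 1,                  # CSV shared storage traffic
    "live migration": 1,       # live migration traffic
}

total = sum(nic_plan.values())
print(f"Physical NICs per host: {total}")  # 6; the Fibre Channel SAN needs none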
In addition to deciding how many network adapters your hosts
will need, you must also consider what types of virtual switches
will be needed for your environment. A Hyper-V virtual switch is a
layer 2 network switch that works like a physical Ethernet switch
but is implemented in software on the host. Hyper-V allows you to
create three different kinds of virtual switches:
- Private This type of virtual switch allows virtual machines running on the host to communicate only with each other and not with the operating system of the host. A private virtual switch is not bound to any physical network adapter on the host, which means that the virtual machines on the host cannot communicate with any other system on any physical network connected to the host.
- Internal This type of virtual switch allows virtual machines running on the host to communicate with each other and with the operating system of the host. An internal virtual switch is not bound to any physical network adapter on the host, which means that the virtual machines on the host cannot communicate with any other system on any physical network connected to the host.
- External Unlike the other two types of virtual switches listed, this type is bound to a physical network adapter on the host. The result is that an external virtual switch allows virtual machines running on the host to communicate with each other, with the operating system of the host, and with other systems on the physical network connected to the host through that adapter. In addition, the external virtual switch can be bound to the physical network adapter by means of miniports in one of three ways:
  - By using a single miniport representing a single physical network adapter
  - By using a single miniport representing multiple physical network adapters
  - By using multiple miniports representing a single physical network adapter
Note: Virtual switches
In most cases, you’ll want to create one or more external
virtual switches to enable clients on your production subnet or
subnets to access server workloads running in virtual machines on
your hosts. If you’re doing test or development work, however, a
private or internal virtual switch might be a good choice.
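The connectivity differences between the three switch types reduce to which traffic paths each one permits. The following Python sketch simply restates the descriptions above as a small lookup table; it is illustrative only, not part of any Hyper-V API:

SWITCH_TYPES = {
    # switch type: (VM-to-VM, VM-to-host OS, VM-to-physical network)
    "private":  (True, False, False),
    "internal": (True, True,  False),
    "external": (True, True,  True),
}

def describe(switch_type: str) -> str:
    vm_to_vm, vm_to_host, vm_to_lan = SWITCH_TYPES[switch_type]
    return (f"{switch_type}: VM-to-VM={vm_to_vm}, "
            f"VM-to-host={vm_to_host}, VM-to-physical-network={vm_to_lan}")

for name in SWITCH_TYPES:
    print(describe(name))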
The Hyper-V virtual switch has been enhanced in Windows Server
2012 with extensibility features that allow independent software
vendors (ISVs) to add functionality for filtering, forwarding, and
monitoring network traffic through virtual switches. These
virtual-switch extensions can be implemented using two kinds of
drivers:
- NDIS filter drivers These extensions can be used to perform network packet inspection, network packet filtering, and network forwarding. They are based on the Network Driver Interface Specification (NDIS) 6.3 specification, which is new in Windows Server 2012.
- WFP callout drivers These extensions are based on the Windows Filtering Platform (WFP) and can be used to provide virtual firewall functionality, connection monitoring, and filtering of traffic that is protected using Internet Protocol security (IPsec).
If your virtualized infrastructure requires any of the
preceding functionalities to be implemented at the virtual-switch
level on Windows Server 2012 Hyper-V hosts, you can search for an
ISV that provides a software solution that meets your needs.