Guest clustering
Failover Clustering of Hyper-V can be implemented in two ways:

- Host clustering, in which the Failover Clustering feature runs in the parent partition of the Hyper-V host machines. In this scenario, the VMs running on the hosts are managed as cluster resources, and they can be moved from one host to another to ensure availability of the applications and services provided by the VMs.
- Guest clustering, in which the Failover Clustering feature runs in the guest operating system within VMs. Guest clustering provides high availability for applications and services hosted within VMs, and it can be implemented either on a single physical server (Hyper-V host machine) or across multiple physical servers.
Host clustering helps ensure continued availability in the case of hardware failure or when you need to apply software
updates to the parent partition. Guest clustering, by contrast, helps
maintain availability when a VM needs to be taken down for maintenance.
Implementing guest clustering on top of host clustering can provide the
best of both worlds.
Guest clustering requires that the guest
operating systems running in VMs have direct access to common shared
storage. In previous versions of Windows Server, the only way to
provision such shared storage in a guest clustering scenario was to
have iSCSI initiators running in the guest operating systems so they
could connect directly with iSCSI-based storage. Guest clustering in
previous versions of Windows Server did not support using Fibre Channel
SANs for shared storage. VMs running Windows Server 2008 R2 in a guest
clustering scenario can use Microsoft iSCSI Software Target 3.3, which can be downloaded from the Microsoft Download Center. Figure 5 illustrates the typical way guest clustering was implemented in Windows Server 2008 R2.
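In Windows Server 2012 guests, connecting the in-box iSCSI initiator to shared storage can also be scripted with PowerShell. The sketch below is illustrative only; the portal address and target IQN are hypothetical placeholders for your own target server:

```powershell
# Start the iSCSI initiator service and set it to start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the iSCSI target portal (address is a placeholder for your target server)
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.10"

# Connect to the target persistently; the IQN below is a hypothetical example
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:fileserver1-guestcluster-target" -IsPersistent $true
```

Running the same connection script in each guest gives every cluster node access to the common shared storage.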
In Windows Server 2012, iSCSI Software Target is an in-box feature integrated into Failover Clustering, making it easier to implement guest clustering using shared iSCSI storage. By starting the High Availability Wizard from the Failover Cluster Manager console, you can quickly add the iSCSI Target Server as a role to your cluster. You can also do this with Windows PowerShell by using the Add-ClusteriSCSITargetServerRole cmdlet.
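As a sketch, the PowerShell approach might look like the following; the role name and cluster disk shown are hypothetical and depend on your cluster's configuration:

```powershell
# Add the iSCSI Target Server as a clustered role.
# "iSCSITarget1" and "Cluster Disk 1" are placeholders for your environment.
Add-ClusteriSCSITargetServerRole -Name "iSCSITarget1" -Storage "Cluster Disk 1"
```

The -Storage parameter identifies the cluster disk that will hold the virtual disks served up by the clustered iSCSI target.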
iSCSI is no longer your only option for shared storage in a guest clustering scenario, however. Windows Server 2012 includes an in-box Hyper-V Virtual Fibre Channel adapter that lets you connect directly from within the guest operating system of a VM to LUNs on your Fibre Channel SAN (see Figure 6). The new virtual Fibre Channel adapter supports up to four virtual HBAs assigned to each guest, with separate World Wide Names (WWNs) assigned to each virtual HBA and N_Port ID Virtualization (NPIV) used to register guest ports on the host.
Configuring Fibre Channel from the guest
Before you configure Fibre Channel as the shared storage
for VMs in a guest cluster, make sure that you have HBAs installed in
your host machines and connected to your SAN. Then, open the Virtual
SAN Manager from the Hyper-V Manager console and click Create to add a
new virtual Fibre Channel SAN to each host:
Provide a name for your new virtual Fibre Channel SAN and configure it as needed. Then open the settings for each VM in your guest cluster and select the Add Hardware option to add a virtual Fibre Channel adapter to the VM:
Finally, select the virtual SAN you created earlier. Once you're done, each VM in your guest cluster can use your SAN for shared storage:
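The same steps can also be scripted with the Hyper-V PowerShell module. In this sketch, the SAN name, VM names, and WWN values are hypothetical examples; in practice the WWNs identify a port on one of the host's physical HBAs:

```powershell
# Create a virtual Fibre Channel SAN on the host, bound to a physical HBA port.
# The WWNN/WWPN values below are placeholders for your HBA's actual names.
New-VMSan -Name "GuestClusterSAN" `
    -WorldWideNodeName "C003FF0000FFFF00" `
    -WorldWidePortName "C003FF5778E50002"

# Add a virtual Fibre Channel adapter to each VM in the guest cluster
Add-VMFibreChannelHba -VMName "ClusterNode1" -SanName "GuestClusterSAN"
Add-VMFibreChannelHba -VMName "ClusterNode2" -SanName "GuestClusterSAN"
```

After the adapters are added, present the shared LUNs to the WWPNs of the virtual HBAs on your SAN just as you would for physical servers.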
Guest clustering in Windows Server 2012 also supports other new Failover Clustering features, such as Cluster-Aware Updating (CAU), node drain, live storage migration, and more.
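For example, node drain can be invoked from PowerShell to move clustered roles off a node before maintenance; the node name below is a hypothetical placeholder:

```powershell
# Drain all clustered roles from a node before maintenance (node name is a placeholder)
Suspend-ClusterNode -Name "HyperVHost1" -Drain

# Resume the node and fail roles back once maintenance is complete
Resume-ClusterNode -Name "HyperVHost1" -Failback Immediate
```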