Windows Server 2012 : Continuous availability (part 7) - SMB Transparent Failover, Storage migration

10/7/2013 9:05:24 PM

Guest clustering vs. VM monitoring

Guest clustering in Windows Server 2012 is intended for server applications that you currently have clustered on physical servers. For example, if you currently have Exchange Server or SQL Server deployed on host clusters, you will have the additional option of deploying them on guest clusters (which can themselves be deployed on host clusters) for enhanced availability when you migrate your infrastructure to Windows Server 2012.

VM monitoring, by contrast, can enhance availability for other server roles in your environment, such as your print servers. You can also combine VM monitoring with guest clustering for even greater availability.

Guest Clustering: key differences between the Windows Server 2008 R2, Windows Server 2012, and VMware approaches

When we speak about clusters, we usually picture a few servers and the shared disk resource required to build a cluster. Although certain applications, such as Exchange Server 2010, SQL Server 2012, and System Center 2012, no longer require a shared disk resource in their clustering architecture, there are still plenty of scenarios where shared disks are essential to building a cluster.

In Windows Server 2008 R2, Hyper-V doesn’t provide a way to share a single virtual hard disk (VHD) or pass-through disk between VMs. It also doesn’t provide native access to Fibre Channel, so you can’t share a LUN. The only way to build guest clusters in Windows Server 2008 R2 is to use an iSCSI initiator. You can build a cluster with up to 16 nodes, and you can freely live-migrate the guest cluster nodes and use dynamic memory in those VMs.

In VMware vSphere, you can add the emulated LSI Logic SAS and LSI Logic Parallel controllers to present a shared VMDK or a LUN to two VMs, but you can’t create a cluster of more than two nodes on vSphere with built-in disk-sharing support. Note that advanced vSphere techniques such as vMotion and Fault Tolerance (FT) are not supported for guest clusters in a VMware environment; the same applies to hosts with overcommitted memory.

Windows Server 2012 Hyper-V brings a synthetic Fibre Channel interface to VMs, letting you build guest clusters up to the operating system’s node limits. With it, 16-node guest clusters of Windows Server 2008 R2 and 64-node guest clusters of Windows Server 2012 become a reality.

Enhanced PowerShell support

Failover Clustering in Windows Server 2012 also includes enhanced PowerShell support with the introduction of a number of new cmdlets for managing cluster registry checkpoints, creating scale-out file servers, monitoring the health of services running in VMs, and other capabilities. Table 3-2 lists some of the new PowerShell cmdlets for Failover Clustering.

Table 3-2. New PowerShell Cmdlets for Failover Clustering

  • Add-ClusterCheckpoint, Get-ClusterCheckpoint, Remove-ClusterCheckpoint: Manage cluster registry checkpoints, including cryptographic checkpoints

  • Add-ClusterScaleOutFileServerRole: Creates a file server for scale-out application data

  • Add-ClusterVMMonitoredItem, Get-ClusterVMMonitoredItem, Remove-ClusterVMMonitoredItem, Reset-ClusterVMMonitoredState: Monitor the health of services running inside a VM

  • Update-ClusterNetworkNameResource: Updates the private properties of a Network Name resource and sends DNS updates

  • Test-ClusterResourceFailure: Replaces the Fail-ClusterResource cmdlet

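As a quick sketch of how some of these cmdlets are used (the VM, service, and resource names below are hypothetical examples, and the commands assume a cluster node with the FailoverClusters module loaded):

```powershell
# Monitor the Print Spooler service inside a clustered VM (names are hypothetical)
Add-ClusterVMMonitoredItem -VirtualMachine "VM01" -Service "Spooler"
Get-ClusterVMMonitoredItem -VirtualMachine "VM01"

# List registry checkpoints for a cluster resource (resource name is hypothetical)
Get-ClusterCheckpoint -ResourceName "ClusterRes1"
```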

2. SMB Transparent Failover

Windows Server 2012 includes the updated version 3.0 of the Server Message Block (SMB) file-sharing protocol. SMB Transparent Failover is a new feature that facilitates performing maintenance of nodes in a clustered file server without interrupting server applications that store data on Windows Server 2012 file servers. SMB Transparent Failover can also help ensure continuous availability by transparently reconnecting to a different cluster node when a failure occurs on one node.
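Transparent failover applies to shares marked as continuously available on a clustered file server. As a sketch (the share name, path, and account are hypothetical examples), such a share can be created with the SmbShare module:

```powershell
# Create a continuously available share for server application data
# (share name, path, and access account are hypothetical examples)
New-SmbShare -Name "SQLData" -Path "C:\ClusterStorage\Volume1\SQLData" `
    -ContinuouslyAvailable $true -FullAccess "CONTOSO\SQLService"
```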

3. Storage migration

Storage migration is a new feature of Hyper-V in Windows Server 2012 that lets you move all of the files for a VM to a different location while the VM continues running. This means that with Hyper-V hosts running Windows Server 2012, it’s no longer necessary to take a VM offline when you need to upgrade or replace the underlying physical storage.

When you initiate a storage migration for a VM, the following takes place:

  1. A new VHD or VHDX file is created in the specified destination location (storage migration works with both VHD and VHDX).

  2. The VM continues to both read and write to the source VHD, but new write operations are now mirrored to the destination disk.

  3. All data is copied from the source disk to the destination disk in a single-pass copy operation. Writes continue to be mirrored to both disks during this copy operation, and uncopied blocks on the source disk that have been updated through a mirrored write are not recopied.

  4. When the copy operation is finished, the VM switches to using the destination disk.

  5. Once the VM is successfully using the destination disk, the source disk is deleted and the storage migration is finished. If any errors occur, the VM can fail back to using the source disk.
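The mirrored-write, single-pass copy described in steps 1 through 5 can be sketched as a small in-memory simulation (this illustrates the algorithm only, not Hyper-V's actual implementation; all names are hypothetical):

```python
# In-memory sketch of storage migration's single-pass copy with mirrored
# writes (an illustration of the algorithm, not Hyper-V's implementation).

def storage_migrate(source, writes):
    """Copy `source` (a list of disk blocks) to a new destination while
    `writes` (a list of (block_index, value) pairs) arrive mid-copy."""
    dest = [None] * len(source)
    mirrored = set()            # blocks already written to BOTH disks
    incoming = iter(writes)
    for i in range(len(source)):
        # Simulate one VM write arriving while block i is being copied.
        pair = next(incoming, None)
        if pair is not None:
            idx, val = pair
            source[idx] = val   # the VM keeps writing to the source disk...
            dest[idx] = val     # ...and each write is mirrored to the destination
            mirrored.add(idx)
        # Uncopied blocks already updated by a mirrored write are not recopied.
        if i not in mirrored:
            dest[i] = source[i]
    return dest                 # the VM now switches to the destination disk
```

For example, `storage_migrate([0, 1, 2, 3, 4, 5], [(4, 99), (0, 7)])` produces a destination disk identical to the final state of the source, even though blocks 4 and 0 were overwritten mid-copy.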

Moving a VM from test to production without downtime

A VM that is in a test environment typically lives on a Hyper-V server, usually nonclustered, and usually not in the best location or on the best hardware. A VM that is in production typically lives on a cluster, on good hardware, and is in a highly managed and monitored datacenter.

Moving from one to the other has always involved downtime—until now.

Hyper-V on Windows Server 2012 enables some simple tasks that greatly increase the flexibility of the administrator when it comes to movement and placement of running VMs. Consider this course of events.

I create a VM on a testing server, configure it, get signoff, and make the VM ready for production. With Hyper-V shared-nothing live migration, I can migrate that VM to a production cluster node without taking the VM offline. The process copies the VHDs using storage migration and then, once storage is copied, performs a traditional live migration between the two computers. The only thing the computers need is Ethernet connectivity. In the past, this would have required an import/export operation.
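This shared-nothing move can also be scripted. As a sketch (the VM name, destination host, and path are hypothetical examples), the Hyper-V module's Move-VM cmdlet performs the combined storage and live migration:

```powershell
# Migrate a running VM, including its storage, to another host over Ethernet
# (VM name, destination host, and destination path are hypothetical)
Move-VM -Name "VM01" -DestinationHost "ProdNode01" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"
```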

Now that the VM is running on my node, I need to cluster it. This is a two-step process. First, using storage migration, I can move the VHD of the VM onto my CSV volume for the cluster. I could also move it to the file share that is providing storage for the cluster, if I’m using Hyper-V over SMB. Regardless of the configuration, the VHD can be moved to a new location without any downtime in the VM. In the past, this would have taken an import/export of the VM, or, at minimum, shutdown and manual movement of the VHD file.

Finally, I can fire up my Failover Cluster Manager and add the VM as a clustered object. Windows Server 2012 lets you add running VMs to a failover cluster without needing to take the VMs offline to do this.
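Adding the running VM as a clustered role can also be done from PowerShell on a cluster node. As a sketch (the VM name is a hypothetical example):

```powershell
# Add a running VM to the failover cluster without taking it offline
# (VM name is a hypothetical example)
Add-ClusterVirtualMachineRole -VirtualMachine "VM01"
```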

There you have it: start the VM on the stand-alone test server, move the VM to the cluster and cluster storage, and finally create the cluster entry for the VM, all without any downtime required.

Storage migration of unclustered VMs can be initiated from the Hyper-V Manager console by selecting the VM and clicking the Move option. Storage migration of clustered VMs cannot be initiated from the Hyper-V Manager console; the Failover Clustering Manager console must be used instead. You can also perform storage migrations with PowerShell by using the Move-VMStorage cmdlet.
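As a sketch of the PowerShell route (the VM name and destination path are hypothetical examples):

```powershell
# Move all of a running VM's files to a new storage location without downtime
# (VM name and destination path are hypothetical examples)
Move-VMStorage -VMName "VM01" -DestinationStoragePath "C:\ClusterStorage\Volume2\VM01"
```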

Storage Migration: real-world scenarios

Storage migration simply adds greater flexibility in when and where your VMs can be moved from one storage volume to another. This becomes critical as we move from high-availability clusters to continuously available clusters, and it adds tremendous agility, allowing IT to better respond to changing business requirements.

Let’s consider two kinds of scenarios: out of space and mission-critical workloads.

Out of space

You just ran out of space on the beautiful, shiny storage enclosure you bought about 12 months ago. This can happen for many reasons, but the common ones include the following:

  • Unclear business requirements when the enclosure was acquired

  • Server sprawl or proliferation, which is a very common problem in most established virtualization environments

That storage enclosure probably hosts hundreds or thousands of VMs, and performing the move during shrinking IT maintenance windows is simply not realistic. With live storage migration, IT organizations can move the VMs to other storage units outside of typical maintenance windows.

Mission-critical workloads

The workload associated with your most mission-critical VMs is skyrocketing. You bought a new high-performance SAN to host this workload, but you can’t take the VMs down to move them to the new SAN.

This is a common problem in organizations with very high uptime requirements or organizations with very large databases, where the move to the new storage volume would simply take too long.

 