My lab consists of two Dell Precision T7500 workstations, each configured with 96GB of RAM. Each one is a node in a Hyper-V 2012 cluster. They mount Cluster Shared Volumes via iSCSI from a third Dell Precision workstation; some of the volumes are SSD-based and some are SAS RAID-based.
One of the things I have experienced is that when I want to patch the hosts, I pause a node and drain the roles. This kicks off a live migration of all the VMs on Node1 to Node2, which can take a substantial amount of time because these VMs consume around 80GB of memory.
When performing a full live migration of these 18 VMs across a single 1GbE connection, the Ethernet link was 100% saturated, and the migration took exactly 13 minutes and 15 seconds.
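As a quick sanity check, 80GB of VM memory is about 640 gigabits, so even at a perfect 1Gb/s line rate the transfer would take roughly 640 seconds, a bit under 11 minutes. The observed 13m15s is right in that ballpark once you add protocol overhead and the memory pages that change and have to be re-sent during a live migration.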
I recently got a couple of 10 gigabit Ethernet cards for my lab environment. I scored an awesome deal on eBay: 10 cards for $250, or $25 per Dell/Broadcom 10GbE card! The problem I have now is that the cheapest 10GbE switch on the market is $850. No way am I paying that for my lab. The good news is that these cards, just like 1GbE cards, support auto MDI/MDI-X detection, so you can form an old-school "crossover" connection using just a standard patch cable. I did order a CAT6A cable just to be safe.
Once I installed and configured the new 10GbE cards, I set them up in the cluster as a Live Migration network:
The same live migration over 10GbE took 65 SECONDS!
In summary -
1GbE live migration, 18 VMs: 13 minutes 15 seconds.
10GbE live migration, 18 VMs: 65 seconds.
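That works out to roughly a 12x improvement (795 seconds down to 65 seconds), which tracks nicely with the 10x jump in link speed.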
In my case, I can drastically decrease live migration time, at minimal cost, by using a direct 10 gigabit Ethernet connection between the two hosts in the cluster. Aidan Finn, MVP, has a post with similar results: http://www.aidanfinn.com/?p=12228
Next up, I need to carve up my 10GbE network by connecting it to a Hyper-V virtual switch and then creating virtual network adapters. Aidan has a good write-up on the concept here: http://www.aidanfinn.com/?p=12588
Here is a graphic that shows the concept from his blog:
The supported and recommended network configuration guide for Hyper-V clusters is located here:
http://technet.microsoft.com/en-us/library/ff428137(v=WS.10).aspx
Typically in the past, you would see four NICs: one each for management, cluster, live migration, and virtual machines. The common alternative is to use a single 10GbE NIC (or two in a highly available team), create virtual network adapters on a Hyper-V switch, and use QoS weighting to carve up the bandwidth. In my case, I have a dedicated NIC for management (the parent partition/OS) and a dedicated NIC for Hyper-V virtual machines. I want to connect my 10GbE NIC to a Hyper-V virtual switch and then create two virtual network adapters: one for Live Migration and one for Cluster/CSV communication.
We will be using the QoS guidelines posted at: http://technet.microsoft.com/en-us/library/jj735302.aspx
John Savill has also done a nice quick walkthrough of a similar configuration: http://savilltech.com/blog/2013/06/13/new-video-on-networking-for-windows-server-2012-hyper-v-clusters/
When I start, my current network configuration looks like this:
We will be attaching the 10GbE network adapter to a new Hyper-V switch, creating two virtual network adapters, and then applying QoS to each to ensure that both channels get sufficient bandwidth if there is contention on the network.
Open PowerShell.
To get a list of the names of each NIC:
Get-NetAdapter
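If you have several adapters and just want to spot the 10GbE card, you can filter on link speed (on my adapters the LinkSpeed property reads "10 Gbps"; adjust the string if yours reports it differently):
Get-NetAdapter | Where-Object { $_.LinkSpeed -eq "10 Gbps" }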
To create the new switch, with bandwidth weighting mode:
New-VMSwitch "ConvergedSwitch" -NetAdapterName "10GBE NIC" -MinimumBandwidthMode Weight -AllowManagementOS $false
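Note that because -AllowManagementOS is set to $false, creating the switch does not automatically create a virtual network adapter for the host operating system; we will add the LM and Cluster virtual adapters explicitly in the next steps.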
To see our new virtual switch:
Get-VMSwitch
You will also see this in Hyper-V manager:
Next up, create a virtual NIC in the management operating system for Live Migration and connect it to the new virtual switch:
Add-VMNetworkAdapter -ManagementOS -Name "LM" -SwitchName "ConvergedSwitch"
Create a virtual NIC in the management operating system for Cluster/CSV communications, and connect it to the new virtual switch:
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
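Since my 10GbE link is a direct host-to-host connection, I'm not tagging VLANs here, but if your converged network carries tagged traffic you could optionally assign a VLAN to each virtual adapter (the VLAN IDs below are just examples):
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LM" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 30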
View the new virtual network adapters in PowerShell:
Get-VMNetworkAdapter -All
View them in the OS:
Assign a minimum bandwidth weight to each virtual NIC for QoS, giving the heavier weighting to Live Migration in case of contention on the network:
Set-VMNetworkAdapter -ManagementOS -Name "LM" -MinimumBandwidthWeight 90
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set the weights so that the total across all virtual network adapters on the switch equals 100. The configuration above will (roughly) reserve ~90% for the LM network and ~10% for the Cluster network under contention.
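One note from the QoS guidelines: any virtual adapter on the switch that does not have its own weight configured (for example, virtual machines you might attach later) falls into the default flow. If you go that route, you would want to reserve a share for the default flow and lower the LM/Cluster weights so the total stays at or under 100; the value below is just an example:
Set-VMSwitch "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 20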
To view the bandwidth settings of each virtual NIC:
Get-VMNetworkAdapter -All | fl
At this point, I need to assign IP address information to each virtual NIC, and then repeat this configuration on all nodes in my cluster.
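For example, on the first node I might assign addresses like the following; the subnets are just placeholders, so use whatever ranges fit your environment, and keep the LM and Cluster networks in separate subnets so the cluster treats them as distinct networks. The virtual adapters show up in the OS as "vEthernet (LM)" and "vEthernet (Cluster)":
New-NetIPAddress -InterfaceAlias "vEthernet (LM)" -IPAddress 192.168.50.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress 192.168.51.1 -PrefixLength 24
The second node would get the .2 address in each subnet.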
After this step is completed and you confirm that the nodes can ping each other's new interfaces, you can configure the networks in Failover Cluster Manager. Rename each network appropriately and configure the Live Migration and cluster communication settings:
In the above picture, I don't allow cluster communication on the live migration network, but this is optional; you certainly can allow it so that cluster traffic can fail over to this network if the primary cluster network fails.
Test Live Migration and ensure performance and communications are working properly.
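One easy way to check is to time a live migration of a single VM from PowerShell (the VM and node names below are placeholders for whatever exists in your cluster):
Measure-Command { Move-ClusterVirtualMachineRole -Name "TestVM" -Node "NODE2" -MigrationType Live }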
In summary, here is all the PowerShell used:
Get-NetAdapter
New-VMSwitch "ConvergedSwitch" -NetAdapterName "10GBE NIC" -MinimumBandwidthMode Weight -AllowManagementOS $false
Get-VMSwitch
Add-VMNetworkAdapter -ManagementOS -Name "LM" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Get-VMNetworkAdapter -All | fl
Set-VMNetworkAdapter -ManagementOS -Name "LM" -MinimumBandwidthWeight 90
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10