
Hyper-V 2012 R2 Network Architectures Series (Part 2 of 7) – Non-Converged Networks, the classical but robust approach


As an IT guy I have the strong belief that engineers understand graphics and charts much better than bullet points and text, so the first thing I will do is paste the following diagram:

[Diagram: Non-Converged network design with six physical NICs — two 1GbE adapters in an LACP team for Management, one 10GbE adapter for CSV and one for Live Migration (both with RSS), and two 10GbE adapters in an LACP team with dVMQ for the VM vSwitch]

At first sight you can recognize, from left to right, that there are six physical network cards used in this example. You can also recognize that the two adapters on the left are 1GbE adapters and the other four green adapters are 10GbE adapters. These basic considerations are really important because they will dictate how your Hyper-V Cluster nodes will perform.

On top of the six physical network cards you can see that some of them are using RSS and some of them are using dVMQ. Here is where things start to become interesting, because you might wonder why I don’t suggest creating one big four-NIC team with the 10GbE adapters and dismissing or disabling the 1GbE adapters. At the end of the day, 40Gb of bandwidth should be more than enough, right?

Well, as a PFE, I like stability, high availability and robustness in Hyper-V environments, but I also like to separate things that have different purposes. Using the approach from the picture above will give me the following benefits:

  • You can use RSS for the Mgmt, CSV and LM traffic (see the RSS sketch after this list). This enables the host to squeeze the most out of the 10GbE adapters if needed. Remember that RSS and dVMQ are mutually exclusive on the same adapter, so if I want RSS I need separate physical NICs.
  • Since 2012 R2, LM and CSV can benefit from SMB Multichannel, so I don’t need to create a team, especially when the adapters support RSS. CSV and LM will each be able to use 10Gb without external dependencies or aggregation on the physical switch like LACP (see the SMB Multichannel sketch after this list).
  • The CSV and LM cluster networks will provide enough resilience to my cluster in conjunction with the Mgmt network.
  • The Mgmt network will have HA using an LACP team. This is important and possible because each physical NIC is connected directly to a physical switch that can be aggregated by our network administrator.
  • Any file copy using SMB between Hyper-V hosts will use the CSV and LM network cards at 10Gb because of how the SMB Multichannel algorithm works. Faster adapters take precedence, so even if it’s a copy over the Mgmt network, it will benefit from this awesome feature and send the copy at 20Gb (10Gb from each of the CSV and LM adapters).
  • SCVMM will always have a dedicated Mgmt network to communicate with the Hyper-V host for any required operation, so creating or deleting any Logical Switch will never interrupt the communication between them.
  • You can dedicate two entire 10GbE physical adapters to your Virtual Machines using an LACP team and creating the vSwitch on top (see the teaming sketch after this list). dVMQ and vRSS will help the VMs perform as needed, while the LACP/Dynamic team will allow them to receive and send up to 20Gb if really required. I have to be honest here: the maximum bandwidth inside a VM that I have seen with this configuration was 12Gb, but that is not a bad number at all.
  • You can use SCVMM 2012 R2 to create the Logical Switch on top and apply any desired QoS to the VMs if needed.
  • You are not mixing storage I/O with network I/O.
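
To make the RSS bullet concrete, here is a minimal PowerShell sketch of the RSS/VMQ split on the physical adapters. The interface aliases (CSV, LM, VM1, VM2) are hypothetical placeholders; list your own with Get-NetAdapter:

    # Enable RSS on the dedicated CSV and Live Migration adapters
    # (aliases are hypothetical; RSS and VMQ are mutually exclusive per NIC)
    Enable-NetAdapterRss -Name "CSV"
    Enable-NetAdapterRss -Name "LM"
    Disable-NetAdapterVmq -Name "CSV"
    Disable-NetAdapterVmq -Name "LM"

    # Keep VMQ on the two 10GbE adapters reserved for VM traffic
    Enable-NetAdapterVmq -Name "VM1"
    Enable-NetAdapterVmq -Name "VM2"

    # Verify the resulting RSS configuration
    Get-NetAdapterRss -Name "CSV","LM"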
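
The teaming side could look like the following sketch: the Mgmt LACP team over the two 1GbE adapters, the LACP/Dynamic team over the two VM-facing 10GbE adapters, and the vSwitch with weight-based QoS on top. Team names, member names and the bandwidth weight are illustrative values, not a definitive layout:

    # LACP team over the two 1GbE adapters for Management traffic
    New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "Mgmt1","Mgmt2" `
        -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

    # LACP/Dynamic team over the two 10GbE adapters dedicated to VMs
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "VM1","VM2" `
        -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

    # vSwitch on top of the VM team; weight-based QoS, no host vNIC needed
    New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" `
        -AllowManagementOS $false -MinimumBandwidthMode Weight

    # Example QoS: give one VM a guaranteed relative bandwidth share
    Set-VMNetworkAdapter -VMName "VM01" -MinimumBandwidthWeight 20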
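
Finally, to watch SMB Multichannel spread a host-to-host copy across the RSS-capable 10GbE adapters, you can inspect the active connections. HV02 is a hypothetical peer node name, and the constraint is optional:

    # List the interfaces SMB Multichannel is using; the two 10GbE RSS
    # adapters should appear with multiple channels each
    Get-SmbMultichannelConnection

    # Optionally pin SMB traffic for one peer to the CSV and LM interfaces
    New-SmbMultichannelConstraint -ServerName "HV02" -InterfaceAlias "CSV","LM"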

So, as you can see, this setup has a lot of benefits and follows best practices. It is not bad at all, and maybe I have already forgotten some other benefit… but what are the constraints or limitations of this Non-Converged Network Architecture? Here are some of them:

  • Cost. Not a minor issue for some customers that can’t afford four 10GbE adapters and all the network infrastructure that this might require if we want real HA on the electronics.
  • Additional Mgmt effort. This model requires us to set up and maintain six NICs and their configurations. It also requires the network administrator to maintain the LACP port groups on the physical switch.
  • More cables in the datacenter.
  • Replica or other management traffic that is not SMB will only have up to 2Gb of throughput (the two teamed 1GbE adapters).
  • Enterprise hardware is going in the opposite direction. Today it is more common to see third-party solutions that multiplex the real adapters into multiple logical partitions, but let’s talk about that later.

Well, maybe I didn’t give you any new information regarding this configuration, but at least we can see that this architecture is still a good choice when possible, for several reasons. It is up to you and the hardware you have whether to use this option.

See you in my next post, where I will talk about Converged Networks managed by SCVMM and PowerShell.

