I am going through the setup and configuration of a clustered Windows Server 2012 R2 Hyper-V host. I’ve followed as much documentation as I can find, and the Cluster Validation is passing with flying colors, but I have three questions about the networking setup.
Here’s an overview as well as a diagram of our configuration:
We are running two Server 2012 R2 nodes in a Dell VRTX blade chassis. We have four dual-port 10 GbE Intel NICs installed in the VRTX chassis. We have two Netgear 12-port 10 GbE switches, both uplinked to our network backbone switch.
Here’s what I’ve done on each 2012 R2 node:
-Created a NIC team using two 10 GbE ports from separate physical cards in the blade chassis.
-Created a virtual switch on this team named “Cluster Switch” with the ManagementOS option specified.
-Created 3 virtual NICs that connect to this “Cluster Switch”: Management (10.1.10.x), Cluster (172.16.1.x), Live Migration (172.16.2.x)
-Set up VLAN ID 200 on the Cluster NIC using PowerShell.
-Set a bandwidth weight on each of the 3 NICs: Management has 5, Cluster has 40, Live Migration has 20.
-Set a default minimum bandwidth weight for the switch of 35 (for the VM traffic).
-Created two virtual switches for iSCSI both with “-AllowManagementOS $false” specified.
-Each of these switches is using a 10 GbE port from separate physical cards in the blade chassis.
-Created a virtual NIC for each of the virtual switches: iSCSI1 (172.16.3.x) and iSCSI2 (172.16.4.x)
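For reference, here is roughly what the steps above look like in PowerShell on each node. This is a sketch, not my exact script: the team name “ClusterTeam”, the physical adapter names “NIC1”–“NIC4”, and the teaming mode are placeholders for whatever your hardware actually reports, and I create the converged switch without a management vNIC and add the three host vNICs explicitly (which ends up in the same place as letting the switch auto-create one and renaming it):

```powershell
# Team two 10 GbE ports from separate physical cards (adapter names are placeholders)
New-NetLbfoTeam -Name "ClusterTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Converged virtual switch on the team, with weight-based QoS
New-VMSwitch -Name "Cluster Switch" -NetAdapterName "ClusterTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Three host vNICs on the converged switch
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "Cluster Switch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "Cluster Switch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "Cluster Switch"

# VLAN ID 200 on the Cluster vNIC
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 200

# Bandwidth weights (5 / 40 / 20), plus a default of 35 for VM traffic
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
Set-VMSwitch "Cluster Switch" -DefaultFlowMinimumBandwidthWeight 35

# iSCSI: one virtual switch per physical port, one host vNIC on each
New-VMSwitch -Name "iSCSI1" -NetAdapterName "NIC3" -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI1" -SwitchName "iSCSI1"
New-VMSwitch -Name "iSCSI2" -NetAdapterName "NIC4" -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI2" -SwitchName "iSCSI2"
```

The IP addresses in the subnets listed above are then assigned to the host vNICs (`vEthernet (Management)`, etc.) the usual way.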
Here’s what I’ve done on the Netgear 10GB switches:
-Created a LAG using two ports on each switch to connect them together.
-Currently, I have no traffic going across the LAG as I’m not sure how I should configure it.
-Spread the network connections across both Netgear switches, so traffic from the virtual switch “Cluster Switch” on each node reaches both Netgear 10 GbE switches.
-Connected each virtual iSCSI switch from each node to its own port on each Netgear switch.
First Question: As I mentioned, the Cluster Validation wizard thinks everything is great. But what about the traffic the host and the guest VMs use to communicate with the rest of the corporate network? That traffic is on the same subnet as the Management NIC. Should the management traffic share that corporate subnet, or should it be on its own subnet? And if Management gets its own subnet, how do I manage the cluster from the corporate network? I feel like I’m missing something simple here.
Second Question: Do I even need to implement VLANs in this configuration? Since everything is on its own subnet, I don’t see the need.
Third Question: I’m confused about how the LAG will work between the two 10 GbE switches when both have separate uplinks to the backbone switch. I see diagrams that show this setup, but I’m not sure how to achieve it without creating a loop.
Thanks!