Below are the NIC configurations for the Hyper-V (HV) and SOFS nodes; a rough PowerShell sketch of the corresponding cluster network roles follows the lists.
Hyper-V Cluster Nodes
2 x Production - allow cluster comms, with client access
4 x SMB - do not allow cluster comms
2 x Heartbeat/Cluster/Live Migration - allow cluster comms, do not allow client access
SOFS Cluster Nodes
2 x Production - allow cluster comms, with client access
4 x SMB - do not allow cluster comms
2 x Heartbeat/Cluster/Redirected I/O - allow cluster comms, do not allow client access
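
For clarity, this is roughly how those roles map onto the cluster network settings in PowerShell. The network names below are placeholders, not our actual names (Role 0 = no cluster comms, 1 = cluster comms only, 3 = cluster comms plus client access):

# Production NICs - cluster comms with client access
(Get-ClusterNetwork -Name "Production").Role = 3

# SMB storage NICs - no cluster comms
(Get-ClusterNetwork -Name "SMB1").Role = 0
(Get-ClusterNetwork -Name "SMB2").Role = 0

# Heartbeat/Cluster/Live Migration (Redirected I/O on the SOFS side) - cluster comms only
(Get-ClusterNetwork -Name "Heartbeat").Role = 1

# Sanity check
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric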
Question 1
Should an issue occur with the SAS cables connecting the SOFS nodes to the JBOD, redirected storage I/O traffic will flow across any network configured to allow cluster communication. Is there a way to restrict the potential redirected I/O traffic to the Heartbeat/Cluster/Redirected I/O NICs, so that none of it is transmitted over the production NICs? Any network that allows cluster comms is technically a candidate for any of the three types of cluster traffic, CSV redirected traffic included!
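
For reference, this is how I've been looking at which networks are eligible. My understanding (please correct me if wrong) is that CSV/redirected traffic goes over the cluster-enabled network with the lowest metric, so manually pinning the lowest metric onto the Heartbeat/Cluster/Redirected I/O network is the only lever I can see (the network name below is a placeholder):

# Show every cluster network with its role and metric, lowest metric first
Get-ClusterNetwork | Sort-Object Metric | Format-Table Name, Role, Metric, AutoMetric

# Would forcing the lowest metric here keep redirected I/O off the production NICs?
(Get-ClusterNetwork -Name "Heartbeat-RedirectedIO").Metric = 900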
Question 2
In an Aidan Finn blog post on SMB Multichannel (http://www.aidanfinn.com/2012/11/enabling-smb-multichannel-to-scale-out-file-server-cluster/), he states that the storage network should be configured to allow cluster comms and client access so that the HV nodes can talk to the SOFS nodes. But in a 6-hour clustering video with Symon Perriman and Elden Christensen, they state that cluster comms should by no means be enabled on the storage network. Who is correct?
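
In case it's useful, this is what I've been running from a Hyper-V node (the SMB client side) to see which interfaces multichannel actually ends up using while VMs have their virtual disks open on the SOFS share:

# Interfaces the SMB client considers usable for multichannel
Get-SmbClientNetworkInterface

# The client/server interface pairs multichannel is actually using right now
Get-SmbMultichannelConnection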
Question 3
In a Cluster-Aware Updating scenario, when an HV node is drained of VMs in preparation for patching/reboots, are the VM states copied to the other HV node via the network enabled for live migration? I ask because I discovered today that in a two-node cluster, no matter how many NICs are enabled for live migration, only one will be used. I'm concerned that our 1 Gbps NIC will mean VM states copy to the other node very slowly during a patching session.
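
Not sure if it's relevant, but these are the bits I've been checking on each HV node. My understanding is that when CAU drains a node it is effectively doing a Suspend-ClusterNode -Drain, which live migrates the running VMs off (the node name below is a placeholder):

# Which networks are allowed for live migration, and how many can run at once?
Get-VMMigrationNetwork
Get-VMHost | Select-Object VirtualMachineMigrationEnabled, MaximumVirtualMachineMigrations

# Roughly what CAU does when it drains a node before patching
Suspend-ClusterNode -Name "HV01" -Drain -Wait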
10 GbE NICs would be lovely, but we simply don't have the budget.
Any suggestions welcomed.
Many thanks.