
S2D Cluster without RDMA - Network design questions


Hi all S2D-wizards!

Let me describe what I'm trying to achieve before asking my actual questions:
A 4-node hyperconverged Hyper-V S2D cluster based on Server 2019 Datacenter.
(Yes, I'm aware that S2D cluster creation is currently blocked on 2019 GA, but I guess that's another discussion.)
(And yes again, I've already purchased the 2019 Datacenter licenses for this purpose...)

In each node:

  • 2x10G Mellanox Connect X3 SFP+
  • 4x10G Intel X710 SFP+
  • 2x non-RDMA 10G switches dedicated to the storage network.
    The switches will have a 40Gb trunk between them.
    The switches can do virtual stacking.
  • 2x 10G switches for VM traffic and node management.
    Non-RDMA, non-stackable: pretty basic 10G switches.

My idea, which at least looks good on paper, is to create a simple failover team on the Mellanox ports for VM traffic and node management.
What would be the most suitable team type for this: SET or plain old switch-independent LBFO?
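
For reference, this is roughly what I picture for the Mellanox side if SET is the way to go. Just a sketch: the adapter and switch names ("Mellanox 1", "Mellanox 2", "vSwitch-VM", "Mgmt") are placeholders for whatever Get-NetAdapter actually shows on my hosts.

    # SET (Switch Embedded Teaming) switch on the two Mellanox ports for VM traffic
    New-VMSwitch -Name "vSwitch-VM" `
        -NetAdapterName "Mellanox 1","Mellanox 2" `
        -EnableEmbeddedTeaming $true `
        -AllowManagementOS $false

    # Host vNIC on the same SET switch for node management
    Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "vSwitch-VM"

    # The plain LBFO alternative would presumably be something like:
    # New-NetLbfoTeam -Name "Team-VM" -TeamMembers "Mellanox 1","Mellanox 2" `
    #     -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic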

I've spent the last couple of days trying to figure out how to design and create the storage network.
There are plenty of guides available on how to configure RDMA-enabled networks, but not that many for non-RDMA.
(The reason for not using RoCE is that the switches are not compatible, and the reason for not using iWARP is that the NICs are not compatible.)

Most documentation describes creating a SET team for the storage network and putting two separate SMB network adapters behind that SET team.
Why two networks for the SMB traffic? (SMB Direct?)
Does using a SET team even make sense when not utilizing RDMA anyway?
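
To make sure I've read those guides correctly, this is my rough understanding of the pattern. Again just a sketch: the adapter names and the team mapping of each SMB vNIC to a physical port are my own assumptions, not something from my current setup.

    # SET switch over two storage-facing ports ("Intel 1"/"Intel 2" are placeholders)
    New-VMSwitch -Name "vSwitch-Storage" `
        -NetAdapterName "Intel 1","Intel 2" `
        -EnableEmbeddedTeaming $true `
        -AllowManagementOS $false

    # Two host vNICs for SMB, so SMB Multichannel gets two independent paths
    Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "vSwitch-Storage"
    Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "vSwitch-Storage"

    # Pin each SMB vNIC to one physical member so each path sticks to one port/switch
    Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "Intel 1"
    Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB2" -PhysicalNetAdapterName "Intel 2"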

Is there any point in having a separate Live Migration network? If so, would you put it on the management switches or the storage switches?
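
If a separate Live Migration network does make sense, I'm guessing it would be constrained with something like the following (a sketch; the subnets are placeholders for whatever the storage networks end up being):

    # Use SMB as the Live Migration transport so it can use the SMB/storage networks
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

    # Restrict Live Migration to the storage subnets (placeholder subnets)
    Set-VMHost -UseAnyNetworkForMigration $false
    Add-VMMigrationNetwork "10.10.1.*"
    Add-VMMigrationNetwork "10.10.2.*"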

Then there's the topic of bandwidth aggregation and failover.
When deciding on this design, I was hoping to be able to aggregate the 4-port Intel NIC (used for S2D cluster storage) over the stacked switches, creating an aggregated link of 40Gbps.
I realize this may be a configuration nightmare on the server side, so I'm ready to accept that I may have to ditch the stacking. However, I should still be able to achieve 20Gbps, with all four NICs connected in an active-passive fashion?
Am I dreaming here, or do you agree this SHOULD be possible? If so, what kind of teaming would you recommend? Will S2D traffic be dynamically balanced across all physical NICs?
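
For what it's worth, the alternative I keep coming back to is to skip switch-dependent teaming on the storage ports entirely and lean on SMB Multichannel instead, roughly like this (sketch only; the adapter names, IPs and the two storage subnets, one per switch, are all placeholder assumptions):

    # Leave the four Intel ports un-teamed and give each its own IP,
    # two ports per storage subnet (one subnet per storage switch)
    $storageNics = @(
        @{ Name = "Intel 1"; IP = "10.10.1.11" },
        @{ Name = "Intel 2"; IP = "10.10.2.11" },
        @{ Name = "Intel 3"; IP = "10.10.1.12" },
        @{ Name = "Intel 4"; IP = "10.10.2.12" }
    )
    foreach ($nic in $storageNics) {
        New-NetIPAddress -InterfaceAlias $nic.Name -IPAddress $nic.IP -PrefixLength 24
    }

    # Once the cluster is up, check that SMB Multichannel really is striping
    # S2D/SMB traffic across all four interfaces
    Get-SmbMultichannelConnection
    Get-SmbClientNetworkInterface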

Can anyone please help shed some light on my S2D project?
Any input is greatly appreciated!

Best regards _Hallberg!

