Normally when configuring a SQL cluster you put the communication to the shared storage (usually a SAN) on a separate network, so your config looks something like this (long live MS Paint!):
So in this example the 192.168.100.0/28 subnet is used for cluster inter-communication (heartbeat) and for access to the shared storage. Access to the shared storage is kept completely isolated from the corporate network and everyone's happy.
On the corporate network side the cluster registers itself in DNS on the 10.0.0.0/24 subnet, and clients can resolve the cluster name and connect to it.
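Just to make the "normal" case concrete, this is roughly what a client on the corporate side does: ask DNS for the cluster name and get back an address on 10.0.0.0/24. A minimal Python sketch (the cluster name is a made-up placeholder, not something from my environment):

```python
import ipaddress
import socket

CLUSTER_NAME = "sqlcluster"                      # placeholder cluster network name
CORPORATE_SUBNET = ipaddress.ip_network("10.0.0.0/24")

# Resolve the cluster name through the system resolver (i.e. the corporate DNS).
_, _, addresses = socket.gethostbyname_ex(CLUSTER_NAME)

for addr in addresses:
    on_corp = ipaddress.ip_address(addr) in CORPORATE_SUBNET
    print(f"{CLUSTER_NAME} -> {addr} (on corporate subnet: {on_corp})")
```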
My configuration is slightly different, though, and I am having some problems with the networking side.
In my configuration the shared storage is a file server cluster with two nodes, so the "clients" of the file server cluster are the SQL servers themselves. As such, the network configuration should, as I see it, be something like this:
In this setup the SQL servers would still connect to the shared storage on a separate subnet, as per best practices, thereby keeping the cluster traffic separate from the corporate subnet.
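In other words, the layout I am aiming for is roughly this (just an illustrative summary of which role sits on which subnet, matching the description above):

```python
# Illustrative only: which roles sit on which subnet in my intended design.
intended_layout = {
    "10.0.0.0/24 (corporate)": [
        "clients",
        "DNS server",
        "SQL cluster name / client access point",
    ],
    "192.168.100.0/28 (storage + heartbeat)": [
        "SQL nodes <-> file server cluster traffic",
        "cluster heartbeat",
        "file server cluster IP (the problematic part, see below)",
    ],
}

for subnet, roles in intended_layout.items():
    print(subnet)
    for role in roles:
        print(f"  - {role}")
```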
However, for this to work the file server cluster IP address needs to be on the 192.168.100.0/28 subnet and still be able to register in DNS on the 10.0.0.0/24 subnet. That already involves configuring a lot more on the cluster NICs than you normally would: it is no longer just an IP address and a subnet mask, you now also need a DNS server and a gateway, and you have to make sure the DNS server is reachable from hosts on the 192.168.100.0/28 subnet, which means complete isolation is gone.
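To illustrate what would have to work for this design, here is a rough Python check I would expect to pass from a SQL node's storage-side interface; the DNS server address (10.0.0.10) and the file server cluster name are made-up placeholders:

```python
import ipaddress
import socket

FILESERVER_CLUSTER_NAME = "fscluster"    # placeholder file server cluster name
CORPORATE_DNS = "10.0.0.10"              # placeholder DNS server on the corporate subnet
STORAGE_SUBNET = ipaddress.ip_network("192.168.100.0/28")

# 1. Can the corporate DNS server even be reached from the storage subnet?
#    (TCP/53 is used here purely as a quick reachability probe.)
try:
    socket.create_connection((CORPORATE_DNS, 53), timeout=3).close()
    print(f"DNS server {CORPORATE_DNS} is reachable from this host")
except OSError as exc:
    print(f"DNS server {CORPORATE_DNS} is NOT reachable: {exc}")

# 2. Does the file server cluster name resolve, and does it land on the storage subnet?
try:
    _, _, addresses = socket.gethostbyname_ex(FILESERVER_CLUSTER_NAME)
    for addr in addresses:
        in_storage = ipaddress.ip_address(addr) in STORAGE_SUBNET
        print(f"{FILESERVER_CLUSTER_NAME} -> {addr} (on storage subnet: {in_storage})")
except socket.gaierror as exc:
    print(f"Could not resolve {FILESERVER_CLUSTER_NAME}: {exc}")
```

The fact that the first check has to pass at all is exactly what bothers me: the moment hosts on 192.168.100.0/28 can reach the corporate DNS server, the subnet is no longer isolated.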
I can't find any best practices on how to get this done, and I don't think the solution is to put the cluster communication on the corporate network, nor to create a DNS server on the 192.168.100.0/28 subnet. Does anyone have any ideas on how to get this done, or how to improve the design of the setup?