We have a 2-node 2012 R2 cluster running on a Dell VRTX chassis with M620 blades. Cluster storage consists of 10 TB of shared chassis storage and a 21 TB iSCSI Synology NAS. Cluster, Live Migration, Management, and iSCSI traffic are each on their own subnets. We have about 30 VMs on the cluster. The quorum disk and the chassis storage CSVs are owned by node #1; the iSCSI CSV is owned by node #2.
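For reference, these are roughly the PowerShell checks I use to confirm which node owns what and how quorum is configured (stock FailoverClusters cmdlets only, nothing custom):

Import-Module FailoverClusters

# Node state and current owners of the clustered roles
Get-ClusterNode | Format-Table Name, State
Get-ClusterGroup | Format-Table Name, OwnerNode, State

# CSV ownership - the chassis CSVs show node #1, the iSCSI CSV shows node #2
Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State

# Quorum configuration (we use a disk witness)
Get-ClusterQuorum | Format-List *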
The cluster is functional, and Live Migration works fine as long as both nodes are running.
Here’s the problem I’ve discovered: I need to shut down node #1 to do some Dell-recommended troubleshooting (one port on a dual-port Intel 10 Gb PCI card is receiving but not sending packets, but that’s another story).
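The PowerShell equivalent of what I’m trying to do in Failover Cluster Manager would be roughly this (node names are placeholders for our real hostnames):

# Drain the roles off node #1 and pause it for the hardware work
Suspend-ClusterNode -Name "NODE1" -Drain -Wait

# ...shut down, deal with the NIC, power back on, then:
Resume-ClusterNode -Name "NODE1"

This is where things fall apart: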
- When I try to drain the roles from node #1, I get the error “Move of cluster role 'Cluster Group' for drain could not be completed. The operation failed with error code 0x138D.”
- I then attempt to move the CSV disks from node #1 to node #2, and it fails with the error “Clustered Storage is not connected to this node.” That seems like a good clue, but I’m not sure why I’m getting this error (the PowerShell equivalent of what I’m attempting is sketched after this list).
- So I go ahead and manually live migrate all the roles to node #2 without a problem.
- I then shut down node #1.
- As soon as node #1 shuts down, the quorum disk and the two CSV disks owned by node #1 go offline. This shouldn’t happen!
- Since the cluster can’t talk to the Quorum disk, the whole cluster goes down and, since 2 out of 3 CSVs are not available to node #2, many of the VMs go down.
- When node #1 comes back up, I’m able to reconnect to the cluster, and the Quorum disk is online, but the CSV disks are still offline.
- In Failover Cluster Manager, I have to “Resume” node #1 with the “Fail back roles” option (even though it had no roles on it).
- Then I’m able to bring the CSV disks back online, and the cluster is back to “normal.”
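For completeness, here is the sketch I referred to above: the manual moves and the cleanup afterwards, expressed in PowerShell (node and disk resource names are placeholders; "Cluster Disk 1" stands in for one of our chassis CSV disks):

# The core cluster group (quorum disk, cluster name/IP) that the drain reports it cannot move
Move-ClusterGroup -Name "Cluster Group" -Node "NODE2"

# The CSV move that fails with "Clustered Storage is not connected to this node"
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "NODE2"

# After node #1 is back: resume it with failback, then bring the CSV disk resources online
Resume-ClusterNode -Name "NODE1" -Failback Immediate
Start-ClusterResource -Name "Cluster Disk 1"
Start-ClusterResource -Name "Cluster Disk 2"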
So it seems node #2 has trouble talking to the quorum disk and two of the three CSV disks whenever node #1 is down. Definitely not very redundant!
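My understanding is that with 2012 R2’s dynamic quorum and a disk witness, losing a single node shouldn’t take the whole cluster down, which is why I’ve also been checking the vote configuration (just the stock cmdlets; I’m not certain this is where the problem lies):

# Is dynamic quorum on, and does the witness currently carry a vote?
Get-Cluster | Format-List Name, DynamicQuorum, WitnessDynamicWeight

# Per-node vote configuration
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight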
When we built this cluster a year and a half ago, everything was validated and working flawlessly. The problems seemed to begin after a long-running blue-screen issue on node #1, which was finally traced to a bad fan on one of the 10 Gb NICs. I suspect a networking issue, but when I run Cluster Validation, the only issue that pops up is a connection problem with our iSCSI storage (because of the bad port on one of our NICs; we’re working with Dell on that now). The iSCSI CSV is owned by node #2 and doesn’t go offline when node #1 is rebooted.
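In case it helps, this is roughly how I’ve been re-running validation and collecting logs around the failed drain (paths and node names are placeholders):

# Re-run just the storage/network/inventory validation tests
Test-Cluster -Node "NODE1","NODE2" -Include "Storage","Network","Inventory"

# Check the cluster network roles - I want to be sure nothing got set to
# "Do not allow cluster network communication" after the NIC trouble
Get-ClusterNetwork | Format-Table Name, Role, State, Address
Get-ClusterNetworkInterface | Format-Table Node, Network, State

# Grab the last hour of the cluster log from both nodes
Get-ClusterLog -Destination "C:\Temp" -TimeSpan 60 -UseLocalTime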
Can anyone offer any insight?
Thanks!
George Moore