Using a Lenovo S3200 SAN, I have two Windows Server 2012 R2 servers directly attached to the SAN via Fibre Channel, with two connections per server to the SAN.
All was going well until I moved the system from the Build Lab to the production network.
We've had multiple issues with the SAN drives going offline, and the only fix seems to be shutting down both nodes and bringing them back up one at a time.
Currently we can reproduce/force the issue by creating a new vhdx from the Roles / New Virtual Machine / New Drive wizard. If we create a vhdx of any size on a Cluster Shared Volume, whether fixed or dynamically expanding, the wizard hangs at "Creating the new virtual hard disk", and the only way to get the cluster back to a stable state is to reboot both servers.
If I remove the Cluster Shared Volume from the cluster, I can then create a vhdx file on that volume with no problem.
If I then add the volume back into the cluster, Explorer hangs immediately as soon as I browse to that location.
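For reference, the remove/re-add cycle above is roughly the following ("Cluster Disk 1" and the volume path are placeholders for the names in our environment):

```powershell
# Take the volume out of CSV so it is owned by one node directly
Remove-ClusterSharedVolume -Name "Cluster Disk 1"

# At this point, creating a vhdx on the volume works fine.

# Re-add the volume as a CSV - Explorer then hangs when browsing to
# C:\ClusterStorage\Volume1
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```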
If we repeat the process but make the drive a vhd file instead, the above problems do not occur (but then I'm limited to Gen 1 machines).
Using PowerShell to create a vhdx file succeeds on the local C: drive but fails when writing to the CSV location, as per the screenshot below.
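The PowerShell we're using is along these lines (paths and sizes are just examples, not the exact values from the screenshot):

```powershell
# Works: fixed-size vhdx on the local system drive
New-VHD -Path C:\temp\test.vhdx -SizeBytes 10GB -Fixed

# Fails/hangs: the same command against the cluster shared volume
New-VHD -Path C:\ClusterStorage\Volume1\test.vhdx -SizeBytes 10GB -Fixed
```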
Currently, if the CSV contains a failed vhdx file, the only way to get the hosts to respond correctly seems to be to reformat the drives.
Has anyone seen this before? Am I missing something?
http://absoblogginlutely.net