I've been having some issues with nodes basically dropping out of the cluster configuration.
The error shown was:
"Cluster Shared Volume 'Volume1' ('Data') has entered a paused state because of '(c000020c)'. All I/O will temporarily be queued until a path to the volume is reestablished."
All nodes (PowerEdge 420) are connected to a Dell MD3200 shared SAS storage array.
The nodes point to virtual 2012 R2 DCs.
Upon running validation with just two nodes, I get the same errors over and over again.
Bemused!
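For what it's worth, the 'c000020c' in that event is NTSTATUS STATUS_CONNECTION_DISCONNECTED, i.e. the node lost its connection to the volume rather than the disk itself failing. While chasing this I've been watching the CSV state from PowerShell with something like the following (volume name is from my setup, adjust to suit):

# Show each CSV, its state, and which node currently owns it
Get-ClusterSharedVolume | Format-List Name,State,OwnerNode

# Per-node view of the volume - shows whether I/O is direct or redirected
Get-ClusterSharedVolumeState -Name "Volume1"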
----------------
List Software Updates
Description: List software updates that have been applied on each node.
An error occurred while executing the test.
An error occurred while getting information about the software updates installed on the nodes.
One or more errors occurred.
Creating an instance of the COM component with CLSID {4142DD5D-3472-4370-8641-D
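Because that test errors out before listing anything, I've been comparing patch levels across the nodes by hand instead. A rough sketch (using my hostnames):

# Pull the installed updates from both nodes, sorted so any mismatch stands out
Get-HotFix -ComputerName zhyperv1,zhyperv2 |
    Sort-Object HotFixID |
    Format-Table Source,HotFixID,InstalledOn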
The second failing test was:
List Disks
Description: List all disks visible to one or more nodes. If a subset of disks is specified for validation, list only disks in the subset.
An error occurred while executing the test.
Storage cannot be validated at this time. Node 'zhyperv2.KISLNET.LOCAL' could not be initialized for validation testing. Possible causes for this are that another validation test is being run from another management client, or a previous validation test was unexpectedly terminated. If a previous validation test was unexpectedly terminated, the best corrective action is to restart the node and try again.
Access is denied
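The plan is to reboot zhyperv2 as the message suggests and then re-run just the storage tests against the two nodes from an elevated PowerShell prompt, something like this (bearing in mind that validating online storage can briefly take disks offline, so only in a quiet window):

# Re-run only the storage portion of cluster validation on the two nodes
Test-Cluster -Node zhyperv1,zhyperv2 -Include "Storage"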
-----------
The event viewer on one of the hosts shows
-------------
Cluster node 'zhyperv2' lost communication with cluster node 'zhyperv1'. Network communication was reestablished. This could be due to communication temporarily being blocked by a firewall or connection security policy update. If the problem persists and network communication is not reestablished, the cluster service on one or more nodes will stop. If that happens, run the Validate a Configuration wizard to check your network configuration. Additionally, check for hardware or software errors related to the network adapters on this node, and check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.
The Cluster service is shutting down because quorum was lost. This could be due to the loss of network connectivity between some or all nodes in the cluster, or a failover of the witness disk. Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.
The only other warning is because the 4 NIC ports in each node server are teamed on one IP address, split over two switches. I'm not concerned about this and could, if required, split them into pairs, but I think this is a red herring?
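That said, if the teaming does turn out to matter, this is roughly how I'd inspect it before splitting the team into pairs (just listing what's there, nothing changed):

# List the team, its member NICs, and the teaming/load-balancing mode
Get-NetLbfoTeam
Get-NetLbfoTeamMember

# Check what the cluster thinks of each network (Role: cluster, client, or both)
Get-ClusterNetwork | Format-Table Name,Role,State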