Hello Everyone,
Thanks in advance for your help and thoughts.
We have a two-node cluster running Windows Server 2008 R2 that hosts our virtualized appliance; the witness disk resides on another server on the same subnet. A month ago the servers were patched (security updates only) as part of our standard business practice, and since then the cluster has failed to start.
The logs state that the cluster cannot reach the witness disk; however, I can reach the server that holds the disk from both nodes in the cluster, and I have run every network connectivity check I can think of.
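For what it's worth, these are the kinds of checks I ran from each node (WITNESS1 and the share path are placeholders for our actual witness server and share):

    :: Basic reachability and name resolution to the witness server
    ping WITNESS1
    nslookup WITNESS1

    :: Confirm each node can still see the share on the witness server
    net view \\WITNESS1
    dir \\WITNESS1\ClusterWitness

All of these succeed from both nodes.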
The cluster service will not start on either node; attempting to start it fails, and the event log shows event IDs 1090 and 7024. From what I have found online, this suggests the local cluster database (the CLUSDB registry hive) on each node may be corrupt. I also ran cluster log /g and see entries about failures to 'open key parameters' as well.
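In case it helps, here is roughly how I gathered the logs, run from an elevated command prompt on each node:

    :: Generate the cluster debug log (written to %windir%\Cluster\Reports\Cluster.log)
    cluster log /g

    :: Pull the five most recent 1090 and 7024 entries from the System event log
    wevtutil qe System /q:"*[System[(EventID=1090)]]" /c:5 /rd:true /f:text
    wevtutil qe System /q:"*[System[(EventID=7024)]]" /c:5 /rd:true /f:text

I can post the relevant log excerpts if that would be useful.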
Unless anyone has a better solution, I am at the point where I need to rebuild the cluster. I was thinking I would start by evicting one node and seeing if I could bring the other node up on its own. However, information I found online indicates that doing this is bad, and that the evicted server cannot be brought back into the cluster unless Windows is reinstalled, possibly on new hardware. The sequence I was considering is sketched below.
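For clarity, this is roughly what I had in mind (NODE1 and NODE2 are placeholders for our actual node names; please tell me if any step here is a bad idea):

    :: Try forcing quorum on one node first, before doing anything destructive
    net start clussvc /fq

    :: If that fails, evict the second node from the cluster
    cluster node NODE2 /evict

    :: Clean up the cluster configuration on the evicted node so it can rejoin later
    cluster node NODE2 /forcecleanup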
Can anyone confirm what the evict command actually does, both good and bad? And if anyone has ideas on how to recover our downed cluster, please let me know.
Thanks,
CU