Channel: High Availability (Clustering) forum

Windows Server 2012 Cluster Failover Team issue


Hi Guys,

We are running a cross-site, same-subnet cluster on Windows Server 2012. We had a switch failure which caused the team to fail over to the standby teamed NIC. Pings were successful to all servers, but after 5 seconds the servers were removed from the failover cluster membership. This is completely reproducible: you can even disable the NIC on one server and it is removed from the cluster membership. After 1 minute the cluster service restarts and everything is fine, even on the backup NIC. Does anyone have any recommendations for this? I upped the timeout to 40 seconds and still had the issue.
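
For reference, this is roughly how I raised the timeout (a sketch from memory using the FailoverClusters PowerShell module; the exact values I used may have differed):

Import-Module FailoverClusters
# Show the current heartbeat settings
Get-Cluster | Format-List *SubnetDelay, *SubnetThreshold
# Roughly what I tried: delay x threshold of about 40 seconds before a
# same-subnet node is declared down (values from memory)
(Get-Cluster).SameSubnetDelay = 2000        # milliseconds between heartbeats
(Get-Cluster).SameSubnetThreshold = 20      # missed heartbeats tolerated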

Thanks for any help on this.

Paul


Cluster Aware Updating - Failed to restart - The RPC Server is unavailable


I have a 3-node Windows Server 2012 R2 Failover Cluster, and I'm having trouble getting Cluster-Aware Updating to work properly. I have been able to successfully apply updates that do not require a reboot; however, any time I have updates that do require a reboot, the process fails. The error message says:

Failed to restart <ServerName>:(ClusterUpdateException) Failed to restart <ServerName>:(Win32Exception) The RPC server is unavailable ==> (Win32Exception) The RPC server is unavailable

I have verified that the firewall rule to allow automatic restarts is configured according to Technet: Requirements and Best Practices for Cluster-Aware Updating

I have also made sure that the CAU AD account has local admin rights, as well as "Force shutdown from a remote system" rights on each of the cluster nodes. In this case, I have been applying updates manually from my workstation (which is not a member of the cluster) while logged in with Domain Admin rights (as opposed to letting the cluster update itself based on a schedule). I'd like to verify that the entire process works properly before letting it update itself. What am I missing?
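
In case it helps, this is how I have been checking readiness and kicking off the run from my workstation (a sketch; MYCLUSTER is a placeholder for our cluster name):

# Run the CAU readiness checks (including the firewall/remote restart prerequisites)
Test-CauSetup -ClusterName MYCLUSTER
# Preview the updates each node would install, then apply them
Invoke-CauScan -ClusterName MYCLUSTER -Verbose
Invoke-CauRun -ClusterName MYCLUSTER -RequireAllNodesOnline -Force -Verbose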

Cluster Validation / UDP 3343


Hello,

I'm running into a problem running Cluster Validation (2012 R2)

I'm getting a network communications error stating that each node is unable to contact the other over UDP 3343.

The only advice I've found online instructs me to open UDP 3343 on the firewall (or disable the firewall altogether).

All firewalls are disabled, all other IP communications are fine.

netstat shows that UDP 3343 is not listening, and an nmap scan shows the port is not open.

Why would UDP 3343 not be listening?
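
For what it's worth, this is how I checked on one of the nodes (a sketch; NODE1 and NODE2 are placeholders for the node names):

# The cluster service is what listens on UDP 3343, so confirm it is running
Get-Service -ComputerName NODE1, NODE2 -Name ClusSvc
# Look for the heartbeat endpoint on the local node
Get-NetUDPEndpoint -LocalPort 3343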

Thank you!

-Ryan



Change the path in registry of a file server resource?


Hopefully a yes or no question,

There was a drive letter conflict on the 2008 R2 file cluster. The disk that was on D: (the conflicting letter) has now been changed to drive letter Y:, but the file server resources are in a failed state.

I am getting Event ID 1588 - Cluster file server resource 'FileServer-(FILESRV)(Disk)' cannot be brought online because the location has changed. I have checked that the permissions are all OK.

According to the registry, the path for the resource is still pointing to the D: drive.

Is it possible to change the path to the Y: drive and then try to bring it online, or will I have to remove the share name value in the registry and recreate the share?
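
For context, this is how I have been looking at what the cluster thinks the share configuration is, before touching the registry (a sketch, assuming the share settings show up under the resource's private properties):

Import-Module FailoverClusters
# Private properties of the failed file server resource
Get-ClusterResource "FileServer-(FILESRV)(Disk)" | Get-ClusterParameter
# The same with the legacy command-line tool
cluster.exe res "FileServer-(FILESRV)(Disk)" /priv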


Can an NLB cluster be deployed across multiple sites?


Hi all,

I have an MPLS network connecting site A and site B, with a VLAN that spans the two sites. In this scenario, can I set up an NLB cluster across both sites? If so, how? Thanks.

Can't remove Failover Cluster feature on Windows 2008 R2


Hello

When I try to remove the Failover Clustering feature, I get the following message:

Cannot remove Failover Clustering

This server is an active node in a failover cluster. Uninstalling the Failover Clustering feature on this node may impact the availability of clustered services and applications. It is recommended that you first evict the server from cluster membership. This can be done through the Failover Cluster Management snap-in by expanding the console tree under Nodes, selecting the node, clicking More Actions, and then clicking Evict.

I'm sure there is no cluster formed, so how can I remove it?
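
In case it matters, this is what I was planning to try next (a sketch, not yet run; it clears the leftover cluster state on the node before removing the feature):

Import-Module FailoverClusters
# Remove the stale cluster configuration this node still believes it has
Clear-ClusterNode -Force
# Then remove the feature
Import-Module ServerManager
Remove-WindowsFeature Failover-Clustering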

 

Thanks !

File Share Witness Resource Errors in a SQL 2012 AlwaysOn Availability Group Environment


Hi, I am getting the following errors in Failover Cluster Manager and in my system event log:

Event ID 1564:

File share witness resource 'File Share Witness' failed to arbitrate for the file share '\\SQL2012ClusterWitnessPath'. Please ensure that file share '\\SQL2012ClusterWitnessPath' exists and is accessible by the cluster.

Event ID 1069: 

Cluster resource 'File Share Witness' of type 'File Share Witness' in clustered role 'Cluster Group' failed.

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

Event ID 1205:

The Cluster service failed to bring clustered service or application 'Cluster Group' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.

These errors showed up every hour on the hour and then suddenly stopped. I tried looking at the cluster.log file, but there wasn't anything recorded there. The file share witness shows as online and my AG did not fail over to another node. The cluster has read and write permissions to the share. I did not find any error messages about the witness share on the remote server.

I am wondering what caused this series of events?
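
For reference, this is roughly how I checked the witness from one of the nodes while the errors were occurring (a sketch):

# State of the witness resource and the share it points at
Get-ClusterResource "File Share Witness" | Select-Object Name, State, OwnerNode
Get-ClusterResource "File Share Witness" | Get-ClusterParameter SharePath
# Confirm the share is reachable from this node
Test-Path "\\SQL2012ClusterWitnessPath"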

Thanks.

Hyper-V with SOFS Appalling Write Performance


Hi,

We are testing our new Scale-Out File Server and are having major issues with performance.

We have a 2-server cluster running the SOFS, attached to a shared JBOD with 8x 4 TB SAS drives. We have pooled the disks, added them to a CSV, and begun hosting test VMs on the new share.

When testing the volumes locally we get 1,000 MB/s of read and write throughput on a spanned volume.

When accessing the same volume via the SOFS share we are seeing 400 MB/s of read and only 10-50 MB/s of write.

This performance is completely unacceptable. Is this due to the SOFS forcing write-through and disallowing write caching?

All servers have quad-port 10 Gb NICs with SMB Direct. The NICs and network infrastructure have been tested and all run at line speed. The performance dip is clearly a result of the SOFS layer.

If we create a standard share and copy between servers we get full speeds again.

All servers have all the latest updates and hotfixes applied.  

Any suggestions? This is going to completely kill any hopes we had of using this technology.
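
For what it's worth, this is how we have been checking whether SMB Multichannel and SMB Direct are actually in use for the share traffic (a sketch, run from one of the Hyper-V hosts and from an SOFS node):

# From the Hyper-V host: connections to the SOFS share and their channels
Get-SmbConnection
Get-SmbMultichannelConnection
# Which interfaces SMB considers RSS/RDMA capable on each side
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface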


Creating a File Server Failover Cluster with DFS-N and Direct Attached Storage using DFS-R on Server 2012 R2

I have two 2012 R2 servers, each with its own DAS. Can I set up the servers in a failover cluster with DFS-N, using DFS-R to sync the DAS?

path to main disk drive

When I right-click Computer and choose Manage, it says it can't find the path to the HP (C:) drive.

Hyper-V 2012 R2 live migration issue at 2003 domain functional level


Hi team,

I recently built a three-node 2012 R2 Hyper-V cluster. Everything is working fine without any issue, and the cluster itself is healthy. Later I came across one issue: when I try to live migrate a virtual machine from one host to another it fails every time, while quick migration works. I have gone through a few articles and found it is a known issue with Hyper-V 2012 R2 when the domain functional level is set to 2003. A hotfix has been provided, but it did not solve it: http://support.microsoft.com/kb/2838043
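
For reference, this is how the hosts are currently configured for migration (a sketch of what I checked with the Hyper-V module; the properties listed are the ones Get-VMHost exposes):

# Run on each host: live migration settings currently in effect
Get-VMHost | Format-List VirtualMachineMigrationEnabled, VirtualMachineMigrationAuthenticationType, MaximumVirtualMachineMigrations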

Please let me know if anyone has faced a similar issue and been able to resolve it with a hotfix. My hosts are fully updated.

Thanks

Ravindra



Failed VM in Failover Cluster Manager


We have a problem that seems to be caused by 3 VMs in our cluster that have failed.
The cluster is a server 2012 R2 cluster.

The VMs are not in Hyper-V anymore, but they still appear in the Failover Cluster Manager. We have tried removing each VM and we receive the following error message, "The file cannot be opened because it is in the process of being deleted."

We have tried moving the failed VM to another server in the cluster but receive the same message.

Is anyone aware of a way to manually delete a specific VM from a cluster or know a solution to our problem?
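
In case it's useful, this is what we tried from PowerShell on one of the nodes (a sketch; "FailedVMRole" is a placeholder for the orphaned role's name as shown by Get-ClusterGroup):

Import-Module FailoverClusters
# List the clustered roles and their states to find the orphaned VM roles
Get-ClusterGroup
# Remove one of the orphaned roles together with its resources
Remove-ClusterGroup -Name "FailedVMRole" -RemoveResources -Force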

Thanks

Cannot add cluster disk on windows 2012


I have 2 servers connected to a shared storage array using Fibre Channel. Currently there are 2 LUNs on the storage device.

From Failover Cluster Manager, when I select "Add Disk" under Storage – Disks, both LUNs show up and I can add them without problems. But if I choose to add only one of them first (it doesn't matter which one is added first), it will not allow me to add the second one later. I get the message: "No disks suitable for cluster disks were found. For diagnostic information about disks available to the cluster, use the Validate a Configuration Wizard to run Storage tests."

Once added (whether one at a time or both at the same time), the disks work just fine for failover clustering.

I can’t imagine this is by design. Is this a known/unknown issue, or is there something special that needs to be done?
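
For reference, adding the disk from PowerShell behaves the same way for me (a sketch):

Import-Module FailoverClusters
# Disks the cluster currently considers eligible to be added
Get-ClusterAvailableDisk
# Add whatever is listed as available
Get-ClusterAvailableDisk | Add-ClusterDisk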

Thanks

Replication DC


Related information: in my environment there are two domain controllers, both running Windows Server 2008 R2 Enterprise.

The issue: on DC1 there is an error (Event ID 1863): "This directory server has not received replication information from a number of directory servers within the configured latency interval."

Also, when I try to log in to DC2 remotely, it gives the error: "The security database on the server does not have a computer account for this workstation trust relationship."

Please guide me on how to fix this if you have faced it before.
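
For reference, this is how I have been checking the replication state (a sketch, run from an elevated prompt on DC1):

# Summary of replication failures across all DCs
repadmin /replsummary
# Detailed inbound replication status for DC1
repadmin /showrepl dc1
# AD diagnostics focused on replication
dcdiag /test:replications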

    

Changing an existing Windows 2008 R2 cluster to a multi-site cluster


I have a 2-node Windows 2008 R2 cluster at our production site, running as active/passive. It has SQL 2012 installed, the shared storage is all configured, and everything is working well.

We now have a DR site, and we want to change this existing 2-node cluster into a multi-site cluster: we want to set up 2 nodes at the DR site with their own storage and then add those nodes to the existing production cluster. How would we change the existing cluster? Would we have to change the quorum first, from 'node and disk majority' to 'node and file share majority', create a file share witness at an external third site we may have, and then add the 2 nodes to the existing cluster, or should we add the nodes first and then change the quorum settings? We only want to fail over to the DR site if the production site goes completely down, or both nodes at the production site go down. If one node goes down at the production site we only want it to fail over to the other production node, not to any of the nodes in DR.

Can anyone confirm the exact steps to take to change this existing cluster into a multi-site cluster with these requirements, if it's possible?

We will use SQL availability groups to replicate the databases from prod to DR.
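
To make the question concrete, this is roughly the order I had in mind (a sketch; the witness path, node names, and SQL role name are placeholders):

Import-Module FailoverClusters
# 1. Switch quorum to Node and File Share Majority, with a witness at a third site
Set-ClusterQuorum -NodeAndFileShareMajority "\\WITNESS\ClusterWitness"
# 2. Add the two DR nodes to the existing cluster
Add-ClusterNode -Name DRNODE1, DRNODE2
# 3. Set the preferred owners so the SQL role favours the production nodes
Set-ClusterOwnerNode -Group "SQL Server (MSSQLSERVER)" -Owners PRODNODE1, PRODNODE2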


Unable to bring cluster group online


Hi,

I have a Windows 2008 2 node cluster.

We have SQL on node 1.  The cluster group is on node 2 right now.

Over the weekend, the cluster group failed and will not come back online. We were unable to connect via Failover Cluster Manager using the cluster name. We are able to connect on the server itself, but we only see cluster events and a cluster status of 'down'. The cluster service on this node is running. Here is the cluster group output:

Cluster IP Address - Offline

Cluster Name - Offline

Quorum - Failed

When attempting to online the group we receive:

System error 5908 has occurred (0x00001714).
The group is unable to accept the request since it is moving to another node.

The cluster log contains many instances of this error:

00000b30.00001b7c::2014/08/18-16:31:07.502 ERR   [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Failed to wait for pending resource control call to Quorum.

The Q: drive is active on node 2 and I am able to browse and write files to it.

Here are the last 10 minutes of the cluster log; I tried to bring the group online during that time. (A sketch of the commands I used to pull this state follows the log.) Thanks for your review.

-------------------------------------------------------------------------------------------------------------

00000b30.00002728::2014/08/18-16:24:59.737 ERR   [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Failed to wait for pending resource control call to Quorum.

'
00000b30.00002728::2014/08/18-16:24:59.737 WARN  [RCM] ResourceControl(GET_COMMON_PROPERTIES) to Quorum returned 5910.
00000b30.0000138c::2014/08/18-16:25:17.883 ERR   [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Failed to wait for pending resource control call to Quorum.

'
00000b30.0000138c::2014/08/18-16:25:17.883 WARN  [RCM] ResourceControl(STORAGE_GET_DISK_INFO_EX) to Quorum returned 5910.
000012c4.000018c0::2014/08/18-16:26:07.689 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqclus.dll
00000b30.000022f8::2014/08/18-16:26:07.689 INFO  [RCM] rcm::RcmResType::LoadDll: Got error 126; will attempt to load mqclus.dll via Wow64.
00001330.00001968::2014/08/18-16:26:07.689 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqclus.dll
00000b30.000022f8::2014/08/18-16:26:07.689 WARN  [RCM] Failed to load restype MSMQ: error 126.
000012c4.000018c0::2014/08/18-16:26:07.689 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqtgclus.dll
00000b30.000022f8::2014/08/18-16:26:07.689 INFO  [RCM] rcm::RcmResType::LoadDll: Got error 126; will attempt to load mqtgclus.dll via Wow64.
00001330.00001968::2014/08/18-16:26:07.689 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqtgclus.dll
00000b30.000022f8::2014/08/18-16:26:07.689 WARN  [RCM] Failed to load restype MSMQTriggers: error 126.
00000b30.00000f74::2014/08/18-16:26:25.305 ERR   [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Failed to wait for pending resource control call to Quorum.

'
00000b30.00000f74::2014/08/18-16:26:25.305 WARN  [RCM] ResourceControl(GET_CLASS_INFO) to Quorum returned 5910.
00000b30.000006dc::2014/08/18-16:26:41.844 INFO  [RCM] rcm::RcmApi::OnlineGroup: (Cluster Group)
00000b30.000006dc::2014/08/18-16:26:41.844 INFO  [GUM] Node 2: Processing RequestLock 2:207
00000b30.00002618::2014/08/18-16:26:41.922 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285421)
00000b30.000006dc::2014/08/18-16:26:41.922 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:42.031 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:43.046 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:43.046 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:44.060 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:44.060 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:45.074 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:45.074 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:46.088 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:46.088 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:47.102 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:47.102 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:48.117 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:48.132 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:49.146 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:49.146 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:50.161 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:50.176 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:26:51.190 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:26:51.206 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:26:54.295 INFO  [GUM] Node 2: Processing RequestLock 1:22512
00000b30.00002618::2014/08/18-16:26:54.295 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285431)
00000b30.000006dc::2014/08/18-16:27:01.223 INFO  [GUM] Node 2: Processing RequestLock 2:217
00000b30.00002618::2014/08/18-16:27:01.223 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285432)
00000b30.000006dc::2014/08/18-16:27:01.223 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:27:01.223 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:27:11.241 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:27:11.241 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:27:21.258 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:27:21.289 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:27:23.614 INFO  [GUM] Node 2: Processing RequestLock 1:22513
00000b30.00002618::2014/08/18-16:27:23.614 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285435)
00000b30.000006dc::2014/08/18-16:27:31.306 INFO  [GUM] Node 2: Processing RequestLock 2:220
00000b30.00002618::2014/08/18-16:27:31.306 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285436)
00000b30.000006dc::2014/08/18-16:27:31.306 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:27:31.306 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:27:41.323 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:27:41.355 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:27:51.372 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:27:51.372 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:27:54.305 INFO  [GUM] Node 2: Processing RequestLock 1:22514
00000b30.00002618::2014/08/18-16:27:54.305 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285439)
00000b30.000006dc::2014/08/18-16:28:01.389 INFO  [GUM] Node 2: Processing RequestLock 2:223
00000b30.00002618::2014/08/18-16:28:01.389 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285440)
00000b30.000006dc::2014/08/18-16:28:01.389 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:28:01.389 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:28:11.406 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:28:11.422 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:28:21.439 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:28:21.455 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:28:23.624 INFO  [GUM] Node 2: Processing RequestLock 1:22515
00000b30.00002618::2014/08/18-16:28:23.624 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285443)
00000b30.000006dc::2014/08/18-16:28:31.472 INFO  [GUM] Node 2: Processing RequestLock 2:226
00000b30.00002618::2014/08/18-16:28:31.472 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285444)
00000b30.000006dc::2014/08/18-16:28:31.472 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:28:31.488 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:28:41.504 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:28:41.504 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:28:51.520 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:28:51.520 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:28:54.328 INFO  [GUM] Node 2: Processing RequestLock 1:22516
00000b30.00002618::2014/08/18-16:28:54.328 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285447)
00000b30.000006dc::2014/08/18-16:29:01.536 INFO  [GUM] Node 2: Processing RequestLock 2:229
00000b30.00002618::2014/08/18-16:29:01.536 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285448)
00000b30.000006dc::2014/08/18-16:29:01.536 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:29:01.552 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:29:11.568 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:29:11.568 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:29:21.584 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:29:21.600 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:29:23.643 INFO  [GUM] Node 2: Processing RequestLock 1:22517
00000b30.00002618::2014/08/18-16:29:23.643 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285451)
00000b30.000006dc::2014/08/18-16:29:31.616 INFO  [GUM] Node 2: Processing RequestLock 2:232
00000b30.00002618::2014/08/18-16:29:31.616 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285452)
00000b30.000006dc::2014/08/18-16:29:31.616 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:29:31.616 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:29:41.632 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:29:41.632 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:29:51.648 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:29:51.695 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:29:54.331 INFO  [GUM] Node 2: Processing RequestLock 1:22518
00000b30.00002618::2014/08/18-16:29:54.331 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285455)
00000b30.000006dc::2014/08/18-16:30:01.711 INFO  [GUM] Node 2: Processing RequestLock 2:235
00000b30.00002618::2014/08/18-16:30:01.711 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285456)
00000b30.000006dc::2014/08/18-16:30:01.711 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:30:01.711 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:30:11.727 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:30:11.742 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.0000138c::2014/08/18-16:30:17.889 ERR   [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Failed to wait for pending resource control call to Quorum.

'
00000b30.0000138c::2014/08/18-16:30:17.889 WARN  [RCM] ResourceControl(GET_COMMON_PROPERTIES) to Quorum returned 5910.
00000b30.00001fdc::2014/08/18-16:30:17.905 INFO  [NM] Received request from client address 10.12.13.8.
00000b30.00000f74::2014/08/18-16:30:17.905 INFO  [NM] Received request from client address 10.12.13.8.
00000b30.0000115c::2014/08/18-16:30:17.952 INFO  [NM] Received request from client address 10.12.13.8.
000012c4.000018c0::2014/08/18-16:30:18.326 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqclus.dll
00000b30.00000f74::2014/08/18-16:30:18.326 INFO  [RCM] rcm::RcmResType::LoadDll: Got error 126; will attempt to load mqclus.dll via Wow64.
00001330.00000a10::2014/08/18-16:30:18.326 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqclus.dll
00000b30.00000f74::2014/08/18-16:30:18.326 WARN  [RCM] Failed to load restype MSMQ: error 126.
00000b30.00000f74::2014/08/18-16:30:18.326 WARN  [RCM] rcm::RcmApi::ResTypeControl: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
000012c4.000018c0::2014/08/18-16:30:18.326 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqclus.dll
00000b30.00002120::2014/08/18-16:30:18.326 INFO  [RCM] rcm::RcmResType::LoadDll: Got error 126; will attempt to load mqclus.dll via Wow64.
00001330.00000a10::2014/08/18-16:30:18.326 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqclus.dll
00000b30.00002120::2014/08/18-16:30:18.326 WARN  [RCM] Failed to load restype MSMQ: error 126.
000012c4.000018c0::2014/08/18-16:30:18.326 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqtgclus.dll
00000b30.00000f74::2014/08/18-16:30:18.326 INFO  [RCM] rcm::RcmResType::LoadDll: Got error 126; will attempt to load mqtgclus.dll via Wow64.
00001330.00000a10::2014/08/18-16:30:18.326 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqtgclus.dll
00000b30.00000f74::2014/08/18-16:30:18.326 WARN  [RCM] Failed to load restype MSMQTriggers: error 126.
00000b30.00000f74::2014/08/18-16:30:18.326 WARN  [RCM] rcm::RcmApi::ResTypeControl: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
000012c4.000018c0::2014/08/18-16:30:18.326 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqtgclus.dll
00000b30.00002120::2014/08/18-16:30:18.326 INFO  [RCM] rcm::RcmResType::LoadDll: Got error 126; will attempt to load mqtgclus.dll via Wow64.
00001330.00000a10::2014/08/18-16:30:18.326 WARN  [RHS] ERROR_MOD_NOT_FOUND(126), unable to load resource DLL mqtgclus.dll
00000b30.00002120::2014/08/18-16:30:18.326 WARN  [RCM] Failed to load restype MSMQTriggers: error 126.
00000b30.000006dc::2014/08/18-16:30:21.759 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:30:21.759 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:30:23.646 INFO  [GUM] Node 2: Processing RequestLock 1:22519
00000b30.00002618::2014/08/18-16:30:23.646 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285459)
00000b30.000006dc::2014/08/18-16:30:31.775 INFO  [GUM] Node 2: Processing RequestLock 2:238
00000b30.00002618::2014/08/18-16:30:31.775 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285460)
00000b30.000006dc::2014/08/18-16:30:31.775 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:30:31.775 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000021dc::2014/08/18-16:30:33.974 INFO  [NM] Received request from client address 10.12.13.8.
00000b30.000006dc::2014/08/18-16:30:41.791 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:30:41.822 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:30:51.838 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:30:51.838 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:30:54.334 INFO  [GUM] Node 2: Processing RequestLock 1:22520
00000b30.00002618::2014/08/18-16:30:54.334 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285463)
00000b30.000006dc::2014/08/18-16:31:01.854 INFO  [GUM] Node 2: Processing RequestLock 2:241
00000b30.00002618::2014/08/18-16:31:01.854 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285464)
00000b30.000006dc::2014/08/18-16:31:01.854 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:31:01.870 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00001b7c::2014/08/18-16:31:07.502 ERR   [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Failed to wait for pending resource control call to Quorum.

'
00000b30.00001b7c::2014/08/18-16:31:07.502 WARN  [RCM] ResourceControl(STORAGE_GET_DISK_INFO_EX) to Quorum returned 5910.
00000b30.000019f0::2014/08/18-16:31:07.861 ERR   [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Failed to wait for pending resource control call to Quorum.

'
00000b30.000019f0::2014/08/18-16:31:07.861 WARN  [RCM] ResourceControl(GET_COMMON_PROPERTIES) to Quorum returned 5910.
00000b30.000006dc::2014/08/18-16:31:11.886 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:31:11.886 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000006dc::2014/08/18-16:31:21.902 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:31:21.902 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.00002618::2014/08/18-16:31:23.665 INFO  [GUM] Node 2: Processing RequestLock 1:22521
00000b30.00002618::2014/08/18-16:31:23.665 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285467)
00000b30.000006dc::2014/08/18-16:31:31.918 INFO  [GUM] Node 2: Processing RequestLock 2:244
00000b30.00002618::2014/08/18-16:31:31.918 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 285468)
00000b30.000006dc::2014/08/18-16:31:31.918 INFO  [RCM] rcm::RcmGum::SetGroupPersistentState(Cluster Group,1)
00000b30.000006dc::2014/08/18-16:31:31.949 WARN  [RCM] rcm::RcmApi::OnlineGroup: retrying: Cluster Group, 5908.
00000b30.000022f8::2014/08/18-16:31:46.427 ERR   [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Failed to wait for pending resource control call to Quorum.

'
00000b30.000022f8::2014/08/18-16:31:46.427 WARN  [RCM] ResourceControl(STORAGE_GET_DISK_INFO_EX) to Quorum returned 5910.
00000b30.00002618::2014/08/18-16:31:54.337 INFO  [GUM] Node 2: Processing RequestLock 1:22522
00000b30.00002618::2014/08/18-16:31:54.337 INFO  [GUM] Node 2: Processing GrantLock to 1 (sent by 2 gumid: 285469)
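
For completeness, this is roughly how I pulled the state above and generated the log (a sketch using the legacy cluster.exe, since this is a 2008 cluster):

# Status of the Cluster Group and of every resource (including Quorum)
cluster.exe group "Cluster Group"
cluster.exe res
# Regenerate the cluster log on the nodes
cluster.exe log /gen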


Cluster Networking - Minimal configuration


I am looking for the best way to configure our 3-node Hyper-V cluster. The cluster nodes have 2x 1 Gbit NICs and 2x 10 Gbit NICs. The two 10 Gbit NICs are configured for iSCSI, so cluster validation warns that they will be ignored:
These paths will not be used for cluster communication and will be ignored. This is because interfaces on these networks are connected to an iSCSI target

This made me start looking at the best way to use my 4 network adapters.

My current configuration is:

1Gig1 -Mgmt and Cluster  (10.1.2.0/24 subnet)

1Gig2 -Hyper-V switch for guest VMs (no IP defined on host)

10Gig1 -iSCSI  (10.2.21.0/24 subnet, no gateway)

10Gig2-iSCSI  (10.2.22.0/24 subnet, no gateway)

In Failover Cluster Manager, I have "Cluster and Client" for the 3 networks that are visible to the host machine (but cluster validation now tells me that the 2 iSCSI adapters can't be used).
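
For reference, this is how I would expect the roles to be set from PowerShell (a sketch; the cluster network names are placeholders for whatever Failover Cluster Manager shows for each subnet):

Import-Module FailoverClusters
# Current networks and roles: 0 = not used by the cluster, 1 = cluster only,
# 3 = cluster and client
Get-ClusterNetwork | Format-Table Name, Address, Role
# Exclude the two iSCSI subnets from cluster use
(Get-ClusterNetwork "iSCSI Network 1").Role = 0
(Get-ClusterNetwork "iSCSI Network 2").Role = 0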

A few months ago we had several issues with NIC teaming (errors about MAC addresses and a few BSOD crashes that WinDBG pointed to NIC teaming as the cause), so we moved away from using it.  Not sure if the issues have been resolved.

Is there anything "wrong" with the way it is currently set up?  Is there a better way to set it up using the 4 network cards and still keep things pretty simple?


James Right Size Solutions

2008R2 Hyper-V HA Cluster Issue


Hi,

The current setup is two DL380 G7 servers in an HA Hyper-V cluster; storage is an iSCSI HP G4300 G2 SAN. The Hyper-V traffic runs through an HP team (the same on both hosts). This team crashes every 5-28 days; I believe it's the traffic through the team and not the cluster itself. It usually happens on only one host, but it has happened on both, which means the VMs crash, migrate to the other host, and then boot back up. The event logs when the crash happens all start with a mixture of CPQTeamMP Event IDs 435 and 388. The HP teaming software is at the latest version (10.90.0.0), and the latest firmware and drivers for the NICs are installed from the latest HP SPP (2014.6.0). This has been happening for a good year or so and is now very frustrating.

I found an article suggesting that I need to make sure Hyper-V is installed first and then install and set up teaming, and I can't remember whether I did that when I set up the hosts.

My question is: after I uninstall the teaming, if I also uninstall Hyper-V, will this affect the host's place in the cluster and how the host sees the storage? Will I have to set up and configure all of that again as well, or can I just uninstall teaming and Hyper-V, reinstall Hyper-V, and then set up the teaming again without any impact on storage or the cluster?

Thanks

Mark

Node in Cluster drops NIC connection. Troubleshooting steps?


Hey all,

We have a node that occasionally has its NIC disconnect on us (about once a month). We get nothing but Event ID 1127 stating that the network interface failed. It's a NIC team, and I've checked the driver/firmware versions of the NICs and the NIC teaming software, as well as the configuration of each, and they are all identical on this node and on the working node.

I've read somewhere that the NIC power-saving setting needs to be unchecked so the NIC doesn't go to sleep, but in Windows 2008 (non-R2) I'm not finding that option anywhere on these NICs.
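
(For reference, on 2008 the per-adapter "allow the computer to turn off this device" setting can apparently be read through WMI; a sketch of what I looked at:)

# Devices whose power management checkbox is currently enabled
Get-WmiObject -Namespace root\wmi -Class MSPower_DeviceEnable |
    Where-Object { $_.Enable } |
    Select-Object InstanceName, Enable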

I'm wondering if anyone could look at the cluster log, see if they notice anything obvious, and suggest any ideas that I can pursue for further troubleshooting:

00001770.00000b70::2014/08/05-16:56:06.358 WARN  [RES] Physical Disk <52_F_Log01>: VerifyFS: Ignoring failure to open file \\?\GLOBALROOT\Device\Harddisk53\Partition1\BellDesk_Log2_BKUP.ldf Error: 5.
00001728.000008b8::2014/08/05-16:56:12.724 WARN  [RES] Physical Disk <52_F_Data01>: VerifyFS: Ignoring failure to open file \\?\GLOBALROOT\Device\Harddisk11\Partition1\BellDeskEXC_1_BKUP.mdf Error: 5.
00001728.000008b8::2014/08/05-16:56:12.724 WARN  [RES] Physical Disk <52_F_Data01>: VerifyFS: Ignoring failure to open file \\?\GLOBALROOT\Device\Harddisk11\Partition1\BellDeskEXC_BKUP.mdf Error: 5.
00001728.000008b8::2014/08/05-16:56:12.724 WARN  [RES] Physical Disk <52_F_Data01>: VerifyFS: Ignoring failure to open file \\?\GLOBALROOT\Device\Harddisk11\Partition1\BellDeskEXC_log_BKUP.ldf Error: 5.
00001728.000008b8::2014/08/05-16:56:12.725 WARN  [RES] Physical Disk <52_F_Data01>: VerifyFS: Ignoring failure to open file \\?\GLOBALROOT\Device\Harddisk11\Partition1\ePO4_EPO_1_old.ndf Error: 5.
00000e8c.00007390::2014/08/05-16:56:13.692 INFO  [GUM] Node 4: Processing RequestLock 4:45497
00000e8c.000009b4::2014/08/05-16:56:13.694 INFO  [GUM] Node 4: Processing GrantLock to 4 (sent by 3 gumid: 478989)
00000e8c.000009b4::2014/08/05-16:56:17.327 INFO  [GUM] Node 4: Processing RequestLock 3:47054
00000e8c.000009b4::2014/08/05-16:56:17.327 INFO  [GUM] Node 4: Processing GrantLock to 3 (sent by 4 gumid: 478990)
00000e8c.0000566c::2014/08/05-16:56:20.148 INFO  [GUM] Node 4: Processing RequestLock 4:45498
00000e8c.000009b4::2014/08/05-16:56:20.150 INFO  [GUM] Node 4: Processing GrantLock to 4 (sent by 3 gumid: 478991)
00000e8c.000009b4::2014/08/05-16:56:27.272 INFO  [GUM] Node 4: Processing RequestLock 3:47055
00000e8c.000009b4::2014/08/05-16:56:27.272 INFO  [GUM] Node 4: Processing GrantLock to 3 (sent by 4 gumid: 478993)
00000e8c.000009b0::2014/08/05-16:56:48.736 DBG   [NETFTAPI] Signaled NetftRemoteUnreachable  event, local address 10.199.17.48:003853 remote address 10.199.17.45:003853
00000e8c.000009b0::2014/08/05-16:56:48.736 DBG   [NETFTAPI] Signaled NetftRemoteUnreachable  event, local address 10.199.17.48:003853 remote address 10.199.17.45:003853
00000e8c.000009b0::2014/08/05-16:56:48.736 DBG   [NETFTAPI] Signaled NetftRemoteUnreachable  event, local address 10.199.17.48:003853 remote address 10.199.17.45:003853
00000e8c.000009b0::2014/08/05-16:56:48.736 DBG   [NETFTAPI] Signaled NetftRemoteUnreachable  event, local address 10.199.17.48:003853 remote address 10.199.17.47:003853
00000e8c.000009b0::2014/08/05-16:56:48.736 DBG   [NETFTAPI] Signaled NetftRemoteUnreachable  event, local address 10.199.17.48:003853 remote address 10.199.17.47:003853
00000e8c.000009b0::2014/08/05-16:56:48.736 DBG   [NETFTAPI] Signaled NetftRemoteUnreachable  event, local address 10.199.17.48:003853 remote address 10.199.17.47:003853
00000e8c.000009c4::2014/08/05-16:56:48.736 INFO  [IM] got event: Remote endpoint 10.199.17.45:~3343~ unreachable from 10.199.17.48:~3343~
00000e8c.000009c4::2014/08/05-16:56:48.736 INFO  [IM] Marking Route from 10.199.17.48:~3343~ to 10.199.17.45:~3343~ as down
00000e8c.000009c4::2014/08/05-16:56:48.736 INFO  [NDP] Checking to see if all routes for route (virtual) local fe80::68fd:e989:2b47:f203:~0~ to remote fe80::a4fa:4845:1f98:c058:~0~ are down
00000e8c.000009c4::2014/08/05-16:56:48.736 INFO  [NDP] Route local 192.168.17.48:~0~ to remote 192.168.17.45:~0~ is up
00000e8c.000009c4::2014/08/05-16:56:48.736 INFO  [IM] Adding information for route Route from local 10.199.17.48:~3343~ to remote 10.199.17.47:~3343~, status: true, attributes: 0
00000e8c.000009c4::2014/08/05-16:56:48.736 INFO  [IM] Adding information for route Route from local 10.199.17.48:~3343~ to remote 10.199.17.45:~3343~, status: false, attributes: 0
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  [IM] Sending connectivity report to leader (node 1): <class mscs::InterfaceReport>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO    <fromInterface>fe516577-6c30-4b96-a6b0-38adb0ccee3e</fromInterface>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO    <upInterfaces><vector len='2'>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO      <item>fe516577-6c30-4b96-a6b0-38adb0ccee3e</item>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO      <item>073b1ff1-b43a-4a7b-9875-e0d6b8ac0b83</item>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </vector>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </upInterfaces>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO    <downInterfaces><vector len='1'>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO      <item>8a90dc45-ed7e-4f26-90c0-f36becfa3e0a</item>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </vector>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </downInterfaces>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO    <viewId>1704</viewId>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </class mscs::InterfaceReport>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  [IM] got event: Remote endpoint 10.199.17.47:~3343~ unreachable from 10.199.17.48:~3343~
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  [IM] Marking Route from 10.199.17.48:~3343~ to 10.199.17.47:~3343~ as down
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  [NDP] Checking to see if all routes for route (virtual) local fe80::68fd:e989:2b47:f203:~0~ to remote fe80::28da:1ec6:f11e:58da:~0~ are down
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  [NDP] Route local 192.168.17.48:~0~ to remote 192.168.17.47:~0~ is up
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  [IM] Adding information for route Route from local 10.199.17.48:~3343~ to remote 10.199.17.47:~3343~, status: false, attributes: 0
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  [IM] Adding information for route Route from local 10.199.17.48:~3343~ to remote 10.199.17.45:~3343~, status: false, attributes: 0
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  [IM] Sending connectivity report to leader (node 1): <class mscs::InterfaceReport>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO    <fromInterface>fe516577-6c30-4b96-a6b0-38adb0ccee3e</fromInterface>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO    <upInterfaces><vector len='1'>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO      <item>fe516577-6c30-4b96-a6b0-38adb0ccee3e</item>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </vector>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </upInterfaces>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO    <downInterfaces><vector len='2'>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO      <item>073b1ff1-b43a-4a7b-9875-e0d6b8ac0b83</item>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO      <item>8a90dc45-ed7e-4f26-90c0-f36becfa3e0a</item>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </vector>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </downInterfaces>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO    <viewId>1704</viewId>
00000e8c.000009c4::2014/08/05-16:56:48.737 INFO  </class mscs::InterfaceReport>
00000e8c.000011f0::2014/08/05-16:56:48.751 INFO  [GUM] Node 4: Processing RequestLock 1:67
00000e8c.000009b4::2014/08/05-16:56:48.752 INFO  [GUM] Node 4: Processing GrantLock to 1 (sent by 3 gumid: 478994)
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO  [IM] Changing the state of adapters according to result: <class mscs::InterfaceResult>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO    <up><vector len='2'>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO      <item>8a90dc45-ed7e-4f26-90c0-f36becfa3e0a</item>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO      <item>073b1ff1-b43a-4a7b-9875-e0d6b8ac0b83</item>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO  </vector>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO  </up>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO    <down><vector len='1'>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO      <item>fe516577-6c30-4b96-a6b0-38adb0ccee3e</item>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO  </vector>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO  </down>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO    <unreachable><vector len='0'>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO  </vector>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO  </unreachable>
00000e8c.00000fd4::2014/08/05-16:56:48.754 INFO  </class mscs::InterfaceResult>
0000143c.00000e1c::2014/08/05-16:56:48.755 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: WorkerThread: NetInterface fe516577-6c30-4b96-a6b0-38adb0ccee3e has failed. Failing resource.
0000143c.00000e1c::2014/08/05-16:56:48.755 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL54)>: WorkerThread: NetInterface fe516577-6c30-4b96-a6b0-38adb0ccee3e has failed. Failing resource.
0000143c.00000e1c::2014/08/05-16:56:48.755 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: WorkerThread: NetInterface fe516577-6c30-4b96-a6b0-38adb0ccee3e has failed. Failing resource.
0000143c.00000e1c::2014/08/05-16:56:48.756 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: WorkerThread: NetInterface fe516577-6c30-4b96-a6b0-38adb0ccee3e has failed. Failing resource.
0000143c.000068a0::2014/08/05-16:56:48.940 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: IP Interface 3811C70A (address 10.199.17.56) failed LooksAlive check, status 1117.
0000143c.000068a0::2014/08/05-16:56:48.940 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: IP Interface 3811C70A (address 10.199.17.56) failed IsAlive check, status 1117.
0000143c.000068a0::2014/08/05-16:56:48.940 WARN  [RHS] Resource SQL IP Address 1 (ENTSQL56) IsAlive has indicated failure.
00000e8c.000076c4::2014/08/05-16:56:48.940 INFO  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'SQL IP Address 1 (ENTSQL56)', gen(0) result 1.
00000e8c.000076c4::2014/08/05-16:56:48.940 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL56)) Online-->ProcessingFailure.
00000e8c.000076c4::2014/08/05-16:56:48.941 ERR   [RCM] rcm::RcmResource::HandleFailure: (SQL IP Address 1 (ENTSQL56))
00000e8c.000076c4::2014/08/05-16:56:48.941 INFO  [RCM] resource SQL IP Address 1 (ENTSQL56): failure count: 1, restartAction: 2.
00000e8c.000076c4::2014/08/05-16:56:48.941 INFO  [RCM] Will restart resource in 500 milliseconds.
00000e8c.000076c4::2014/08/05-16:56:48.941 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL56)) ProcessingFailure-->[Terminating to DelayRestartingResource].
00000e8c.000076c4::2014/08/05-16:56:48.941 INFO  [RCM] rcm::RcmGroup::ProcessStateChange: (ENTSQL56, Online --> Pending)
00000e8c.000076c4::2014/08/05-16:56:48.941 INFO  [RCM] TransitionToState(SQL Network Name (ENTSQL56)) Online-->[Terminating to OnlineCallIssued].
00000e8c.000076c4::2014/08/05-16:56:48.941 INFO  [RCM] TransitionToState(SQL Server (ENT56)) Online-->[Terminating to OnlineCallIssued].
00000e8c.000076c4::2014/08/05-16:56:48.941 INFO  [RCM] TransitionToState(SQL Server Agent (ENT56)) Online-->[Terminating to OnlineCallIssued].
0000143c.000068a0::2014/08/05-16:56:48.942 INFO  [RES] Network Name <SQL Network Name (ENTSQL56)>: Terminating resource...
0000143c.000068a0::2014/08/05-16:56:48.942 INFO  [RES] Network Name <SQL Network Name (ENTSQL56)>: Offline of resource continuing...
0000143c.00005bb8::2014/08/05-16:56:48.942 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Terminating resource...
0000143c.00005bb8::2014/08/05-16:56:48.943 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Deleting IP interface 3811C70A.
00000e8c.00005d78::2014/08/05-16:56:48.944 INFO  [GUM] Node 4: Processing RequestLock 4:45500
00000e8c.000076c4::2014/08/05-16:56:48.945 DBG   [NETFTAPI] received NsiDeleteInstance  for 10.199.17.56
00000e8c.000011f0::2014/08/05-16:56:48.945 INFO  [GUM] Node 4: Processing GrantLock to 4 (sent by 1 gumid: 478995)
00000e8c.000076c4::2014/08/05-16:56:48.946 WARN  [NETFTAPI] Failed to query parameters for 10.199.17.56 (status 80070490)
00000e8c.000076c4::2014/08/05-16:56:48.946 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.56
00000e8c.000076c4::2014/08/05-16:56:48.946 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.56
00000e8c.000076c4::2014/08/05-16:56:48.946 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.56
0000143c.00005bb8::2014/08/05-16:56:49.085 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Address 10.199.17.56 on adapter 10.199.17 Corp Team offline.
00000e8c.000056ac::2014/08/05-16:56:49.085 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL56)) [Terminating to DelayRestartingResource]-->DelayRestartingResource.
0000143c.00007414::2014/08/05-16:56:49.095 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x544006f
0000143c.00007414::2014/08/05-16:56:49.107 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type GPT, guid {88f87390-fc69-43d2-991a-4b128bdfeaf6}
0000143c.00007414::2014/08/05-16:56:49.118 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type GPT, guid {8293a54e-8426-4a7f-be9f-343327bebf28}
0000143c.00007414::2014/08/05-16:56:49.129 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x544001c
0000143c.00007414::2014/08/05-16:56:49.282 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: IP Interface 3A11C70A (address 10.199.17.58) failed LooksAlive check, status 1117.
0000143c.00007414::2014/08/05-16:56:49.282 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: IP Interface 3A11C70A (address 10.199.17.58) failed IsAlive check, status 1117.
0000143c.00007414::2014/08/05-16:56:49.282 WARN  [RHS] Resource SQL IP Address 1 (ENTSQL58) IsAlive has indicated failure.
00000e8c.000076c4::2014/08/05-16:56:49.282 INFO  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'SQL IP Address 1 (ENTSQL58)', gen(0) result 1.
00000e8c.000076c4::2014/08/05-16:56:49.282 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL58)) Online-->ProcessingFailure.
00000e8c.0000566c::2014/08/05-16:56:49.282 ERR   [RCM] rcm::RcmResource::HandleFailure: (SQL IP Address 1 (ENTSQL58))
00000e8c.0000566c::2014/08/05-16:56:49.283 INFO  [RCM] resource SQL IP Address 1 (ENTSQL58): failure count: 1, restartAction: 2.
00000e8c.0000566c::2014/08/05-16:56:49.283 INFO  [RCM] Will restart resource in 500 milliseconds.
00000e8c.0000566c::2014/08/05-16:56:49.283 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL58)) ProcessingFailure-->[Terminating to DelayRestartingResource].
00000e8c.0000566c::2014/08/05-16:56:49.283 INFO  [RCM] rcm::RcmGroup::ProcessStateChange: (ENTSQL58, Online --> Pending)
00000e8c.0000566c::2014/08/05-16:56:49.283 INFO  [RCM] TransitionToState(SQL Network Name (ENTSQL58)) Online-->[Terminating to OnlineCallIssued].
00000e8c.0000566c::2014/08/05-16:56:49.283 INFO  [RCM] TransitionToState(SQL Server (ENT58)) Online-->[Terminating to OnlineCallIssued].
00000e8c.0000566c::2014/08/05-16:56:49.283 INFO  [RCM] TransitionToState(SQL Server Agent (ENT58)) Online-->[Terminating to OnlineCallIssued].
0000143c.00007414::2014/08/05-16:56:49.284 INFO  [RES] Network Name <SQL Network Name (ENTSQL58)>: Terminating resource...
0000143c.00007414::2014/08/05-16:56:49.284 INFO  [RES] Network Name <SQL Network Name (ENTSQL58)>: Offline of resource continuing...
0000143c.00005bb8::2014/08/05-16:56:49.284 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Terminating resource...
0000143c.00005bb8::2014/08/05-16:56:49.284 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Deleting IP interface 3A11C70A.
00000e8c.0000566c::2014/08/05-16:56:49.285 DBG   [NETFTAPI] received NsiDeleteInstance  for 10.199.17.58
00000e8c.0000566c::2014/08/05-16:56:49.286 WARN  [NETFTAPI] Failed to query parameters for 10.199.17.58 (status 80070490)
00000e8c.0000566c::2014/08/05-16:56:49.286 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.58
00000e8c.0000566c::2014/08/05-16:56:49.286 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.58
00000e8c.0000566c::2014/08/05-16:56:49.286 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.58
0000143c.00005bb8::2014/08/05-16:56:49.291 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Address 10.199.17.58 on adapter 10.199.17 Corp Team offline.
00000e8c.0000566c::2014/08/05-16:56:49.292 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL58)) [Terminating to DelayRestartingResource]-->DelayRestartingResource.
0000143c.00006378::2014/08/05-16:56:49.304 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type GPT, guid {20586068-4927-4492-9403-e826e0d60ad9}
0000143c.00006378::2014/08/05-16:56:49.320 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x544006c
00001518.000025e4::2014/08/05-16:56:49.325 INFO  [RES] SQL Server <SQL Server (ENT56)>: [sqsrvres] OnlineThread: asked to terminate while waiting for QP.
0000143c.00006378::2014/08/05-16:56:49.339 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x544006d
0000143c.00006378::2014/08/05-16:56:49.350 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x54400b1
0000143c.00006378::2014/08/05-16:56:49.361 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x544001f
0000143c.00006378::2014/08/05-16:56:49.454 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: IP Interface 3411C70A (address 10.199.17.52) failed LooksAlive check, status 1117.
0000143c.00006378::2014/08/05-16:56:49.454 WARN  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: IP Interface 3411C70A (address 10.199.17.52) failed IsAlive check, status 1117.
0000143c.00006378::2014/08/05-16:56:49.454 WARN  [RHS] Resource SQL IP Address 1 (ENTSQL52) IsAlive has indicated failure.
00000e8c.000056ac::2014/08/05-16:56:49.454 INFO  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'SQL IP Address 1 (ENTSQL52)', gen(0) result 1.
00000e8c.000056ac::2014/08/05-16:56:49.454 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL52)) Online-->ProcessingFailure.
00000e8c.000076c4::2014/08/05-16:56:49.454 ERR   [RCM] rcm::RcmResource::HandleFailure: (SQL IP Address 1 (ENTSQL52))
00000e8c.000076c4::2014/08/05-16:56:49.455 INFO  [RCM] resource SQL IP Address 1 (ENTSQL52): failure count: 1, restartAction: 2.
00000e8c.000076c4::2014/08/05-16:56:49.455 INFO  [RCM] Will restart resource in 500 milliseconds.
00000e8c.000076c4::2014/08/05-16:56:49.455 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL52)) ProcessingFailure-->[Terminating to DelayRestartingResource].
00000e8c.000076c4::2014/08/05-16:56:49.455 INFO  [RCM] rcm::RcmGroup::ProcessStateChange: (ENTSQL52, Online --> Pending)
00000e8c.000076c4::2014/08/05-16:56:49.455 INFO  [RCM] TransitionToState(SQL Network Name (ENTSQL52)) Online-->[Terminating to OnlineCallIssued].
00000e8c.000076c4::2014/08/05-16:56:49.455 INFO  [RCM] TransitionToState(SQL Server (ENT52)) Online-->[Terminating to OnlineCallIssued].
00000e8c.000076c4::2014/08/05-16:56:49.455 INFO  [RCM] TransitionToState(SQL Server Agent (ENT52)) Online-->[Terminating to OnlineCallIssued].
0000143c.00006378::2014/08/05-16:56:49.456 INFO  [RES] Network Name <SQL Network Name (ENTSQL52)>: Terminating resource...
0000143c.00006378::2014/08/05-16:56:49.456 INFO  [RES] Network Name <SQL Network Name (ENTSQL52)>: Offline of resource continuing...
0000143c.00005bb8::2014/08/05-16:56:49.456 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Terminating resource...
0000143c.00005bb8::2014/08/05-16:56:49.456 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Deleting IP interface 3411C70A.
00000e8c.000076c4::2014/08/05-16:56:49.458 DBG   [NETFTAPI] received NsiDeleteInstance  for 10.199.17.52
00000e8c.000076c4::2014/08/05-16:56:49.458 WARN  [NETFTAPI] Failed to query parameters for 10.199.17.52 (status 80070490)
00000e8c.000076c4::2014/08/05-16:56:49.458 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.52
00000e8c.000076c4::2014/08/05-16:56:49.458 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.52
00000e8c.000076c4::2014/08/05-16:56:49.458 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.199.17.52
0000143c.00005bb8::2014/08/05-16:56:49.461 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Address 10.199.17.52 on adapter 10.199.17 Corp Team offline.
00000e8c.000076c4::2014/08/05-16:56:49.461 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL52)) [Terminating to DelayRestartingResource]-->DelayRestartingResource.
0000143c.00007108::2014/08/05-16:56:49.473 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x5440082
0000143c.00007108::2014/08/05-16:56:49.488 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x544007c
00001728.000008b8::2014/08/05-16:56:49.502 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x5440086
0000143c.00007108::2014/08/05-16:56:49.514 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x54400ba
0000143c.00007108::2014/08/05-16:56:49.526 INFO  [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x54400bb
00000e8c.0000566c::2014/08/05-16:56:49.585 INFO  [RCM] Delay-restarting SQL IP Address 1 (ENTSQL56) and any waiting dependents.
00000e8c.0000566c::2014/08/05-16:56:49.585 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL56)) DelayRestartingResource-->OnlineCallIssued.
0000143c.00007108::2014/08/05-16:56:49.585 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Bringing resource online...
00000e8c.000076c4::2014/08/05-16:56:49.586 INFO  [RCM] HandleMonitorReply: ONLINERESOURCE for 'SQL IP Address 1 (ENTSQL56)', gen(1) result 997.
00000e8c.000076c4::2014/08/05-16:56:49.586 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL56)) OnlineCallIssued-->OnlinePending.
0000143c.00003878::2014/08/05-16:56:49.586 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Online thread running.
0000143c.00003878::2014/08/05-16:56:49.589 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3811C70A^0011A8C0, role 1.
0000143c.00003878::2014/08/05-16:56:49.590 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3811C70A^00BDD60A, role 0.
0000143c.00003878::2014/08/05-16:56:49.590 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3811C70A^0011C70A, role 3.
0000143c.00003878::2014/08/05-16:56:49.596 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Online: Opened object handle for netinterface fe516577-6c30-4b96-a6b0-38adb0ccee3e.
0000143c.00003878::2014/08/05-16:56:49.597 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Online: Registered notification for netinterface fe516577-6c30-4b96-a6b0-38adb0ccee3e.
0000143c.00003878::2014/08/05-16:56:49.597 ERR   [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: NetInterface fe516577-6c30-4b96-a6b0-38adb0ccee3e has failed.
0000143c.00003878::2014/08/05-16:56:49.597 ERR   [RHS] Online for resource SQL IP Address 1 (ENTSQL56) failed.
00000e8c.0000566c::2014/08/05-16:56:49.597 INFO  [RCM] HandleMonitorReply: ONLINERESOURCE for 'SQL IP Address 1 (ENTSQL56)', gen(1) result 5018.
00000e8c.0000566c::2014/08/05-16:56:49.597 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL56)) OnlinePending-->ProcessingFailure.
00000e8c.000056ac::2014/08/05-16:56:49.597 ERR   [RCM] rcm::RcmResource::HandleFailure: (SQL IP Address 1 (ENTSQL56))
00000e8c.000056ac::2014/08/05-16:56:49.597 INFO  [RCM] resource SQL IP Address 1 (ENTSQL56): failure count: 2, restartAction: 2.
00000e8c.000056ac::2014/08/05-16:56:49.598 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL56)) ProcessingFailure-->[Terminating to Failed].
00000e8c.000056ac::2014/08/05-16:56:49.598 INFO  [RCM] Resource SQL IP Address 1 (ENTSQL56) is causing group ENTSQL56 to failover.  Posting worker thread.
0000143c.00007108::2014/08/05-16:56:49.598 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Terminating resource...
0000143c.00007108::2014/08/05-16:56:49.598 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL56)>: Resource is already offline.
00000e8c.000056ac::2014/08/05-16:56:49.598 INFO  [RCM] rcm::RcmGroup::Failover: (ENTSQL56)
00000e8c.00007480::2014/08/05-16:56:49.598 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL56)) [Terminating to Failed]-->Failed.
00001608.00002578::2014/08/05-16:56:49.614 INFO  [RES] SQL Server <SQL Server (ENT58)>: [sqsrvres] OnlineThread: asked to terminate while waiting for QP.
00000e8c.000076c4::2014/08/05-16:56:49.792 INFO  [RCM] Delay-restarting SQL IP Address 1 (ENTSQL58) and any waiting dependents.
00000e8c.000076c4::2014/08/05-16:56:49.792 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL58)) DelayRestartingResource-->OnlineCallIssued.
0000143c.00007108::2014/08/05-16:56:49.792 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Bringing resource online...
00000e8c.00002b98::2014/08/05-16:56:49.793 INFO  [RCM] HandleMonitorReply: ONLINERESOURCE for 'SQL IP Address 1 (ENTSQL58)', gen(1) result 997.
00000e8c.00002b98::2014/08/05-16:56:49.793 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL58)) OnlineCallIssued-->OnlinePending.
0000143c.00006f80::2014/08/05-16:56:49.793 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Online thread running.
0000143c.00006f80::2014/08/05-16:56:49.796 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3A11C70A^0011A8C0, role 1.
0000143c.00006f80::2014/08/05-16:56:49.797 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3A11C70A^00BDD60A, role 0.
0000143c.00006f80::2014/08/05-16:56:49.798 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3A11C70A^0011C70A, role 3.
0000143c.00006f80::2014/08/05-16:56:49.804 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Online: Opened object handle for netinterface fe516577-6c30-4b96-a6b0-38adb0ccee3e.
0000143c.00006f80::2014/08/05-16:56:49.804 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Online: Registered notification for netinterface fe516577-6c30-4b96-a6b0-38adb0ccee3e.
0000143c.00006f80::2014/08/05-16:56:49.804 ERR   [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: NetInterface fe516577-6c30-4b96-a6b0-38adb0ccee3e has failed.
0000143c.00006f80::2014/08/05-16:56:49.804 ERR   [RHS] Online for resource SQL IP Address 1 (ENTSQL58) failed.
00000e8c.000076c4::2014/08/05-16:56:49.804 INFO  [RCM] HandleMonitorReply: ONLINERESOURCE for 'SQL IP Address 1 (ENTSQL58)', gen(1) result 5018.
00000e8c.000076c4::2014/08/05-16:56:49.804 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL58)) OnlinePending-->ProcessingFailure.
00000e8c.0000660c::2014/08/05-16:56:49.805 ERR   [RCM] rcm::RcmResource::HandleFailure: (SQL IP Address 1 (ENTSQL58))
00000e8c.0000660c::2014/08/05-16:56:49.805 INFO  [RCM] resource SQL IP Address 1 (ENTSQL58): failure count: 2, restartAction: 2.
00000e8c.0000660c::2014/08/05-16:56:49.805 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL58)) ProcessingFailure-->[Terminating to Failed].
00000e8c.0000660c::2014/08/05-16:56:49.806 INFO  [RCM] Resource SQL IP Address 1 (ENTSQL58) is causing group ENTSQL58 to failover.  Posting worker thread.
0000143c.00007108::2014/08/05-16:56:49.806 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Terminating resource...
0000143c.00007108::2014/08/05-16:56:49.806 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL58)>: Resource is already offline.
00000e8c.0000660c::2014/08/05-16:56:49.806 INFO  [RCM] rcm::RcmGroup::Failover: (ENTSQL58)
00000e8c.00007480::2014/08/05-16:56:49.806 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL58)) [Terminating to Failed]-->Failed.
00000e8c.00007480::2014/08/05-16:56:49.961 INFO  [RCM] Delay-restarting SQL IP Address 1 (ENTSQL52) and any waiting dependents.
00000e8c.00007480::2014/08/05-16:56:49.961 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL52)) DelayRestartingResource-->OnlineCallIssued.
0000143c.00007108::2014/08/05-16:56:49.961 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Bringing resource online...
00000e8c.00003170::2014/08/05-16:56:49.962 INFO  [RCM] HandleMonitorReply: ONLINERESOURCE for 'SQL IP Address 1 (ENTSQL52)', gen(1) result 997.
00000e8c.00003170::2014/08/05-16:56:49.962 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL52)) OnlineCallIssued-->OnlinePending.
0000143c.00003720::2014/08/05-16:56:49.962 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Online thread running.
0000143c.00003720::2014/08/05-16:56:49.965 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3411C70A^0011A8C0, role 1.
0000143c.00003720::2014/08/05-16:56:49.966 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3411C70A^00BDD60A, role 0.
0000143c.00003720::2014/08/05-16:56:49.967 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Checking for network match: network masks 00FFFFFF=00FFFFFF and addresses 3411C70A^0011C70A, role 3.
0000143c.00003720::2014/08/05-16:56:49.973 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Online: Opened object handle for netinterface fe516577-6c30-4b96-a6b0-38adb0ccee3e.
0000143c.00003720::2014/08/05-16:56:49.973 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Online: Registered notification for netinterface fe516577-6c30-4b96-a6b0-38adb0ccee3e.
0000143c.00003720::2014/08/05-16:56:49.973 ERR   [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: NetInterface fe516577-6c30-4b96-a6b0-38adb0ccee3e has failed.
0000143c.00003720::2014/08/05-16:56:49.973 ERR   [RHS] Online for resource SQL IP Address 1 (ENTSQL52) failed.
00000e8c.00007480::2014/08/05-16:56:49.973 INFO  [RCM] HandleMonitorReply: ONLINERESOURCE for 'SQL IP Address 1 (ENTSQL52)', gen(1) result 5018.
00000e8c.00007480::2014/08/05-16:56:49.973 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL52)) OnlinePending-->ProcessingFailure.
00000e8c.00003098::2014/08/05-16:56:49.973 ERR   [RCM] rcm::RcmResource::HandleFailure: (SQL IP Address 1 (ENTSQL52))
00000e8c.00003098::2014/08/05-16:56:49.974 INFO  [RCM] resource SQL IP Address 1 (ENTSQL52): failure count: 2, restartAction: 2.
00000e8c.00003098::2014/08/05-16:56:49.974 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL52)) ProcessingFailure-->[Terminating to Failed].
00000e8c.00003098::2014/08/05-16:56:49.974 INFO  [RCM] Resource SQL IP Address 1 (ENTSQL52) is causing group ENTSQL52 to failover.  Posting worker thread.
0000143c.00007108::2014/08/05-16:56:49.974 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Terminating resource...
0000143c.00007108::2014/08/05-16:56:49.974 INFO  [RES] IP Address <SQL IP Address 1 (ENTSQL52)>: Resource is already offline.
00000e8c.0000566c::2014/08/05-16:56:49.974 INFO  [RCM] rcm::RcmGroup::Failover: (ENTSQL52)
00000e8c.00003170::2014/08/05-16:56:49.975 INFO  [RCM] TransitionToState(SQL IP Address 1 (ENTSQL52)) [Terminating to Failed]-->Failed.
000017a0.00000c50::2014/08/05-16:56:50.352 INFO  [RES] SQL Server <SQL Server (ENT52)>: [sqsrvres] OnlineThread: asked to terminate while waiting for QP.

This is just an excerpt from the cluster log; if you need more information, just let me know.
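
From the excerpt, the IP address resources are failing because the cluster reports network interface fe516577-6c30-4b96-a6b0-38adb0ccee3e as failed, and after the second failure (failure count: 2) the restart policy fails the whole group over. As a minimal diagnostic sketch (assuming the FailoverClusters PowerShell module and the resource names shown in the log; adjust for your own cluster), the interface state and the restart policy of one of the failing resources can be checked like this:

# Minimal diagnostic sketch - run on a cluster node; names below come from the log excerpt.
Import-Module FailoverClusters

# State of every cluster network interface; the log reports
# fe516577-6c30-4b96-a6b0-38adb0ccee3e as failed, which is what blocks the IP address resources.
Get-ClusterNetworkInterface | Format-Table Name, Node, Network, State -AutoSize

# Cluster network roles (0 = not used by the cluster, 1 = cluster only, 3 = cluster and client).
Get-ClusterNetwork | Format-Table Name, Role, Address, State -AutoSize

# Restart policy of one of the failing IP address resources.
$res = Get-ClusterResource "SQL IP Address 1 (ENTSQL52)"
$res | Format-List Name, State, RestartAction, RestartDelay, RestartPeriod, RestartThreshold

# Regenerate a fresh cluster log (last 30 minutes, local time) if more context is needed.
Get-ClusterLog -TimeSpan 30 -UseLocalTime -Destination C:\Temp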

Thanks for any help!

ODX in Windows Server 2012


Hi Colleagues,

I have a question about ODX (Offloaded Data Transfer) in Windows Server 2012 failover clustering.

I am about to deploy two clusters to host a content management system, and as part of the storage design I am considering using ODX on Server 2012, so I would appreciate your suggestions on the following:

  • Is this storage technology limited to Hyper-V clusters, or can it also be used on a standard failover cluster hosting databases?
  • Has anyone used it, how reliable has it been, and do you have any success stories? :)
  • What are the drawbacks of using this technology?
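
For reference, a quick way to confirm whether ODX is currently enabled on a node is the FilterSupportedFeaturesMode registry value documented for Windows Server 2012 (0, or the value being absent, means ODX is enabled; 1 means it is disabled). A minimal sketch, assuming that documented registry path, and noting that the storage array must also advertise ODX support for offloaded transfers to actually occur:

# Minimal sketch: check (and optionally disable) ODX on a single node.
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem"

# 0 or absent = ODX enabled, 1 = ODX disabled.
Get-ItemProperty -Path $key -Name FilterSupportedFeaturesMode -ErrorAction SilentlyContinue

# To disable ODX while troubleshooting, the documented approach is to set the value to 1:
# Set-ItemProperty -Path $key -Name FilterSupportedFeaturesMode -Value 1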

Many Thanks,

Regards,

K.Deepak Balaji.

