Channel: High Availability (Clustering) forum

CSV - System Volume Information Problem


Hi,

Not sure if this is the correct place to ask my question...

We are having an issue with the System Volume Information on one of our Cluster Shared Volumes growing to a large size.
It currently sits at 598GB. 

We have two hosts and three volumes under C:\ClusterStorage\: Volume1, Volume2 and Volume3.

The issue only appears to be happening in the System Volume Information folder on Volume2, and we're not sure what's causing it.

I have checked Shadow Copies on both hosts and all are disabled.

I am hoping someone can point me in the right direction to figure out what is consuming so much space.


We also run a daily Windows backup on each host to separate USB drives.
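In case it helps with suggestions, here is a rough sketch of the checks I can run, assuming the affected CSV is mounted at C:\ClusterStorage\Volume2 (run elevated on the node that currently owns the volume):

# List shadow copy storage associations and any existing shadow copies;
# backup products can create shadow copies even when the Shadow Copies tab shows them disabled.
vssadmin list shadowstorage
vssadmin list shadows

# Measure the size of the System Volume Information folder on the suspect CSV.
Get-ChildItem 'C:\ClusterStorage\Volume2\System Volume Information' -Recurse -Force -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum |
    ForEach-Object { '{0:N1} GB' -f ($_.Sum / 1GB) }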

Thanks,
Adam




Internal netbios traffic on WSFC 2012


Hello all.

Current configuration:
A SQL Server 2012 availability group (AG) is deployed on a Windows Server 2012 cluster consisting of two nodes. Each node has two network interfaces: one for public access and a second for the interconnect (heartbeat).

First node:
Eth1 10.16.0.41
Eth2 192.168.10.1

Second node:
Eth1 10.16.0.42
Eth2 192.168.10.2

The second interface, with IPs 192.168.10.1 and 192.168.10.2, is a private connection allocated for internal cluster communication.

Our network administrator noticed some odd traffic and suggested blocking it:
there is traffic from 10.16.0.41, under the cluster account, to the internal address 192.168.10.1, from UDP port 137 to port 137 (the netbios-ns service); the same goes from 10.16.0.42 to 192.168.10.2.

Question:

Why does the node send NetBIOS name service traffic to its own private address?
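If it turns out this is just NetBIOS name registration happening on the private adapter, my assumption is that it can be stopped by disabling NetBIOS over TCP/IP on the heartbeat NICs; a sketch of that (via WMI, run on each node against its own heartbeat IP):

# TcpipNetbiosOptions: 0 = use DHCP setting, 1 = enable, 2 = disable.
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
    Where-Object { $_.IPAddress -contains '192.168.10.1' } |
    ForEach-Object { $_.SetTcpipNetbios(2) }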



MSMQ options in WSFC 2012 R2


Hi,

If you want to make MSMQ highly available, most guides I have seen are about creating the Message Queuing role. That's fine.

However, I'm in the process of sorting out the other options for MSMQ in a Failover Cluster and I have run these tests:

1. Created a Message Queuing resource in an existing application group and made it dependent on the disk and network name already in there. Test ok.

2. The same as above but used a separate shared disk only for MSMQ. Test ok.

3. Created a Message Queuing role and moved that one into an existing application group. Test ok.

4. The other way around from point 3; Moved an existing application group into the Message Queuing group. Test ok.

"Test ok" in my case is that it didn't produce any errors, but I have not tested the MSMQ functionality because I'm actually not an application guy, more on the infrastructure level.

Also, one thing about creating a separate role for MSMQ is that you get the "Manage Message Queuing" option in the GUI; that's not the case with options 2 and 3 unless you tweak the registry or use gwmi (http://blogs.msdn.com/b/clustering/archive/2010/01/12/9946994.aspx).
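For context, option 1 was done through the GUI, but a rough PowerShell equivalent would look something like this (the group, disk and network name are placeholders; "MSMQ" as the resource type name is my assumption based on the clustered resource type):

# Sketch of option 1: add an MSMQ resource to an existing application group
# and make it depend on that group's disk and network name.
Add-ClusterResource -Name "MSMQ-App" -ResourceType "MSMQ" -Group "ExistingAppGroup"
Add-ClusterResourceDependency -Resource "MSMQ-App" -Provider "Cluster Disk 2"
Add-ClusterResourceDependency -Resource "MSMQ-App" -Provider "AppNetworkName"
Start-ClusterResource -Name "MSMQ-App"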

Now to my question :-)

Are any or all of the other options described above valid ways to set up MSMQ in a clustered environment? What is your experience out there?

VM Windows deactivated


Hi ,

We are running a Hyper-V 2008 R2 failover cluster. The problem is that when we move a virtual machine from one node to another, Windows inside the VM is deactivated and needs to be activated again.

Please advise

Regards

Temporarily mix Intel and AMD hosts in Hyper-V cluster for migration purposes


Hi, we currently have a Hyper-V cluster (2012 R2) running on old AMD-based hosts. We are going to replace these with new Intel-based servers.

Is the following scenario possible (though likely not supported)? 

  • join new (Intel-based) hosts to existing cluster
  • shut down all virtual machines
  • migrate all virtual machines to new hosts
  • remove old (AMD-based) hosts from cluster

That would be the easiest option. Otherwise I need to create a new cluster and either create new storage, or connect the LUNs to the new cluster and import all the VMs; in any case, much more work and more risk.
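To be clear, I'm assuming all the moves would be offline rather than live, since processor compatibility mode does not span CPU vendors. A sketch of what I have in mind once the Intel hosts are joined (the host name is a placeholder):

# Take each VM role offline, then move it to one of the new Intel hosts.
Get-ClusterGroup | Where-Object { $_.GroupType -eq 'VirtualMachine' } | ForEach-Object {
    Stop-ClusterGroup -Name $_.Name
    Move-ClusterGroup -Name $_.Name -Node 'NewIntelHost1'
}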

2012 R2 CSV "Online (No Access)" after node joins cluster


Okay, this has been going on for months, ever since we performed an upgrade on our 2008 R2 clusters. We upgraded our development cluster from 2008 R2 to 2012 (SP1) and had no issues, saw great performance increases, and decided to do our production clusters. At the time, 2012 R2 was becoming prominent and we decided to skip over 2012, thinking the changes in that version weren't that drastic. We were wrong.

The cluster works perfectly as long as all nodes stay up and online. Live migration works great, roles (including disks) flip between machines based on load just fine, etc. When a node reboots or the cluster service restarts, as the node goes from "Down" to "Joining" and then "Online", the CSV(s) switch from Online to Online (No Access) and the mount point disappears. If you move the CSV(s) to the node that just rejoined the cluster, the mount point returns and the state goes back to Online.

Cluster validation passes with flying colors and Microsoft has been able to provide no help whatsoever. We have two types of FC storage, one that is being retired and one that we are switching all production machines to. It does this with both storage units, one SUN and one Hitachi. Since we are moving to Hitachi, we verified that the firmware is up to date (it is), our drivers are current (they are), and that the unit is fully functional (everything checks out). This did not happen before 2012 R2, and we have proven it by reverting our development cluster to 2012. We have started using features that come with 2012 R2 on our other clusters, so we would like to figure this problem out and continue using this platform.

Cluster logs show absolutely no diagnostic information that's of any help.  The normal error message is:

Cluster Shared Volume 'Volume3' ('VM Data') is no longer accessible from this cluster node because of error '(1460)'. Please troubleshoot this node's connectivity to the storage device and network connectivity.

Per Microsoft, our Hitachi system with 2012 R2 and MPIO (we have two paths) is certified for use. This is happening on all three of our clusters (two production and one development). They mostly have the same setup, but we're not sure what could be causing this at this point.
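In case it helps, this is roughly how we check the CSV state after a node rejoins (a sketch; 'Volume3' matches the error above):

# Show per-node CSV state, including redirected-access information, for every CSV.
Get-ClusterSharedVolumeState | Format-Table Name, Node, StateInfo, VolumeFriendlyName -AutoSize

# Quick test of whether the node that just rejoined can reach the mount point at all.
Test-Path 'C:\ClusterStorage\Volume3'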

Server drops connection when copying large file

I have 3 Windows 2012 R2 Datacentre servers. These have an iSCSI connection to a NetApp SAN. They have the latest network drivers and all Windows updates. It seems that as soon as I copy a large file (around 50GB) to any drive, either a local drive or the cluster shared storage, the server drops its LAN connection. The only fix is to disable the LAN connection and enable it again. What could be causing this? As soon as you click paste it drops the connection; I can't ping anything on the local network, yet I can still ping the SAN on its network.
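One thing I plan to try (my own assumption, not a confirmed cause) is ruling out the NIC offload features, since large sequential copies tend to expose them; a sketch:

# Show the current offload settings on the LAN adapter (the adapter name is a placeholder).
Get-NetAdapterAdvancedProperty -Name 'LAN' | Format-Table DisplayName, DisplayValue -AutoSize

# Temporarily disable large-send offload and checksum offload to see whether the drops stop.
Disable-NetAdapterLso -Name 'LAN'
Disable-NetAdapterChecksumOffload -Name 'LAN'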

Cluster Shared Volume error after server not shutting down properly


Hi,
We have two IBM X240 servers (we call them server A and server B) connected to an IBM V3700 disk system via fibre HBAs.

Both servers are running Windows 2012 R2.

We have implemented a VM cluster and everything was working well.

Last week both servers went down due to a power outage in my server room.

After turning on server A, it showed the error below:

Windows failed to start. A recent hardware or software change might be the cause.
File: \windows\system32\drivers\msdsm.sys
Status: 0xc0000017
Info: the operating system couldn't be loaded because a critical system driver is missing or contains errors.

After using the Last Known Good Configuration, we could log in to the system and turn on the clustered virtual machines.

Everything seemed fine at that point.

So I went and started server B, logging in to the system using the same method as on server A.

I then found that all the VMs would shut down or run into errors due to Cluster Shared Volume errors.

Below are some of the errors captured from the system logs.

* Event 5142, Cluster Shared Volume 'Volume7' ('Cluster Disk 10') is no longer accessible from this cluster node because of error '(1460)'. Please troubleshoot this node's connectivity to the storage device and network connectivity.

* Event 5120,Cluster Shared Volume 'Volume3' ('Cluster Disk 4') has entered a paused state because of '(c00000be)'. All I/O will temporarily be queued until a path to the volume is reestablished.

Now we can only turn on one server at a time and must shut down the other; if I turn on both servers, the errors come back and the server goes down.
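If it is useful, I can collect the following from both servers (a rough sketch; I'm assuming msdsm.sys means the MPIO feature is in play here):

# Show MPIO-claimed disks and path status on each server (mpclaim ships with the MPIO feature).
mpclaim -s -d

# Show per-node CSV state and gather recent cluster logs from the owning node.
Get-ClusterSharedVolumeState
Get-ClusterLog -Destination 'C:\Temp' -TimeSpan 60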

Any suggestions? Let me know if you need me to provide more information.

Thanks.


Windows 2012 Failover Clusters "An error occurred connecting to the Cluster"


Good Morning

I have 4 failover clusters: 1 SQL cluster, 1 Hyper-V cluster, 1 IIS cluster (not NLB) and 1 file cluster, all running Windows 2012. The file cluster is fully up to date and has all the latest firmware and drivers installed, plus a couple of Windows hotfixes for Windows clustering. The Hyper-V and SQL clusters are scheduled for updates in the next couple of weeks, but both have been updated within the last couple of weeks. The problem I have on all four clusters is that, after a period of time, we are no longer able to connect to the cluster from any node in the cluster. For example, on the SQL cluster, after a period of time (this might be 3-4 weeks), if I connect to any of the nodes and open Failover Cluster Manager I am unable to connect to the cluster. Failover Cluster Manager first shows "Connecting to Cluster" and "The operation is taking longer than expected", then after about 2-3 minutes an error comes up saying "The operation has failed", "An error occurred connecting to the cluster '<Cluster-Name>'". If I then click on "See details" I get "An error occurred trying to display the cluster information", "One or more errors occurred", "Provider load failure".

This same problem happens on all of our Windows 2012 clusters. We do have a 2012 R2 Hyper-V cluster, but it is managed by a different team and I don't know whether they have the same issue or not.

It seems to be one particular host causing the problem, normally the host that owns the quorum. If we reboot that host, which causes the cluster to fail all the roles over to a different node, the cluster is then accessible again. Once the rebooted node comes back online, everything is fine for a period of time until the problem happens again.
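One thing I have not tried yet is checking whether the cluster WMI provider is still healthy on the node that owns the quorum; since the dialog mentions "Provider load failure", my assumption is that WMI is involved, and a sketch like this would test and re-register it:

# Test whether the cluster WMI provider responds on the suspect node.
Get-WmiObject -Namespace root\MSCluster -Class MSCluster_Cluster

# If that fails with a provider load error, re-register the cluster WMI provider.
mofcomp C:\Windows\System32\wbem\ClusWMI.mof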

If anyone has any suggestions I would be very grateful.

Richard 


Failover Clustering - EventID 2049


Hello, I've got a four-node Hyper-V 2012 R2 cluster using a CIFS share to store all of the virtual machines. It's working all right, but I do see thousands of similar log entries in the Microsoft-Windows-FailoverClustering/Diagnostic log:

Event ID: 2049 -

[RCM [RES] SCVMM OPS69 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2

[RCM [RES] SCVMM RDS22 - SessionHost embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2

Does anybody know what this is referring to? Is there a fix for this?

Thanks in advance for any feedback.

ID 2051 FailoverClustering Errors

Good morning, 

I just took a quick look at the Event Viewer on my Windows Server 2012 Standard server hosting Exchange 2013 and I see the following error every 5 minutes, and it has been going on for days now. I'm not aware of any updates being installed, so honestly I'm not sure what could be causing this or where to start troubleshooting:

The server is running on VMware Virtual Platform.  

Log Name:      Microsoft-Windows-FailoverClustering/Diagnostic
Source:        Microsoft-Windows-FailoverClustering
Date:          6/8/2015 10:55:33 AM
Event ID:      2051
Task Category: None
Level:         Error
Keywords:      
User:          SYSTEM
Computer:      XCH1Server.Domain.local
Description:
[RCM] [GIM] ResType Virtual Machine has no resources, not collecting local utilization info

Now that I look at all six of my Exchange servers running Windows Server 2012 Standard, I see the same entries in their event logs as well.
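For what it's worth, here is a quick sketch of how to confirm whether any "Virtual Machine" resources actually exist on the cluster, which is what the message is complaining about:

# List the registered resource types and any resources of type "Virtual Machine".
Get-ClusterResourceType | Format-Table Name, DisplayName -AutoSize
Get-ClusterResource | Where-Object { "$($_.ResourceType)" -eq 'Virtual Machine' }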

Any suggestions folks? 

Add Node to Cluster - Keyset does not exist


Hi,

I am trying to add a third node to a Windows 2012 failover cluster, but I get the following error:

The server 'DR.domain.com' could not be added to the cluster.
An error occurred while adding node 'DR.domain.com' to cluster 'domain-fc'.

Keyset does not exist

The user I am using to add the node is a Domain Admin, so it may not be a permission issue.

All nodes are Windows 2012 R2 VMs on Azure
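In case more detail helps, I can generate the cluster debug logs from the existing nodes around the time of the failed join (a sketch; the path and time window are placeholders):

# Collect the cluster debug log (last 15 minutes) from every current node of the cluster.
Get-ClusterLog -Cluster 'domain-fc' -TimeSpan 15 -Destination 'C:\Temp'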


Usman Shaheen MCTS BizTalk Server http://usmanshaheen.wordpress.com


Problem with csv


I am trying to migrate storage for Hyper-V VMs to a new CSV. When I try to move any VM's storage to the new target, I get the following error:

An error occurred while moving the virtual machine storage: 0x80071008 (invalid parameter).

I have already unmounted the ISO on the VM.
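In case the GUI is part of the problem, here is a sketch of the same storage move done from PowerShell (the VM name and destination path are placeholders), which at least tends to give a cleaner error:

# Move all of the VM's files (VHDs, configuration, checkpoints, smart paging) to the new CSV.
Move-VMStorage -VMName 'TestVM01' -DestinationStoragePath 'C:\ClusterStorage\Volume4\TestVM01'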

Windows Server 2012 R2 failover cluster access denied


Dear Experts,

I can't access a Windows Server 2012 R2 failover cluster. The error is shown below:

"You do not have administrative privilages on the cluster. contact your network administrator to request access."

Error code: 0x80070005

Access denied.

PS C:\> Get-ClusterAccess
Get-ClusterAccess : You do not have administrative privileges on the cluster. Contact your network administrator to
request access.
    Access is denied
At line:1 char:1
+ Get-ClusterAccess
+ ~~~~~~~~~~~~~~~~~
    + CategoryInfo          : AuthenticationError: (:) [Get-ClusterAccess], ClusterCmdletException
    + FullyQualifiedErrorId : ClusterAccessDenied,Microsoft.FailoverClusters.PowerShell.GetClusterAccessCommand

The current user is an Enterprise Administrator.
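For completeness, this is the kind of command I expected to be able to run from an account that still has access, to re-grant rights to the denied account (the user name is a placeholder):

# Grant full cluster access to the account that is currently being denied.
Grant-ClusterAccess -User 'DOMAIN\enterprise.admin' -Full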


monitoring NLB sessions

I have a Windows 2012 NLB cluster with two nodes behind it. I also have it configured for single affinity due to the requirements of the web application. Now I would like to know: how do you monitor your NLB clusters? I am mainly interested in making sure the traffic is divided up as evenly as possible. I know about perfmon, but is there anything else that you use?
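For context, this is roughly what I look at today (a sketch; the node names and the counter path are assumptions based on an IIS workload):

# Show the state of each NLB node in the cluster.
Get-NlbClusterNode | Format-Table Name, State -AutoSize

# Compare current connection counts on each node to see how evenly traffic is spread.
Get-Counter -ComputerName Node1, Node2 -Counter '\Web Service(_Total)\Current Connections'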

Server 2012 R2 Witness Disk won't failover


Hi,

I have a 2-node cluster with a shared witness disk for quorum. When I lose the connection to the disk from one node, ownership fails over to the other node, which is what I expect. However, if I test again shortly afterwards it doesn't fail over; it just goes into the offline state, and I have to move it manually before it fails over again.

It doesn't matter how many times I test this; it simply doesn't fail over after that first attempt. However, several hours later (after I slept and tested it the next morning), it again fails over correctly, but subsequent tests bring it offline. I have looked through every tab in the properties for the witness, the cluster name and the IP address, and cannot find anything that could relate to this timeout of several hours. All values are at their defaults, although I did increase the maximum number of failures within the specified period from 1 to 10... but that made no difference.

I get the following two events

ID 1038

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it

ID 1069

Cluster resource 'Cluster Disk 1' of type 'Physical Disk' in clustered role 'Cluster Group' failed.

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

How can I make the witness disk fail over repeatedly, more than once in several hours? I'm wondering whether it's a PowerShell-only configuration, but I have no idea what the command would be.
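For reference, the setting I changed from 1 to 10 maps to the group's FailoverThreshold; my assumption is that the related FailoverPeriod (a window measured in hours) is what creates the multi-hour behaviour, and both can be read and changed from PowerShell:

# Inspect the failover policy on the core cluster group that owns the witness disk.
Get-ClusterGroup 'Cluster Group' | Format-List Name, FailoverThreshold, FailoverPeriod

# Illustrative values only: allow up to 10 failures within a 1-hour window.
(Get-ClusterGroup 'Cluster Group').FailoverThreshold = 10
(Get-ClusterGroup 'Cluster Group').FailoverPeriod = 1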

thanks

Steve

WFCM Is Not Restarting a Process After Exit


I have a two node cluster for availability purposes and find that it works quite well for the most part.

I do, however, have an issue where a process will shut itself down with a clean exit after an exception. WFCM continues to show the service as "online" even though services.msc shows the service has stopped.

Any ideas what is going on?

MSDTC still offline


Hi


I have a clustered SQL Server 2008 instance hosted on Windows Server 2008 R2.


It is still showing error 1069:


Failed to load restype 'MSMQ': error 21.

HandleMonitorReply: ONLINERESOURCE for 'MSDTC-PMDTC', gen(0) result 5018.
000002d8.000002b8::2015/06/05-23:17:06.369 INFO  [RCM] TransitionToState(MSDTC-PMDTC) OnlinePending-->ProcessingFailure.
000002d8.000002b8::2015/06/05-23:17:06.369 INFO  [RCM] rcm::RcmGroup::UpdateStateIfChanged: (PMDTC, Pending --> Failed)
000002d8.000002b8::2015/06/05-23:17:06.369 ERR   [RCM] rcm::RcmResource::HandleFailure: (MSDTC-PMDTC)
00000e10.000004c0::2015/06/05-23:17:06.369 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:06:369 : [ e10. 4c0] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcOnlineThread (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@675): DTC online thread has stopped
000002d8.000002b8::2015/06/05-23:17:06.369 INFO  [RCM] resource MSDTC-PMDTC: failure count: 1, restartAction: 2.
000002d8.000002b8::2015/06/05-23:17:06.369 INFO  [RCM] Will restart resource in 500 milliseconds.
000002d8.000002b8::2015/06/05-23:17:06.369 INFO  [RCM] TransitionToState(MSDTC-PMDTC) ProcessingFailure-->[WaitingToTerminate to DelayRestartingResource].
000002d8.000002b8::2015/06/05-23:17:06.369 INFO  [RCM] rcm::RcmGroup::UpdateStateIfChanged: (PMDTC, Failed --> Pending)
000002d8.000002b8::2015/06/05-23:17:06.369 INFO  [RCM] TransitionToState(MSDTC-PMDTC) [WaitingToTerminate to DelayRestartingResource]-->[Terminating to DelayRestartingResource].
00000e10.00001508::2015/06/05-23:17:06.385 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:06:385 : [ e10.1508] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcTerminate (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@800): Terminating DTC resource
000002d8.0000100c::2015/06/05-23:17:06.385 INFO  [RCM] HandleMonitorReply: TERMINATERESOURCE for 'MSDTC-PMDTC', gen(1) result 0.
000002d8.0000100c::2015/06/05-23:17:06.385 INFO  [RCM] TransitionToState(MSDTC-PMDTC) [Terminating to DelayRestartingResource]-->DelayRestartingResource.
000002d8.00000b04::2015/06/05-23:17:06.900 INFO  [RCM] Delay-restarting MSDTC-PMDTC and any waiting dependents.
000002d8.00000b04::2015/06/05-23:17:06.900 INFO  [RCM] TransitionToState(MSDTC-PMDTC) DelayRestartingResource-->OnlineCallIssued.
00000e10.00001038::2015/06/05-23:17:06.900 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:06:900 : [ e10.1038] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcOnline (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@197): Bringing the DTC resource online
000002d8.00000b04::2015/06/05-23:17:06.900 INFO  [RCM] HandleMonitorReply: ONLINERESOURCE for 'MSDTC-PMDTC', gen(1) result 997.
000002d8.00000b04::2015/06/05-23:17:06.900 INFO  [RCM] TransitionToState(MSDTC-PMDTC) OnlineCallIssued-->OnlinePending.
00000e10.00000f28::2015/06/05-23:17:06.900 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:06:900 : [ e10. f28] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcOnlineThread (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@516): DTC online thread is running
00000e10.00000f28::2015/06/05-23:17:06.900 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:06:900 : [ e10. f28] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcOnlineThread (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@536): Ensuring single resource...
00000e10.00000f28::2015/06/05-23:17:06.900 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:06:900 : [ e10. f28] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcOnlineThread (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@547): Setting up resource registry...
00000e10.00000f28::2015/06/05-23:17:07.009 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:07:009 : [ e10. f28] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcOnlineThread (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@563): Setting up resource files...
00000e10.00000f28::2015/06/05-23:17:07.009 ERR   [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:07:009 : [ e10. f28] 0x800710dc [TRACE_RESOURCE] [ TRACE_ERROR] DtcOnlineThread (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@569): Failed to create resource files
00000e10.00000f28::2015/06/05-23:17:07.009 ERR   [RHS] Online for resource MSDTC-PMDTC failed.
000002d8.0000170c::2015/06/05-23:17:07.009 WARN  [RCM] HandleMonitorReply: ONLINERESOURCE for 'MSDTC-PMDTC', gen(1) result 5018.
000002d8.0000170c::2015/06/05-23:17:07.009 INFO  [RCM] TransitionToState(MSDTC-PMDTC) OnlinePending-->ProcessingFailure.
00000e10.00000f28::2015/06/05-23:17:07.009 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:07:009 : [ e10. f28] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcOnlineThread (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@675): DTC online thread has stopped
000002d8.0000170c::2015/06/05-23:17:07.009 INFO  [RCM] rcm::RcmGroup::UpdateStateIfChanged: (PMDTC, Pending --> Failed)
000002d8.0000170c::2015/06/05-23:17:07.009 ERR   [RCM] rcm::RcmResource::HandleFailure: (MSDTC-PMDTC)
000002d8.0000170c::2015/06/05-23:17:07.009 INFO  [RCM] resource MSDTC-PMDTC: failure count: 2, restartAction: 2.
000002d8.0000170c::2015/06/05-23:17:07.009 INFO  [RCM] TransitionToState(MSDTC-PMDTC) ProcessingFailure-->[WaitingToTerminate to Failed].
000002d8.0000170c::2015/06/05-23:17:07.009 INFO  [RCM] rcm::RcmGroup::UpdateStateIfChanged: (PMDTC, Failed --> Pending)
000002d8.0000170c::2015/06/05-23:17:07.009 INFO  [RCM] TransitionToState(MSDTC-PMDTC) [WaitingToTerminate to Failed]-->[Terminating to Failed].
00000e10.00001508::2015/06/05-23:17:07.009 INFO  [RES] Distributed Transaction Coordinator <MSDTC-PMDTC>: 06-06-2015 02:17:07:009 : [ e10.1508] 0x00000000 [TRACE_RESOURCE] [  TRACE_INFO] DtcTerminate (d:\w7rtm\com\complus\dtc\shared\mtxclu\src\dtcresource.cpp@800): Terminating DTC resource
000002d8.0000170c::2015/06/05-23:17:07.009 INFO  [RCM] Resource MSDTC-PMDTC is causing group PMDTC to failover.  Posting worker thread.
000002d8.0000170c::2015/06/05-23:17:07.009 INFO  [RCM] rcm::RcmGroup::Failover: (PMDTC)
000002d8.00001430::2015/06/05-23:17:07.009 INFO  [RCM] HandleMonitorReply: TERMINATERESOURCE for 'MSDTC-PMDTC', gen(2) result 0.
000002d8.00001430::2015/06/05-23:17:07.009 INFO  [RCM] TransitionToState(MSDTC-PMDTC) [Terminating to Failed]-->Failed.
000002d8.00001430::2015/06/05-23:17:07.009 INFO  [RCM] rcm::RcmGroup::UpdateStateIfChanged: (PMDTC, Pending --> Failed)
000002d8.0000170c::2015/06/05-23:17:07.040 WARN  [RCM] Failing over group PMDTC, failoverCount 1, last time 2015/06/06-02:17:07.040.
000002d8.0000170c::2015/06/05-23:17:07.040 INFO  [RCM] rcm::RcmGroup::Move: (PMDTC, 2)
000002d8.0000170c::2015/06/05-23:17:07.102 INFO  [RCM] rcm::RcmGroup::Move: Bringing group 'PMDTC' offline first...
000002d8.0000170c::2015/06/05-23:17:07.102 INFO  [RCM] TransitionToState(DTC) Online-->OfflineCallIssued.
000002d8.0000170c::2015/06/05-23:17:07.102 INFO  [RCM] rcm::RcmGroup::UpdateStateIfChanged: (PMDTC, Failed --> Pending)
000002d8.0000170c::2015/06/05-23:17:07.102 INFO  [RCM] TransitionToState(PMDTC) Online-->OfflineCallIssued.
000002d8.0000170c::2015/06/05-23:17:07.102 INFO  [RCM] 'IP Address 10.4.225.223' cannot go offline yet; 'PMDTC' is in state OfflineCallIssued.
000002d8.0000170c::2015/06/05-23:17:07.102 INFO  [RCM] TransitionToState(IP Address 10.4.225.223) Online-->WaitingToGoOffline.
00000828.000007b4::2015/06/05-23:17:07.102 INFO  [RES] Physical Disk <DTC>: Offline request.
00000828.00000f34::2015/06/05-23:17:07.102 INFO  [RES] Physical Disk: DriveLetter mask: 0x200000
00000828.0000112c::2015/06/05-23:17:07.102 INFO  [RES] Network Name <PMDTC>: Taking resource offline...
00000828.00000398::2015/06/05-23:17:07.102 INFO  [RES] Network Name <PMDTC>: TimerQueueTimer rescheduled to fire after 600 secs
00000828.00000398::2015/06/05-23:17:07.102 INFO  [RES] Network Name <PMDTC>: Offline of resource continuing...
000002d8.00000e3c::2015/06/05-23:17:07.102 INFO  [RCM] HandleMonitorReply: OFFLINERESOURCE for 'DTC', gen(0) result 997.
000002d8.00000e3c::2015/06/05-23:17:07.102 INFO  [RCM] TransitionToState(DTC) OfflineCallIssued-->OfflinePending.
000002d8.000002b8::2015/06/05-23:17:07.102 INFO  [RCM] HandleMonitorReply: OFFLINERESOURCE for 'PMDTC', gen(2) result 997.
000002d8.000002b8::2015/06/05-23:17:07.102 INFO  [RCM] TransitionToState(PMDTC) OfflineCallIssued-->OfflinePending.
00000828.00000398::2015/06/05-23:17:07.181 INFO  [RES] Network Name <PMDTC>: DNS name PMDTC successful removed from LSA
00000828.00000398::2015/06/05-23:17:07.181 INFO  [ClNet] Adapter Local Area Connection* 9 RFC2863 operational status = 1.
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet] Created adapter: DeviceGuid:     46BE6EB5-4725-45FF-8FE5-74B59097E912
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DeviceName:     Microsoft Failover Cluster Virtual Adapter
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  ConnectoidName: Local Area Connection* 9
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  Netbios/TCP:    1
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DNS Suffix:
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      fec0:0:0:ffff::1%1
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      fec0:0:0:ffff::2%1
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      fec0:0:0:ffff::3%1
00000828.00000398::2015/06/05-23:17:07.181 INFO  [ClNet] Adapter LAN RFC2863 operational status = 1.
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet] Created adapter: DeviceGuid:     0BC53ED7-5CBE-437B-8C1F-F67E8C5CD420
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DeviceName:     Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  ConnectoidName: LAN
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  Netbios/TCP:    1
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DNS Suffix:
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      10.6.125.62
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      10.6.125.61
00000828.00000398::2015/06/05-23:17:07.181 INFO  [ClNet] Adapter HB RFC2863 operational status = 1.
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet] Created adapter: DeviceGuid:     1358526B-EE0C-45D4-937B-5F20B361A9BB
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DeviceName:     Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #38
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  ConnectoidName: HB
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  Netbios/TCP:    0
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DNS Suffix:
00000828.00000398::2015/06/05-23:17:07.181 INFO  [ClNet] Adapter isatap.{1358526B-EE0C-45D4-937B-5F20B361A9BB} RFC2863 operational status = 2.
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet] Created adapter: DeviceGuid:     846D2E37-5A7C-4D66-9EB7-5FD7FC29C3A4
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DeviceName:     Microsoft ISATAP Adapter #2
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  ConnectoidName: isatap.{1358526B-EE0C-45D4-937B-5F20B361A9BB}
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  Netbios/TCP:    0
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DNS Suffix:
00000828.00000398::2015/06/05-23:17:07.181 INFO  [ClNet] Adapter isatap.{46BE6EB5-4725-45FF-8FE5-74B59097E912} RFC2863 operational status = 2.
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet] Created adapter: DeviceGuid:     E8236BA9-0674-43A0-A07B-CA61FD9BE6C4
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DeviceName:     Microsoft ISATAP Adapter
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  ConnectoidName: isatap.{46BE6EB5-4725-45FF-8FE5-74B59097E912}
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  Netbios/TCP:    0
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DNS Suffix:
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      fec0:0:0:ffff::1%1
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      fec0:0:0:ffff::2%1
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      fec0:0:0:ffff::3%1
00000828.00000398::2015/06/05-23:17:07.181 INFO  [ClNet] Adapter isatap.{0BC53ED7-5CBE-437B-8C1F-F67E8C5CD420} RFC2863 operational status = 2.
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet] Created adapter: DeviceGuid:     AE997A47-D030-4D0E-92D7-50110E896BC8
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DeviceName:     Microsoft ISATAP Adapter #3
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  ConnectoidName: isatap.{0BC53ED7-5CBE-437B-8C1F-F67E8C5CD420}
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  Netbios/TCP:    0
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DNS Suffix:
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      10.6.125.62
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DnsServer:      10.6.125.61
00000828.00000398::2015/06/05-23:17:07.181 INFO  [ClNet] Adapter Teredo Tunneling Pseudo-Interface RFC2863 operational status = 2.
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet] Created adapter: DeviceGuid:     CACD9FCB-E362-43CD-A380-12A12AF73665
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DeviceName:     Teredo Tunneling Pseudo-Interface
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  ConnectoidName: Teredo Tunneling Pseudo-Interface
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  Netbios/TCP:    0
00000828.00000398::2015/06/05-23:17:07.181 DBG   [ClNet]                  DNS Suffix:
00000828.00000398::2015/06/05-23:17:07.181 INFO  [RES] Network Name <PMDTC>: adapter LAN (2)
00000828.00000398::2015/06/05-23:17:07.181 INFO  [RES] Network Name <PMDTC>: FQDN name PMDTC.moka.com.kw removal with LSA was successful
00000828.00000398::2015/06/05-23:17:07.181 INFO  [RES] Network Name <PMDTC>: Deleted server name PMDTC from all transports.
00000828.00000398::2015/06/05-23:17:07.181 INFO  [RES] Network Name <PMDTC>: Deleted workstation name PMDTC from transport 0.
00000828.00000398::2015/06/05-23:17:07.181 INFO  [RHS] Resource PMDTC has come offline. RHS is about to report resource status to RCM.
000002d8.00000638::2015/06/05-23:17:07.181 INFO  [RCM] HandleMonitorReply: OFFLINERESOURCE for 'PMDTC', gen(2) result 0.
00000828.00000398::2015/06/05-23:17:07.181 INFO  [RES] Network Name <PMDTC>: Resource is now offline
000002d8.00000638::2015/06/05-23:17:07.181 INFO  [RCM] TransitionToState(PMDTC) OfflinePending-->OfflineSavingCheckpoints.
000002d8.00000f68::2015/06/05-23:17:07.181 INFO  [RCM] TransitionToState(PMDTC) OfflineSavingCheckpoints-->Offline.
000002d8.00001250::2015/06/05-23:17:07.181 INFO  [RCM] TransitionToState(IP Address 10.4.225.223) WaitingToGoOffline-->OfflineCallIssued.
00000828.0000112c::2015/06/05-23:17:07.181 INFO  [RES] IP Address <IP Address 10.4.225.223>: Taking resource offline...
00000828.0000112c::2015/06/05-23:17:07.181 INFO  [RES] IP Address <IP Address 10.4.225.223>: Deleting IP interface DFE1040A.
000002d8.000002b8::2015/06/05-23:17:07.181 DBG   [NETFTAPI] received NsiDeleteInstance  for 10.4.225.223
000002d8.000002b8::2015/06/05-23:17:07.181 WARN  [NETFTAPI] Failed to query parameters for 10.4.225.223 (status 80070490)
000002d8.000002b8::2015/06/05-23:17:07.181 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.4.225.223
00000828.0000112c::2015/06/05-23:17:07.196 INFO  [RES] IP Address <IP Address 10.4.225.223>: Address 10.4.225.223 on adapter Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37 offline.
000002d8.00001430::2015/06/05-23:17:07.196 INFO  [RCM] HandleMonitorReply: OFFLINERESOURCE for 'IP Address 10.4.225.223', gen(0) result 0.
000002d8.00001430::2015/06/05-23:17:07.196 INFO  [RCM] TransitionToState(IP Address 10.4.225.223) OfflineCallIssued-->OfflineSavingCheckpoints.
000002d8.00001430::2015/06/05-23:17:07.196 INFO  [RCM] TransitionToState(IP Address 10.4.225.223) OfflineSavingCheckpoints-->Offline.
00000828.00000f34::2015/06/05-23:17:07.321 INFO  [RES] Physical Disk: HardDiskpScopeShareCallback: Enter resourceName PMDTC
00000828.00000f34::2015/06/05-23:17:07.321 ERR   [RES] Physical Disk: Resource PMDTC is not in online or pending state.
00000828.00000f34::2015/06/05-23:17:07.321 INFO  [RES] Physical Disk <DTC>: HardDiskpCloseSVIHandles: Exit
00000828.00000f34::2015/06/05-23:17:08.350 INFO  [RES] Physical Disk: ReleaseDisk: stop reserve succeeded on device 1 (sig 486f61a2)
00000828.00000f34::2015/06/05-23:17:08.350 INFO  [RHS] Resource DTC has come offline. RHS is about to report resource status to RCM.
000002d8.00001430::2015/06/05-23:17:08.350 INFO  [RCM] HandleMonitorReply: OFFLINERESOURCE for 'DTC', gen(0) result 0.
000002d8.00001430::2015/06/05-23:17:08.350 INFO  [RCM] TransitionToState(DTC) OfflinePending-->OfflineSavingCheckpoints.
000002d8.00001430::2015/06/05-23:17:08.350 INFO  [RCM] TransitionToState(DTC) OfflineSavingCheckpoints-->Offline.
000002d8.00001430::2015/06/05-23:17:08.350 INFO  [RCM] rcm::RcmGroup::UpdateStateIfChanged: (PMDTC, Pending --> Failed)
000002d8.00000638::2015/06/05-23:17:08.350 INFO  [RCM] rcm::RcmGum::GroupMoveOperation(PMDTC,2)
000002d8.00001430::2015/06/05-23:17:08.366 WARN  [RCM] rcm::RcmApi::ResourceControl: retrying: PMDTC, 5908.


MCP MCSA MCSE MCT MCTS CCNA

Live migration of 'Virtual Machine ADVM-01' failed. Event ID: 21502


I have an HA failover cluster running on Windows Server 2012 R2. It is running Windows 2008, 2008 R2 and 2012 VMs.

The Integration Services are already installed. When I try to live migrate a VM to the other node, it fails.

The Event Viewer shows the error message below:

" Live migration of 'Virtual Machine ADVM-01' failed.

Virtual machine migration operation for 'ADVM-01' failed at migration source 'NODE01'. (Virtual machine ID D840382C-194B-4B4F-8BF5-19552537D0EF)

'ADVM-01' failed to delete configuration: The request is not supported. (0x80070032). (Virtual machine ID D840382C-194B-4B4F-8BF5-19552537D0EF) "
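For reference, here is a sketch of the check I can run on the source node NODE01 in case the integration services level matters (the VM name is taken from the error above):

# Check the VM's state, configuration version and integration services level on the source node.
Get-VM -Name 'ADVM-01' | Format-List Name, State, Version, IntegrationServicesVersion, IntegrationServicesState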

Please advise.


Regards, COMDINI

Failover Cluster Virtual Adapter - This device is not working, Error Code 31


Hi there,

I have two identical servers with 2012 R2 installed and am having an issue with the Failover Cluster Virtual Adapter on one of them. I must have Hyper-V installed for some other software I am running, but we are not running any VMs. I installed the Hyper-V role first, set up NIC teaming, and then installed the Failover Clustering feature. The problem I'm having is that one of the systems is showing an error with the Failover Cluster Virtual Adapter.

Opening Device Manager, the "Hyper-V Virtual Ethernet Adapter" is shown under the Network Adapter section with a yellow triangle, and the following message:

This device is not working properly because Windows cannot load the drivers required for this device. (Code 31)

I have since reinstalled Failover Clustering but that did not fix it. I found an article about disabling IP offloading on the physical NICs that are part of the team, but that did not resolve it either, even after another Failover Clustering reinstall. I have just uninstalled the Failover Clustering and Hyper-V roles and am trying a reinstall of both to see whether the problem is somewhere in Hyper-V.
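For reference, here is a quick sketch of how I check the adapter's error code from PowerShell between reinstall attempts (Code 31 should show up as ConfigManagerErrorCode 31):

# List virtual/hidden adapters that Device Manager reports with a non-zero error code.
Get-WmiObject Win32_PnPEntity -Filter "ConfigManagerErrorCode <> 0" |
    Where-Object { $_.Name -like '*Failover Cluster Virtual Adapter*' -or $_.Name -like '*Hyper-V Virtual Ethernet*' } |
    Format-Table Name, ConfigManagerErrorCode, Status -AutoSize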

A similar issue to this thread: https://social.technet.microsoft.com/Forums/en-US/1cc9cc76-6e5f-4198-9936-c6ffcb67328c/hyperv-virtual-ethernet-adapter-not-working-probperly-code-31-on-host?forum=virtualmachingmgrhyperv

Thanks,
