Channel: High Availability (Clustering) forum

Storage Replica not configuring


Hi,

I am trying to configure Storage Replica, but I am having no luck. What I have is the following:

2 servers; each server has 1 SAS disk for the OS, 1 SAS disk (250 GB) for the log, and 1 SATA SSD (1 TB) for data.

I can see the instructions state this can be configured with DAS (locally installed disks, right?). There is no mention of requiring SAS or SATA, so I'm assuming it works with both.

My final goal is a 2-node cluster with this storage replica configured so I can make a CSV out of it (I may be a little confused over whether that part is possible), but I can't even get past the replica configuration at the moment. I am following the server-to-server Storage Replica scenario, but I get the following error when trying to configure the replication group and partnership:

New-SRPartnership : Unable to create replication group rg01, detailed reason: Element not found.
At line:1 char:1
+ New-SRPartnership -SourceComputerName hyper-vrep1 -SourceRGName rg01  ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (MSFT_WvrAdminTasks:root/Microsoft/...T_WvrAdminTasks) [New-SRPartnership], CimException
    + FullyQualifiedErrorId : Windows System Error 1168,New-SRPartnership

My instructions are coming from https://technet.microsoft.com/en-us/library/mt126103.aspx.
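For reference, the documented server-to-server sequence looks roughly like this. This is only a sketch: it assumes release-era cmdlet and feature names (which may differ in preview builds), and the volume letters (D: data, E: log) are placeholders for this environment.

# Both nodes need the feature installed first:
Install-WindowsFeature Storage-Replica, FS-FileServer -IncludeManagementTools -Restart

# Validate the topology before creating the partnership; "Element not found"
# often means a volume or log path one of the nodes cannot resolve:
Test-SRTopology -SourceComputerName hyper-vrep1 -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName hyper-vrep2 -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -DurationInMinutes 5 -ResultPath C:\Temp

# Then create the replication group and partnership in one step:
New-SRPartnership -SourceComputerName hyper-vrep1 -SourceRGName rg01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName hyper-vrep2 -DestinationRGName rg02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E: -LogSizeInBytes 8GB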

Storage Spaces Direct looks like an interesting feature, and it definitely seems to allow making a CSV shared across the nodes with local storage (hyper-converged scenario), but all the instructions look like they require 4 servers and I only have 2. I was wondering whether 4 servers is a recommended or a required amount; I can't get that to work with 2 servers either, but I might not be meeting the requirements.

Any insight into this would be much appreciated.

Thanks,

Steve


Event ID : 21502 "failed to register the service principal name: The user name or password is incorrect. (0x8007052E)."


Hi

We have configured a two-node 2012 Hyper-V cluster, but after configuring the Replica it is showing a failed state and we are getting event ID

Event ID : 21502

'Hyper-V Replica Broker xxxxxxxxReplica' failed to register the service principal name: The user name or password is incorrect. (0x8007052E).
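One first check worth running (a sketch; the broker name below is just the placeholder from the event text): list the SPNs on the broker's computer object and confirm the broker's Network Name resource is healthy, since the 0x8007052E failure occurs when that object cannot authenticate to register its SPN.

# List the SPNs currently registered on the broker's computer object:
setspn -L xxxxxxxxReplica

# SPN registration is performed by the broker's Network Name resource;
# check its state:
Get-ClusterResource | Where-Object { $_.ResourceType.Name -eq "Network Name" } | Format-Table Name, State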

Windows 2012 three-node cluster quorum configuration


Hi

I have a 3-node Windows 2012 R2 cluster. Now I want to perform the quorum configuration, so if anybody has a document or steps, please let us know.
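Until someone posts a full document, the PowerShell side is short. A sketch, assuming the FailoverClusters module is loaded; the disk resource name is a placeholder that may differ in your cluster:

# For a three-node cluster, node majority is the usual choice:
Set-ClusterQuorum -NodeMajority

# Or include a shared disk as a witness as well:
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

# Verify the resulting configuration:
Get-ClusterQuorum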

Cannot start one VM from saved state


After I changed my server's processor, one of my VMs cannot restore from its saved state; the event IDs are 24004 and 12050. My Hyper-V server is Server 2012 R2 with the latest updates installed.

The virtual machine 'vmname' is using processor-specific features not supported on physical computer 'hostname'. To allow for migration of this virtual machine to physical computers with different processors, modify the virtual machine settings to limit the processor features used by the virtual machine. (Virtual machine ID 0DDAA6B3-B278-4A2B-90D1-F422AA1A2955)

Please help
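If losing the in-memory state is acceptable, the common way out is to discard the saved state and enable processor compatibility before restarting. A sketch, using the 'vmname' placeholder from the event:

# Discard the saved state taken on the old CPU (the VM cold-boots instead):
Remove-VMSavedState -VMName 'vmname'

# While the VM is off, limit processor features so future saves/migrations
# survive CPU changes within the same vendor:
Set-VMProcessor -VMName 'vmname' -CompatibilityForMigrationEnabled $true
Start-VM -VMName 'vmname'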

Cluster sensitivity to planned outages


In the case of a virtualized Microsoft cluster using a converged infrastructure (Cisco UCS with network and storage over the network), is there a way to tweak the cluster services to be a bit more forgiving?

The scenario is that a node loses connectivity to the other node and also loses connectivity to the quorum, so the cluster service stops as the minimum number of votes is not met.

Any suggestions/recommendations in such a case?  I find it hard to believe this has never come up before, but I've searched and have not been able to find anything useful.
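The usual knobs for this are the cluster heartbeat properties. A sketch with illustrative values, not recommendations:

# Delay = ms between heartbeats, Threshold = missed heartbeats before a node
# is declared down (same-subnet defaults are 1000 ms / 5):
(Get-Cluster).SameSubnetDelay = 2000
(Get-Cluster).SameSubnetThreshold = 10
(Get-Cluster).CrossSubnetDelay = 2000
(Get-Cluster).CrossSubnetThreshold = 10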

Inquiry Regarding Hyper-V Witness Disk Configuration In Windows Server 2012 R2


Good afternoon,

I am in the process of re-setting up my 5th hypervisor, which will be running Windows Server 2012 R2 Datacenter. I will be using this hypervisor to rebuild my Hyper-V failover cluster, which will include the 4 existing hypervisors that I will be upgrading to Windows Server 2012 R2.

The 4 hypervisors I will be upgrading currently have a witness disk configured, and what I need to know is whether I can use the existing witness disk or whether I should create a new one.

If I should create a brand-new witness disk, will these steps suffice (see the sketch after the list)?

1. Create a new LUN, 2 GB in size.

2. Create a 2 GB virtual disk in the 2 GB LUN.

3. Attach the LUN to the 5th hypervisor via Failover Cluster Manager.

4. Right-click the cluster name, Configuration, Configure the Quorum.
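In PowerShell terms, steps 3 and 4 correspond roughly to the following. A sketch; "Cluster Disk 1" is a placeholder for whatever the new LUN's disk resource ends up being called:

# Add the new LUN to available cluster storage:
Get-ClusterAvailableDisk | Add-ClusterDisk

# Point the quorum configuration at it:
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"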

Please let me know if I am going about this correctly.

Thanks in advance!

Cluster not working!


Hi all,

I created a cluster with 2 nodes (Windows Server 2012 R2 Standard):

- Node 1: 2 x CPU E5-2630 2.3 GHz

- Node 2: 1 x CPU E5-2620 2.0 GHz

- CSV storage on IBM v3700 by iSCSI

Both nodes have all hotfixes applied from Microsoft Update.

And they passed validation OK.

But when I try live migration, I get errors with these event IDs:

- 1069

Cluster resource 'Virtual Machine New2' of type 'Virtual Machine' in clustered role 'New2' failed. The error code was '0x5' ('Access is denied.').

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

- 1205

The Cluster service failed to bring clustered role 'New2' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered role.

- 1254

Clustered role 'New2' has exceeded its failover threshold.  It has exhausted the configured number of failover attempts within the failover period of time allotted to it and will be left in a failed state.  No additional attempts will be made to bring the role online or fail it over to another node in the cluster.  Please check the events associated with the failure.  After the issues causing the failure are resolved the role can be brought online manually or the cluster may attempt to bring it online again after the restart delay period.

I checked the event log and found errors from the Hyper-V role:

- 3080

'New2' could not create or access saved state file C:\ClusterStorage\Volume1\New2\Virtual Machines\EB6D4BA0-6AB6-42C6-9999-89FD9C6246C0\EB6D4BA0-6AB6-42C6-9999-89FD9C6246C0.vsv. (Virtual machine ID EB6D4BA0-6AB6-42C6-9999-89FD9C6246C0)

- 3040

'New2' could not initialize. (Virtual machine ID EB6D4BA0-6AB6-42C6-9999-89FD9C6246C0)
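As the 1069 text itself suggests, the failed pieces can be enumerated from PowerShell; nothing environment-specific is assumed beyond the role name from the events:

# List everything currently failed, then the resources of role 'New2':
Get-ClusterResource | Where-Object { $_.State -eq 'Failed' }
Get-ClusterGroup 'New2' | Get-ClusterResource

# The 0x5 / 3080 pair points at the .vsv file under C:\ClusterStorage, so
# also confirm the CSV is online from the node that failed:
Get-ClusterSharedVolume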

So, has anybody faced the same error and already resolved it? Please support me!

Cluster Heartbeat is using the wrong network


Hi there.

I have a very strange phenomenon at one of my customers and can't really understand why it happened.

This is a Windows 2008 R2 two-node cluster (currently patched).

We have a public network, a cluster network, and a live migration network. Storage is Fibre Channel.

The live migration network is a crossover 10 Gbit connection; the rest are non-teamed 1 Gbit switch connections.

All three networks are enabled for cluster use. The cluster network was the one with the lowest metric.

Due to some network problems we decided to move the cluster traffic to the 10 Gbit connection, so I lowered the metric of the live migration network.
Today we tested the configuration. As we disabled the switch port for the "old" cluster network, we immediately ran into a split-brain situation! All the other networks were still up and running, so the nodes had three routes in total to see each other. In any case, we had disabled a non-primary heartbeat network, so nothing should have happened...

I've dug through the cluster logs and can see entries showing that the route over the "old cluster network" vanished and that both nodes decided the other one was dead!
The cluster validation test is fine!

When I use Network Monitor to see where the heartbeat packets are actually flowing, I can see them on the "old cluster network" and not on the live migration network, which they should be using.

So according to all the settings I've checked, heartbeats should be running through the live migration network and all the other networks should provide redundancy. But the heartbeats are still running via the cluster network, and redundancy is not working.
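It is worth comparing what the cluster itself reports against what was configured. A sketch; the network name is a placeholder for this environment:

# Show the metrics the cluster is actually using (the lowest metric is
# preferred for internal cluster traffic):
Get-ClusterNetwork | Format-Table Name, Metric, AutoMetric, Role

# Force the 10 Gbit network to the lowest metric (this also turns AutoMetric
# off for that network):
(Get-ClusterNetwork "LiveMigration").Metric = 100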

Does anybody have an idea?


Regards
Olaf



I/O performance throttles down consistently every minute with the same pattern (Windows Clustering 2012 R2 + SQL Clustering 2012 R2)


Dear Expert...

I have just finished installing Windows Clustering 2012 R2 + SQL Clustering 2012 R2. However, when I run an I/O performance test, the result is not perfect: per the graph (not reproduced here), I/O throttles down consistently every minute with the same pattern. Any idea?

Note: the cluster is connected to SAN storage. The SAN storage tested fine with another, standalone server.
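To take SQL Server out of the equation, the dip can usually be reproduced with a raw I/O generator. A sketch using Microsoft's DiskSpd with illustrative parameters and a placeholder test path:

# 8K random I/O, 30% writes, 4 threads, 8 outstanding I/Os per thread,
# 5-minute run with latency stats; a once-a-minute dip shows up clearly
# over 300 seconds:
diskspd.exe -b8K -d300 -o8 -t4 -r -w30 -L -c10G E:\testfile.dat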

Many Thanks!


2012 R2 cluster: move storage between LUNs without downtime


I have a 2012 R2 cluster with nodes masked to 2 LUNs (on different iSCSI SANs). VMs are running from both LUNs. What is the best way to move a running VM with its storage to LUN1 without downtime?
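For a clustered VM on CSV, a live storage migration does this without downtime. A sketch; the VM name and destination path are placeholders for the CSV folder backed by LUN1:

# Moves the VHDs, configuration, and checkpoints while the VM keeps running:
Move-VMStorage -VMName 'VM01' -DestinationStoragePath 'C:\ClusterStorage\Volume1\VM01'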


HP P4000 Lefthand + Server 2012 R2 + Failover Clustering


Hi Everyone,

Here's some background:

We have 2x HP BL460c G7 blades in a C3000 chassis, attached via 2x VirtualConnect modules to 6x HP P4500 G2 LeftHands via iSCSI. There's an AD 2012 R2 directory service in place, and the 2x blades are running Server 2012 R2 Full and are joined to the domain. Both blades have had failover clustering installed on them as well.

I have installed the MPIO feature on both nodes along with the HP MPIO DSM, and enabled multipathing in the MPIO control panel. Multipathing works fine; all the paths show up to each LeftHand node via the 2x iSCSI NICs on each blade. The multipathing setup matches HP's and Microsoft's best-practices guides online.

All HP LeftHands are running OS version 12.0 and are part of a single management group with a single cluster. Volumes have been created and assigned to each of the blades.

All HP blades are running the latest firmware for the BIOS, FlexCards, and network adapters; the C3000 OAs are at the latest version and the Virtual Connect modules are running 4.41 (latest).

Our Issue:

When I go to create a new failover cluster and run the validation check, I get the exact errors described in this blog: http://blogs.technet.com/b/askcore/archive/2009/04/15/windows-2008-failover-cluster-validation-fails-on-validate-scsi-3-persistent-reservation.aspx.

The blog and HP's documentation say SPC-3 is supported on HP P4500 G2 LeftHands.

We have no idea what we are missing, apart from the fact that the Lefthand doesn't show up in the MPIO configuration window as an SPC-3 compliant device.

In the following guide, which refers to Fibre Channel, the P4000 LeftHand is actually listed as an SPC-3 compliant device: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA4-4517ENW.pdf. But that would be because an HP HBA connects via Fibre Channel to the LeftHand P4000 storage, whereas iSCSI goes through the Microsoft iSCSI initiator, which has no drivers from HP: it's a Microsoft software device, and the only 'driver' required is the DSM, which is not a driver at all.

Normally we put ESXi in front of LeftHands, but we are doing Hyper-V this time with Windows Server 2012 R2 nodes, and this is the first time we have seen this error. I have done Hyper-V 2008 R2 with an HP P2000 G3 and not seen these issues, although that was SAS, not iSCSI - so we had HP HBAs + P2000.

Anyone else come across this before?

We cannot proceed with building the cluster until this error is cleared. I am about to raise this with HP, but I believe this is more a Microsoft issue, as we are at the mercy of the Microsoft iSCSI initiator on this one. I am sure there is someone out there who has done a failover cluster with an HP P4000 iSCSI LeftHand before... come on! :P
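For iterating on this, the persistent-reservation checks can be rerun in isolation rather than via the whole validation suite. A sketch with placeholder node names:

# Reruns only the storage tests (including Validate SCSI-3 Persistent
# Reservation); note storage tests take the tested disks offline, so run
# this before the LUNs carry production data:
Test-Cluster -Node node1, node2 -Include "Storage"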

Thanks,

Stefano

Unable to destroy Windows 2008 R2 failover cluster after SAN rebuild


I created a Windows 2008 R2 failover cluster for a SQL 2008 active/passive cluster. This two-node cluster was using a SAN device for a quorum disk resource as well as an MSDTC resource.

 

Well... I decided to reconfigure the SAN device, but I didn't destroy the cluster first. Now that the quorum disk and MSDTC disk are completely gone, the cluster is obviously not working. But I can't even destroy the cluster and start again. I've tried from the Windows clustering tool as well as the command line. I was able to get the cluster service to start using the "/fixquorum" parameter. After doing this I was able to remove the passive node from the cluster, but it wouldn't let me destroy the cluster because the default resource group and MSDTC are still attached as resources. I tried to delete these resources from both the GUI tool and the command line. It will either freeze for several minutes and crash the program, or, once, it even BSOD'd the server.

 

Can someone advise on how to destroy this cluster so I can start over?
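In case it helps the next person, the usual way out of this state is a forced cleanup of each node rather than a graceful destroy. A sketch; the node name is a placeholder:

# Strips the cluster configuration off a node even when the cluster service
# cannot form quorum (2008 R2 FailoverClusters module):
Import-Module FailoverClusters
Clear-ClusterNode -Name node1 -Force

# Legacy equivalent:
cluster.exe node node1 /forcecleanup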

Live Migration and Storage Migration


Can anyone please help me with the answer to the question below?

You have a datacenter with 6 servers. Each server has the Hyper-V role installed and runs Windows Server 2012 R2. The servers are configured as below:

Hostname    Processor Manufacturer    Storage Type
Host1       Intel                     Local Disk
Host2       AMD                       iSCSI Disk
Host3       AMD                       Local Disk
Host4       Intel                     CSV
Host5       Intel                     CSV
Host6       Intel                     iSCSI Disk


Host4 and Host5 are part of a cluster named Cluster1. Cluster1 hosts a VM, VM1. You need to move VM1 to another Hyper-V host. The solution must minimize the downtime of VM1. To which server and by which method should you move VM1?

1) To Host3 by using a storage migration

2) To Host6 by using a storage migration

3) To Host2 by using a Livemigration

4) To Host1 by using a quick migration

In my view, option 4 is eliminated, as quick migration involves some downtime compared to live migration.

Option 3 is also eliminated because of the different processor manufacturers.

Is the answer option 2, to Host6 by using a storage migration?
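For what it's worth, the zero-downtime mechanics between these hosts look like this in PowerShell. A sketch; the destination path is a placeholder:

# Shared-nothing live migration: moves VM1 and its storage together with no
# downtime, but requires the same processor vendor on both hosts:
Move-VM -Name VM1 -DestinationHost Host6 -IncludeStorage -DestinationStoragePath 'D:\VM1'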

                 


Pallab Chakraborty

Server 2012 R2 Witness Disk won't failover


Hi,

I have a 2-node cluster with a shared witness disk for quorum. When I lose the connection to the disk from one node, ownership fails over to the other node; this is what I expect. However, if I test again shortly afterwards, it doesn't fail over: it just goes into the offline state, and I have to move it manually, after which it fails over.

It doesn't matter how many times I test this; it simply doesn't fail over after that first attempt. However, several hours later (after I slept and tested it the next morning), it again fails over correctly, but subsequent tests bring it offline. I have looked through every tab in the properties for the witness, cluster name, and IP address and cannot find anything that could relate to this timeout of several hours. All values are at their defaults, although I did increase the number of allowed failures within the specified time from 1 to 10... but that made no difference.

I get the following two events

ID 1038

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it

ID 1069

Cluster resource 'Cluster Disk 1' of type 'Physical Disk' in clustered role 'Cluster Group' failed.

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

How can I make the witness disk fail over repeatedly, more than once within several hours? I'm wondering if it's a PowerShell-only configuration, but I have no idea what the command would be.
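The several-hour window looks like the group-level failover policy, which for the core 'Cluster Group' is exposed only through PowerShell, not the GUI. A sketch with illustrative values:

# FailoverThreshold/FailoverPeriod live on the group, not the disk resource;
# by default a group gets a limited number of failovers per FailoverPeriod
# (hours) before it is left in a failed state:
Get-ClusterGroup "Cluster Group" | Format-List Name, FailoverThreshold, FailoverPeriod

(Get-ClusterGroup "Cluster Group").FailoverThreshold = 10
(Get-ClusterGroup "Cluster Group").FailoverPeriod = 1    # hours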

thanks

Steve

Cluster Shared Volume error after server not shutting down properly


Hi,
We have two IBM x240 servers (call them server A and server B) connected to an IBM V3700 disk system via Fibre Channel HBAs.

Both servers are running Windows 2012 R2.

We implemented a VM cluster and everything was working well.

Last week both servers went down due to a power outage in my server room.

After turning on server A, the following error came up:

Windows failed to start. A recent hardware or software change might be the cause.
File: \windows\system32\drivers\msdsm.sys
Status: 0xc0000017
Info: The operating system couldn't be loaded because a critical system driver is missing or contains errors.

After using the Last Known Good Configuration, we could log in to the system and turn on the clustered virtual machines.

It seemed everything was fine at that point.

So I went and started server B and logged in to the system using the same method as on server A.

I found that all the VMs were shut down or hit errors due to a Cluster Shared Volume error.

Below are some errors captured from the system logs.

* Event 5142, Cluster Shared Volume 'Volume7' ('Cluster Disk 10') is no longer accessible from this cluster node because of error '(1460)'. Please troubleshoot this node's connectivity to the storage device and network connectivity.

* Event 5120,Cluster Shared Volume 'Volume3' ('Cluster Disk 4') has entered a paused state because of '(c00000be)'. All I/O will temporarily be queued until a path to the volume is reestablished.

Now we can only turn on one server and must shut down the other; if I turn on both servers, the errors come back and the server goes down.
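While only one server is up, the per-node CSV connectivity can be checked before powering the second one on. A quick check; nothing environment-specific assumed:

# Shows each CSV's I/O state (Direct / redirected / unavailable) per node:
Get-ClusterSharedVolume | Get-ClusterSharedVolumeState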

Any suggestions? Let me know if I need to provide more information.

Thanks.


Windows 2012 File Server cluster - issue with adding file share


Greetings All,

I have a physical 2-node (nodes A and B) Windows 2012 Standard cluster with the File Server role installed (standard SMB, no CSV/CAFS). Both nodes are alike hardware- and software-wise, with the same patching level. The LUNs presented for clustering are served over iSCSI (EMC VNX).

The issue I'm running into is with creating new shares. While running the 'Add File Share -> Quick' wizard from Failover Cluster Manager on node B, with the file server active on node A, I don't see my (online) volume available for selection under 'Share location' -> 'Select by volume'; it is grayed out. The same happens when I try to run the wizard locally on the active node. Only when I run the wizard from node A, with the file server active on node B, am I able to see the volume in the wizard. Running the wizard locally on the active node yields the same problem.

I'm using Domain Admin account for all above.

As a separate note, I get the following warning message on the first screen when running add file share wizard:

"Unable to retrieve all data needed to run the wizard. Error details: Cannot retrieve information from server (A or B, depending on which node I run it from). Error occurred during enumeration of SMB shares. The WinRM client sent a request to HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support WS-Management protocol"

Obviously I don't have additional roles/features installed to provide support for HTTP-based requests, but I don't think that is mandatory. Besides, I get the same error even in the only scenario mentioned above that allows me to create shares.

Also, trying to create new shares from 'Server Manager -> File and Storage Services -> Shares' hangs indefinitely with 'Loading data...'.
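As a possible workaround while the wizard misbehaves, clustered shares can be created directly with the SMB cmdlets. A sketch; all names are placeholders, and -ScopeName is the clustered file server's client access point name:

# Creates a share scoped to the clustered file server rather than the node:
New-SmbShare -Name "Share1" -Path "E:\Shares\Share1" -ScopeName "FILESERVER1" -FullAccess "DOMAIN\Admins"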

Any help getting new share creation to work consistently (making the volume appear when running the wizard from either node) would be greatly appreciated and rewarded.

Thanks,

Peter

NLB Problem on Windows Server 2008 R2


Hello,

An Exchange CAS array (NLB) had already been configured with CAS nodes. When I tried to add a third node, the NLB console showed a misconfiguration. I then deleted the 2nd node, which had been working well, and tried to add the third node again, but hit the same issue; and while it showed the misconfiguration, the NLB NIC suffered ping loss. NLB is configured in unicast mode.

All the servers are physical.

The error below:


Processing update 19 from "NLB Manager on abc.bcd.com"
Starting update...
Going to modify cluster configuration...
Modification failed.

Update failed with status code 0x8004100a.

Please suggest
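A dump of what each host reports about the cluster may show where the node parameters diverge before the third host is re-added. A sketch using the NLB module that ships with 2008 R2:

Import-Module NetworkLoadBalancingClusters

# Compare the configuration each node reports:
Get-NlbCluster | Get-NlbClusterNode | Format-List *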

General questions on how to plan a highly available environment in small businesses


Preface: Questions are in the text and marked bold.

I read a lot about Hyper-V clustering, but never have I seen practical tips or best practices when it comes to building a highly available environment in small businesses. Creating a huge environment that comprises multiple servers is not viable in small businesses where budgets are usually low.

I am starting with the general idea of creating a failover cluster and virtualising the required services, which are Active Directory services, file services, print services, certification services, etc. (sketch #1). So if one cluster node fails, the company is still able to access critical services and have a productive business day.

(sketch #1 - image not reproduced)

I am assuming that the Active Directory domain for the failover cluster (here: cluster.local) is independent from the domain containing the direct business services (file services, and so on). My first question is whether this thought is legit. Do you use a separate AD domain for the cluster and another for the business data? Or do you use a single AD domain that comprises both the failover cluster and the business services?

Continuing with that thought, I create a new highly available virtual machine (HAVM) within the cluster. That will be the domain controller for the business domain ad.example.com (seen in sketch #2 below). In the same way I create two other HAVMs, DC2.ad.example.com and PRINT.ad.example.com, which represent an additional domain controller and a print server. My question here is whether this is still common practice.

Now, as far as I know, there are some services that cannot, or should not, be made available as an HAVM. One would be MSSQL, which brings its own high-availability technology. Since the company is on a small budget, I cannot buy two additional servers, but I can instead add another CPU and some RAM to each cluster node. Other services that (AFAIK) should not simply be installed into a highly available machine are the file services. For MSSQL I would create a normal (non-HA) virtual machine on each node (CL1.cluster.local + CL2.cluster.local), install MSSQL on both machines, and let them handle the HA themselves. For the file services I would also create two normal virtual machines, as shown in sketch #2 below (FILECL1.ad.example.com + FILECL2.ad.example.com). With those two VMs I would create another cluster and make only the file services highly available (FILE.ad.example.com). On this point too I do not know whether this is the proper solution or just bad practice.

(sketch #2 - image not reproduced)
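For the mechanical part of the HAVM idea, turning an existing VM into a highly available clustered role is a single cmdlet once its storage is on cluster storage. A sketch; the VM name comes from the sketches above:

# Makes an existing VM a highly available clustered role:
Add-ClusterVirtualMachineRole -VMName "DC1"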


Clustered Tiered Storage Space - no optimization report


I have a clustered SOFS server and am currently using CSVs for Hyper-V. I have enabled the optimization task and added the report per https://technet.microsoft.com/en-us/library/dn789160.aspx. The report does not generate. I have also tried to run the defrag command from the TechNet article manually. The issue seems to be that, when using CSVs in a cluster, defrag has no drive letters to run against. Anything constructive you can suggest would be helpful.

How do you monitor the tiers in Storage Spaces on a cluster with CSVs?
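For reference, the tier optimization pass can be pointed at a CSV mount path directly instead of a drive letter. A sketch, with a placeholder volume path; run it on the node that currently owns the CSV:

# /G performs storage-tier optimization, /H runs at normal priority
# (2012 R2 defrag flags):
defrag.exe C:\ClusterStorage\Volume1 /H /G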

Thanks,

Gary

 

Freezal

Event 1146 - Call ISALIVE timed out for resource 'SQL'


Hello guys!

In a Windows Server 2008 R2 environment with SQL Server 2005, the cluster service fails after a health-check timeout. There is no latency or path loss at the 'SQL' resource (LUN); all logs have been checked (Windows/cluster/SAN/switch/HBA) and no driver change or update has been made:

00000dfc.000021b8::2015/05/26-03:40:59.799 INFO  [VSS] OnIdentify Returned true
00000dfc.000023f8::2015/05/26-05:35:07.403 INFO  [NM] Received request from client address SAO-PAULO-01.
00000dfc.00001c64::2015/05/26-10:16:24.296 INFO  [NM] Received request from client address SAO-PAULO-01.
00000dfc.00001368::2015/05/26-10:56:22.015 INFO  [NM] Received request from client address SAO-PAULO-01.
00000e3c.00000d54::2015/05/26-12:17:06.000 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out for resource 'SQL'.
00000e3c.00000d54::2015/05/26-12:17:06.000 INFO  [RHS] Enabling RHS termination watchdog with timeout 1200000 and recovery action 3.
00000e3c.00000d54::2015/05/26-12:17:06.000 ERR   [RHS] Resource SQL handling deadlock. Cleaning current operation and terminating RHS process.
00000e3c.00000d54::2015/05/26-12:17:06.000 ERR   [RHS] About to send WER report.
00000dfc.00002220::2015/05/26-12:17:06.000 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'SQL', gen(0) result 4.
00000dfc.00002220::2015/05/26-12:17:06.000 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'SQL' consecutive failure count 1.
00000e3c.00000d54::2015/05/26-12:17:08.121 ERR   [RHS] WER report is submitted. Result : WerReportQueued.
00000dfc.00002220::2015/05/26-12:18:29.545 ERR   [RCM] rcm::RcmMonitor::RhsRpcResourceControl Error 1726 communicating with RHS process 3644.
00000dfc.00001808::2015/05/26-12:18:29.550 DBG   [NETFTAPI] received NsiDeleteInstance  for 192.168.177.167
00000dfc.00001808::2015/05/26-12:18:29.550 WARN  [NETFTAPI] Failed to query parameters for 192.168.177.167 (status 80070490)
00000dfc.00001808::2015/05/26-12:18:29.550 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 192.168.177.167
00000dfc.00001808::2015/05/26-12:18:29.551 DBG   [NETFTAPI] received NsiDeleteInstance  for 192.168.177.168
00000dfc.00001808::2015/05/26-12:18:29.551 WARN  [NETFTAPI] Failed to query parameters for 192.168.177.168 (status 80070490)
00000dfc.00001808::2015/05/26-12:18:29.551 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 192.168.177.168
00000dfc.000009a8::2015/05/26-12:18:29.552 DBG   [NETFTAPI] received NsiDeleteInstance  for 10.10.2.3
00000dfc.000009a8::2015/05/26-12:18:29.552 WARN  [NETFTAPI] Failed to query parameters for 10.10.2.3 (status 80070490)
00000dfc.000009a8::2015/05/26-12:18:29.552 DBG   [NETFTAPI] Signaled NetftLocalRemove  event for 10.10.2.3
00000dfc.00002540::2015/05/26-12:18:29.553 ERR   [RCM] rcm::RcmMonitor::RecoverProcess: Recovering monitor process 3644 / 0xe3c
00000dfc.00002540::2015/05/26-12:18:29.555 INFO  [RCM] Created monitor process 5616 / 0x15f0
00000dfc.00002220::2015/05/26-12:18:29.555 ERR   [RCM] rcm::RcmResource::Control: RPC_S_CALL_FAILED(1726)' because of 'result'
00000dfc.00002220::2015/05/26-12:18:29.555 ERR   [RCM] rcm::RcmResControl::DoResourceControl: RPC_S_CALL_FAILED(1726)' because of 'ResourceControl( GET_PRIVATE_PROPERTIES ) failed for resource 'SPCLUDtc'.'
00000dfc.00002220::2015/05/26-12:18:29.555 WARN  [RCM] ResourceControl(GET_PRIVATE_PROPERTIES) to SPCLUDtc returned 1726.
000015f0.000015a4::2015/05/26-12:18:29.656 INFO  [RHS] Initializing.

Should we suspect that a DLL could be corrupted? (This has happened 5 times, at 4- or 5-day intervals.)

Is there any other information or troubleshooting guidance for this?
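Before blaming a DLL, the health-check cadence for the resource can at least be inspected. A sketch; values are in milliseconds, and 4294967295 means "use the resource type default":

# Per-resource health-check settings involved in the ISALIVE/deadlock path:
Get-ClusterResource 'SQL' | Format-List Name, State, DeadlockTimeout, IsAlivePollInterval, LooksAlivePollInterval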

Thanks!


Luiz Mercante | MCITP SQL 2008 | MCTS SQL 2008 | MTA Database Fundamentals | MCTS Windows Apps | MCTS Windows Network | MCP 2003 | sqldicas@outlook.com | http://sqldicas.com.br --> If the answer was helpful in some way, mark it as the answer or vote it as helpful.
