No shared disks visible in the Cluster Configuration Storage dialog
When installing Oracle 10g Clusterware, the "Cluster Configuration Storage" dialog shows no shared disks.
We are using:
Windows 2003 Server
HP Eva 4400
Hello,
all disks in the cluster are visible from both nodes (2 of them).
We tested it with unpartitioned and partitioned disks (primary and extended). There was no way to make them visible to the OUI.
Automount is enabled in Windows, as required by Oracle.
Besides, we are using Standard Edition, so we have to work with ASM.
Let me know if any more information is needed.
Thanks in advance.
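As a sanity check on each node, the automount setting mentioned above can be inspected and enabled with diskpart from an elevated command prompt; a minimal sketch (`automount` alone shows the current setting, `automount enable` turns it on; prompts are illustrative):

```shell
C:\> diskpart
DISKPART> automount
DISKPART> automount enable
DISKPART> exit
```

A reboot of the node after changing the setting is commonly recommended before retrying the OUI.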
Similar Messages
-
Windows 2003 Standard Edition (Cluster Configuration Storage page)
I am trying to install RAC R2 on Windows Server 2003 (Standard Edition). I am using a FireWire 800 SIIG adapter to connect to a Maxtor OneTouch III external HDD.
When installing Cluster Services, I do not see the cluster storage devices. When I go to Computer Management, I see all the partitions of the raw device.
On the "Cluster Configuration Storage" page, the Available Disks list shows no partitions.
Oracle installation documentation says "On the Cluster Configuration Storage page, identify the disks that you want to use for the Oracle Clusterware files and, optionally, Oracle Cluster File System (OCFS) storage. Highlight each of these disks one at a time and click Edit to open the Specify Disk Configuration page where you define the details for the selected disk"
In my case, I do not see any disks. What am I missing?
Any thoughts? Please advise.
Thanks
-Prasad
Message was edited by: pinjam
You have a more fundamental problem: FireWire disks will not work for RAC on Windows. The storage needs to be shared, and FireWire disks can't be shared on Windows. On Linux, Oracle took the open-source FireWire driver and modified it to allow more than one host to connect. On Windows the driver is closed source, so they can't do that.
I presume you want to try out RAC on Windows. If so, another solution may be to download one of the many iSCSI servers that are available. Microsoft ships an iSCSI Initiator for Windows; together with an iSCSI target this lets you share a 'block device', which is what RAC needs. Then you can choose your RAC database storage method of choice: ASM, OCFS, or raw. I prefer ASM -
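Once an iSCSI target is published by whichever iSCSI server you pick, the Microsoft initiator can attach it from each node. A rough sketch using the iscsicli command-line tool (the portal address and IQN below are placeholders for whatever your iSCSI server publishes):

```shell
:: Run from an elevated prompt on each cluster node
iscsicli QAddTargetPortal 192.168.1.50
iscsicli QLoginTarget iqn.2005-01.example.com:rac-shared-disk
```

The attached disk then appears in Disk Management on every node, like any local disk.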
Local Cache Visibility from the Cluster
Hi, can you give me an explanation for the following Coherence issue, please?
I found in the documentation that the Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node and is accessible from a single JVM.
On the other hand, I also found the following statement:
“ Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The cache service provides the capability to access local caches from other cluster nodes.”
My questions are:
If I have a local off-heap NIO memory cache or an NIO File Manager cache on one Coherence node, can it be visible from other Coherence nodes as a clustered cache?
Also, if I have an NIO File Manager cache on a shared disk, is it possible to configure all nodes to work with that cache?
Best Regards,
Tomislav Milinovic
Tomislav,
I will answer your questions on top of your statements, OK?
"Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node and is accessible from a single JVM"
Considering the partitioned (distributed) scheme, Coherence is a truly peer-to-peer technology in which data is spread across a cluster of nodes: the primary copy of the data is stored in the local JVM of one node, and its backup is stored on another node, preferably in another site, cluster, or rack.
"Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The cache service provides the capability to access local caches from other cluster nodes"
Yes. No matter that the data is stored locally on a single node of the cluster: when you access that data through its key, Coherence automatically finds it in the cluster and brings it to you. The location of the data is transparent to the developer, but one thing is certain: you have a global view of the caches, meaning that from every single member you have access to all the data stored. This is part of the magic that the Coherence protocol (called TCMP) does for you.
"If I have a local off-heap NIO memory cache or an NIO File Manager cache on one Coherence node, can it be visible from other Coherence nodes as a clustered cache?"
As I said earlier, yes, you can access all the stored data from any node of the cluster. The way in which each node stores its data (called the backing map scheme) can differ: one node can use Elastic Data as its backing map scheme, and another node can use the Off-Heap NIO Memory Manager. This is just how each node stores its data. From an architectural point of view, it is a good choice to use the same backing map scheme across all nodes, because different backing map schemes can behave differently when you read and/or write data: one could be faster and another slower.
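To make this concrete, the backing map scheme is declared per cache scheme in each node's cache configuration file. A hedged sketch of a distributed scheme whose backing map is the off-heap NIO memory manager, based on the Coherence 3.x cache-config schema (element names and sizes should be verified against your Coherence version):

```xml
<distributed-scheme>
  <scheme-name>example-offheap</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <!-- this node stores its share of the partitioned data off-heap -->
    <external-scheme>
      <nio-memory-manager>
        <initial-size>1MB</initial-size>
        <maximum-size>100MB</maximum-size>
      </nio-memory-manager>
    </external-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```

Clients on other nodes still access the cache by name through the same cache service; only the local storage representation differs.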
"Also, if I have an NIO File Manager cache on a shared disk, is it possible to configure all nodes to work with that cache?"
There is no need for that, since the data is available to all cluster nodes without any effort. Having said that, it would be a bad strategy anyway. Coherence is a shared-nothing technology, which uses that model to scale and give you predictable latency. If you start using a shared disk as storage for data, you lose the essence of the shared-nothing benefits and create a huge bottleneck in the data management layer, since every read/write will contend for I/O.
Cheers,
Ricardo Ferreira -
Hello,
Alert: The backup operation for the cluster configuration data has been canceled due to an abort request
Alert description: The backup operation for the cluster configuration data has been canceled. The cluster Volume Shadow Copy Service (VSS) writer received an abort request.
This is the VSS backup that is sending this alert every morning.
Event ID 1544
All the fixes I found have been applied:
KB2277439 has already been applied
KB978527 is there too
KB975921 is there too
Any other IDs I should check?
'Cluster Node /Status' shows both nodes (A & B) up.
The error occurs only on Node A...
Any idea?
Thanks,
Dom
System Center Operations Manager 2007 / System Center Configuration Manager 2007 R2 / Forefront Client Security / Forefront Identity Manager
Hi,
Which backup software do you use to do the backup? Please also try to apply these hotfixes on the cluster:
A transient communication failure causes a Windows Server 2008 R2 failover cluster to stop working
http://support.microsoft.com/kb/2550886
The network location profile changes from "Domain" to "Public" in Windows 7 or in Windows Server 2008 R2
http://support.microsoft.com/kb/2524478
Recommended hotfixes and updates for Windows Server 2008 R2 SP1 Failover Clusters
http://support.microsoft.com/kb/2545685/EN-US
Regards,
Mandy
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place. -
No partitions available on the Cluster Configuration Storage page
Hello all,
I am trying to install RAC R2 on Windows Server 2003 (Standard Edition) with 2 nodes.
When installing Clusterware, I can't see the partitions of the raw device, but when I go to Computer Management I see all of them.
On the "Cluster Configuration Storage" page, the Available Disks list shows no partitions.
I have checked the following document:
http://download.oracle.com/docs/cd/B19306_01/install.102/b14207/storage.htm#sthref244 but unfortunately I can't solve the problem.
Please advise. Thanks in advance.
Regards
Ivan R
I presume you have tried and failed to install a few times, and each time the list of available partitions has decreased.
If you have access to Metalink, see note 341214.1 on how to correctly clean up a failed install. -
I am unable to live migrate via SCVMM 2012 R2 to one host in our 5-node cluster. The job fails with the errors below.
Error (10698)
The virtual machine () could not be live migrated to the virtual machine host () using this cluster configuration.
Recommended Action
Check the cluster configuration and then try the operation again.
Information (11037)
There currently are no network adapters with network optimization available on host.
The host properties indicate network optimization is available as indicated in the screen shot below.
Any guidance on things to check is appreciated.
Thanks,
Glenn
Here is a snippet of the cluster log from the current VM owner node of the failed migration:
00000e50.000025c0::2014/02/03-13:16:07.495 INFO [RHS] Resource Virtual Machine Configuration VMNameHere called SetResourceLockedMode. LockedModeEnabled0, LockedModeReason0.
00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO [RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine Configuration VMNameHere', gen(0) result 0/0.
00000e50.000025c0::2014/02/03-13:16:07.495 INFO [RHS] Resource Virtual Machine VMNameHere called SetResourceLockedMode. LockedModeEnabled0, LockedModeReason0.
00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO [RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine VMNameHere', gen(0) result 0/0.
00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO [RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine VMNameHere', gen(0) result 0/0.
00000b6c.000020ec::2014/02/03-13:16:07.495 INFO [GEM] Node 3: Sending 1 messages as a batched GEM message
00000e50.000025c0::2014/02/03-13:16:07.495 INFO [RES] Virtual Machine Configuration <Virtual Machine Configuration VMNameHere>: Current state 'MigrationSrcWaitForOffline', event 'MigrationSrcCompleted', result 0x8007274d
00000e50.000025c0::2014/02/03-13:16:07.495 INFO [RES] Virtual Machine Configuration <Virtual Machine Configuration VMNameHere>: State change 'MigrationSrcWaitForOffline' -> 'Online'
00000e50.000025c0::2014/02/03-13:16:07.495 INFO [RES] Virtual Machine <Virtual Machine VMNameHere>: Current state 'MigrationSrcOfflinePending', event 'MigrationSrcCompleted', result 0x8007274d
00000e50.000025c0::2014/02/03-13:16:07.495 INFO [RES] Virtual Machine <Virtual Machine VMNameHere>: State change 'MigrationSrcOfflinePending' -> 'Online'
00000e50.00002080::2014/02/03-13:16:07.510 ERR [RES] Virtual Machine <Virtual Machine VMNameHere>: Live migration of 'Virtual Machine VMNameHere' failed.
Virtual machine migration operation for 'VMNameHere' failed at migration source 'SourceHostNameHere'. (Virtual machine ID 6901D5F8-B759-4557-8A28-E36173A14443)
The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host 'DestinationHostNameHere': No connection could be made because the tar
00000e50.00002080::2014/02/03-13:16:07.510 ERR [RHS] Resource Virtual Machine VMNameHere has cancelled offline with error code 10061.
00000b6c.000020ec::2014/02/03-13:16:07.510 INFO [RCM] HandleMonitorReply: OFFLINERESOURCE for 'Virtual Machine VMNameHere', gen(0) result 0/10061.
00000b6c.000020ec::2014/02/03-13:16:07.510 INFO [RCM] Res Virtual Machine VMNameHere: OfflinePending -> Online( StateUnknown )
00000b6c.000020ec::2014/02/03-13:16:07.510 INFO [RCM] TransitionToState(Virtual Machine VMNameHere) OfflinePending-->Online.
00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO [GEM] Node 3: Sending 1 messages as a batched GEM message
00000b6c.000020ec::2014/02/03-13:16:07.510 INFO [RCM] rcm::QueuedMovesHolder::VetoOffline: (VMNameHere with flags 0)
00000b6c.000020ec::2014/02/03-13:16:07.510 INFO [RCM] rcm::QueuedMovesHolder::RemoveGroup: (VMNameHere) GroupBeingMoved: false AllowMoveCancel: true NotifyMoveFailure: true
00000b6c.000020ec::2014/02/03-13:16:07.510 INFO [RCM] VMNameHere: Removed Flags 4 from StatusInformation. New StatusInformation 0
00000b6c.000020ec::2014/02/03-13:16:07.510 INFO [RCM] rcm::RcmGroup::CancelClusterGroupOperation: (VMNameHere)
00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO [GEM] Node 3: Sending 1 messages as a batched GEM message
00000b6c.000021a8::2014/02/03-13:16:07.510 INFO [GUM] Node 3: executing request locally, gumId:3951, my action: /dm/update, # of updates: 1
00000b6c.000021a8::2014/02/03-13:16:07.510 INFO [GEM] Node 3: Sending 1 messages as a batched GEM message
00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO [GEM] Node 3: Sending 1 messages as a batched GEM message
00000b6c.000022a0::2014/02/03-13:16:07.510 INFO [RCM] moved 0 tasks from staging set to task set. TaskSetSize=0
00000b6c.000022a0::2014/02/03-13:16:07.510 INFO [RCM] rcm::RcmPriorityManager::StartGroups: [RCM] done, executed 0 tasks
00000b6c.00000dd8::2014/02/03-13:16:07.510 INFO [RCM] ignored non-local state Online for group VMNameHere
00000b6c.000021a8::2014/02/03-13:16:07.526 INFO [GUM] Node 3: executing request locally, gumId:3952, my action: /dm/update, # of updates: 1
00000b6c.000021a8::2014/02/03-13:16:07.526 INFO [GEM] Node 3: Sending 1 messages as a batched GEM message
00000b6c.000018e4::2014/02/03-13:16:07.526 INFO [RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine VMNameHere', gen(0) result 0/0.
No entry is made in the cluster log of the destination node.
To me this means the nodes cannot talk to each other, but I don't know why.
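Given the 0x8007274d/10061 (connection refused) results in the log, one quick check from the source host is whether the destination is actually reachable on the live migration listener; a sketch assuming the default live migration port 6600 (the host name is a placeholder):

```shell
# From an elevated PowerShell prompt on the source Hyper-V host
Test-NetConnection DestinationHostNameHere -Port 6600
```

A `TcpTestSucceeded : False` result would be consistent with the failure in the log and point at a firewall rule or a stopped migration listener on the destination.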
They are on the same domain. Their server names resolve properly and they can ping each other both by name and IP. -
RAC with ASM and shared disks?
Hi all,
Can someone clarify this little point, please? If I use ASM as my storage with a RAC database, I have to configure these nodes with shared disks. At least this is what the UG says ...
When you create a disk group for a cluster or add new disks to an existing clustered disk group, you only need to prepare the underlying physical storage on shared disks. The shared disk requirement is the only substantial difference between using ASM in a RAC database compared to using it in a single-instance Oracle database. ASM automatically re-balances the storage load after you add or delete a disk or disk group.
With my 9i databases, I used HACMP to allow concurrent VG access among the nodes. My questions are ...
1) How can I share this storage as stated above without using HACMP? My understanding is that with 10g I no longer have to use it.
2) Can Oracle's Clusterware be used to share storage? I have not seen any indication that it does.
3) Does this mean I still have to use HACMP with 10g CRS to allow shared storage?
Thank you
"...meaning visible to all the participating nodes, which you don't need HACMP..."
This is one step forward, but still not clear. On Unix, storage is presented to ASM as raw volumes. As such, how can these volumes be visible on all nodes without using HACMP (or whatever third-party clusterware you are using)? Presenting raw volumes on several nodes is not something done at the OS level without some clusterware functionality.
I do understand that storage or LUNs can be shared at the SAN fabric level. But then these LUNs are carved in big chunks, and I would like to be able to allocate storage at a more granular level using raw partitions.
So all in all, here are my questions ...
1) On Unix platforms, can ASM disks be LUNs, raw volumes, or both?
2) If raw volumes, how are these shared (or made visible) without using third-party clusterware? Having managed 9i RAC, it was the function of HACMP to make these volumes visible on all nodes; otherwise, we had to import/export VGs on all nodes to make them visible.
Thank you -
Hyper-V Failover Cluster Configuration Confirmation
Dear All,
I have created a Hyper-V failover cluster and I want to confirm that the configuration I have done is okay and that I have not missed
anything mandatory for a Hyper-V failover cluster to work. My configuration is below:
1. Presented Disks to servers, formatted and taken offline
2. Installed necessary features, such as failover clustering
3. Configured NIC Teaming
4. Created cluster, not adding storage at the time of creation
- Added disks to the cluster
- Added disks as CSV
- Renamed disks to represent respective CSV volumes
- Assigned each node a CSV volume
- Configured quorum automatically which configured the disk witness
- There were two networks so renamed them to Management and Cluster Communication
- Exposed Management Network to Cluster and Clients
- Exposed Cluster Communication Network to Cluster only
5. Installed Hyper-V
- Changed Virtual Disks, Configuration and Snapshots Location
- Assigned one CSV volume to each node
- Configured External switch with allow management option checked
1. For a minimum configuration, is this enough?
2. If I create a virtual machine and make it highly available from the Hyper-V console, will it be highly available and able to live migrate, etc.?
3. Are there any configuration changes required?
4. Please suggest how it can be made better.
Thanks in advance
Hi,
Please refer to the following steps to build a Hyper-V failover cluster:
Step 1: Connect both physical computers to the networks and storage
Step 2: Install Hyper-V and Failover Clustering on both physical computers
Step 3: Create a virtual switch
Step 4: Validate the cluster configuration
Step 5: Create the cluster
Step 6: Add a disk as CSV to store virtual machine data
Step 7: Create a highly available virtual machine
Step 8: Install the guest operating system on the virtual machine
Step 9: Test a planned failover
Step 10: Test an unplanned failover
Step 11: Modify the settings of a virtual machine
Step 12: Remove a virtual machine from a cluster
For details please refer to the following link:
http://technet.microsoft.com/en-us//library/jj863389.aspx
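Assuming two domain-joined hosts with shared storage already presented, steps 2 and 4-6 above map roughly onto these PowerShell cmdlets (a sketch; node names, cluster name, IP address, and disk name are placeholders):

```shell
# Run once per node (step 2)
Install-WindowsFeature Hyper-V, Failover-Clustering -IncludeManagementTools -Restart
# Then from one node (steps 4-6)
Test-Cluster -Node Node1, Node2
New-Cluster -Name HVCluster1 -Node Node1, Node2 -StaticAddress 192.168.1.100
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```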
Hope it helps
Best Regards
Elton Ji
-
Oracle9i installation on Sun cluster not showing the cluster config screen
Hi All,
We are trying to install Oracle 9i RAC on a 2-node Sun cluster. The Oracle installer never shows the cluster configuration screen. However, if we run the pre-install checker script from Oracle (Installprecheck.sh), it reports that it has indeed detected the cluster. Can anyone please help us out of this predicament?
-Kishore
Hi Kishore,
I'm assuming the following:
. Sun Cluster 3.1 will be used to support Oracle 9i RAC
. Oracle is being installed on a file system instead of raw devices
To answer your question: a clustered file system must be present in order for the Oracle 9i RAC installer to recognize that a cluster exists before it displays the cluster configuration screen at the beginning of the installation.
Prior to QFS 4.2, Sun did not support a clustered file system; all Oracle RAC systems for Solaris were raw-device based. You're in luck now, because Sun has released Sun QFS 4.2, which supports a clustered file system. -
Add shared disks without shutting down VMs
Hi all,
we have a PC with VMware 2 and two VMs with
Oracle 10g RAC on CentOS 4.7. There are two nodes (VMs): rac1 and rac2.
Each VM has shared SCSI disks.
Is there any way to add shared disks without shutting down the VMs?
Thank you.
Well,
Look the link:
http://www.vmware.com/support/ws5/doc/ws_disk_add_virtual.html
You must shut down the instance.
But in your case, I don't know; maybe shut down one VM, add the disk, and restart it.
Then shut down the other VM, add the same disk, and restart it. Afterwards, you can add the disk to the disk group
if you are using ASM.
Please correct me if I'm wrong.
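For reference, lab setups that share SCSI disks between VMware VMs usually do it with .vmx settings along these lines (an illustrative sketch only; option names and support vary by VMware product and version, so treat them as assumptions):

```shell
# In each VM's .vmx file, edited while the VM is powered off
disk.locking = "FALSE"
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmstore/shared/asm1.vmdk"
```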
Cheers -
How does the cluster work when the shared storage disk is offline to the primary?
Hi All
I have configured the cluster as below:
Number of nodes: 2
Quorum devices: one Quorum server, shared disks
Resource Group with HA-storage, Logical host name, Apache
My cluster works fine when either node loses connectivity or crashes, but not when I deny the primary node (on which the HA storage is mounted) access to the shared disks.
The cluster didn't fail over the whole RG to the other node.
I tried to add the HA-storage disks to the quorum devices, but it didn't help.
Anyway, I can't do any I/O on the HA storage on the respective node.
NOTE: This is the same case even in a zone cluster.
Please guide me. Below is the output of the # cluster status command:
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
sol10-1 Online
sol10-2 Online
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
sol10-1:vfe0 sol10-2:vfe0 Path online
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
sol10-1 1 1 Online
sol10-2 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
d6 0 1 Offline
server1 1 1 Online
d7 1 1 Offline
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
global sol10-1 No Online
sol10-2 No Offline
=== Cluster Resources ===
Resource Name Node Name State Status Message
global-data sol10-1 Online Online
sol10-2 Offline Offline
global-apache sol10-1 Online Online - LogicalHostname online.
sol10-2 Offline Offline
=== Cluster DID Devices ===
Device Instance Node Status
/dev/did/rdsk/d6 sol10-1 Fail
sol10-2 Ok
/dev/did/rdsk/d7 sol10-1 Fail
sol10-2 Ok
Thanks in advance
Sid
Not sure what you mean by "deny access", but it could be that reboot on path failure is disabled. This should
enable that:
# clnode set -p reboot_on_path_failure=enabled +
HTH,
jono -
RAC Installation Problem (shared across all the nodes in the cluster)
All experts,
I am trying to install Oracle 10.2.0 RAC on Red Hat 4.7.
Ref: http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnLinux
All steps completed successfully on all nodes (rac1, rac2); everything is okay on each node,
and a single-node RAC installation is successful.
When I try to install on two nodes,
the Specify Oracle Cluster Registry (OCR) Location screen shows the error:
the location /nfsmounta/crs.configuration is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system file that is visible by the same name on all nodes of the cluster.
I created the shared disks on all nodes as follows:
1. First we need to set up some NFS shares. Create shared disks on a NAS or a third server if you have one available. Otherwise create the following directories on the RAC1 node.
mkdir /nfssharea
mkdir /nfsshareb
2. Add the following lines to the /etc/exports file. (edit /etc/exports)
/nfssharea *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/nfsshareb *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
3. Run the following command to export the NFS shares.
chkconfig nfs on
service nfs restart
4. On both RAC1 and RAC2 create some mount points to mount the NFS shares to.
mkdir /nfsmounta
mkdir /nfsmountb
5. Add the following lines to the "/etc/fstab" file. The mount options are suggestions from Kevin Closson.
nas:/nfssharea /nfsmounta nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
nas:/nfsshareb /nfsmountb nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
6. Mount the NFS shares on both servers.
mount /mount1
mount /mount2
7. Create the shared CRS Configuration and Voting Disk files.
touch /nfsmounta/crs.configuration
touch /nfsmountb/voting.disk
Please guide me as to what is wrong.
I think you did not really mount it on the second server. What is the output of 'ls /nfsmounta'?
Step 6 should be 'mount /nfsmounta', not 'mount 1'. I also don't know if simply creating a zero-size file is sufficient for OCR (I have always used raw devices, not NFS, for this) -
Hi, I'm having a problem in a VM guest cluster using Windows Server 2012 R2 with virtual disk sharing enabled.
It's a SQL 2012 cluster, which has around 10 VHDX disks shared this way. All the VHDX files are inside LUNs on a SAN. These LUNs are presented to all cluster members of the Windows Server 2012 R2 Hyper-V cluster via Cluster Shared Volumes.
Yesterday a very strange problem happened: both the quorum disk and the DTC disk had their information completely erased. The VHDX disks themselves were there, but the info inside was gone.
The SQL admin had to recreate both disks, but now we don't know whether this issue was related to the virtualization platform or to another event inside the cluster itself.
Right now I'm seeing these errors on one of the VM guests:
Log Name: System
Source: Microsoft-Windows-FailoverClustering
Date: 3/4/2014 11:54:55 AM
Event ID: 1069
Task Category: Resource Control Manager
Level: Error
Keywords:
User: SYSTEM
Computer: ServerDB02.domain.com
Description:
Cluster resource 'Quorum-HDD' of type 'Physical Disk' in clustered role 'Cluster Group' failed.
Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it. Check the resource and group state using Failover Cluster
Manager or the Get-ClusterResource Windows PowerShell cmdlet.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
<EventID>1069</EventID>
<Version>1</Version>
<Level>2</Level>
<Task>3</Task>
<Opcode>0</Opcode>
<Keywords>0x8000000000000000</Keywords>
<TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
<EventRecordID>14140</EventRecordID>
<Correlation />
<Execution ProcessID="1684" ThreadID="2180" />
<Channel>System</Channel>
<Computer>ServerDB02.domain.com</Computer>
<Security UserID="S-1-5-18" />
</System>
<EventData>
<Data Name="ResourceName">Quorum-HDD</Data>
<Data Name="ResourceGroup">Cluster Group</Data>
<Data Name="ResTypeDll">Physical Disk</Data>
</EventData>
</Event>
Log Name: System
Source: Microsoft-Windows-FailoverClustering
Date: 3/4/2014 11:54:55 AM
Event ID: 1558
Task Category: Quorum Manager
Level: Warning
Keywords:
User: SYSTEM
Computer: ServerDB02.domain.com
Description:
The cluster service detected a problem with the witness resource. The witness resource will be failed over to another node within the cluster in an attempt to reestablish access to cluster configuration data.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
<EventID>1558</EventID>
<Version>0</Version>
<Level>3</Level>
<Task>42</Task>
<Opcode>0</Opcode>
<Keywords>0x8000000000000000</Keywords>
<TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
<EventRecordID>14139</EventRecordID>
<Correlation />
<Execution ProcessID="1684" ThreadID="2180" />
<Channel>System</Channel>
<Computer>ServerDB02.domain.com</Computer>
<Security UserID="S-1-5-18" />
</System>
<EventData>
<Data Name="NodeName">ServerDB02</Data>
</EventData>
</Event>
We don't know if this can happen again. What if this happens on a disk with data? We don't know if this is related to the virtual disk sharing technology or anything else related to virtualization, but I'm asking here to find out if it is a possibility.
Any ideas are appreciated.
Thanks.
Eduardo Rojas
Hi,
Please refer to the following link:
http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx#.Ux172HnxtNA
Best Regards,
Vincent Wu
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. -
SQL 2008 R2 cluster installation failure - Failed to find shared disks
Hi,
The validation tests in the SQL 2008 R2 cluster installation (running on Windows 2008 R2) fail with the following error. The cluster has one root mount point with multiple mount points:
"The cluster on this computer does not have a shared disk available. To continue, at least one shared disk must be available'.
The "Detail.txt" log has a lot of "access is denied" errors; here is just a sample. Any ideas what might be causing this issue?
2010-09-29 12:54:08 Slp: Initializing rule : Cluster shared disk available check
2010-09-29 12:54:08 Slp: Rule applied features : ALL
2010-09-29 12:54:08 Slp: Rule is will be executed : True
2010-09-29 12:54:08 Slp: Init rule target object: Microsoft.SqlServer.Configuration.Cluster.Rules.ClusterSharedDiskFacet
2010-09-29 12:54:09 Slp: The disk resource 'QUORUM' cannot be used as a shared disk because it's a cluster quorum drive.
2010-09-29 12:54:09 Slp: Mount point status for disk 'QUORUM' could not be determined. Reason: 'The disk resource 'QUORUM' cannot be used because it is a cluster quorum drive.'
2010-09-29 12:54:09 Slp: System Error: 5 trying to find mount points at path
\\?\Volume{e1f5ca48-c798-11df-9401-0026b975df1a}\
2010-09-29 12:54:09 Slp: Access is denied.
2010-09-29 12:54:09 Slp: Mount point status for disk 'SQL01_BAK01' could not be determined. Reason: 'The search for mount points failed. Error: Access is denied.'
2010-09-29 12:54:10 Slp: System Error: 5 trying to find mount points at path
\\?\Volume{e1f5ca4f-c798-11df-9401-0026b975df1a}\
2010-09-29 12:54:10 Slp: Access is denied.
2010-09-29 12:54:10 Slp: Mount point status for disk 'SQL01_DAT01' could not be determined. Reason: 'The search for mount points failed. Error: Access is denied.'
2010-09-29 12:54:10 Slp: System Error: 5 trying to find mount points at path
\\?\Volume{e1f5ca56-c798-11df-9401-0026b975df1a}\
2010-09-29 12:54:10 Slp: Access is denied.
Thanks,
PK
Hi,
We were asked by the PSS engineer to grant the following privileges to the account used to install SQL Server; I am referring to the user's domain account, as opposed to the SQL service account. These privileges had already been applied to the
SQL service account prior to the SQL installation. Assigning these privileges to the user account resolved the issue.
Act as Part of the Operating System = SeTcbPrivilege
Bypass Traverse Checking = SeChangeNotify
Lock Pages In Memory = SeLockMemory
Log on as a Batch Job = SeBatchLogonRight
Log on as a Service = SeServiceLogonRight
Replace a Process Level Token = SeAssignPrimaryTokenPrivilege
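These rights can be granted through Local Security Policy (secpol.msc, under User Rights Assignment), or scripted with ntrights.exe from the Windows Server 2003 Resource Kit; a sketch (the account name is a placeholder, and ntrights must be installed separately):

```shell
:: Run from an elevated prompt on each cluster node
ntrights -u DOMAIN\InstallUser +r SeTcbPrivilege
ntrights -u DOMAIN\InstallUser +r SeLockMemoryPrivilege
ntrights -u DOMAIN\InstallUser +r SeBatchLogonRight
ntrights -u DOMAIN\InstallUser +r SeServiceLogonRight
ntrights -u DOMAIN\InstallUser +r SeAssignPrimaryTokenPrivilege
```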
Thanks for everyone's assistance.
Cheers,
PK -
Shared disk (SAN EVA 4400) configuration for ASM on Linux RHEL5
Hi all,
I want to install a RAC database (Oracle 10g on RHEL5). Now we are configuring shared storage for both servers (nodes). In the disk configuration (SAN on EVA 4400) we have created the volume group for the disk to use for ASM and presented the disk group to the servers. But when we check the servers we cannot see the disk (shared storage).
[root@cdr-analysis01 ~]# ls /dev/cciss/ -lR
/dev/cciss/:
total 0
brw-r----- 1 root disk 104, 0 Jul 1 2009 c0d0
brw-r----- 1 root disk 104, 1 Jul 1 14:29 c0d0p1
brw-r----- 1 root disk 104, 2 Jul 1 14:29 c0d0p2
brw-r----- 1 root disk 104, 3 Jul 1 14:29 c0d0p3
brw-r----- 1 root disk 104, 4 Jul 1 2009 c0d0p4
brw-r----- 1 root disk 104, 5 Jul 1 14:29 c0d0p5
brw-r----- 1 root disk 104, 6 Jul 1 14:29 c0d0p6
brw-r----- 1 root disk 104, 7 Jul 1 2009 c0d0p7
brw-r----- 1 root disk 104, 8 Jul 1 14:29 c0d0p8
(these are local disks, not the shared disk)
My questions are:
- How do we mount the disk so it is visible on the server?
- How do we configure the disk for ASM? Is there any document for this? To my knowledge, a disk to be used for ASM doesn't need to be formatted with a file system and shouldn't appear when we run the df command.
Can anybody help? It's very urgent for us.
Thank you
Raitsarevo
Hi all,
Now the disk is presented to the server, but my actual problem is how to create shared partitions of this disk for the servers.
Our actual status: we have created one volume group of 500 GB on the SAN, called vg_oradata. We will use this storage for the OCR, the voting disk, and the database files (ASM). As I know from the documentation, I have to create one shared partition for the OCR, one for the voting disk, and others for the database.
What I want to do is create, for example:
ocr_partition (25 Gb) : for OCR
vote_partition (25 Gb): for Voting disk
oradata_part1 (150 Gb): for database (ASM)
oradata_part2 (150 Gb): for database (ASM)
oradata_part3 (150 Gb): for database (ASM)
My problem is how to create these partitions, because when I try to create them from the Linux Logical Volume Manager I cannot see them on the servers.
I'm new to system administration, and our SAN vendor also doesn't know how to do it on a Linux system.
Could you help me please, if possible?
My system is Red Hat Enterprise Linux 5 and I'm going to use Oracle 10g RAC.
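For reference, once the shared LUN is visible on both nodes (say as /dev/sdb; the device name and the use of ASMLib are assumptions here), the usual flow is to partition it on one node and then mark the data partitions for ASM:

```shell
# On node 1: carve the partitions (OCR, voting disk, data) interactively
fdisk /dev/sdb
partprobe /dev/sdb        # re-read the partition table; run on node 2 as well

# Mark a data partition for ASM with ASMLib (oracleasm installed on both nodes)
/etc/init.d/oracleasm createdisk DATA1 /dev/sdb3   # run once, on node 1
/etc/init.d/oracleasm scandisks                    # run on node 2
/etc/init.d/oracleasm listdisks                    # DATA1 should appear on both nodes
```

The OCR and voting disk partitions for 10g are typically left as raw block devices rather than ASMLib disks.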
Thank you
Lucienot