Urgent: StagedTargets, Targets, cluster
Hi everyone,
Please help me.
Until now I did not have a cluster in my project; I had only one WebLogic server, server1, running in my domain.
Now I want to make a cluster of two servers, say server1 and server2, where server2
will run one special EJB component, say spejb.jar. All other EJBs will be
deployed on server1.
As I understand it, StagedTargets indicates the servers holding the latest version
of the application, and in this case I want to specify server1 because I will be
keeping all the latest versions on server1. Will the following piece of config.xml
work correctly in this scenario?
<Application Deployed="true" Name="ejb/SPEJB"
Path="./config/domain/applications/ejb" StagedTargets="server1">
<EJBComponent Name="ejb/SPEJB" Targets="server1" URI="spejb.jar"/>
</Application>
I guess Targets should be server2, and that is the only change I will have to make?
If I set Targets="server1", I guess there is no point in having server2.
Since I want to run spejb.jar only on server2, I guess I do not have to mention the cluster name as the target.
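For illustration, the change described might look like the sketch below (an untested guess only; it assumes server2 is already defined in the domain and that the staged files should also live on server2):
```xml
<!-- Hypothetical sketch: spejb.jar staged and targeted to server2 only -->
<Application Deployed="true" Name="ejb/SPEJB"
    Path="./config/domain/applications/ejb" StagedTargets="server2">
  <EJBComponent Name="ejb/SPEJB" Targets="server2" URI="spejb.jar"/>
</Application>
```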
I would highly appreciate it if anyone could guide me on this matter.
Thanks
Hi all,
Has anybody met the same problem? Do you have any advice? Thank you very much!
Similar Messages
-
The source or target cluster ID is not present on the system!
Hi,
During a system copy, we are facing an issue in the "Run Java Migration Toolkit" phase of SAPinst.
Here is the exact error message sequence, which may help in understanding the problem:
==========================================
#Start preparing switch.
#The switch.properties could not be read. Reason: switch.properties (No
such file or directory)
The source or target cluster ID is not present on the system! The
current (source) cluster ID is 787075 and the new (target) cluster ID
is 3137675
==========================
The last error message may be quite familiar to you, as it is a very common one; however, I wanted to mention that SAP Note 966752 ("Java system copy problems with the Java Migration Toolkit") could not help me out.
All the possible entries have been updated and retried, but with no success.
Please advise if you have faced a similar kind of problem.
Thanks.
Cheers !!!
Ashish
Hi,
The cluster ID is just a combination of the parameters below.
In our case, my source system (ABC) was recently refreshed from another system (XYZ), so while installing the target system (DEF) I changed the source system details from ABC to XYZ in the file below and retried the SAPinst screen. The system copy then completed successfully.
Open the file <installation directory>/jmt/cluster_id_switch.properties and edit the lines:
src.ci.sid=
src.ci.instance.number=
src.ci.instance.name=
src.ci.host=
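As an illustration (all values below are hypothetical; substitute your own source system's details), the edited file might look like:
```properties
# <installation directory>/jmt/cluster_id_switch.properties
# Hypothetical example: source details changed from ABC to XYZ
src.ci.sid=XYZ
src.ci.instance.number=00
src.ci.instance.name=DVEBMGS00
src.ci.host=xyzhost01
```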
If in your case the source system was not refreshed recently, you may try the functional host name or OS host name, etc., for the above parameters.
If this does not work, check the details of SAP Note 966752 ("Java system copy problems with the Java Migration Toolkit"), which says almost the same thing, although I found its statements about the box number a bit confusing and contradictory.
Cheers !!!
Ashish -
Resource plan not migrating to all targeted cluster members
Hi,
We got an environment setup like:
Physical machine 1: AdminServer, OSB1, WLS1
Physical machine 2: OSB2, WLS2
Where we have put OSB[1,2] into a cluster and WLS[1,2] into a cluster. The servers are running WLS 10.3.3.4.
The problem is that when creating a new connection factory for the DBAdapter, the plan.xml is only updated on one machine (not both). In my setup it is always physical machine 1 that gets the updated plan.xml.
I looked at the targets for the deployment, and they are set to AdminServer and OSBCluster (meaning OSB1 and OSB2). And when going into "Monitoring -> Outbound Connection Pools", I can also see that only the AdminServer and OSB1 have an outbound connection pool.
To fix this, I have had to SCP the plan.xml from OSB1 to OSB2 and then redeploy the DBAdapter. Then I see an outbound connection pool on OSB2.
Have I misunderstood something? Shouldn't the configuration (plan.xml) be replicated across all targeted members, in case one server fails?
Best Regards,
Mathias
Hello,
What about ServerAffinity (to be set at the ConnectionFactory level)? It should be "false". Another explanation is that your cluster is out of sync (check whether multicast has been set up correctly).
load balancing and server affinity for dd
regards,
makiey -
URGENT: Choosing the target directory according to the file name
Hi
I have a requirement to transfer a file to a directory according to the file name.
For example, if the file name is FIABC it should be moved to directory DI_ABC on the target server, and if it is FIXYZ it should be moved to directory DI_XYZ.
No mapping is involved in this case.
Is there any other way around this than variable substitution?
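For what it's worth, the routing rule itself is simple; outside of PI it amounts to something like the following sketch (the function name is made up here; the FI/DI_ prefix convention is taken from the example above):

```python
def target_directory(filename: str) -> str:
    """Map a source file name such as FIABC to its target directory DI_ABC."""
    prefix = "FI"
    if not filename.startswith(prefix):
        raise ValueError("unexpected file name: " + filename)
    # Keep everything after the FI prefix and prepend the directory prefix.
    return "DI_" + filename[len(prefix):]
```

So FIABC maps to DI_ABC and FIXYZ maps to DI_XYZ; in PI itself this decision has to be expressed through the configuration objects described in the reply below.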
Regards
Abhishek Mahajan
Hi,
For this you have to configure two communication channels, one for each target location.
Use the XPath concept in Interface Determination and call the corresponding inbound message interface and interface mapping based on the condition:
1) Create Two Inbound MI's
2) Create One MM
3) Create 2 Interface Mappings
4) Create 2 Receiver CC and One Sender CC
5) Create One Receiver Det
6) Create 2 Interface Det
7) Create One Sender Agreement
8) Create Two Receiver Agreements
Regards
Seshagiri -
Urgent: SAP / Microsoft Cluster error
Dear Friends,
We are installing our production servers using Microsoft Cluster Service. Our SID is P00, and the server names are SAPP and SAPDB.
This morning we tried to start the instance in the MMC, but it could not start. We then checked the cluster service status using Cluster Manager and noticed the following errors:
SAP P00_00 Service failed
SAP P00_00 Instance Offline
We checked the SAPOsCol service; it is running. We then checked the Event Viewer, and the same error is repeated there.
We need your help to sort out this problem.
thanks in advance
G.G. Karthickbabu
Dear Bernd,
Thanks for your quick response
All other instances are online except:
SAP P00_00 Service failed (Generic Service)
SAP P00_00 Instance Offline (SAP Resource)
in the Groups pane of the Cluster Manager window.
by
G.G.Karthickbabu -
Hi
I'm trying to install a cluster in a lab environment. I have two physical servers and would like to use them as cluster nodes. On one of these nodes I would like to install the iSCSI Target Server to share disks with the cluster itself; is this possible?
I ask because I did all the configuration, but after installing the cluster the iSCSI target server no longer works.
Thanks
Bad news: you cannot do it with the Microsoft built-in solution, because you do indeed need physical shared storage to make the Microsoft iSCSI target clustered. Something like Robert Smit's blog here:
Clustering Microsoft iSCSI Target
https://robertsmit.wordpress.com/2012/06/26/clustering-iscsi-target-on-windows-2012-step-by-step/
...or here:
MSFT iSCSI Target in HA
https://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
...or very detailed walk thru here:
MSFT iSCSI Target in High Availability Mode
https://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Good news: you can take a third-party solution from various companies (below) and create HA iSCSI volumes on just a pair of nodes. See:
StarWind Virtual SAN
http://www.starwindsoftware.com/starwind-virtual-san-free
(this setup is FREE of charge; you just need to be an MCT, MVP, or MCP to obtain your free 2-node key)
SteelEye also has a similar one here:
SteelEye #SANLess Clusters
http://us.sios.com/products/datakeeper-cluster/
DataCore SANsymphony-V
http://www.datacore.com/products/SANsymphony-V.aspx
You can also spawn VMs running FreeBSD/HAST or Linux/DRBD to build a very similar setup yourself. (Two-node setups should be active-passive to avoid split-brain; the Windows solutions above all maintain their own pacemaker and heartbeats to run active-active on just a pair of nodes.)
Good luck and happy clustering :)
StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Glassfish 2.1 Admin Console Javascript Error when selecting cluster targets
I recently updated to Firefox 4, and when using the GlassFish 2.1 Admin Console to manage the cluster targets for specific resources, there is a JavaScript error that prevents you from selecting target cluster options. Basically, you select what you want, but it is removed immediately afterwards.
Using FireBug I can see the error below:
uncaught exception: [Exception... "Could not convert JavaScript argument - 0 was passed, expected object. Did you mean null? arg 1 [nsIDOMHTMLSelectElement.add]" nsresult: "0x80570035 (NS_ERROR_XPC_BAD_CONVERT_JS_ZERO_ISNOT_NULL)" location: "JS frame :: http://localhost:4848/theme/META-INF/dojo/dojo.js :: <TOP_LEVEL> :: line 645" data: no]
Line 0
This functionality works fine in other browsers.
Thanks.
Hi there,
Can you check the JDK version and verify that it is compatible with your WebLogic installation?
You can refer to the certification matrix from Oracle to check the compatible JDK version. Here you go: Oracle Fusion Middleware 12c (12.1.3.0.0) Certification Matrix.
Most probably, this issue occurs when the JDK is not compatible with the WebLogic installation.
Lakshman -
Hi all,
I've setup an OSB cluster with 2 managed servers.
From WLS console I created successfully the below JMS resources:
1. new file store, name and directory: MyFileStore_m01, target: bnk01osbm01 migratable
2. new file store, name and directory: MyFileStore_m02, target: bnk01osbm02 migratable
3. new jms server, name: MyJMSServer_m01, persistent store: MyFileStore_m01, target: bnk01osbm01 migratable
4. new jms server, name: MyJMSServer_m02, persistent store: MyFileStore_m02, target: bnk01osbm02 migratable
5. new jms module, name: MyJMSSystemModule, targets: cluster bnk01osbcluster + all servers in the cluster
6. new xa conn factory, name: MyXAConnectionFactory, jndi name: MyXAConnectionFactory
(targets as proposed by WLS itself: cluster bnk01osbcluster + all servers in the cluster)
+ XA Connection Factory Enabled: true
7. new Distributed Queue, name: MyDistributedQueue1, jndi name: MyDistributedQueue1, load balancing policy: round-robin
(targets as proposed by WLS itself: cluster bnk01osbcluster + all servers in the cluster)
+ allocate members uniformly: true
and I recorded that into the included recorded_by_wls.py. As said, no error was raised.
Then I removed the above JMS resources and executed recorded_by_wls.py from the command line, getting the error that I report at the end of this message.
I changed the original .py produced by WLS slightly by adding only the first five lines below:
print "Starting the script ..."
connect("weblogic", "weblogic", "t3://wasdv1r3n1.dev.b-source.net:7031")
edit()
startEdit()
cd('/')
I have also included the recorded_by_wls.py source at the end of this message.
For information, I have also reported what I saw on the WLS console when I created my JMS resources manually.
Do you have any idea why a successfully recorded WLS session fails when executed from the command line?
Any help will be much appreciated.
Thanks in advance.
Regards
ferp
Error:
java weblogic.WLST recorded_by_wls.py
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
Starting the script ...
Connecting to t3://wasdv1r3n1.dev.b-source.net:7031 with userid weblogic ...
Successfully connected to Admin Server 'bnk01osbadm' that belongs to domain 'bnk01osb'.
Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.
Location changed to edit tree. This is a writable tree with
DomainMBean as the root. To make changes you will need to start an edit session via startEdit().
For more help, use help(edit)
Starting an edit session ...
Started edit session, please be sure to save and activate your changes once you are done.
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released once the activation is completed.
Activation completed
Starting an edit session ...
Started edit session, please be sure to save and activate your changes once you are done.
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released once the activation is completed.
This Exception occurred at Mon Sep 07 15:31:33 CEST 2009.
weblogic.application.ModuleException: ERROR: Unable to add destination MyJMSSystemModule!wlsbJMSServer_auto_2@MyDistributedQueue1 to the back end wlsbJMSServer_auto_2
at weblogic.jms.module.JMSModule.prepareUpdate(JMSModule.java:618)
at weblogic.jms.module.ModuleCoordinator.prepareUpdate(ModuleCoordinator.java:487)
at weblogic.application.internal.flow.DeploymentCallbackFlow$5.next(DeploymentCallbackFlow.java:514)
at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
at weblogic.application.internal.flow.DeploymentCallbackFlow.prepareUpdate(DeploymentCallbackFlow.java:280)
at weblogic.application.internal.BaseDeployment$PrepareUpdateStateChange.next(BaseDeployment.java:679)
at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
at weblogic.application.internal.BaseDeployment.prepareUpdate(BaseDeployment.java:444)
at weblogic.application.internal.SingleModuleDeployment.prepareUpdate(SingleModuleDeployment.java:16)
at weblogic.application.internal.DeploymentStateChecker.prepareUpdate(DeploymentStateChecker.java:221)
at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepareUpdate(AppContainerInvoker.java:149)
at weblogic.deploy.internal.targetserver.operations.DynamicUpdateOperation.doPrepare(DynamicUpdateOperation.java:130)
at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:723)
at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1190)
at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:248)
at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:157)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:12)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:45)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
Caused by: weblogic.jms.common.JMSException: [JMSExceptions:045050]A destination of name MyJMSSystemModule!wlsbJMSServer_auto_2@MyDistributedQueue1 has a jms-create-destination-identifier of name MyJMSSystemModule!wlsbJMSServer_auto_2@MyDistributedQueue1. However, another destination of name MyJMSSystemModule!wlsbJMSServer_auto_2@MyDistributedQueue1 has the same jms-create-destination-identifier. Two destinations with the same jms-create-destination-identifier cannot be co-located on the JMSServer named wlsbJMSServer_auto_2.
at weblogic.jms.backend.BackEnd.addDestination(BackEnd.java:1527)
at weblogic.jms.backend.BEDestinationRuntimeDelegate.prepare(BEDestinationRuntimeDelegate.java:195)
at weblogic.jms.backend.udd.UDDEntity.prepare(UDDEntity.java:444)
at weblogic.jms.module.JMSModule$EntityState.setState(JMSModule.java:1704)
at weblogic.jms.module.JMSModule$EntityState.setState(JMSModule.java:1667)
at weblogic.jms.module.JMSModule$EntityState.access$100(JMSModule.java:1608)
at weblogic.jms.module.JMSModule.addEntity(JMSModule.java:936)
at weblogic.jms.module.JMSModule.addEntity(JMSModule.java:841)
at weblogic.jms.module.JMSModule.access$1500(JMSModule.java:87)
at weblogic.jms.module.JMSModule$JMSModuleListener.prepareUpdate(JMSModule.java:1485)
at weblogic.descriptor.internal.DescriptorImpl$Update.prepare(DescriptorImpl.java:481)
at weblogic.descriptor.internal.DescriptorImpl.prepareUpdateDiff(DescriptorImpl.java:195)
at weblogic.descriptor.internal.DescriptorImpl.prepareUpdate(DescriptorImpl.java:174)
at weblogic.jms.module.JMSModule.prepareUpdate(JMSModule.java:614)
Problem invoking WLST - Traceback (innermost last):
File "/products/software/tmp/recorded_by_wls.py", line 92, in ?
File "<iostream>", line 364, in activate
WLSTException: Error occured while performing activate : Error while Activating changes.ERROR: Unable to add destination MyJMSSystemModule!wlsbJMSServer_auto_2@MyDistributedQueue1 to the back end wlsbJMSServer_auto_2 Use dumpStack() to view the full stacktrace
recorded_by_wls.py source
print "Starting the script ..."
connect("weblogic", "weblogic", "t3://wasdv1r3n1.dev.b-source.net:7031")
edit()
startEdit()
cd('/')
cmo.createFileStore('MyFileStore_m01')
cd('/FileStores/MyFileStore_m01')
cmo.setDirectory('MyFileStore_m01')
set('Targets',jarray.array([ObjectName('com.bea:Name=bnk01osbm01 (migratable),Type=MigratableTarget')], ObjectName))
activate()
startEdit()
cd('/')
cmo.createFileStore('MyFileStore_m02')
cd('/FileStores/MyFileStore_m02')
cmo.setDirectory('MyFileStore_m02')
set('Targets',jarray.array([ObjectName('com.bea:Name=bnk01osbm02 (migratable),Type=MigratableTarget')], ObjectName))
activate()
startEdit()
cd('/')
cmo.createJMSServer('MyJMSServer_m01')
cd('/Deployments/MyJMSServer_m01')
cmo.setPersistentStore(getMBean('/FileStores/MyFileStore_m01'))
set('Targets',jarray.array([ObjectName('com.bea:Name=bnk01osbm01 (migratable),Type=MigratableTarget')], ObjectName))
activate()
startEdit()
cd('/')
cmo.createJMSServer('MyJMSServer_m02')
cd('/Deployments/MyJMSServer_m02')
cmo.setPersistentStore(getMBean('/FileStores/MyFileStore_m02'))
set('Targets',jarray.array([ObjectName('com.bea:Name=bnk01osbm02 (migratable),Type=MigratableTarget')], ObjectName))
activate()
startEdit()
cd('/')
cmo.createJMSSystemResource('MyJMSSystemModule')
cd('/SystemResources/MyJMSSystemModule')
set('Targets',jarray.array([ObjectName('com.bea:Name=bnk01osbcluster,Type=Cluster')], ObjectName))
activate()
startEdit()
cd('/JMSSystemResources/MyJMSSystemModule/JMSResource/MyJMSSystemModule')
cmo.createConnectionFactory('MyXAConnectionFactory')
cd('/JMSSystemResources/MyJMSSystemModule/JMSResource/MyJMSSystemModule/ConnectionFactories/MyXAConnectionFactory')
cmo.setJNDIName('MyXAConnectionFactory')
cd('/JMSSystemResources/MyJMSSystemModule/JMSResource/MyJMSSystemModule/ConnectionFactories/MyXAConnectionFactory/SecurityParams/MyXAConnectionFactory')
cmo.setAttachJMSXUserId(false)
cd('/JMSSystemResources/MyJMSSystemModule/JMSResource/MyJMSSystemModule/ConnectionFactories/MyXAConnectionFactory')
cmo.setDefaultTargetingEnabled(true)
activate()
startEdit()
cd('/JMSSystemResources/MyJMSSystemModule/JMSResource/MyJMSSystemModule/ConnectionFactories/MyXAConnectionFactory/TransactionParams/MyXAConnectionFactory')
cmo.setTransactionTimeout(3600)
cmo.setXAConnectionFactoryEnabled(true)
activate()
startEdit()
cd('/JMSSystemResources/MyJMSSystemModule/JMSResource/MyJMSSystemModule')
cmo.createUniformDistributedQueue('MyDistributedQueue1')
cd('/JMSSystemResources/MyJMSSystemModule/JMSResource/MyJMSSystemModule/UniformDistributedQueues/MyDistributedQueue1')
cmo.setJNDIName('MyDistributedQueue1')
cmo.setLoadBalancingPolicy('Round-Robin')
cmo.setDefaultTargetingEnabled(true)
activate()
startEdit()
-- source end --
ON WLS console
Name Type Target
MyFileStore_m01 FileStore bnk01osbm01 (migratable)
MyFileStore_m02 FileStore bnk01osbm02 (migratable)
FileStore_auto_1 FileStore bnk01osbm01
FileStore_auto_2 FileStore bnk01osbm02
WseeFileStore_auto_1 FileStore bnk01osbm01
WseeFileStore_auto_2 FileStore bnk01osbm02
Name Persistent Store Target Current Server
MyJMSServer_m01 MyFileStore_m01 bnk01osbm01 (migratable) bnk01osbm01
MyJMSServer_m02 MyFileStore_m02 bnk01osbm02 (migratable) bnk01osbm02
wlsbJMSServer_auto_1 FileStore_auto_1 bnk01osbm01 bnk01osbm01
wlsbJMSServer_auto_2 FileStore_auto_2 bnk01osbm02 bnk01osbm02
WseeJmsServer_auto_1 Wsee FileStore_auto_1 bnk01osbm01 bnk01osbm01
WseeJmsServer_auto_2 Wsee FileStore_auto_2 bnk01osbm02 bnk01osbm02
Name Type
MyJMSSystemModule System
configwiz-jms System
jmsResources System
WseeJmsModule System
Name Type JNDI Name Subdeployment Targets
MyDistributedQueue1 Uniform Distributed Queue MyDistributedQueue1 Default Targetting bnk01osbcluster
MyXAConnectionFactory Connection Factory MyXAConnectionFactory Default Targetting bnk01osbcluster
Hi Ferp,
"Do you have an idea why a successfully recorded WLS session fails when executed from a command line?" It should not fail. Every successfully recorded session (one whose activation completed without reporting an error or conflict) will execute on other servers successfully, until a unique-name constraint is violated.
In your case that violation has happened: your script is trying to create the distributed destination "MyDistributedQueue1" under module "MyJMSSystemModule", targeted to JMS Server "wlsbJMSServer_auto_2", but a distributed queue with the same JNDI name already exists there.
Please check.
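One common way to avoid such clashes (an untested WLST sketch only, not Ferp's recorded script; the subdeployment name is made up, and it assumes the two JMS servers from the script above) is to replace the queue's default targeting with subdeployment targeting, so the members are created only on the intended JMS servers:

```python
# WLST sketch: target the distributed queue via a subdeployment
# instead of setDefaultTargetingEnabled(true)
cd('/JMSSystemResources/MyJMSSystemModule')
cmo.createSubDeployment('MySubDeployment')
cd('/JMSSystemResources/MyJMSSystemModule/SubDeployments/MySubDeployment')
set('Targets',jarray.array([
    ObjectName('com.bea:Name=MyJMSServer_m01,Type=JMSServer'),
    ObjectName('com.bea:Name=MyJMSServer_m02,Type=JMSServer')], ObjectName))
cd('/JMSSystemResources/MyJMSSystemModule/JMSResource/MyJMSSystemModule/UniformDistributedQueues/MyDistributedQueue1')
cmo.setDefaultTargetingEnabled(false)
cmo.setSubDeploymentName('MySubDeployment')
```

With default targeting, the queue members follow the module target (the whole cluster, including the pre-existing wlsbJMSServer_auto_* servers), which is where the duplicate jms-create-destination-identifier can arise.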
Regards,
Anuj -
Cluster Shared Volume disappeared after taking the volume offline for Validation Tests.
Hi,
After an unknown issue with one of our 4-node Hyper-V clusters running Server 2008 R2 SP1 with Fibre Channel NEC D3-10 SAN storage, all our Cluster Shared Volumes went into redirected mode and I was unable to bring them back online. Only after rebooting all the nodes one by one did the disks come back online. Event log messages indicated that I should run cluster validation. After shutting down all the virtual machines, I took all the Cluster Shared Volumes offline and started the complete validation test. The following warnings/errors appeared during the test:
An error occurred while executing the test.
An error occurred retrieving the
disk information for the resource 'VSC2_DATA_H'.
Element not found (Validate Volume Consistency Test)
Cluster disk 4 is a Microsoft MPIO based disk
Cluster disk 4 from node has 4 usable path(s) to storage target
Cluster disk 4 from node has 4 usable path(s) to storage target
Cluster disk 4 is not managed by Microsoft MPIO from node
Cluster disk 4 is not managed by Microsoft MPIO from node (Validate Microsoft MPIO-based disks test)
SCSI page 83h VPD descriptors for cluster disk 4 and 5 match (Validate SCSI device Vital Product Data (VPD) test)
After the test, the Cluster Shared Volume had disappeared (although the resource is online).
Cluster events that are logged:
Cluster physical disk resource 'DATA_H' cannot be brought online because the associated disk could not be found. The expected signature of the disk was '{d6e6a1e0-161e-4fe2-9ca0-998dc89a6f25}'. If the disk was replaced or restored, in the Failover Cluster
Manager snap-in, you can use the Repair function (in the properties sheet for the disk) to repair the new or restored disk. If the disk will not be replaced, delete the associated disk resource. (Event 1034)
Cluster disk resource found the disk identifier to be stale. This may be expected if a restore operation was just performed or if this cluster uses replicated storage. The DiskSignature or DiskUniqueIds property for the disk resource has been corrected.
(Event 1568)
In Disk Management the disk is unallocated, unknown, reserved. When the resource is on one node and I open Disk Management, I get a warning that I have to initialize the disk. I have not done this yet.
Reading other posts, I think the partition table got corrupted, but I have no idea how to restore it. I found the following information, but it is not enough for me to go ahead with: using a tool like TestDisk to rewrite the partition table, then rewriting the unique ID to the disk, brought everything back. But there is still no explanation as to why our "high availability" failover cluster was down for nearly two days. This has happened to us twice within the past week.
Does anybody have an idea how to solve this? I think my data is still intact.
Thanks for taking the time to read this.
DJITS.
Hi,
The error information you provided indicates a disk connection failure; please confirm that shared disk 4 is available.
To review the hardware, connections, and configuration of a disk in cluster storage:
On each node in the cluster, open Disk Management (which is in Server Manager under Storage) and see if the disk is visible from one of the nodes (it should be visible from one node but not multiple nodes). If it is visible to
a node, continue to the next step. If it is not visible from any node, still in Disk Management on a node, right-click any volume, click Properties, and then click the Hardware tab. Click the listed disks or LUNs to see if all expected disks or LUNs appear.
If they do not, check cables, multi-path software, and the storage device, and correct any issues that are preventing one or more disks or LUNs from appearing. If this corrects the overall problem, skip all the remaining steps and procedures.
Review the event log for any events that indicate problems with the disk. If an event provides information about the disk signature expected by the cluster, save this information and skip to the last step in this procedure.
To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Management. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and
then click Continue.
In the Failover Cluster Management snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Management, click Manage a Cluster, and then select or specify the cluster that
you want.
If the console tree is collapsed, expand the tree under the cluster you want to manage, and then click Storage.
In the center pane, find the disk resource whose configuration you want to check, and record the exact name of the resource for use in a later step.
Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
Type:
CLUSTER RESOURCE DiskResourceName /PRIV >path\filename.TXT
For DiskResourceName, type the name of the disk resource, and for path\filename, type a path and a new filename of your choosing.
Locate the file you created in the previous step and open it. For a master boot record (MBR) disk, look in the file for DiskSignature. For a GPT disk, look in the file for DiskIdGuid.
Use the software for your storage to determine whether the signature of the disk matches either the DiskSignature or DiskIdGuid for the disk resource. If it does not, use the following procedure to repair the disk configuration.
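For example, the dump step above might look like this (the resource name is taken from the event log quoted earlier; the output path is hypothetical, so substitute your own):

```shell
REM Dump the private properties of the disk resource to a text file
CLUSTER RESOURCE "DATA_H" /PRIV > C:\Temp\DATA_H.txt

REM MBR disks expose DiskSignature; GPT disks expose DiskIdGuid
findstr /i "DiskSignature DiskIdGuid" C:\Temp\DATA_H.txt
```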
For more information, please refer to the following MS articles:
Event ID 1034 — Cluster Storage Functionality
http://technet.microsoft.com/en-us/library/cc756229(v=WS.10).aspx
Hope this helps!
TechNet Subscriber Support
If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.
Lawrence
TechNet Community Support -
Cluster windows 2008 NODE MAJORITY
Hello, I have a Windows 2008 cluster with three nodes (A, B, C) in Node Majority mode (the Windows default), with Oracle 10g Release 2 installed. Node A is active with DB1, node B is active with DB2, and node C is passive for node A or B.
I have installed Fail Safe and, apparently, resources move from node A to C and from node B to C.
But when I open the Fail Safe panel, if the OracleMSCService service is not on the node holding the node majority, it does not open.
I want to configure the service to move with the cluster's node majority. How can I do that?
Or am I obliged to create a cluster with node majority and a disk quorum?
Or to reduce the cluster to two nodes with a disk quorum?
This is very urgent; the cluster goes to production on Friday.
DUPLICATE POSTING
{message:id=4173094}
This is not acceptable usage of OTN. Please cease with posting the exact same message in multiple OTN forums. -
Guest two node Failover Cluster with SAS HBA and Windows 2012 R1
Hi all, I have two brand-new IBM x3560 servers with an IBM V3700 storage system. The servers are connected to the storage through four SAS HBA adapters (two HBAs on each server). I want to create a two-node guest file-server failover cluster. I can present the LUNs to the guest machines, but when I run the cluster creation wizard it cannot see any disks, even though I can see the disks in the Disk Management console. Is there any way to achieve the cluster creation using my SAS HBA-presented disks, or do I have to use iSCSI to present the disks to my cluster?
Thank you in advance, George
1) Update to R2 and use shared VHDX which is a better way to go. See:
Shared VHDX
http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
Clustering
Options
http://blogs.technet.com/b/josebda/archive/2013/07/31/windows-server-2012-r2-storage-step-by-step-with-storage-spaces-smb-scale-out-and-shared-vhdx-virtual.aspx
2) If you want to stick with non-R2 (which is a bad idea for many reasons), you can spawn an iSCSI target on top of your storage, make it clustered, and have it provide LUs to your guest VMs. See:
iSCSI Target in Failover
http://technet.microsoft.com/en-us/library/gg232632(v=ws.10).aspx
iSCSI Target Failover Step-by-Step
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
3) Use third-party software providing clustered storage (active-active) out-of-box.
I would strongly recommend upgrading to R2 and using a shared VHDX.
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Dears,
I have my source cluster on Windows 2008 R2 with SP1, and my new target cluster is Windows 2012 R2.
When I run the copy wizard, it validates whether I can migrate the file server; however, it gives me these yellow warnings:
File Server IP Address is not eligible to be copied. (Failed to read 'EnableDhcp' property)
FileServer01 is not eligible to be copied.
Is this normal?
How do I continue migrating the file server role?
Hi Jean M,
Could you try manually configuring the IPs by referring to the previous 2008 R2 node, and then bring the new file servers online?
More related information:
Best practices for migration of cluster windows 2008 R2 / 2012 - As melhores Praticas para migrar um Cluster de Windows 2008 para Windows 2012
http://blogs.technet.com/b/hugofe/archive/2012/12/06/best-practices-for-migration-of-cluster-windows-2008-r2-2012-as-melhores-praticas-para-migrar-um-cluster-de-windows-2008-para-windows-2012.aspx
How to Move Highly Available (Clustered) VMs to Windows Server 2012 with the Cluster Migration Wizard
http://blogs.msdn.com/b/clustering/archive/2012/06/25/10323434.aspx
Migration Paths for Migrating to a Failover Cluster Running Windows Server 2012 R2
https://technet.microsoft.com/en-us/library/dn530781.aspx
I’m glad to be of help to you!
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected] -
SQL 2014 Cluster to Cluster protection
Hi,
I have 2 physical servers in a cluster with SQL 2014 on the source site, and the same environment of two physical servers on the target site. I want the target cluster to protect the source cluster. The sites are remote from each other. How can I do replication and failover/failback? Can AlwaysOn protect the source cluster to the target cluster? Can it monitor and fail over to the remote cluster? Both clusters are in the same domain.
Thanks
Erez
Please check Symantec Backup Exec 2014; it works with CSV.
Remove the CSV from Failover Cluster Manager, then point the tool at the volumes and see whether it can access them without CSV.
http://www.symantec.com/business/support/index?page=content&id=TECH205833
Hope this helps.
Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach -
ChaRM:cCTS Configuration Import Targets Issue.
Hi ChaRM Experts,
Has anyone configured cCTS for ChaRM in Solution Manager SP10?
I have configured cCTS in our Solution Manager, created clusters for DEV, QAS and PRD, assigned the satellite systems, and then distributed the plug-ins; everything is fine up to this point.
Then I defined transport routes between the clusters: CDV (consolidation) -> CQS (delivery) -> CPD.
Now I am trying to insert the source and target clients for CQS on the Import Targets tab, following these steps:
1) Call transaction STMS and choose Overview -> Transport Routes from the menu
2) Switch to edit mode
3) Double-click a target cluster, e.g. the quality cluster CQS
4) In the dialog box, select tab Import Targets
5) Choose Insert Row -> but the Insert Row button is not visible, so I cannot specify the source and target clients
Could someone please tell me how to make the Insert Row icon visible?
Regards
gsr
Hello,
I can see in your screenshot that no cluster is assigned to the system for which you are trying to assign the import target.
You should verify that the cluster is validated and then try again to assign the import target.
I managed to assign them in SP10 without problems.
Regards
Renato -
Architecture for using MDB with a server cluster
Hi,
I would like to get some advice on the architecture that is the most desirable
for a scenario that I have here.
I have 3 machines, each running a managed server belonging to the same cluster.
My admin server runs on a 4th machine. I have deployed my MDB on the cluster and
I'm trying to find a way to configure the 3 + 1 machines to give the best end-to-end
time for processing the JMS messages.
To investigate this, I'm running some simple benchmark tests with a client application
that sends JMS messages to the system. The client repeatedly sends messages to
the MDB's queue and the MDB puts processed messages to another queue that the
client is listening to.
Now, my question is: where should I set the target for my JMS server (and hence
my queues) and the connection factories? I can think of a few possibilities:
1. Connection factories target: cluster; JMS server target: one of the servers
in the cluster
-> Potential drawback - The server with the JMS server will be handling and redistributing
the JMS messages to other servers in the cluster. This means that a portion of
its processing power is used to do this instead of actually having the MDB process
the JMS message. (Please correct me if I'm wrong.)
2. Connection factories target: cluster; JMS server target: have one JMS server
for each server in the cluster and make use of distributed destinations
-> Potential drawback - My client establishes connection with the MDB's queue
only once before it sends its messages to it. Probably as a result of this and
the way WebLogic clusters load-balance themselves, all the messages end up being
routed to the same server. This option appears to be out since 2 of the 3 servers
are not utilized at all.
3. Connection factories target: admin server; JMS server target: admin server
-> Potential drawback - The MDB has to maintain a queue connection with a server
that is not part of the cluster. (Again, please correct me if I'm wrong.) I'm
not sure if this introduces extra time taken for the MDB to receive its messages
and for it to send the processed messages to the queue.
I'd appreciate it if someone could advice me on the most desirable architecture
to use here. From my understanding of the problem, option 3 seems to be the answer,
but I may be wrong. Perhaps there is no significant difference in terms of performance
that 3 can give, compared to 1 and 2.
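The behavior observed under option 2, where every message lands on the same server, is consistent with load balancing that happens per *connection* rather than per *message*. A toy sketch in plain Java (no WebLogic APIs; the round-robin policy and all names here are illustrative assumptions, not WebLogic's actual implementation) shows the effect:

```java
import java.util.List;

// Toy model: the factory picks a cluster member when a connection is
// created, not when each message is sent. A client that opens one
// connection and reuses it therefore sends every message to one member.
public class ConnectionPinningDemo {
    private final List<String> members;
    private int next = 0; // round-robin cursor, advanced per connection

    public ConnectionPinningDemo(List<String> members) {
        this.members = members;
    }

    // Simulates createConnection(): picks the next member round-robin.
    public String createConnection() {
        String member = members.get(next % members.size());
        next++;
        return member;
    }

    public static void main(String[] args) {
        ConnectionPinningDemo factory = new ConnectionPinningDemo(
                List.of("server1", "server2", "server3"));

        // One connection reused for ten messages: all go to one member.
        String pinned = factory.createConnection();
        for (int i = 0; i < 10; i++) {
            System.out.println("message " + i + " -> " + pinned);
        }

        // A fresh connection per batch spreads the load instead.
        for (int i = 0; i < 3; i++) {
            System.out.println("new connection " + i + " -> "
                    + factory.createConnection());
        }
    }
}
```

If the client establishes its connection once up front and reuses it, every send is pinned to the same member, which would match the observation that two of the three servers sit idle.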
One last question. I'd like to understand, in option 1, if the admin server plays
any part in load-balancing the cluster. Are the JMS messages received on the cluster's
JMS queue forwarded to the admin server before they are rerouted to the server
that is supposed to process it?
Cheers,
C.Y.
> 3. Connection factories target: admin server; JMS server target: admin server
> -> Potential drawback - The MDB has to maintain a queue connection with a server
> that is not part of the cluster. (Again, please correct me if I'm wrong.) I'm
> not sure if this introduces extra time taken for the MDB to receive its messages
> and for it to send the processed messages to the queue.
Admin server is not supposed to participate in the cluster. I wouldn't
deploy anything on the admin server.
I think my personal preference would be to target the connection factories to the
cluster and use distributed destinations.
Regards...
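That recommendation (connection factory targeted to the cluster, one JMS server per managed server, and a distributed destination spanning them) might look roughly like the following in a WebLogic 8.1-style config.xml. All names here are hypothetical, and the element/attribute shapes are from memory; generate the real entries through the admin console rather than hand-editing config.xml:

```xml
<!-- Illustrative sketch only; verify against a console-generated config.xml. -->
<JMSConnectionFactory Name="MyCF" JNDIName="jms/MyCF" Targets="MyCluster"/>

<!-- One JMS server (and member queue) per managed server in the cluster -->
<JMSServer Name="JMSServer1" Targets="server1">
  <JMSQueue Name="QueueOnServer1" JNDIName="jms/QueueOnServer1"/>
</JMSServer>
<JMSServer Name="JMSServer2" Targets="server2">
  <JMSQueue Name="QueueOnServer2" JNDIName="jms/QueueOnServer2"/>
</JMSServer>

<!-- The distributed destination that the client and MDB actually look up -->
<JMSDistributedQueue Name="MyDistQueue" JNDIName="jms/MyQueue">
  <JMSDistributedQueueMember Name="Member1" JMSQueue="QueueOnServer1"/>
  <JMSDistributedQueueMember Name="Member2" JMSQueue="QueueOnServer2"/>
</JMSDistributedQueue>
```

With this layout the client looks up jms/MyQueue, and the distributed destination spreads new connections across the member queues instead of funneling everything through a single JMS server.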