Cluster with one StorEdge

Is there any possibility to run Sun Cluster with a single StorEdge array?
If yes, please post a link to some documentation about the configuration.
Scenario:
<node0> ----scsi----<storedge3120>-----<node1>
|_____|
Thanks a lot

Hi,
This can be done; have a look at the following:
http://docs.sun.com/source/816-7956-13/ch04_cable.html#10046
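For a two-node cluster with a single shared array you will also need a quorum device on one of its disks. A minimal sketch (Sun Cluster 3.2 CLI; the DID device name d4 is only an example):
# list the DID devices the shared array presents to both nodes
cldevice list -v
# pick a shared disk and configure it as the quorum device
clquorum add d4
# verify the vote count
clquorum status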
HTH
Tom

Similar Messages

  • Cluster with one 2 Node RAC and a Single Instance using ASM

    Hi there,
    I am not sure about a planned installation and want to ask whether I am on the right track.
    Some Facts:
    Clusterware 11g
    ASM 11g
    Database 10gR2
    AIX 5.3
    3 Machines
    2 Storages DS4700
    My Plan
    On Node 1 and Node 2 we install a RAC database for an ERP software.
    On Node 3 we install a single-instance database for a logistics software.
    So I would install Clusterware and a three-instance ASM cluster on all three nodes.
    I would create 2 disk groups, one for the FRA and one for the data, both on LUNs on the DS4700s.
    The RAC database and the logistics database would use the same disk groups.
    Is this the way to go under these circumstances?
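    For reference, creating the two shared disk groups might look roughly like this (a sketch only: device paths are placeholders, and external redundancy assumes the DS4700 LUNs already provide RAID protection):
    # run once from any node, connected to the ASM instance
    echo "CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK '/dev/rhdisk10', '/dev/rhdisk11';" | sqlplus -s "/ as sysasm"
    echo "CREATE DISKGROUP fra EXTERNAL REDUNDANCY DISK '/dev/rhdisk12';" | sqlplus -s "/ as sysasm"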
    The alternative, as far as I see, is:
    Clusterware on all 3 servers
    One 2-node ASM cluster for the ERP software
    One single-node ASM for the logistics software
    4 disk groups, because with 2 separate ASM setups I need 2 for the RAC and 2 for the single instance.
    Please give me some hints on which way I should prefer.
    My tendency is towards the first alternative. I like the idea of sharing the disk groups across more than one database because of easier administration.
    The load of the 2 databases is completely different; the logistics software will do nearly nothing compared to the ERP software, so this shouldn't be a problem.
    But maybe I am overlooking something, so please do not hesitate to tell me if I am completely wrong ;)
    Thanks a lot
    Jörg

    Chris Slattery wrote:
    why clusterware on 3rd machine? I'd have separate DGs, but that's just me.
    If you wish to install ASM you need OCS (Oracle Clusterware) installed on the machine, even if it is just one node. It is a kind of dependency: no OCS, no ASM.
    cu
    Jörg

  • Unity cluster with one node using name and the other using IP

    Hey Guys,
    I have two Unity Connection boxes set up in a cluster; one is known by name and the other by IP address. I'd like to change it so that both use the IP address. The node in the cluster that is using the node name is the publisher. Can I simply go into the cluster settings and change that to an IP address, or will that cause big issues?
    Thanks,
    BR

    Hi Brent,
    Is there any specific reason why you would want this kind of configuration? (Just curious to know.)
    I tried doing the same on my Unity Connection cluster (version 8.5(1) SU2) in the lab.
    After making the change on the cluster settings page for the Pub, I did the following:
    1) Switched the primary role back and forth between the Pub and the Sub: SUCCESS
    2) Checked the service status on the Pub and Sub using "utils service list": ALL GOOD
    3) Rebooted the Pub followed by the Sub and performed tests 1 and 2 again: SUCCESS
    4) Checked the output of "show cuc cluster status" on the Pub and Sub again: SUCCESS
    However, I could not find any document that says this change is really required under any circumstances.
    I hope that helps!!!
    Regards,
    Saurabh Agnihotri

  • Sun Cluster with three nodes

    I need a manual or advice for introducing a third node into a RAC with Sun Cluster. I don't know whether quorum votes readjust automatically or whether I have to add new quorum votes manually, whether I have to add a third mediator in SVM ... etc.
    Many thanks, and sorry for my English.

    After you have added your nodes to the cluster, you will need to expand the RG's node list to include the new nodes if you need the RG to run on them. This is not automatic. Something like:
    # clrg set -n <nodelist> <rg_name>
    Is what you need.
    I'm not sure I understand what you said about the quorum count. Only nodes and quorum devices (QD) or quorum servers (QS) get a vote; cabinets do not. So each node gets a vote, and a QD/QS gets a vote count equal to the number of nodes it connects to minus 1. Thus with a two-node cluster you have 3 votes with one QD. With a 4-node cluster with one fully connected QD/QS, you have 7 votes (after re-adding it).
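    For example, with placeholder names (adjust the RG, node, and device names to your cluster):
    # let the RG run on the new third node as well
    clrg set -n node1,node2,node3 rac-rg
    # remove and re-add the quorum device so its vote count reflects the new connectivity
    clquorum remove d4
    clquorum add d4
    clquorum status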
    Hope that helps,
    Tim
    P.S. <shameless plug> I can recommend a good book on the product: "Oracle Solaris Cluster Essentials" ;-)

  • Cluster with only 2 NICs, is it possible?

    I am trying to set up a non-production 2-node cluster; the problem is I only have 2 NIC ports available since I can't spend any money on it. Is it possible to work around that and get the second node to join the cluster with one public and one private interface? I'm not having any luck and am curious if anyone has done the same and knows of some sort of hack to get around it. When node 2 first boots and tries to join the cluster, it is unable to communicate over the cluster interconnect. I understand this isn't the desired setup and voids the whole HA scenario, but this is just a proof of concept before moving a new production app over to SC3.2, which will not have this limitation.
    Thanks in advance for any input.

    A little more research into the interconnect not working turned up two common causes that people have discovered: ipfilter and kernel patch 138888-01. I am not using ipfilter (the service is disabled), and as I understand it, kernel patch 138888-01 and above only cause this issue if connected via a switch. My cluster nodes are connected directly by a crossover cable. Is this still possibly my issue, or should I keep digging? One node is a T2000 with e1000g interfaces, so that also lines up.
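    For anyone hitting the same thing, a few basic checks can confirm whether the crossover link itself is passing traffic (a sketch only; the interface name is just an example):
    # on each node: link state, speed and duplex of the private NIC
    dladm show-dev e1000g1
    # watch for the peer's packets on the private interface
    snoop -d e1000g1
    # the cluster's own view of the transport paths
    clinterconnect status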
    Thanks again..

  • Cluster with Oracle one

    Hi,
    Is it possible to do a Windows cluster with Oracle Database 11g Standard Edition One?
    thanks...

    Hi,
    is it possible to do a windows cluster with Oracle Database 11g Standard Edition One?
    AFAIK, yes you can, but with certain limitations.
    Refer to the MOS tech note and check all the questions related to Standard Edition:
    *RAC: Frequently Asked Questions [ID 220970.1]*
    thanks,
    X A H E E R

  • WLS Cluster with Message Driven Beans and MQSeries on more than one Host

              With the examples at http://developer.bea.com/jmsproviders.jsp and http://developer.bea.com/jmsmdb.jsp
              an MDB can be configured to work with MQSeries on one WLS server. This works only if a queue manager
              is started on the same host that runs the WLS server, and the QueueConnectionFactory (QCF) is
              configured with TRANSPORT(BIND).
              In my configuration there should be two WLS servers and one JMS queue (MQSeries) with the queue manager.
              A Message Driven Bean is deployed on both WLS servers and should get the messages from this queue.
              If one of the two WLS servers fails, the other WLS server with the corresponding MDB should get the
              messages from the MQSeries queue.
              If the QCF is configured with TRANSPORT(CLIENT), the Message Driven Bean can't start and the following
              exception is thrown:
              <Jul 18, 2001 3:52:49 PM CEST> <Error> <J2EE> <Error deploying EJB Component : mdb_deployed
              weblogic.ejb20.EJBDeploymentException: Error deploying Message-Driven EJB:; nested exception is:
              javax.jms.JMSException: MQJMS2005: failed to create MQQueueManager for 'btsun1a:TEST'
              javax.jms.JMSException: MQJMS2005: failed to create MQQueueManager for 'btsun1a:TEST'
              at com.ibm.mq.jms.services.ConfigEnvironment.newException(ConfigEnvironment.java:434)
              I am wondering about this because there is an MQQueueManager on btsun1a; both servers throw the same
              exception when the MDB is deployed.
              The configuration of JMSAdmin on both hosts is the following:
              dis qcf(myQCF2)
              HOSTNAME(btsun1a)
              CCSID(819)
              TRANSPORT(CLIENT)
              PORT(1414)
              TEMPMODEL(SYSTEM.DEFAULT.MODEL.QUEUE)
              QMANAGER(TEST)
              CHANNEL(JAVA.CHANNEL)
              VERSION(1)
              dis q(myQueue)
              CCSID(819)
              PERSISTENCE(APP)
              TARGCLIENT(JMS)
              QUEUE(MYQUEUE)
              EXPIRY(APP)
              QMANAGER(TEST)
              ENCODING(NATIVE)
              VERSION(1)
              PRIORITY(APP)
              I think only TRANSPORT(CLIENT) can be used when I don't want to install a queue and a queue manager
              on each WLS server.
              Does anybody know of a problem in WLS 6.0 SP2 coping with TRANSPORT(CLIENT)?
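              (For what it's worth, MQJMS2005 with TRANSPORT(CLIENT) often just means the client cannot reach the
              queue manager over TCP at all; a hedged sketch of the usual checks on the MQ host, assuming a standard
              MQSeries install:)
              # on btsun1a: a TCP listener must be running on the port named in the QCF
              runmqlsr -t tcp -m TEST -p 1414 &
              # and the server-connection channel named in the QCF must exist
              echo "DISPLAY CHANNEL(JAVA.CHANNEL)" | runmqsc TEST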
              


  • Strange behavior with Quick Cluster and one shared machine

    Hi, I've been struggling for a while with my compressor 4.0.7 distributed processing setup... Here is my configuration:
    1 Mac Mini running as a Quick Cluster with services (3 instances of compressor shared)
    1 Macbook Pro running Services Only (7 instances of compressor shared)
    1 Macbook Pro (much older) running Services Only (2 instances of compressor shared)
    All 3 systems are running Mavericks, but I had this same problem when all 3 were on Mountain Lion. I had to trash the app and all of the prefs in Library/Application Support/Apple Qmaster and Compressor and reinstall to get the app running on all of the upgraded machines, so a fresh start didn't fix this issue.
    The Mac Mini and the older MacBook Pro can see the cluster, submit jobs, and are both utilized for rendering when jobs are submitted from the Mini or the older MacBook Pro.
    The newer MBP can see the cluster in Share Monitor and in Qadministrator, but cannot see any of the jobs in the history (the other MacBook Pro can see all details identically to the Mac Mini). When a job gets submitted from this system it appears as "Not Available" in the Mac Mini Share Monitor and it only utilizes 1 local process to do the rendering, which stalls out after about 1 minute. Activity Monitor shows all 7 instances of compressord are running and not frozen but have no activity.
    Jobs submitted from the Mini and older MBP attempt to use the newer MBP for distributed rendering but stall out after about 30 seconds with a host error. The shared volume never appears on the newer MBP. Qadministrator on the Mini can see the newer MBP and all of the listed services as available.
    Now here is the part that really blows my mind:
    After submitting a job to the cluster from the newer MBP, which will stall out and need to be cancelled as mentioned above; submitting a job from the mini will actually successfully use the services on the newer MBP. Share monitor on the newer MBP still does not display any jobs on the server cluster. Rebooting the newer MBP puts me right back in the "I won't play with those other macs" tantrum.
    Anyone else see this issue and have a fix for it? Workarounds are nice but this is very, very annoying when I get into crunch time.

    Do you have log files?
              - Prasad
              Chris Dempsey wrote:
              > We have 2 WebLogic 4.5.1 servers in a cluster with none of the Service
              > Packs installed. When a client uses the deployed entity beans or
              > servlets they work every other time. The times they do not work nothing
              > happens. No exceptions, no responses to the client ( i.e. HTTP 404s ),
              > nothing. I suspect something in the cluster setup since we do not have
              > these same problems on non-clustered entity beans or servlets. We have
              > made sure all the entity beans have the Shared Database flag set on and
              > added the delayUpdatesUntilEndOfTx false to the enviroment of the DD.
              > That didn't fix the problem. Any ideas?
              >
              > Thanks in advance,
              > Dallas Dempsey
              > DEM - Houston, TX
              

  • Persistent Store Problems for MYSQL Enhanced Cluster With OpenMQ 4.4

    I am trying to implement an enhanced cluster with failover. I have edited the config files for each broker instance for a persistent store. I have appended the following to each of the config.properties files:
    imq.brokerid=myclusterinstanceINSTANCE1 # I substitute INSTANCE2 for INSTANCE1 for broker #2
    imq.persist.store=jdbc
    imq.persist.jdbc.dbVendor=mysql
    imq.persist.jdbc.mysql.property.url=jdbc:mysql://xxx.xxx.xxx.xx:3306/test
    imq.persist.jdbc.mysql.user=user1
    imq.persist.jdbc.mysql.needpassword=true
    imq.persist.jdbc.mysql.password=mypass
    imq.cluster.ha=true
    imq.cluster.clusterid=mycluster
    imq.cluster.brokerlist=xxx.xxx.xxx.x:37676,yyy.yyy.yyy.y:37676
    I then create the persistence storage with "imqdbmgr create tbl". When I view the data in the tables it creates, I have one row. Under Store_Version, I have 410. Under LOCK_ID, it has NULL. When I go to start the brokers with imqbrokerd, I get the following error:
    ERROR [B3198]: Error initializing cluster manager:
    com.sun.messaging.jmq.jmsserver.util.BrokerException: [B4239]: Failed to load persistent store version from database table MQVER41Cmycluster
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.VersionDAOImpl.getStoreVersion(VersionDAOImpl.java:310)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.DBTool.updateStoreVersion410IfNecessary(DBTool.java:350)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.JDBCStore.checkStore(JDBCStore.java:3599)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.JDBCStore.<init>(JDBCStore.java:127)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at java.lang.Class.newInstance0(Class.java:355)
    at java.lang.Class.newInstance(Class.java:308)
    at com.sun.messaging.jmq.jmsserver.persist.StoreManager.getStore(StoreManager.java:157)
    at com.sun.messaging.jmq.jmsserver.Globals.getStore(Globals.java:967)
    at com.sun.messaging.jmq.jmsserver.cluster.ha.HAClusterManagerImpl.initialize(HAClusterManagerImpl.java:181)
    at com.sun.messaging.jmq.jmsserver.Globals.initClusterManager(Globals.java:903)
    at com.sun.messaging.jmq.jmsserver.Broker._start(Broker.java:777)
    at com.sun.messaging.jmq.jmsserver.Broker.start(Broker.java:410)
    at com.sun.messaging.jmq.jmsserver.Broker.main(Broker.java:1971)
    Caused by: java.lang.NullPointerException
    at com.mysql.jdbc.ResultSetImpl.findColumn(ResultSetImpl.java:1103)
    at com.mysql.jdbc.ResultSetImpl.getInt(ResultSetImpl.java:2777)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.VersionDAOImpl.getStoreVersion(VersionDAOImpl.java:298)
    ... 16 more
    I believe this error is attributed to the NULL value under LOCK_ID. I think that the value under LOCK_ID should be the name of the broker from the config file (even though I specified them in the config files). Any ideas?? THANKS!

    Just some pointers -- maybe this will be of use:
    If you haven't already read it, please take a look at the [ MySQL setup guide|https://mq.dev.java.net/OpenMQ_MySQLCluster_Setup_Guide.html] .
    We recommend using the NDB data store of MySQL Cluster, though this isn't an absolute requirement. Due to some issues we have found with earlier versions, we recommend using MySQL Cluster 7.0.9 or better (the current versions are 7.0.16 and 7.1.5). Either of these contains Connector/J.
    I'd also recommend using the latest version -- MQ 4.4update2 (just in case you happen to have an older copy). There were many minor improvements in the integration with MySQL from the original 4.4 release, to update 2. This is linked at the MQ download page: [https://mq.dev.java.net/downloads.html]
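    If the store tables were created before all of the HA properties (imq.cluster.ha, imq.brokerid, imq.cluster.clusterid) were in place, it may also be worth rebuilding them; a rough sketch, run with each broker instance's configuration in effect:
    # drop and recreate the JDBC store tables now that the HA properties are set
    imqdbmgr delete tbl
    imqdbmgr create tbl
    # then start each broker against the shared store (instance name is an example)
    imqbrokerd -name INSTANCE1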

  • Can we install a new MSSQL cluster on the same Windows cluster which already contains an MSSQL cluster with a named instance

    We have an MSSQL 2008 R2 Enterprise Edition two-node active/passive failover cluster running on a 2008 R2 Windows cluster without any issues.
    Now my question is: can we add one more MSSQL cluster instance to the same setup without disturbing the existing one?
    Also, please give your thoughts on load sharing, as the second node is mostly idle now except in failover scenarios.
    The reason we are going for this is the collation setting, which can be set only once per instance (changing the database collation setting did not work); we need a different default collation for the new setup.

    Hi,
    >>Now my question is: can we add one more MSSQL cluster instance to the same setup without disturbing the existing one?
    Yes, it is possible. You need to add new drives as cluster-aware, install SQL Server, and put the data and log files on those drives. You would need to create a named instance of SQL Server and create a different resource group. Both the old installation and the new one would work separately.
    >>Also, please give your thoughts on load sharing, as the second node is mostly idle now except in failover scenarios.
    Good point indeed. You are about to create a multi-instance cluster and should plan for the scenario where one node is down and the other node is handling the load for both instances. Memory and CPU should be enough to handle the load.
    >>The reason we are going for this is the collation setting, which can be set only once per instance; we need a different default collation for the new setup.
    Installing a new instance just for collation seems a little weird to me. You can manage collation at the column, database, and server level.
    http://technet.microsoft.com/en-us/library/aa174903(v=sql.80).aspx
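    For example, a different collation can be set per database or per column without a second instance (a sketch only; server, database, table, and collation names are made up):
    # database-level default collation for the new application
    sqlcmd -S "MYSERVER\NEWINSTANCE" -E -Q "CREATE DATABASE LogisticsDB COLLATE Latin1_General_CI_AS"
    # or override the collation on an individual column
    sqlcmd -S "MYSERVER\NEWINSTANCE" -E -Q "ALTER TABLE LogisticsDB.dbo.Customers ALTER COLUMN CustomerName nvarchar(100) COLLATE SQL_Latin1_General_CP1_CI_AS"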
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • [SOLVED] Can't add a node to the cluster with error (Exchange 2010 SP3 DAG Windows Server 2012)

    Hi there!
    I have a problem which makes me very angry already :)
    I have two Exchange 2010 SP3 servers with the MB role running on Windows Server 2012. I decided to create a DAG.
    I have created the prestaged AD object for the cluster called msc-co-exc-01c, assigned the necessary permissions and disabled it. I allowed traffic between the nodes through the Windows Firewall and prepared the File Share Witness server.
    Then I tried to add the nodes. The first node was added successfully, but the second node doesn't want to be added :). Now I can add only one node to the DAG. I tried adding different servers first, but only the first one was added.
    Logs on the second node:
    Application Log
    "Failed to initialize cluster with error 0x80004005." (MSExchangeIS)
    Failover Clustering Diagnostic Log
    "[VER] Could not read version data from database for node msc-co-exc-04v (id 1)."
    CMDLET Error:
    Summary: 1 item(s). 0 succeeded, 1 failed.
    Elapsed time: 00:06:21
    MSC-CO-EXC-02V
    Failed
    Error:
    A database availability group administrative operation failed. Error: The operation failed. CreateCluster errors may result from incorrectly configured static addresses. Error: An error occurred while attempting a cluster operation. Error: Cluster API '"AddClusterNode()
    (MaxPercentage=100) failed with 0x5b4. Error: This operation returned because the timeout period expired"' failed. [Server: msc-co-exc-04v.int.krls.ru]
    An Active Manager operation failed. Error An error occurred while attempting a cluster operation. Error: Cluster API '"AddClusterNode() (MaxPercentage=100) failed with 0x5b4. Error: This operation returned because the timeout period expired"' failed..
    This operation returned because the timeout period expired
    Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.3.174.1&t=exchgf1&e=ms.exch.err.ExC9C315
    Warning:
    Network name 'msc-co-exc-01c' is not online. Please check that the IP address configuration for the database availability group is correct.
    Warning:
    The operation wasn't successful because an error was encountered. You may find more details in log file "C:\ExchangeSetupLogs\DagTasks\dagtask_2014-11-17_13-54-56.543_add-databaseavailabiltygroupserver.log".
    Exchange Management Shell command attempted:
    Add-DatabaseAvailabilityGroupServer -MailboxServer 'MSC-CO-EXC-02V' -Identity 'msc-co-exc-01c'
    Elapsed Time: 00:06:21
    UPD:
    When the Exchange servers run on the same Hyper-V node, the DAG works well, but if I move one of the VMs to another node, it stops working.
    I installed Wireshark and captured traffic on the cluster interface. When the DAG members are on the same HV node, there is inbound and outbound traffic on the cluster interface, but if I move one of the DAG members to another node, I see only outbound traffic in Wireshark on both nodes.
    This confuses me, because there is normal connectivity between these DAG members through the main interface.
    Please, help me if you can.

    Hi, Jared! Thank you for the reply.
    Of course I did it already :) I have new info:
    When the Exchange servers run on the same Hyper-V node, the DAG works well, but if I move one of the VMs to another node, it stops working.
    I installed Wireshark and captured traffic on the cluster interface. When the DAG members are on the same HV node, there is inbound and outbound traffic on the cluster interface, but if I move one of the DAG members to another node, I see only outbound traffic in Wireshark on both nodes.
    This confuses me, because there is normal connectivity between these DAG members through the main interface.

  • Array of Cluster with event structure

    Hi,
    I have an array of clusters, with each cluster containing 1 string control, 1 combo box, 1 LED control, and 2 numeric controls. In the combo box there are two options to select ('Binary' and 'PWM'). Whenever Binary is selected, the LED control has to be enabled, and whenever PWM is selected, the 2 numeric controls have to be enabled.
    Is there any way to do this?
    Please help me...
    Regards
    Meenatchi

    Actually, in my application the front panel has to have 16 rows of controls (1 string control, 1 combo box, 1 LED control, 2 numeric controls), so I planned to keep all those controls in a cluster and create one array. If I didn't, I would have 16x5 controls on my front panel and would have to put in 16 event cases, one for each combo box, to enable and disable the controls.
    Is there any simple way to do this? I have attached my front panel view.
    Attachments:
    Untitled10.vi (139 KB)

  • Having an issue starting a WebLogic cluster with a Tangosol cluster

    Hi,
    Oracle Coherence Version 3.3.1/389p1
    Grid Edition: Development mode
    We are using Weblogic 8.1.5 with Tangosol 3.3.1 on Linux servers.
    We added initialization logic in the servlet's init() method to get all NamedCaches and put them into the ServletContext.
    When we start the WebLogic cluster, the first WebLogic member starts up successfully with the following messages:
    <Nov 7, 2007 10:12:30 AM EST> <Info> <HTTP> <BEA-101047> <[ServletContext(id=259640596,name=clusterqa,context-path=)] initObjects: init>
    2007-11-07 10:12:31.565 Oracle Coherence 3.3.1/389p1 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from resource "zip:/home/server/clusterqa/wls81/DOCVIEW/docqa1/.wlnotdelete/extract/docqa1_DOC_clusterqa/jarfiles/WEB-INF/lib/coherence.jar!/tangosol-coherence.xml"
    2007-11-07 10:12:31.598 Oracle Coherence 3.3.1/389p1 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from file "/home/www/WEB-INF/lib/tangosol-coherence-override.xml"
    Oracle Coherence Version 3.3.1/389p1
    Grid Edition: Development mode
    Copyright (c) 2000-2007 Oracle. All rights reserved.
    2007-11-07 10:12:31.938 Oracle Coherence GE 3.3.1/389p1 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from file "/home/www/WEB-INF/lib/pub-search-cache-config.xml"
    2007-11-07 10:12:31.983 Oracle Coherence GE 3.3.1/389p1 <Info> (thread=Main Thread, member=n/a): sun.misc.AtomicLong is not supported on this JVM; using a synchronized counter. Though safe to ignore, you may upgrade to BEA's 1.5 JVM to fix this issue.
    2007-11-07 10:12:33.267 Oracle Coherence GE 3.3.1/389p1 <Warning> (thread=Main Thread, member=n/a): UnicastUdpSocket failed to set receive buffer size to 1428 packets (2096304 bytes); actual size is 89 packets (131071 bytes). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.
    2007-11-07 10:12:34.118 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a
    2007-11-07 10:12:37.508 Oracle Coherence GE 3.3.1/389p1 <Info> (thread=Cluster, member=n/a): Created a new cluster with Member(Id=1, Timestamp=2007-11-07 10:12:33.323, Address=10.5.176.86:8088, MachineId=48982, Edition=Grid Edition, Mode=Development, CpuCount=4, SocketCount=2) UID=0x0A05B056000001161AAB782BBF561F98
    2007-11-07 10:12:37.736 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2007-11-07 10:12:38.168 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
    <Nov 7, 2007 10:12:38 AM EST> <Info> <HTTP> <BEA-101047> <[ServletContext(id=259640596,name=clusterqa,context-path=)] xslProcessor: init>
    But when trying to start the second WebLogic member server, the startup process gets stuck after the Tangosol cache initialization and the second member server never comes up. Please see the following messages:
    <Nov 7, 2007 9:49:38 AM EST> <Info> <HTTP> <BEA-101047> <[ServletContext(id=153019550,name=clusterqa,context-path=)] initDSNames: init>
    <Nov 7, 2007 9:49:42 AM EST> <Info> <HTTP> <BEA-101047> <[ServletContext(id=153019550,name=clusterqa,context-path=)] initObjects: init>
    2007-11-07 09:49:43.156 Oracle Coherence 3.3.1/389p1 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from resource "zip:/home/server/clusterqa/wls81/DOCVIEW/docqa2/.wlnotdelete/extract/docqa2_DOC_clusterqa/jarfiles/WEB-INF/lib/coherence.jar!/tangosol-coherence.xml"
    2007-11-07 09:49:43.188 Oracle Coherence 3.3.1/389p1 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from file "/home/www/WEB-INF/lib/tangosol-coherence-override.xml"
    Oracle Coherence Version 3.3.1/389p1
    Grid Edition: Development mode
    Copyright (c) 2000-2007 Oracle. All rights reserved.
    2007-11-07 09:49:43.528 Oracle Coherence GE 3.3.1/389p1 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from file "/home/www/WEB-INF/lib/pub-search-cache-config.xml"
    2007-11-07 09:49:43.571 Oracle Coherence GE 3.3.1/389p1 <Info> (thread=Main Thread, member=n/a): sun.misc.AtomicLong is not supported on this JVM; using a synchronized counter. Though safe to ignore, you may upgrade to BEA's 1.5 JVM to fix this issue.
    2007-11-07 09:49:44.829 Oracle Coherence GE 3.3.1/389p1 <Warning> (thread=Main Thread, member=n/a): UnicastUdpSocket failed to set receive buffer size to 1428 packets (2096304 bytes); actual size is 89 packets (131071 bytes). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.
    2007-11-07 09:49:45.419 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a
    2007-11-07 09:49:45.555 Oracle Coherence GE 3.3.1/389p1 <Info> (thread=Cluster, member=n/a): Failed to satisfy the variance: allowed=16, actual=47
    2007-11-07 09:49:45.555 Oracle Coherence GE 3.3.1/389p1 <Info> (thread=Cluster, member=n/a): Increasing allowable variance to 19
    2007-11-07 09:49:46.040 Oracle Coherence GE 3.3.1/389p1 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2007-11-07 09:49:45.69, Address=10.5.176.85:8088, MachineId=48981, Edition=Grid Edition, Mode=Development, CpuCount=4, SocketCount=2) joined cluster with senior Member(Id=1, Timestamp=2007-11-07 09:45:10.205, Address=10.5.176.86:8088, MachineId=48982, Edition=Grid Edition, Mode=Development, CpuCount=4, SocketCount=2)
    Could you please explain why this happens, and what I should do to resolve this issue?
    Many Thanks,
    Bing

    Hi, Gene
    Thank you for the response. I will send you our full log files and thread dumps.
    I just want to give you more details about our cases :
    1. This only happens when the cache servers (com.tangosol.net.DefaultCacheServer) are not started.
    2. Our application, which runs on the WebLogic cluster, just calls "CacheFactory.getCache("XXX")" and runs as a Tangosol DataClient (see the property sketch below).
    3. All WebLogic member servers come up successfully if our cache servers are up and running.
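    Just to document how the WebLogic JVMs are kept storage-disabled in this kind of setup (a sketch of the commonly used Coherence 3.x system property, not necessarily our exact configuration):
    # added to each WebLogic member's start script
    JAVA_OPTIONS="${JAVA_OPTIONS} -Dtangosol.coherence.distributed.localstorage=false"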
    Also I tried to test another case:
    Suppose all WebLogic instances and cache server instances are up and running. Now I try to restart (kill the WebLogic instance process and restart it) one of the WebLogic members. It only comes up successfully if I add some sleep time between killing the WebLogic process and restarting it. It looks like the Tangosol cluster needs a certain amount of time to become aware that the member has left the cluster; then the restart succeeds.
    Questions :
    1. Should we start our WebLogic cluster only after the cache server cluster is up and running?
    2. How do we decide how long to wait before starting a new process to join the cache cluster?
    Could you please help explain this for us, and let us know if there is anything we can do to avoid the problem.
    Many Thanks !!!
    Bing

  • Round Robin was not happening for my cluster with WebLogic Proxy Plugin

              Hi,
              I configured my cluster with the software load balancer, HTTPClusterServlet. By default it load-balances
              with the Round Robin algorithm. That means one HTTP request goes to server1, and the next HTTP request
              goes to server2. However, that is not what I see, whether the requests are in one HTTP session or not.
              Say I open two browsers and log into my application with two different users, one "cyang", the other
              "xpression". Then the HTTP requests (for servlets/JSPs) from both browsers always go to the same
              server1; server2 is not invoked at all.
              I did see, one time, with only one session (one browser with the "xPression" user logged in), that most
              requests went to server1, then suddenly I was brought back to the login page, and I noticed that the
              requests for "xPression" had moved to server2 although server1 was still alive. Therefore, at most, I
              can say it is "Random" rather than "Round Robin".
              What is the real meaning of the HTTP servlet/JSP load balancing algorithm? Does Round Robin mean
              requests go to each server in turn? Does it have to be across different sessions, or can it happen
              within one session?
              

    It should be sticky. If not, then bug / config error.
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com/coherence.jsp
              Tangosol Coherence: Clustered Replicated Cache for Weblogic
              "Carole Yang" <[email protected]> wrote in message
              news:[email protected]...
              >
              > Thanks. Yeh, with two kind of browsers, I do see the request goes to
              different
              > servers.
              >
              > That goes back to the original question. Does "Round Robin" here fall
              into the
              > scope of a HTTP session. However, that is not always true for my tests.
              Sometimes,
              > HTTP request just randomly goes to another server while in the same HTTP
              session.
              > It is not sticky to one particular server during one session.
              >
              >
              >
              > --Carole
              >
              >
              > "Cameron Purdy" <[email protected]> wrote:
              > >Has to be different sessions to go to different machines.
              > >
              > >Best way to test is to run one session in IE and the other in Mozilla
              > >or
              > >Netscape.
              > >
              > >Peace,
              > >
              > >Cameron Purdy
              > >Tangosol, Inc.
              > >http://www.tangosol.com/coherence.jsp
              > >Tangosol Coherence: Clustered Replicated Cache for Weblogic
              > >
              > >
              > >"Carole Yang" <[email protected]> wrote in message
              > >news:[email protected]...
              > >>
              > >> Hi,
              > >>
              > >> I configured my cluster with software load balancer,
              HTTPClusterServlet.
              > >By default,
              > >> it is load balancing with Round Robin Algorithm. That means one HTTP
              > >request
              > >> goes to server1, and the other HTTP request goes to server2. However,
              > >it
              > >is not
              > >> what I can see no matter for the requests in one HTTP session or not.
              > >>
              > >> Say I open two browser, and log into my application with two different
              > >users,
              > >> one is "cyang", the other is "xpression". Then the HTTP request (for
              > >servlet/jsp)
              > >> from two browsers always go to the same server1, server2 is not invoked
              > >at
              > >all.
              > >>
              > >> I did see one time, with only one session (one browser with "xPression"
              > >user log
              > >> in), the most requests go to server1, suddenly I am brough into log
              > >in
              > >page, then
              > >> I noticed that the request for "xPression" moved to server2 although
              > >server1 is
              > >> still alive. Therefore, at most, I can say it is "Random", rather
              > >than
              > >"Round
              > >> Robin".
              > >>
              > >> What is the real meaning for HTTP servlet/jsp load balancing algorithm?
              > >Does
              > >> Round Robin mean request go to each server in turns? Does it have
              > >to be
              > >different
              > >> session or it can be within one session?
              > >
              > >
              >
              

  • Oracle RAC + Clusterware and another Cluster with Clusterware for SAP

    Hi,
    I have some questions about implementation of Oracle RAC and Clusterware with SAP
    For example, an architecture with 4 servers (2 physical and 2 virtual).
    I would like to know if I can do this:
    2 servers for the first cluster.
    The first cluster is with Clusterware and Oracle RAC.
    This is for the whole SAP Oracle database environment.
    I think there is no problem here.
    Now, with the 2 other servers, I would like to make another cluster (also with Clusterware) for the SAP Central Services (SCS) and the Enqueue Replication Server (ERS).
    Because the whole architecture is for only one SAP environment with separate services:
    1 for the database (cluster 1)
    1 for the Central Services (cluster 2, virtual machines)
    1 for the dialog instance (no cluster)
    To be clear, the second cluster is to provide HA for the SAP central services (SCS and ERS).
    My 2 questions are:
    Is it a good approach, or is there anything wrong with it?
    Do I have to install another Clusterware for these 2 servers, or can I do something with the existing Clusterware + Oracle RAC?
    Thank you very much for you help
    Edited by: user12395221 on 29 déc. 2009 15:36

    Hi Givre,
    have you checked "Providing High Availability for SAP Resources" (http://www.oracle.com/technology/products/database/clusterware/pdf/sap-availability-on-rac-twp.pdf), available on otn.oracle.com/clusterware? Not being an SAP expert myself, I still think this paper describes, at least partially, the configuration you are trying to set up.
    Just an idea. Thanks,
    Markus
