Job scheduling in two WL clusters

Hi,
I read following article on using WebLogic Server Clustered Timer:
http://blogs.oracle.com/jamesbayer/2009/04/a_simple_job_scheduler_example.html
As per this article, job scheduling works fine with multiple managed servers in a single WL cluster.
Could someone advise whether the same scheme will work when I have two active WL clusters (each having 4 managed servers), both with active connections to a common database? Or will there be any locking/data corruption/other issues?
Thanks
KT


Similar Messages

  • Can CUBE register with two CUCM clusters?

    We have two CUCM clusters - one is in US and one is in Australia. Currently CUBE is registered with US Cluster with the settings below -
    sccp local GigabitEthernet0/0
    sccp ccm 10.10.1.21 identifier 2 priority 2 version 7.0
    sccp ccm 10.10.1.20 identifier 1 priority 1 version 7.0
    sccp
    Now we need CUBE to communicate with Australia CUCM. Should we set sccp up for Australia CUCM cluster (version 6.0)?
    Thanks,
    Jessica Wang

    I think you are referring to registering media termination points / transcoders on the same router with two different CUCM clusters, correct?
    If yes, we can do it by creating separate sccp ccm groups.
    Example:
    sccp ccm identifier 1 version 7.0
    sccp ccm identifier 2 version 7.0
    sccp ccm identifier 3 version 7.0
    sccp ccm identifier 4 version 7.0
    sccp ccm group 1
     associate ccm 1 priority 1
     associate ccm 2 priority 2
     associate profile 1 register Transcoder1
    sccp ccm group 2
     associate ccm 3 priority 1
     associate ccm 4 priority 2
     associate profile 2 register Transcoder2
    dspfarm profile 1 transcode
     codec g729r8
     codec g711ulaw
     codec g711alaw
     codec g729ar8
     codec g729abr8
     maximum sessions 5
     associate application SCCP
    dspfarm profile 2 transcode
     codec g729r8
     codec g711ulaw
     codec g711alaw
     codec g729ar8
     codec g729abr8
     maximum sessions 5
     associate application SCCP
    Arun

  • Is it possible for a process to participate in two separate clusters

    Is it possible for a process to participate in two separate clusters? For example, our application would like to get market data in one cluster that has a separate multicast address, and post orders in another.

    The easiest way for a client to access multiple clusters is via Coherence*Extend:
         http://wiki.tangosol.com/display/COH33UG/Configuring+and+Using+Coherence*Extend
         The client would not be a member of the cluster; instead it would connect to the cluster via a proxy node that is in the cluster. Using <remote-cache-scheme>, you can configure a cache to point to one proxy (in cluster A) and have another cache point to another proxy (in cluster B).
         Thanks,
         Patrick Peralta
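A minimal client-side sketch of that setup (hedged: scheme structure per the Coherence cache configuration DTD; the cache names, service names, hosts, and ports are placeholders) — two cache mappings, each backed by a remote-cache-scheme with its own service name and proxy address:

```xml
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>marketdata-*</cache-name> <!-- served by cluster A -->
      <scheme-name>extend-a</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>orders-*</cache-name>     <!-- served by cluster B -->
      <scheme-name>extend-b</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-a</scheme-name>
      <service-name>ExtendTcpCacheServiceA</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>cluster-a-proxy.example.com</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
    <remote-cache-scheme>
      <scheme-name>extend-b</scheme-name>
      <service-name>ExtendTcpCacheServiceB</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>cluster-b-proxy.example.com</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
```

Note the two distinct <service-name> values: Coherence keys services by name, so giving each remote scheme its own name keeps the two proxy connections independent.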

  • Is it possible to use the same Switch for two different clusters.

    I have 10g Rac setup on windows.
    Now I am planning to install 11gR2 on different servers.
    Is it possible to use the same Switch for two different clusters.

    user9198027 wrote:
    I have 10g Rac setup on windows.
    Now I am planning to install 11gR2 on different servers.
    Is it possible to use the same Switch for two different clusters.
    Yes. Technically there will not be any conflict as long as the private addresses used by the 2 clusters do not collide, and provided that the switch's port capacity and bandwidth are not exceeded.
    Your NA (netadmin) can also configure the switch to separate the 2 Interconnects from one another (called partitioning when using InfiniBand) - if the switch supports such features.
    A major consideration is not to make the switch public. That typically causes a range of problems and can have a serious impact on an Interconnect. But using 2 private networks on the same infrastructure should not have the same problems - if configured and implemented correctly.

  • One SAP system in two MSCS clusters for R/3 Enterprise

    Is the "one SAP system in two MSCS clusters" architecture supported on R/3 4.7 (WAS 6.20), with the SAP CI on one MSCS cluster and SQL Server on a separate database MSCS cluster?

    Hi,
    I don't think so. The supported architecture is SAP CI on one node and SQL Server on the other node.
    Regards,
    Olivier

  • Can a single server instance (Ip Port combo) be part of two different clusters?

    Hello,
              Is it possible for a single Weblogic 7.0 server instance to be a part of two different clusters?
              Thanks,
              Rajan
              

    Invoke an SLSB in the other cluster which will call invalidate()?
              "Rajan" <[email protected]> wrote in message
              news:3d88a5b6$[email protected]..
              >
              > We have a requirement to synchronize in-memory data between the machines
              in different
              > clusters.
              >
              > Invalidate() propagates only within a cluster. I was wondering if there is
              a standard
              > way to update in-memory data on a different cluster if the underlying data
              stored
              > in a database, has changed.
              >
              > Thanks for help.
              >
              > Rajan
              >
              > "Cameron Purdy" <[email protected]> wrote:
              > >Rajan,
              > >
              > >> Is it possible for a single Weblogic 7.0 server instance to be a part
              > >> of two different clusters?
              > >
              > >No. Is there something specific that you are trying to accomplish?
              > >
              > >Peace,
              > >
              > >Cameron Purdy
              > >Tangosol, Inc.
              > >http://www.tangosol.com/coherence.jsp
              > >Tangosol Coherence: Clustered Replicated Cache for Weblogic
              > >
              > >
              > >"Rajan" <[email protected]> wrote in message
              > >news:[email protected]..
              > >>
              > >
              > >
              >
              Dimitri
              

  • Is it possible to connect to two different clusters?

    Hi,
    I was wondering if it was possible to connect to two different clusters from the same process?
    What I would like to do is to be able to get some caches on the first cluster and others on a second cluster. The cache-config DTD doesn't seem to forbid the definition of several remote-cache-scheme elements. So I should be able to map all "c1-*" caches to the first cluster and all "c2-*" caches to a second cluster...
    Is it possible?

    Thanks,
    Yes, I'm connecting via Extend.
    Could you give me an example of such a configuration? I've made a test from a C++ client with the following client config:
    ====================================================================
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>c1-*</cache-name>
    <scheme-name>extend-1</scheme-name>
    </cache-mapping>
    <cache-mapping>
    <cache-name>c2-*</cache-name>
    <scheme-name>extend-2</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <near-scheme>
    <scheme-name>near-c1</scheme-name>
    <back-scheme>
    <remote-cache-scheme>
    <scheme-ref>extend-1</scheme-ref>
    </remote-cache-scheme>
    </back-scheme>
    <invalidation-strategy>auto</invalidation-strategy>
    </near-scheme>
    <near-scheme>
    <scheme-name>near-c2</scheme-name>
    <back-scheme>
    <remote-cache-scheme>
    <scheme-ref>extend-2</scheme-ref>
    </remote-cache-scheme>
    </back-scheme>
    <invalidation-strategy>auto</invalidation-strategy>
    </near-scheme>
    <remote-cache-scheme>
    <scheme-name>extend-dist</scheme-name>
    <service-name>ExtendTcpCacheService</service-name>
    <initiator-config>
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address system-property="tangosol.coherence.proxy.address">ldnpsm020001413</address>
    <port system-property="tangosol.coherence.proxy.port">4321</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
    <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    </serializer>
    </initiator-config>
    </remote-cache-scheme>
    <remote-cache-scheme>
    <scheme-name>extend-paul</scheme-name>
    <service-name>ExtendTcpCacheService</service-name>
    <initiator-config>
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address system-property="tangosol.coherence.proxy.address">ldnpsm020001412</address>
    <port system-property="tangosol.coherence.proxy.port">4131</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
    <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    </serializer>
    </initiator-config>
    </remote-cache-scheme>
    </caching-schemes>
    </cache-config>
    ========================================================================
    If I do a CacheFactory::getCache("c1-Test"), the log shows a connection to the first cluster.
    If I then do a CacheFactory::getCache("c2-Test"), the log shows a connection to the first cluster and not to the second cluster.
    What did I do wrong?
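For what it's worth, a likely culprit in the configuration above (hedged, since the posted file may have been trimmed): both <remote-cache-scheme> elements declare the same <service-name> (ExtendTcpCacheService). Coherence identifies remote cache services by name, so the second getCache() call reuses the already-running service — and its existing connection to the first cluster — instead of opening a new one. The cache mappings also reference scheme names (extend-1, extend-2) that do not match the declared schemes (extend-dist, extend-paul). A minimal sketch of the corrected scheme skeletons:

```xml
<!-- Each remote-cache-scheme gets a DISTINCT service name, and the
     scheme names match those referenced in the cache mappings. -->
<remote-cache-scheme>
  <scheme-name>extend-1</scheme-name>
  <service-name>ExtendTcpCacheService1</service-name>
  <initiator-config>
    <!-- tcp-initiator pointing at the first cluster's proxy, as in the original -->
  </initiator-config>
</remote-cache-scheme>
<remote-cache-scheme>
  <scheme-name>extend-2</scheme-name>
  <service-name>ExtendTcpCacheService2</service-name>
  <initiator-config>
    <!-- tcp-initiator pointing at the second cluster's proxy, as in the original -->
  </initiator-config>
</remote-cache-scheme>
```

With distinct service names, each getCache() starts (or reuses) its own TcpInitiator service, so the two caches connect to their respective clusters.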

  • Deployment of same EAR files to two separate clustered domains

    I am currently running all my portal applications and business objects from within one 8.1 clustered environment.
    However I would like to move to an architecture where we use two 8.1 clustered server domains.
    Where:
    Clustered domain 1 is used to service requests from back office applications from within the enterprise and
    Clustered domain 2 used to service requests for external facing applications from the portal jpf's.
    The deployment issue which concerns me is that each cluster will require an identical deployment of the same EAR files.
    The datasources for each of the EARs will point to a common database.
    This solution is preferred over deploying the EAR files to just one clustered domain and calling the application from the other clustered domain
    via the remote interface, as that would incur code changes and all the associated testing etc.
    I'd like to find out if there are any concurrency issues with this deployment model.
    The business objects in the EARs are comprised mainly of CMP EJBs and stateless session beans.
    How will the container of each cluster manage the DB concurrency of the CMP EJBs when the datasources of the EAR files in each clustered domain
    point to the same DB? Will this cause any concurrency conflicts?

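As for the CMP concurrency question itself: when two independent clusters share one database, a common approach (hedged — verify against the WebLogic 8.1 documentation for your exact release) is to let the database arbitrate by setting the entity beans' concurrency strategy to Database in weblogic-ejb-jar.xml, so neither container relies on its own in-memory exclusive locks:

```xml
<!-- weblogic-ejb-jar.xml fragment (sketch; OrderEJB is a hypothetical bean name) -->
<weblogic-enterprise-bean>
  <ejb-name>OrderEJB</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <concurrency-strategy>Database</concurrency-strategy>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>
```

With the Database strategy the container defers locking to the DBMS on each transaction, which behaves the same whether the competing callers are in one cluster or two.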

  • Two independent clusters on the same network

    Hi,
    How do I configure Coherence node VMs to participate in different coherence clusters on the same network?
    Thank you.
    Roman

    Hi Roman,
    You need only have each cluster run on a different multicast address. You can accomplish this either by modifying the <multicast-listener> <address> element in the tangosol-coherence.xml file, or by overriding the value with the following Java command-line argument: -Dtangosol.coherence.clusteraddress=<IP_ADDRESS>
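For illustration, a minimal override sketch (element names per the Coherence operational descriptor; the address and port shown are placeholders):

```xml
<!-- tangosol-coherence-override.xml for cluster A; give cluster B a
     different multicast address (and, ideally, a different port). -->
<coherence>
  <cluster-config>
    <multicast-listener>
      <address>224.3.2.1</address>
      <port>32001</port>
    </multicast-listener>
  </cluster-config>
</coherence>
```

Equivalently, start each node with -Dtangosol.coherence.clusteraddress=224.3.2.1 (and -Dtangosol.coherence.clusterport=32001) so the two clusters never see each other's traffic.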
    Later,
    Rob Misek
    Tangosol, Inc.
    Coherence: Cluster your Work. Work your Cluster.

  • Can we keep two different clusters  data in synch.

    I have 10 Coherence servers in New York, all forming cluster A, and 10 Coherence servers in Charlotte forming cluster B. I wish to keep the data between cluster A and cluster B in synch.
    Q1: Is there any possible/configurable way to achieve this?
    Note: I don't want all 20 Coherence servers (New York and Charlotte) forming the same cluster.
    Q2: In case all 20 servers do form the same cluster, how would the read/write response times vary for the Coherence client applications running in Charlotte and New York, assuming I have 50 apps running in Charlotte and the same 50 apps (DR copy) running in New York? I am trying to achieve the best read/write performance.
    Q3: Does read/write performance vary only based on the cache type, irrespective of the network/location of the Coherence servers?
    Thanks Much
    Edited by: goodboy626696 on May 28, 2009 2:19 PM

    On the remote site, use a cache config similar to the following (note the proxy-scheme):
    <cache-config>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>my-cache</cache-name>
    <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <distributed-scheme>
    <scheme-name>distributed-scheme</scheme-name>
    ..... Define a backing map and a listener here .....
    </distributed-scheme>
    <proxy-scheme>
    <service-name>ExtendTcpProxyService</service-name>
    <thread-count>50</thread-count>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address>localhost</address>
    <port system-property="tangosol.coherence.acceptor.port">32000</port>
    <reusable>true</reusable>
    </local-address>
    </tcp-acceptor>
    </acceptor-config>
    <autostart>true</autostart>
    </proxy-scheme>
    </caching-schemes>
    </cache-config>
    On the other site we would need a remote-cache-scheme:
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>*</cache-name>
    <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <distributed-scheme>
    <scheme-name>distributed-scheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <lease-granularity>thread</lease-granularity>
    <backing-map-scheme>
    <local-scheme>
    </local-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
    </distributed-scheme>
    <!--
    remote cache scheme used to replicate data to the replicate cluster via
    Extend
    -->
    <remote-cache-scheme>
    <scheme-name>remote</scheme-name>
    <service-name>ExtendTcpCacheService</service-name>
    <initiator-config>
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address>localhost</address>
    <port>32000</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>20s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
    <request-timeout>20s</request-timeout>
    </outgoing-message-handler>
    </initiator-config>
    </remote-cache-scheme>
    </caching-schemes>
    </cache-config>
    And then, from the active site, use code in the backing map listener (BML) to publish the key/value to the remote cache.
    Edited by: asrivast on Jun 9, 2009 10:49 PM

  • Connecting two Coherence Clusters using Replication

    I am using: http://wiki.tangosol.com/pages/viewpage.action?pageId=1343616 to setup the "Replicated-site Coherence*Extend Example". On running the build I am getting errors as shown below. Can anyone help me to resolve this so that I can get the example up and running?
    .\..\src\com\tangosol\examples\extend\ExtendReadWriteBackingMap.java:57: cannot find symbol
    symbol : constructor ReadWriteBackingMap(com.tangosol.net.BackingMapManagerContext,com.tangosol.util.ObservableMap,java.util.Map,com.tangosol.net.cache.CacheStore,boolean,int)
    location: class com.tangosol.net.cache.ReadWriteBackingMap
    super(ctxService, mapInternal, mapMisses, store, fReadOnly,
    ^
    .\..\src\com\tangosol\examples\extend\ExtendReadWriteBackingMap.java:85: incompatible types
    found : com.tangosol.net.cache.ReadWriteBackingMap.CacheStoreWrapper
    required: com.tangosol.net.cache.CacheStore
    CacheStore store = getCacheStore();
    ^
    .\..\src\com\tangosol\examples\extend\ExtendReadWriteBackingMap.java:153: storeAllInternal(java.util.Set) in com.tangosol.net.cache.ReadWriteBackingMap.CacheStoreWrapper cannot be applied to (java.util.Map)
    super.storeAllInternal(mapEntries);
    ^
    3 errors
    On running javac in verbose mode it does show that the classes in coherence.jar have been loaded, especially the one required:
    [loading com\tangosol\net\cache\ReadWriteBackingMap.class(com\tangosol\net\cache:ReadWriteBackingMap.class)]
    [loading com\tangosol\net\cache\ReadWriteBackingMap$CacheStoreWrapper.class(com\tangosol\net\cache:ReadWriteBackingMap$CacheStoreWrapper.class)]

    I am running with Java: jdk1.6.0_21, Coherence: 3.6 and ant: 1.8. Nb. I am running on Windows but that should not matter....
    I thought it could be a problem with ant not passing in the correct classpaths or something similar, so I wrote the equivalent DOS script (shown below) to run the build, but I still got the same error.
    Nb. I have managed to run the synchronized site example with the above java, coherence and ant versions.
    build.cmd:
    @echo off
    if "%1"=="build" goto build
    if "%1"=="clean" goto clean
    goto help
    :help
    echo "usage: build.cmd build|clean"
    echo build: makes the classes directory and compiles the java
    echo clean: removes the classes directory
    goto exit
    :build
    mkdir .\..\classes
    "%JAVA_HOME%"\bin\javac -d .\..\classes -cp "%COHERENCE_HOME%"\lib\coherence.jar .\..\src\com\tangosol\examples\extend\*.java -verbose
    goto exit
    :clean
    rmdir .\..\classes /q/s
    goto exit
    :exit
    where %JAVA_HOME% = C:\Program Files\java\jdk1.6.0_21
    and %COHERENCE_HOME% = C:\Program Files\Oracle\coherence-java-3.6.0.0b17229\coherence
    Edited by: Ku on Sep 21, 2010 11:37 AM
    Added info to show location of java/coherence variables.

  • UCCX Historical Reports client on my pc with two clusters 7x and 9x

    I have two separate clusters, a 7x running UCCX 7x and we are building a new 9.1 cluster running UCCX 9x.
    Can I install the new 9x UCCX Historical Reporting Client and talk to both clusters?  I don’t think both old and new HRC clients can be installed on same pc, or can they?

    Why would you use HRC with version 9.X?  Is CUIC not an option?

  • SQL Server 2012 - 3 SQL clustered instances - one default/ two named instances - how assign/should assign static ports for named instances

    We have two physical servers hosting 3 SQL 2012 clustered instances, one default instance and two named instances.
    The default instance is using port 1433 and the two named instances are using dynamic port assignment.
    There is discussion about assigning static port numbers to the two named clustered SQL instances.
    What is considered best-practice?  For clustered named instances to have dynamic or static ports?
    Are there any pitfalls to assigning a static port to a named instance that is a cluster?
    Any help is greatly appreciated

    Hi RobinMCBC,
    In SQL Server, the default instance has a listener on the fixed TCP port 1433. For a named instance, the port on which SQL Server listens is random and is dynamically selected when the named instance starts.
    For a standalone instance of SQL Server we can change a named instance's dynamic port to a static port by using SQL Server Configuration Manager, as in the other post. In the case of a cluster, however, when we change the named instance's port to a static port using the method described above, the port changes back to a dynamic port after you restart the services. I recommend changing the dynamic port of SQL Server to a static port on all the nodes, then disabling and re-enabling the checkpointing to the quorum.
    For more information, you can review the following article about how to change the dynamic port of a SQL Server named instance to a static port in a SQL Server 2005 cluster.
    http://blogs.msdn.com/b/sqlserverfaq/archive/2008/06/02/how-to-change-the-dynamic-port-of-the-sql-server-named-instance-to-an-static-port-in-a-sql-server-2005-cluster.aspx
    Regards,
    Sofiya Li
    TechNet Community Support

  • Coherence clustering between two separate wka clusters

    Hi,
    I am running Coherence 3.6 in my environment. Of late I am seeing a weird issue with two separate WKA clusters. By separate WKA clusters, I mean two clusters configured with distinct WKA lists.
    I try to connect with the Coherence command-line tool to a cluster named "cluster-ABC" which has the WKA IP 10.30.xx.xxx on "host x". Then I come out of this cluster, and connect from the same command prompt window to another host, "host y", which has a cluster with the same name, "cluster-ABC", but a different WKA IP, 10.30.yy.yyy. Between the two clusters, the only common factor is the cluster name; other things such as the WKA list are different.
    After some time, I see that nodes from "host y" start joining "host x". This seems to happen only when I connect to two different clusters from my command-line utility. The clusters here are formed using WKA in each case; there is no multicast enabled on our setups. This never used to happen with Coherence 3.5, where I did similar kinds of operations on the cluster.
    Is there some kind of reference being held by the utility even after it is disconnected from the second cluster?
    Thanks,
    Chandini

    We tried out your suggestion of using unique cluster names. I had 2 hosts, each running a cluster with a unique cluster name: cluster-A and cluster-B. I had another node on a third host connect to cluster-A using port 8092 and then tried connecting to cluster-B after changing the cluster name appropriately. But this time the node was rejected by cluster-A, saying that the configurations are different.
    On changing the port from 8092 to 8090, connecting to cluster-B worked just fine. I guess just having unique cluster names isn't helping. Would this be fixed by the ticket that's been raised?
    Is there anything else we could do in the interim to get around the issue? Using different ports might not be a very helpful solution, as we've noticed that from 3.6 onwards we need to open up the ports in the firewall of the host where the nodes are running. This wouldn't be too feasible for us.
    -Roshan
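For reference, a sketch of keeping the two clusters apart declaratively (hedged: element names per the Coherence 3.6 operational override; the cluster name, address, and port are placeholders) — give each cluster a unique name and its own WKA list:

```xml
<!-- Override for the cluster on host x; use a different cluster-name,
     WKA address, and (if needed) unicast port for the cluster on host y. -->
<coherence>
  <cluster-config>
    <member-identity>
      <cluster-name>cluster-A</cluster-name>
    </member-identity>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>10.30.0.1</address>
          <port>8088</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>
```

A node started with this override can only discover senior members on the listed WKA address, so even a same-named cluster on another host should not be joinable by accident.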

  • Can we have two clusters in same domain running different versions of WLS

    I have not tested this yet but was looking for help from people who might have done this already.. Is it possible to have this scenario:
              create two independent clusters, say C1 and C2 as part of a single domain D. C1 has 2 managed servers running WLS8.1 and C2 has 2 managed servers running WLS7.0
              Can this be done?
              Thanks,
              Raghu

    It is definitely inviting trouble.
              We can have two clusters from the same version of WebLogic Server within a domain.
              If we have a WLS 8.1 domain, the managed servers that we create in that domain should be WLS 8.1.
              As far as I remember, it's not a supported configuration to have different (major) versions of Admin and managed servers.
              -Vijay
