Missing Master Site in Multi Master Environment

Hello,
we are using multi-master replication on three master sites with two replication groups.
Replication support was installed and configured by the application vendor.
I do not have experience with Multi Master Replication.
The data of one replication group (READONLYMASTERGROUP) was never replicated to one master
(DEFS01). When I queried dba_repgroup on the failing site, I found the
group's status to be QUIESCED. There were no pending administrative requests
on this site or the master definition site, but I found that the failing site's
membership in the group is not defined at all on the MD site.
Master Definition Site
select gname,dblink,Master,masterdef from dba_repsites;
GNAME                 DBLINK   MASTER   MASTERDEF
-------------------   ------   ------   ---------
MASTERGROUP           DEMD01   Y        Y
MASTERGROUP           DEGS01   Y        N
READONLYMASTERGROUP   DEMD01   Y        Y
READONLYMASTERGROUP   DEGS01   Y        N
MASTERGROUP           DEFS01   Y        N
Failing Master Site
select gname,dblink,Master,masterdef from dba_repsites@defs01;
GNAME                 DBLINK   MASTER   MASTERDEF
-------------------   ------   ------   ---------
MASTERGROUP           DEFS01   Y        N
MASTERGROUP           DEMD01   Y        Y
READONLYMASTERGROUP   DEFS01   Y        N
READONLYMASTERGROUP   DEMD01   Y        Y
MASTERGROUP           DEGS01   Y        N
READONLYMASTERGROUP   DEGS01   Y        N
Can anybody explain how this could happen? AFAIK, adding a master to a replication
group is a distributed transaction that should be rolled back on all sites if it fails
on one.
To correct this situation, I am thinking of removing the RG from DEFS01
with DBMS_REPCAT.DROP_MASTER_REPGROUP (on DEFS01) and then re-adding DEFS01
with DBMS_REPCAT.ADD_MASTER_DATABASE on the Master
Definition Site, roughly as sketched below.
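For concreteness, a minimal sketch of that cleanup, assuming the group in question is READONLYMASTERGROUP and the link names match the dba_repsites output above (an outline only, not a verified procedure):
-- On DEFS01: drop the orphaned local copy of the group
BEGIN
   DBMS_REPCAT.DROP_MASTER_REPGROUP(gname => 'READONLYMASTERGROUP');
END;
/
-- On the master definition site (DEMD01): quiesce the group if it is not
-- already quiesced, wait until dba_repgroup shows STATUS = 'QUIESCED',
-- then re-add DEFS01 and resume
BEGIN
   DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY(gname => 'READONLYMASTERGROUP');
END;
/
BEGIN
   DBMS_REPCAT.ADD_MASTER_DATABASE(gname => 'READONLYMASTERGROUP', master => 'DEFS01');
END;
/
BEGIN
   DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'READONLYMASTERGROUP');
END;
/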
Will this work? Anything else I have to think of?
Regards,
uwe

Hi Janos,
I have tried the multi-language scenario. You need the following setup in your system:
>Install the language pack
>Upload content using the CSV files below:
multistring.csv
Material.csv
Plant CSV
Product_Plant_Relationship_template.csv
If you can share your email ID, I will send you a sample CSV file you can use. Otherwise, please follow the steps below to download the CSV template from the system:
Click Setup
Click the System Administration tab
Click Import Data
Click the New button
Select the Upload to Server radio button and click Next
Browse to any dummy Excel file from the local machine for the Upload Import File option and click Next
Select the Preview Import check box, search for MultiString in the Object Type drop-down, and click Next
On that page you will see the template.csv link; click it and save the file. This is the file you are looking for.
Create the content and upload the file. Hope this helps!
There is another way of importing master data in multiple languages. In this scenario the master data source would be ECC. You need to run the SAP-provided standard report to export the details, and the resulting XML file can then be imported into the sourcing system. Please see the blog link below for more detail:
http://scn.sap.com/community/sourcing/blog/2011/09/29/extracting-erp-master-data-for-sap-sourcing
Regards,
Deepak

Similar Messages

  • Multi master replication

    Hi
    I have implemented multi-master replication (DML only) between an 8i and a 10g database.
    Whenever I try to generate the replication support after putting the object in the replication group, I get the error
    ORA-23416: table does not contain a primary key constraint.
    I would like to know whether every table must have a primary key constraint before generating replication support.
    Also, please tell me whether I can replicate the DML executed on the master definition site to the master destination site in multi-master replication.
    If you have any specific document, please share it so I can get a better understanding of multi-master replication.
    Aram

    Hi, please refer to the Metalink doc "ORA-23346 DURING GENERATE_REPLICATION_SUPPORT [ID 1059092.6]":
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=PROBLEM&id=1059092.6
    Hope this helps
    Thanks
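    For reference, when a table genuinely has no primary key, the usual workaround is to designate an alternate key before generating replication support. A minimal sketch with placeholder schema, table, and column names (run by the replication administrator on the master definition site):
    BEGIN
       DBMS_REPCAT.SET_COLUMNS(
         sname       => 'SCOTT',
         oname       => 'NOPK_TABLE',
         column_list => 'COL1,COL2');
    END;
    /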

  • Multi-master replication - indefinite referrals

    Hi,
    I am trying to set two master ldap in multi master mode on college campus:
    LDAP1 and LDAP2.
    I get this in errorlog of LDAP1:
    Configuration warning This server will be referring client updates for replica o=comms-config indefinitely
    WARNING<20805> - Backend Database - .. - search is not indexed
    I guess the warning says the search is not indexed, but still that's not the main problem...
    in other server logs on LDAP2, I get:
    ldap://LDAP2:389; Partial results and referral
    received|#]
    ...0400|SEVERE|sun-appserver|javax.enterpr
    ise.system.container.web|_ThreadID=24|SomeJSP:Servlet.service() for
    servlet jsp threw exception java.lang.Exception: The Error has occured due to
    :netscape.ldap.LDAPReferralException: referral (9); Referral:
    LDAP2; Partial results and referral received at...
    dse.ldif from LDAP1:
    nsslapd-referral: ...
    numsubordinates: 1
    nsslapd-state: referral on update
    from LDAP2:
    nsslapd-referral:...
    nsslapd-state: backend
    Should LDAP1 and LDAP2 both be set to "process write and read requests"?
    Please advise

    The state of LDAP1 is probably the result of the way you've done the initialization of both servers and replication.
    What to do in this situation is precisely explained in the Administration Guide, Replication chapter.
    Regards,
    Ludovic

  • Setting up Multi server environment in Sql Server 2012 - Enlist Failed Error

    I am trying to Configure the Master target server / Multi server environment in Sql Server 2012.
    I changed :
     - `MsxEncryptChannelOptions` - changed from 2 to 0
     - `AllowDownloadedJobsToMatchProxyName` - changed from 0 to 1 on the target
    When I run the wizard I am getting below error
    >MSX enlist failed for Job Server 'MasterServerName'
    >The enlist operation failed (reason: SQLServerAgent Error: Unable to connect to MSX 'MasterServerName'.) (Microsoft SQL Server, Error: 22026)
    Both servers' SQL Agent services run under the same Windows service account.
    Any Suggestions on how to fix this?
    **Adding the Log:**
    Enlist TSX Progress
    - Create MSXOperator (Success)
    Checking for an existing MSXOperator. 
    Updating existing MSXOperator. 
    Successfully updated MSXOperator. 
    - Make sure the Agent service for 'Test3' is running (Success)
    The service 'SQLSERVERAGENT' is running. 
    - Ensure the agent startup account for 'Test4' has rights to login as a target server (Success)
    Checking to see if the startup account for 'Test4' already exists. 
    Login exists on server. 
    Checking to see if login has rights to msdb. 
    Login has rights to msdb. 
    Checking to see if user is a member of the TargetServersRole. 
    User is a member of the TargetServersRole. 
    - Enlist 'Test4' into 'Test3' (Error)
    Enlisting target server 'Test4' with master server 'Test3'. 
    Using new enlistment method. 
    Messages
    MSX enlist failed for JobServer 'Test4'.  (Microsoft.SqlServer.Smo)
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    The enlist operation failed (reason: SQLServerAgent Error: Unable to connect to MSX 'TEST3'.) (Microsoft SQL Server, Error: 22026)

    hi SmilingLily,
    you can try to run the SQL Agent under a domain account.
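    If the wizard keeps failing, the same enlistment can also be attempted directly from the target server with T-SQL, which sometimes gives a clearer error (a sketch; 'Test3' is the master server name from the log above):
    -- run on the target server (Test4)
    EXEC msdb.dbo.sp_msx_enlist N'Test3';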

  • Multi Master Site replication with Snapshot features

    Dear,
    I am implementing a Multi Master site replication. I want to replicate Table A in MASTER SITE A to Table A in MASTER SITE B.
    However, I don't want all the data from MASTER SITE A to be propagated to MASTER SITE B. I may want only data that meets certain criteria to be replicated over.
    I know I can achieve this with snapshot site replication, but can this also be done in multi-master site replication?

    Hi,
    As far as I have observed, a table marked for replication between master sites is an exact replica of the data. Your case could be achieved only through snapshot views/groups.
    - [email protected]
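    To illustrate the snapshot-site approach suggested above, a minimal sketch of a row-subsetting snapshot defined at the second site (the link name, table, and predicate are placeholders; fast refresh assumes a snapshot/materialized view log on the master table):
    -- defined at the receiving site, pulling only the wanted rows from MASTER SITE A
    CREATE MATERIALIZED VIEW table_a
      REFRESH FAST
      AS SELECT * FROM table_a@master_site_a
         WHERE region = 'ASIA';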

  • Multi-master replicated environment

    hi,
    What does a multi-master replicated environment mean? How could I benefit from it?
    thanks in advance.

    Hello,
    This is an environment in which inserts, updates, and deletes on objects
    at any node included in the environment are replicated to the remaining
    nodes defined as part of the replicated environment.
    Since changes at any node are replicated to all other nodes, they all act
    as masters, which gives it the name master-to-master (also known as multi-master or advanced replication).
    Tahir.

  • Best way to Rebuild multi-master replication

    I have a site (A) which has been modified (tables and columns and such) that was used as the master replication site.
    I have another site (B) which was the target site.  The tables in site B have not been changed.
    The replication is functioning correctly (as the columns and tables were only additions).
    All the replication is asynchronous and is divided into two groups (one that refreshes every 2 minutes and one that refreshes every 15 minutes).
    I want to bring site B up to date (move the changes from site A into site B).
    I know that the data in the added columns has not been utilized as the new software was not in production yet.
    I know I will need to rebuild the replication groups to include the missing columns, and I presume I will need to stop the replication (I will disable the PUSH so I can still work with the master database).
    I will also need to add the new tables to the replication groups.
    I can rebuild the database entirely using an export and import (after cleaning out the existing database in site B).
    The question I have is efficiency, would I be better to just drop all the replication and start from scratch using the export?
    Could I simply add the tables and columns to site B and rebuild the replication?  Will I run into issues when I try to rebuild the replication if the job remains broken?
    Or can I rebuild site B (after breaking the job) using the export, then rebuild all the replication groups and re-enable replication (I know the scn numbers need to align but I am not sure if it will cause issues with the pending replication in the queue once I break the job).
    I am pretty much free to do what is needed (even though this is a production environment, the replication is not critical - I just cannot deny access to the master site for the users)
    The "former" DBA apparently did not understand replication, and I have inherited this mess. While I understand how to do most of the things in this situation, I am not so sure of the best (or fastest) way.
    Forgot to mention the environment Oracle 11g R2 running on Linux (Ubuntu 12.04 LTS)
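    For what it's worth, a bare-bones sketch of the API sequence for folding new columns and tables into an existing master group (schema, table, and group names are placeholders; this is only an outline, the full procedure is in the Advanced Replication guide):
    BEGIN
       DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY(gname => 'GROUP_2MIN');
    END;
    /
    -- once dba_repgroup shows the group as QUIESCED, push the column DDL through replication
    BEGIN
       DBMS_REPCAT.ALTER_MASTER_REPOBJECT(
         sname    => 'APP',
         oname    => 'EXISTING_TABLE',
         type     => 'TABLE',
         ddl_text => 'ALTER TABLE app.existing_table ADD (new_col VARCHAR2(30))');
    END;
    /
    -- bring a brand-new table into the group and regenerate support
    BEGIN
       DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
         gname => 'GROUP_2MIN',
         sname => 'APP',
         oname => 'NEW_TABLE',
         type  => 'TABLE');
       DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
         sname => 'APP',
         oname => 'NEW_TABLE',
         type  => 'TABLE');
    END;
    /
    BEGIN
       DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'GROUP_2MIN');
    END;
    /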

    There was a good discussion on a related subject in one of the Oracle forums.
    Q] I have an application where data is never deleted. I have just rebuilt an
    index which was previously 4 GB in size and it's now 3 GB in size. Can anybody
    explain?
    A1] The first thing that comes to mind is that (assuming what was taught
    on the Oracle Fundamentals and Tuning course I went on with Oracle
    University last year is true[1]) when an index leaf block that is not
    the right-most block fills up, the next entry to be put in results
    in a 50-50 split: two blocks are created, one containing the lower 50%
    of the entries and the other containing the higher 50%, both half
    full. Assuming that the indexed field isn't an increasing key type
    value (i.e. it is possible to insert a value that will be lower
    (further left) than an earlier value), that will mean that your index
    leaf blocks will mostly be between 50% and 100% full. Assuming a
    reasonably random distribution of data, your average block would be
    around 75% full (split the difference between 50% and 100%). When you
    rebuild the index the blocks are (with the same caveat as above)
    repacked to 100% (or up to PCTFREE, I forget which) full. This would
    explain why the size of the index has changed to 75% of its previous
    value. The same amount of data, it's just packed more tightly.
    I don't have a database to hand to check this but it does seem to fit
    the situation and my recollection of how I'm told Oracle handles
    indexes.
    Stephen
    [1] I'm told, and have seen, that it's not always a given. Maybe
    there's a conference paper in someone's future entitled something like
    "Lies Oracle Told Me: The gap between training and reality"
    A2] If you look at Stephen's post, then a reduction of free space from 30% in the blocks to 10% (assuming pctfree = 10) gives around 20% back, or around 800 MB from a 4 GB index. This would fit quite nicely with what you are observing.
    Jaffar
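    As a rough arithmetic check of that estimate: if the old leaf blocks averaged about 70% full and the rebuilt blocks about 90% full (pctfree 10), the same entries need roughly 4 GB * 0.70 / 0.90 ≈ 3.1 GB, which lines up with the observed drop from 4 GB to about 3 GB.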

  • Partial transaction for multi-master asynchronous replication

    I have a fundamental question about multi-master asynchronous replication.
    Lets consider a situation where we have 2 servers participating in a multimaster asynchronous mode of replication.
    Three tables are part of an Oracle transaction. Suppose I mark one of these tables for replication and the other two are not part of any replication group.
    Say that, as part of the Oracle transaction, one record is inserted into each of the three tables.
    Now, if I start replicating, will the change made to the table marked for replication be replicated to the other server, given that the changes made to the other two tables are not propagated by the deferred queue?
    Please reply.

    Mr. Bradd Piontek is very much correct. If the tables involved are interdependent, you have to place them in one group, and all of them should exist at all sites in a multi-master replication.
    If data is updated (pushed) from a snapshot to a table at a master site, it may get updated if it is not a child table in a relationship.
    But in a multi-master replication environment even this is not possible.
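    To see exactly what gets queued for propagation in a case like this, the deferred-transaction views can be inspected on the originating site after the transaction commits (a sketch; these are the standard Advanced Replication views):
    -- deferred transactions waiting to be pushed, and their individual calls
    SELECT * FROM deftran;
    SELECT deferred_tran_id, callno, schemaname, packagename, procname FROM defcall;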

  • Basic Multi Master Replication doesn't work!

    Hi all,
    I am studying Oracle replication and tried to apply multi-master replication (MMR) between two databases: WinXp12 (master definition) and WinXp11.
    After successfully completing all the steps in the code below to replicate table SCOTT.DEPT, I inserted a row into the table on WinXp12, but I did not see the inserted row at the remote site WinXp11.
    Where did I go wrong? Did I miss anything?
    By the way, the deferror table contains no rows and DBA_JOBS shows no failures.
    Thanks in advance.
    CREATE USER REPADMIN IDENTIFIED BY REPADMIN;
    GRANT CONNECT, RESOURCE, CREATE DATABASE LINK TO REPADMIN;
    EXECUTE DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_SCHEMA('REPADMIN');
    GRANT COMMENT ANY TABLE TO REPADMIN;
    GRANT LOCK ANY TABLE TO REPADMIN;
    EXECUTE DBMS_DEFER_SYS.REGISTER_PROPAGATOR('REPADMIN');
    CONN REPADMIN/REPADMIN
    CREATE DATABASE LINK WINXP11 CONNECT TO REPADMIN IDENTIFIED BY REPADMIN USING 'WINXP11';
    SELECT SYSDATE FROM DUAL@WINXP11 ;
    -- Add jobs to WINXP11
    CONNECT REPADMIN/REPADMIN@WINXP11
    BEGIN
      DBMS_DEFER_SYS.SCHEDULE_PUSH(
        DESTINATION => 'WINXP12',
        INTERVAL => 'SYSDATE + 1/(60*24)',
        NEXT_DATE => SYSDATE,
        STOP_ON_ERROR => FALSE,
        DELAY_SECONDS => 0,
        PARALLELISM => 1);
    END;
    BEGIN
    DBMS_DEFER_SYS.SCHEDULE_PURGE(
      NEXT_DATE => SYSDATE,
      INTERVAL => 'SYSDATE + 1/24',
      DELAY_SECONDS => 0,
      ROLLBACK_SEGMENT => '');
    END;
    -- ADD JOBS TO WinXP12
    CONNECT REPADMIN/REPADMIN@WINXP12
    BEGIN
      DBMS_DEFER_SYS.SCHEDULE_PUSH(
        DESTINATION => 'WINXP11',
        INTERVAL => 'SYSDATE + 1/(60*24)',
        NEXT_DATE => SYSDATE,
        STOP_ON_ERROR => FALSE,
        DELAY_SECONDS => 0,
        PARALLELISM => 1);
    END;
    BEGIN
    DBMS_DEFER_SYS.SCHEDULE_PURGE(
      NEXT_DATE => SYSDATE,
      INTERVAL => 'SYSDATE + 1/24',
      DELAY_SECONDS => 0,
      ROLLBACK_SEGMENT => '');
    END;
    BEGIN
       DBMS_REPCAT.CREATE_MASTER_REPGROUP(
         GNAME => '"MGROUP1"',
         QUALIFIER => '',
         GROUP_COMMENT => '');
    END;
    BEGIN
       DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
         GNAME => '"MGROUP1"',
         TYPE => 'TABLE',
         ONAME => '"DEPT"',
         SNAME => '"SCOTT"');
    END;
    -- Generate Replication Support
    BEGIN
       DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
         SNAME => '"SCOTT"',
         ONAME => '"DEPT"',
         TYPE => 'TABLE',
         MIN_COMMUNICATION => TRUE,
         GENERATE_80_COMPATIBLE => FALSE);
    END;
    SELECT * FROM DBA_REPCATLOG ;
    -- NO ERROR
    BEGIN
    DBMS_REPCAT.RESUME_MASTER_ACTIVITY(
    GNAME => '"MGROUP1"');
    END;
    BEGIN
    DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY(
    GNAME => '"MGROUP1"');
    END;
    BEGIN
    DBMS_REPCAT.ADD_MASTER_DATABASE(
    GNAME => '"MGROUP1"',
    MASTER => 'WINXP11');
    END;
    -- Restart replication support:
    BEGIN
    DBMS_REPCAT.RESUME_MASTER_ACTIVITY(
    GNAME => '"MGROUP1"');
    END;
    -- here I could see in WinXP11 the tables created in SCOTT schema with data
    -- in WinXP12 I successfully issued the command
    insert into dept values ( 44,'text',null);
    -- I don't see the data in WinXP11
    -- No rows in deferror
    -- dba_jobs shows that there is not broken job

    Hi!
    You will need to create a public db link and a private db link connecting to the public link.
    CONN / AS SYSDBA
    CREATE PUBLIC DATABASE LINK WINXP11 USING 'WINXP11';
    CONN REPADMIN/REPADMIN
    CREATE DATABASE LINK WINXP11 CONNECT TO REPADMIN IDENTIFIED BY REPADMIN;
    SELECT SYSDATE FROM DUAL@WINXP11;
    Regards,
    PP
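    In addition to the link setup above, a few queries on WINXP12 can help narrow down where the insert is getting stuck (a sketch; run as REPADMIN):
    SELECT id, status, message FROM dba_repcatlog;   -- pending or errored admin requests
    SELECT * FROM deftrandest;                        -- deferred transactions per destination
    SELECT sname, oname, status FROM dba_repobject WHERE gname = 'MGROUP1';  -- objects should be VALID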

  • Cannot get multi master replication to work - TcpAcceptor error

    I am running Coherence 3.7.1 and WebLogic 10.3.5 on Solaris. I am packaging my coherence override, cache-config, and pof config files inside a jar file and adding that to my WebLogic domain's /lib directory as well as the coherence.jar so this is added to the path on startup.
    When adding something to the cache, the put is successful and I can retrieve the value from the cache, but I get this error:
    2012-09-04 15:52:41.992/2186.560 Oracle Coherence GE 3.7.1.0 <Error> (thread=EventChannelController:Thread-17, member=1): Error while starting service "remote-scm1": com.tangosol.net.messaging.ConnectionException: could not establish a connection to one of the following addresses: [xxx.16.22.151:20001]; make sure the "remote-addresses" configuration element contains an address and port of a running TcpAcceptor
    Additionally, I am not seeing the message "TcpAcceptor now listening for connections on . . ." on WebLogic or Coherence startup, though I think it should be there. There is a tcp-acceptor defined in my cache-config.xml.
    So I think I am supposed to have a TcpAcceptor running and it is not starting, possibly because the configuration for that element is being overridden.
    I can't seem to track this one down; any insight would be appreciated.
    Thanks
    JG
    Cache put/get that is working, with the exception of the push replication:
    NamedCache cache = CacheFactory.getCache("scm-combiner-cache");
    cache.put(key, value);
    final Object readBack = cache.get(key); // reads back the value that was just put
    My multi-master-pof-config.xml for both scm1 and scm2:
    <pof-config>
    <user-type-list>
    <include>coherence-pof-config.xml</include>
    <include>coherence-common-pof-config.xml</include>
    <include>coherence-messagingpattern-pof-config.xml</include>
    <include>coherence-eventdistributionpattern-pof-config.xml
    </include>
    </user-type-list>
    </pof-config>
    tangosol-coherence-override.xml for scm1:
    <?xml version="1.0" encoding="UTF-8"?>
    <coherence>
    <cluster-config>
    <member-identity>
    <site-name system-property="tangosol.coherence.site">scm1</site-name>
    <cluster-name system-property="tangosol.coherence.cluster">multimaster</cluster-name>
    </member-identity>
    <multicast-listener>
    <address>224.3.6.0</address>
    <port>9001</port>
    <time-to-live>0</time-to-live>
    </multicast-listener>
    </cluster-config>
    <configurable-cache-factory-config>
    <class-name>com.oracle.coherence.environment.extensible.ExtensibleEnvironment</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value system-property="tangosol.coherence.cacheconfig">multimaster-cache-config.xml</param-value>
    </init-param>
    </init-params>
    </configurable-cache-factory-config>
    </coherence>
    tangosol-coherence-override.xml for scm2:
    <?xml version="1.0" encoding="UTF-8"?>
    <coherence>
    <cluster-config>
    <member-identity>
    <site-name system-property="tangosol.coherence.site">scm2</site-name>
    <cluster-name system-property="tangosol.coherence.cluster">multimaster</cluster-name>
    </member-identity>
    <multicast-listener>
    <address>224.3.6.0</address>
    <port>9002</port>
    <time-to-live>0</time-to-live>
    </multicast-listener>
    </cluster-config>
    <configurable-cache-factory-config>
    <class-name>com.oracle.coherence.environment.extensible.ExtensibleEnvironment</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value system-property="tangosol.coherence.cacheconfig">multimaster-cache-config.xml</param-value>
    </init-param>
    </init-params>
    </configurable-cache-factory-config>
    </coherence>
    multimaster-cache-config.xml for scm1:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config xmlns:event="class://com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributionNamespaceContentHandler"
    xmlns:cr="class:com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler">
    <defaults>
    <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
    <init-param>
    <param-value>multi-master-pof-config.xml</param-value>
    <param-type>String</param-type>
    </init-param>
    </init-params>
    </serializer>
    </defaults>
    <caching-schemes>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>scm-combiner-cache</cache-name>
    <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
    <event:distributor>
    <event:distributor-name>{cache-name}</event:distributor-name>
    <event:distributor-external-name>{site-name}-{cluster-name}-{cache-name}</event:distributor-external-name>
    <event:distributor-scheme>
    <event:coherence-based-distributor-scheme/>
    </event:distributor-scheme>
    <event:distribution-channels>
    <event:distribution-channel>
    <event:channel-name>scm2-channel</event:channel-name>
    <event:starting-mode system-property="channel.starting.mode">enabled</event:starting-mode>
    <event:channel-scheme>
    <event:remote-cluster-channel-scheme>
    <event:remote-invocation-service-name>remote-scm2</event:remote-invocation-service-name>
    <event:remote-channel-scheme>
    <event:local-cache-channel-scheme>
    <event:target-cache-name>scm-combiner-cache</event:target-cache-name>
    </event:local-cache-channel-scheme>
    </event:remote-channel-scheme>
    </event:remote-cluster-channel-scheme>
    </event:channel-scheme>
    </event:distribution-channel>
    </event:distribution-channels>
    </event:distributor>
    </cache-mapping>
    </caching-scheme-mapping>
    <!--
    The following scheme is required for each remote-site when
    using a RemoteInvocationPublisher
    -->
    <remote-invocation-scheme>
    <service-name>remote-scm2</service-name>
    <initiator-config>
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address>xxx.16.22.152</address>
    <port>20002</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>2s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
    <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    </initiator-config>
    </remote-invocation-scheme>
    <distributed-scheme>
    <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
    <service-name>DistributedCacheWithPublishingCacheStore</service-name>
    <backing-map-scheme>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <local-scheme>
    </local-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>com.oracle.coherence.patterns.pushreplication.PublishingCacheStore</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
    </distributed-scheme>
    <proxy-scheme>
    <service-name>ExtendTcpProxyService</service-name>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address>xxx.16.22.151</address>
    <port>20001</port>
    </local-address>
    </tcp-acceptor>
    </acceptor-config>
    <autostart>true</autostart>
    </proxy-scheme>
    <near-scheme>
    <scheme-name>near-scheme-with-publishing-cachestore</scheme-name>
    <front-scheme>
    <local-scheme />
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>distributed-scheme-with-publishing-cachestore</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    </near-scheme>
    </caching-schemes>
    </cache-config>
    multimaster-cache-config.xml for scm2:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config xmlns:event="class://com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributionNamespaceContentHandler"
    xmlns:cr="class:com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler">
    <defaults>
    <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
    <init-param>
    <param-value>multi-master-pof-config.xml</param-value>
    <param-type>String</param-type>
    </init-param>
    </init-params>
    </serializer>
    </defaults>
    <caching-schemes>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>scm-combiner-cache</cache-name>
    <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
    <event:distributor>
    <event:distributor-name>{cache-name}</event:distributor-name>
    <event:distributor-external-name>{site-name}-{cluster-name}-{cache-name}</event:distributor-external-name>
    <event:distributor-scheme>
    <event:coherence-based-distributor-scheme/>
    </event:distributor-scheme>
    <event:distribution-channels>
    <event:distribution-channel>
    <event:channel-name>scm1-channel</event:channel-name>
    <event:starting-mode system-property="channel.starting.mode">enabled</event:starting-mode>
    <event:channel-scheme>
    <event:remote-cluster-channel-scheme>
    <event:remote-invocation-service-name>remote-scm1</event:remote-invocation-service-name>
    <event:remote-channel-scheme>
    <event:local-cache-channel-scheme>
    <event:target-cache-name>scm-combiner-cache</event:target-cache-name>
    </event:local-cache-channel-scheme>
    </event:remote-channel-scheme>
    </event:remote-cluster-channel-scheme>
    </event:channel-scheme>
    </event:distribution-channel>
    </event:distribution-channels>
    </event:distributor>
    </cache-mapping>
    </caching-scheme-mapping>
    <!--
    The following scheme is required for each remote-site when
    using a RemoteInvocationPublisher
    -->
    <remote-invocation-scheme>
    <service-name>remote-scm1</service-name>
    <initiator-config>
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address>xxx.16.22.151</address>
    <port>20001</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>2s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
    <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    </initiator-config>
    </remote-invocation-scheme>
    <distributed-scheme>
    <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
    <service-name>DistributedCacheWithPublishingCacheStore</service-name>
    <backing-map-scheme>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <local-scheme>
    </local-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>com.oracle.coherence.patterns.pushreplication.PublishingCacheStore</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
    </distributed-scheme>
    <proxy-scheme>
    <service-name>ExtendTcpProxyService</service-name>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address>xxx.16.22.152</address>
    <port>20002</port>
    </local-address>
    </tcp-acceptor>
    </acceptor-config>
    <autostart>true</autostart>
    </proxy-scheme>
    <near-scheme>
    <scheme-name>near-scheme-with-publishing-cachestore</scheme-name>
    <front-scheme>
    <local-scheme />
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>distributed-scheme-with-publishing-cachestore</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    </near-scheme>
    </caching-schemes>
    </cache-config>

    Error message pt 2:
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2012-09-04 15:52:26.016/2170.585 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "jar:file:/home/jg/weblogic/scm2/lib/CoherenceMultiMaster.jar!/multimaster-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
    Using the Incubator Extensible Environment for Coherence Cache Configuration
    Copyright (c) 2011, Oracle Corporation. All Rights Reserved.
    2012-09-04 15:52:26.908/2171.476 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Using the Coherence-based Event Distributor. Class:com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributorBuilder Method:<init>
    2012-09-04 15:52:27.348/2171.916 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-messagingpattern-2.8.4.32329.jar!/coherence-messagingpattern-cache-config.xml"
    2012-09-04 15:52:27.854/2172.423 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-common-2.2.0.32329.jar!/coherence-common-cache-config.xml"
    2012-09-04 15:52:31.734/2176.302 Oracle Coherence GE 3.7.1.0 <D4> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): TCMP bound to /xxx.16.22.152:8088 using SystemSocketProvider
    2012-09-04 15:52:35.674/2180.245 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): Created a new cluster "multimaster" with Member(Id=1, Timestamp=2012-09-04 15:52:32.088, Address=xxx.16.22.152:8088, MachineId=65086, Location=site:scm2,machine:scm-2,process:24982, Role=WeblogicServer, Edition=Grid Edition, Mode=Development, CpuCount=128, SocketCount=128) UID=0xAC1016980000013991FBA658FE3E1F98
    2012-09-04 15:52:35.704/2180.272 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Started cluster Name=multimaster
    Group{Address=224.3.6.0, Port=9002, TTL=0}
    MasterMemberSet(
    ThisMember=Member(Id=1, Timestamp=2012-09-04 15:52:32.088, Address=xxx.16.22.152:8088, MachineId=65086, Location=site:scm2,machine:scm-2,process:24982, Role=WeblogicServer)
    OldestMember=Member(Id=1, Timestamp=2012-09-04 15:52:32.088, Address=xxx.16.22.152:8088, MachineId=65086, Location=site:scm2,machine:scm-2,process:24982, Role=WeblogicServer)
    ActualMemberSet=MemberSet(Size=1
    Member(Id=1, Timestamp=2012-09-04 15:52:32.088, Address=xxx.16.22.152:8088, MachineId=65086, Location=site:scm2,machine:scm-2,process:24982, Role=WeblogicServer)
    MemberId|ServiceVersion|ServiceJoined|MemberState
    1|3.7.1|2012-09-04 15:52:35.677|JOINED
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[]}
    IpMonitor{AddressListSize=0}
    2012-09-04 15:52:35.951/2180.519 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2012-09-04 15:52:37.525/2182.093 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded POF configuration from "jar:file:/home/jg/weblogic/scm2/lib/CoherenceMultiMaster.jar!/multi-master-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-09-04 15:52:37.717/2182.285 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded included POF configuration from "jar:file:/home/jg/weblogic/scm2/lib/coherence.jar!/coherence-pof-config.xml"
    2012-09-04 15:52:37.731/2182.299 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded included POF configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-common-2.2.0.32329.jar!/coherence-common-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-09-04 15:52:37.742/2182.310 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded included POF configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-messagingpattern-2.8.4.32329.jar!/coherence-messagingpattern-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-09-04 15:52:37.755/2182.323 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded included POF configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-eventdistributionpattern-1.2.0.32329.jar!/coherence-eventdistributionpattern-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-09-04 15:52:38.988/2183.556 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Service DistributedCacheWithPublishingCacheStore joined the cluster with senior service member 1
    2012-09-04 15:52:40.058/2184.627 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Establising Event Distributors for the Cache [scm-combiner-cache]. Class:com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1 Method:ensureResource
    2012-09-04 15:52:40.081/2184.650 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Using Coherence-based Event Distributor [scm-combiner-cache] (scm2-multimaster-scm-combiner-cache). Class:com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributor Method:<init>
    2012-09-04 15:52:40.139/2184.707 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:DistributedCacheForDestinations, member=1): Service DistributedCacheForDestinations joined the cluster with senior service member 1
    2012-09-04 15:52:40.971/2185.539 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Establishing Event Channel [scm1-channel] for Event Distributor [scm-combiner-cache (scm2-multimaster-scm-combiner-cache)] based on [AbstractEventChannelController.Dependencies{channelName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel, eventChannelBuilder=com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannelBuilder@22e33b0f, transformerBuilder=null, startingMode=ENABLED, batchDistributionDelayMS=1000, batchSize=100, restartDelay=10000, totalConsecutiveFailuresBeforeSuspended=-1}]. Class:com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributorTemplate Method:realize
    2012-09-04 15:52:41.033/2185.601 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:DistributedCacheForSubscriptions, member=1): Service DistributedCacheForSubscriptions joined the cluster with senior service member 1
    2012-09-04 15:52:41.345/2185.914 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCacheForSubscriptionsWorker:0, member=1): Establishing the EventChannelController for CoherenceEventChannelSubscription{Subscription{subscriptionIdentifier=SubscriptionIdentifier{destinationIdentifier=Identifier{scm2-multimaster-scm-combiner-cache}, subscriberIdentifier=Identifier{scm2:multimaster:scm-combiner-cache:scm1-channel}}, status=ENABLED}, distributorIdentifier=EventDistributor.Identifier{symbolicName=scm-combiner-cache, externalName=scm2-multimaster-scm-combiner-cache}, controllerIdentifier=EventChannelController.Identifier{symbolicName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel}, controllerDependencies=AbstractEventChannelController.Dependencies{channelName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel, eventChannelBuilder=com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannelBuilder@61981853, transformerBuilder=null, startingMode=ENABLED, batchDistributionDelayMS=1000, batchSize=100, restartDelay=10000, totalConsecutiveFailuresBeforeSuspended=-1}, parameterProvider=ScopedParameterProvider{parameterProvider=SimpleParameterProvider{parameters={distributor-name=Parameter{name=distributor-name, type=java.lang.String, expression=Constant{value=Value{scm-combiner-cache}}}, distributor-external-name=Parameter{name=distributor-external-name, type=java.lang.String, expression=Constant{value=Value{scm2-multimaster-scm-combiner-cache}}}}}, innerParameterProvider=ScopedParameterProvider{parameterProvider=SimpleParameterProvider{parameters={cache-name=Parameter{name=cache-name, type=java.lang.String, expression=Constant{value=Value{scm-combiner-cache}}}, site-name=Parameter{name=site-name, type=java.lang.String, expression=Constant{value=Value{scm2}}}, cluster-name=Parameter{name=cluster-name, type=java.lang.String, expression=Constant{value=Value{multimaster}}}}}, innerParameterProvider=com.oracle.coherence.configuration.parameters.SystemPropertyParameterProvider@48652333}}, serializerBuilder=[email protected]67ea0e66}. Class:com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventChannelSubscription Method:onCacheEntryLifecycleEvent
    2012-09-04 15:52:41.485/2186.053 Oracle Coherence GE 3.7.1.0 <D5> (thread=EventChannelController:Thread-17, member=1): Attempting to connect to Remote Invocation Service remote-scm1 in EventDistributor.Identifier{symbolicName=scm-combiner-cache, externalName=scm2-multimaster-scm-combiner-cache} for EventChannelController.Identifier{symbolicName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel} Class:com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel Method:connect
    2012-09-04 15:52:41.969/2186.537 Oracle Coherence GE 3.7.1.0 <D5> (thread=remote-scm1:TcpInitiator, member=1): Started: TcpInitiator{Name=remote-scm1:TcpInitiator, State=(SERVICE_STARTED), ThreadCount=0, Codec=Codec(Format=POF), Serializer=com.tangosol.io.pof.ConfigurablePofContext, PingInterval=0, PingTimeout=5000, RequestTimeout=5000, ConnectTimeout=2000, SocketProvider=SystemSocketProvider, RemoteAddresses=[/xxx.16.22.151:20001], SocketOptions{LingerTimeout=0, KeepAliveEnabled=true, TcpDelayEnabled=false}}
    2012-09-04 15:52:41.981/2186.549 Oracle Coherence GE 3.7.1.0 <D5> (thread=EventChannelController:Thread-17, member=1): Connecting Socket to xxx.16.22.151:20001
    2012-09-04 15:52:41.986/2186.554 Oracle Coherence GE 3.7.1.0 <Info> (thread=EventChannelController:Thread-17, member=1): Error connecting Socket to xxx.16.22.151:20001: java.net.ConnectException: Connection refused
    2012-09-04 15:52:41.992/2186.560 Oracle Coherence GE 3.7.1.0 <Error> (thread=EventChannelController:Thread-17, member=1): Error while starting service "remote-scm1": com.tangosol.net.messaging.ConnectionException: could not establish a connection to one of the following addresses: [xxx.16.22.151:20001]; make sure the "remote-addresses" configuration element contains an address and port of a running TcpAcceptor
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.initiator.TcpInitiator.openConnection(TcpInitiator.CDB:120)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.Initiator.ensureConnection(Initiator.CDB:11)
    at com.tangosol.coherence.component.net.extend.remoteService.RemoteInvocationService.openChannel(RemoteInvocationService.CDB:5)
    at com.tangosol.coherence.component.net.extend.RemoteService.doStart(RemoteService.CDB:11)
    at com.tangosol.coherence.component.net.extend.RemoteService.start(RemoteService.CDB:5)
    at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:39)
    at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
    at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:1105)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:937)
    at com.oracle.coherence.environment.extensible.ExtensibleEnvironment.ensureService(ExtensibleEnvironment.java:525)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:337)
    at com.oracle.coherence.common.resourcing.InvocationServiceSupervisedResourceProvider.ensureResource(InvocationServiceSupervisedResourceProvider.java:61)
    at com.oracle.coherence.common.resourcing.InvocationServiceSupervisedResourceProvider.ensureResource(InvocationServiceSupervisedResourceProvider.java:34)
    at com.oracle.coherence.common.resourcing.AbstractSupervisedResourceProvider.getResource(AbstractSupervisedResourceProvider.java:81)
    at com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel.connect(RemoteClusterEventChannel.java:187)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventChannelController.internalStart(CoherenceEventChannelController.java:209)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController.onStart(AbstractEventChannelController.java:682)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController.access$000(AbstractEventChannelController.java:70)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController$1.run(AbstractEventChannelController.java:461)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    2012-09-04 15:52:41.996/2186.564 Oracle Coherence GE 3.7.1.0 <D5> (thread=remote-scm1:TcpInitiator, member=1): Stopped: TcpInitiator{Name=remote-scm1:TcpInitiator, State=(SERVICE_STOPPED), ThreadCount=0, Codec=Codec(Format=POF), Serializer=com.tangosol.io.pof.ConfigurablePofContext, PingInterval=0, PingTimeout=5000, RequestTimeout=5000, ConnectTimeout=2000, SocketProvider=SystemSocketProvider, RemoteAddresses=[/xxx.16.22.151:20001], SocketOptions{LingerTimeout=0, KeepAliveEnabled=true, TcpDelayEnabled=false}}
    2012-09-04 15:52:42.005/2186.573 Oracle Coherence GE 3.7.1.0 <Warning> (thread=EventChannelController:Thread-17, member=1): Failed to connect to Remote Invocation Service remote-scm1 in EventDistributor.Identifier{symbolicName=scm-combiner-cache, externalName=scm2-multimaster-scm-combiner-cache} for EventChannelController.Identifier{symbolicName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel} Class:com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel Method:connect
    2012-09-04 15:52:42.007/2186.575 Oracle Coherence GE 3.7.1.0 <Warning> (thread=EventChannelController:Thread-17, member=1): Causing exception was: Class:com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel Method:connect
    com.tangosol.net.messaging.ConnectionException: could not establish a connection to one of the following addresses: [xxx.16.22.151:20001]; make sure the "remote-addresses" configuration element contains an address and port of a running TcpAcceptor
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.initiator.TcpInitiator.openConnection(TcpInitiator.CDB:120)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.Initiator.ensureConnection(Initiator.CDB:11)
    at com.tangosol.coherence.component.net.extend.remoteService.RemoteInvocationService.openChannel(RemoteInvocationService.CDB:5)
    at com.tangosol.coherence.component.net.extend.RemoteService.doStart(RemoteService.CDB:11)
    at com.tangosol.coherence.component.net.extend.RemoteService.start(RemoteService.CDB:5)
    at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:39)
    at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
    at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:1105)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:937)
    at com.oracle.coherence.environment.extensible.ExtensibleEnvironment.ensureService(ExtensibleEnvironment.java:525)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:337)
    at com.oracle.coherence.common.resourcing.InvocationServiceSupervisedResourceProvider.ensureResource(InvocationServiceSupervisedResourceProvider.java:61)
    at com.oracle.coherence.common.resourcing.InvocationServiceSupervisedResourceProvider.ensureResource(InvocationServiceSupervisedResourceProvider.java:34)
    at com.oracle.coherence.common.resourcing.AbstractSupervisedResourceProvider.getResource(AbstractSupervisedResourceProvider.java:81)
    at com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel.connect(RemoteClusterEventChannel.java:187)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventChannelController.internalStart(CoherenceEventChannelController.java:209)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController.onStart(AbstractEventChannelController.java:682)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController.access$000(AbstractEventChannelController.java:70)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController$1.run(AbstractEventChannelController.java:461)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)

  • Latency at the apply side in multi master replication

    Hi Gurus,
    We are seeing multi-master replication latency always on one path, e.g. Site 1 -> Site 3. Transactions from Site 1 always get applied on Site 3 with some latency, although there is no such latency when the same transaction originating at Site 1 is applied from Site 1 to Site 2, in our 3-site (3-way) multi-master replication environment.
    We have investigated it and are now looking to check the system replication tables that would be involved at the apply side (e.g. Site 3).
    Could someone please let me know all the system replication tables involved at the apply side that can impact the latency? I know a few of them but not all of them.
    System.def$_AQERROR
    System.def$_ERROR
    System.def$_ORIGIN
    Thanks

    I would say that 50, 50 and 75 are a Very Large number of Job Queue processes. Do you really have that many jobs that need to run concurrently ?
    Since Advanced Replication Queues are maintained in only a small set of tables you might end up having "buffer busy waits" or "read by other session" waits or latch waits.
    BTW, what other factors did you eliminate before deciding to look at the Replication tables ?
    See the documentation on monitoring performance in replication :
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96568/rarmonit.htm#35535
    If you want to look at the "tables" start with the Replication Data Dictionary Reference at
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96568/rarpart4.htm#435986
    and then drill down through the View definitions to the underlying base tables.
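    As a starting point for the "small set of tables" point above, a quick size check of the deferred-transaction tables on the apply side shows whether they have grown large enough to matter (a sketch):
    SELECT segment_name, ROUND(bytes/1024/1024) AS mb
      FROM dba_segments
     WHERE owner = 'SYSTEM'
       AND segment_name LIKE 'DEF$%'
     ORDER BY bytes DESC;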

  • Problem in creation of Multi-master

    I created a master group in replication.
    I have decided to use multi-master replication, so I added another site to my first master site. But when I do that in OEM, I encounter a problem saying "invalid compatible version ...".
    My Oracle version is 9.2 and the compatible parameter in my Oracle is 9.2.0.0.0.
    Any help would be appreciated.
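    One quick sanity check before adding the second master site is to compare the compatible setting on both databases over the link OEM uses (a sketch; the database link name here is a placeholder):
    SELECT value FROM v$parameter WHERE name = 'compatible';
    SELECT value FROM v$parameter@second_site_link WHERE name = 'compatible';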

    It is due to the number ranges specified in the partner determination. Check which number range is assigned in the partner determination; once you check that, you have to manually enter a number within that range only. This should solve your problem.
    The path to check the number range is as below:
    SPRO - IMG - Logistics General - Business Partner - Customers - Define Account Groups and Field Selection
    At this point click the Position button at the bottom and give your account group. Select your account group and click the Detail button; inside that you can see the specified number range. The default is 08 (meaning you can specify between 400000 and 499999).
    Sometimes a number range may not have been defined; check yours and create one accordingly. This should solve your problem.
    Reward points if solved.
    Thank you
    madhan

  • Problem in multi master replication creation using DBA Studio -- replication

    Hi,
    I am trying to create multi-master replication using DBA Studio, but I am facing the following problem at the time of master group creation:
    ORA-04052: error occurred when looking up remote object SYS.SYS@CBOLDATA
    ORA-00604: error occurred at recursive SQL level 2
    ORA-01017: invalid username/password; logon denied
    ORA-02063: preceding line from CBOLDATA
    In case you want to know how I am trying the whole thing, here is how I am doing the configuration.
    First I have created master site which is created successfully.
    Here I have used two database named UPP817 & CBOLDATA
    The master site creation steps were as follows:
    1. Master Site addition (Added both UPP817 & CBOLDATA using SYSTEM username)
    2. Default User (No change is done, Default schema and password taken)
    3. Master Site Schema (Added SCOTT/tiger)
    4. Scheduled Link (No change is done, Default values taken)
    5. Purge (No change is done, Default values taken)
    6. Finished successfully
    For master group creation there are three options available:
    1. General (Only value of Name is given)
    2. Object (No object added)
    3. Master Sites (It takes UPP817 by default as the master definition site; then as a Master Site I added CBOLDATA)
    When I press Create, it gives me the error listed above.
    I really appreciate your help.
    Mukesh

    Create a public database link for CBOLDATA at the master site database.
    Also see that you are using the SNMP protocol and Oracle Pipe at both
    databases; this will be useful when making replication from a remote
    place.
    I hope this will work.
    regards
    Avinash Jadhav
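    A minimal sketch of that suggestion (the replication administrator account and its password are placeholders):
    CONN / AS SYSDBA
    CREATE PUBLIC DATABASE LINK CBOLDATA USING 'CBOLDATA';
    -- then, in the replication administrator's schema:
    CREATE DATABASE LINK CBOLDATA CONNECT TO repadmin IDENTIFIED BY repadmin_pwd;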

  • Some segments are missing in the idocs for master data zdebmas

    Hi gurus,
    Can anyone help me here? We are facing a problem:
    some segments are missing in the IDocs for master data ZDEBMAS.
    There is an issue with the generation of the Site Master IDoc (message type: ZDEBMAS, basic type: DEBMAS06).
    This uses the SAP standard program (RBDMIDOC), which reads the site master change pointers.
    Some of the segments below are missing in the IDoc:
    How can I check this problem?

    Hi,
    I found the function module. It is triggered when I run the change pointer processing.
    Whatever changes I made, only those segments are coming into the IDoc; the remaining segments are not coming.
    My requirement is to send all segments to the interface even if I change fields in only one segment. I will have to do some enhancement for that.
    Can you please help me with the logic, or with any function module that will fill all the segments?

  • Multi master replication between 5.2 and 6.3.1

    I have a setup with a master running version 5.2 and about 15 consumers (slaves), all of which have been upgraded to 6.3.1. I now want to create a multi-master topology by promoting one of these consumers to be a master, while still keeping the 5.2 master in use, as we have a bunch of other applications that depend on the 5.2 instance. Our master has two suffixes. The master server is also the CA cert authority for all the consumers. After reading the docs I narrowed the procedure down to:
    1. Promote one of the 6.3.1 consumers to hub and then to master using the dsconf promote-repl commands. The problem here is that I am not sure how I can create a single consumer that can slave both the suffixes. We currently have them being slaved to different consumers.
    Also, do I need to stop the existing replication between the 5.2 master and the would-be 6.3.1 master to promote it to hub and master?
    2. Set the replication manager manually or using dsconf set-server-prop on the new 6.3.1 master .
    3. Create a new replication agreement from 5.2 to 6.3.1 master without initializing. (using java console)
    4. Create new replication agreement from 6.3.1 to 5.2 (using command line)
    5. Create new repl agreements between the new 6.3.1 master and all the other consumers. For this do I need to first disable all the agreements between 5.2 and 6.3 or can I create new agreements without disabling the old ones?
    6. Initialize 6.3.1 from the 5.2 master.
    My biggest concern at this point is around the SSL certs and the existing trust the consumers have with the 5.2 master. Currently my 5.2 server acts as the CA authority for our certificate management with the LDAP slaves. How can I migrate this functionality to the new server, and will this affect how the slaves communicate with the new master server?
    Thanks in advance.

    Thanks Marco and Chris for the replies.
    I was able to get around the message by first manually initializing the new slave using an LDIF of the OU from the master, using DSCC to change the default replication manager account used to connect, and finally editing the dse.ldif to enter the correct crypt hash for the new replication manager password. After these steps I was able to successfully set up replication for the second OU and also promote the server to hub and master (I had to repeat the steps after promotion of the slave to master, as that somehow reset the replication manager settings).
    So right now, I have a 5.2 master with two ou's replicating to about 15 consumers.
    I promoted one of these to be a second master (from consumer to hub to master). Replication is setup from 5.2 to 6.3 master but not the other way round.
    I am a little bit nervous setting up replication the other way round, as this is our production environment and I do not want to end up blowing up my production instance. The steps I plan on taking are, from the new master server:
    1. dsconf create-repl-agmt -p 389 dc=xxxxx,dc=com <5.2-master>:389
    2. dsconf set-repl-agmt-prop -p 389 dc=xxxxx,dc=com <5.2-master>:389 auth-pwd-file:<passwd_file.txt>
    I am assuming I can do all of this while the instances are up. Also, in the above, does create-repl-agmt just create the agreement, or does it also initialize the consumer with the data? I want to ensure I do not initialize my 5.2 master with my 6.3 data.
    Thanks again
