Multi-Master Site Replication with Snapshot Features

Dear,
I am implementing multi-master site replication. I want to replicate Table A at MASTER SITE A to Table A at MASTER SITE B.
However, I don't want all the data from MASTER SITE A to be propagated to MASTER SITE B. I may want only data that meets certain criteria to be replicated over.
I know I can achieve this with snapshot site replication, but can this be done in multi-master site replication too?

Hi,
As far as I have observed, the tables marked for replication from the MASTER SITE to a child site are an exact replica of the data. Your case could be achieved only through SNAPSHOT VIEWS/GROUPS.
- [email protected]
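The snapshot route suggested above can be sketched as follows; this is an illustrative example only (the table, link, and predicate names are invented, not from the thread), showing how a snapshot defined with a WHERE clause replicates just the qualifying rows:

```sql
-- Hypothetical subset snapshot at SITE B, pulling from SITE A over a
-- database link; only rows meeting the criteria are replicated.
CREATE SNAPSHOT scott.table_a_subset
  REFRESH FAST
  AS SELECT * FROM scott.table_a@site_a
     WHERE region_id = 10;
```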

Similar Messages

  • Partial transaction for multi-master asynchronous replication

    I have a fundamental question about multi-master asynchronous replication.
    Let's consider a situation where we have 2 servers participating in multi-master asynchronous replication.
    3 tables are part of an Oracle transaction. Now suppose I mark one of these tables for replication and the other 2 are not part of any replication group.
    Say that, as part of the Oracle transaction, one record is inserted into each of the 3 tables.
    Now if I start replicating, will the change made in the table marked for replication be replicated to the other server, given that the changes made to the other 2 tables are not propagated by the deferred queue?
    Please reply.

    Mr. Bradd Piontek is very much correct. If the tables involved are interdependent, you have to place them in a group, and all of them should exist at all sites in a multi-master replication.
    If the data is updated (pushed) from a snapshot to a table at a master site, it may get updated if it is not a child table in a relationship.
    But in a multi-master replication environment even this is not possible.

  • IDM in multi-master LDAP Replication

    Hi,
    We have two functional Sun Java Directory Servers in a multi-master replication setup. Both servers have their own IDM.
    When I change a password/uid from either IDM, the changes are applied straight away on both LDAP servers and I can see the changes on the other IDM.
    The problem is that when I create a new user from the IDM of one server, the user doesn't show up in the second server's IDM unless I manually run Accounts --> Load from resource.
    Even a full reconciliation doesn't pick up the new user on that IDM. What needs to be done so that IDM picks up new users straight away in a multi-master setup?
    Thanks,
    Farhan
    Edited by: rozzx on May 5, 2009 11:32 PM
    Edited by: rozzx on May 5, 2009 11:34 PM

    Any help, guys? Why is IDM not getting updated when I add/delete a user in Directory Server? I have to do Load from Resource to get new entries every time.
    And if I delete a user from LDAP, it still stays in IDM.

  • (V8.1 REPLICATION) Initial setup for an UPDATABLE SNAPSHOT environment

    Product: ORACLE SERVER
    Date written: 2001-07-19
    (V8.1 REPLICATION) Initial setup for an UPDATABLE SNAPSHOT environment
    =============================================================
    1. Overview
    =======
    To build an updatable snapshot environment, one form of symmetric
    replication, use Replication Manager. This document summarizes the tasks
    that must be configured and performed in advance at the master and
    snapshot sites, before snapshot groups and objects are registered and
    managed through Replication Manager.
    Some of these tasks can also be carried out through Replication Manager,
    but for the tasks that require precise execution it is recommended to
    run them directly as SQL commands, as shown in this document.
    From Oracle 8.1.x, "snapshot" and "materialized view" are used
    synonymously; this document uses "snapshot".
    2. Init.ora parameters
    ======================
    The following initialization parameters must be added or modified.
    They are needed at both the master site and the snapshot site; if there
    is only one master site, the JOB_QUEUE and PARALLEL_ parameters do not
    need to be set.
    Parameter name                    Recommended initial value
    COMPATIBLE                        8.1.5.0.0 or higher
    SHARED_POOL_SIZE                  an additional 20M ~ 40M is required
    PROCESSES                         add about 12 to the current value
    GLOBAL_NAMES                      must be TRUE
    OPEN_LINKS                        4, plus 2 per additional master site
    DISTRIBUTED_TRANSACTIONS          at least 5 (default is TRANSACTIONS/4),
                                      plus 2 per additional master site
    REPLICATION_DEPENDENCY_TRACKING   TRUE
    Adjust the following values to suit your situation.
    JOB_QUEUE_INTERVAL                10 seconds
    JOB_QUEUE_PROCESSES               3, plus 1 per additional master site
    PARALLEL_MAX_SERVERS              10
    PARALLEL_MIN_SERVERS              2; the same value as the number of max
                                      servers is recommended. Do not set
                                      these if you will not use parallel
                                      propagation.
    Parameters beginning with SNAPSHOT_ and the JOB_QUEUE_KEEP_CONNECTIONS
    parameter were removed in Oracle 8.1, so do not specify them.
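    As a sketch only (the values simply restate the recommendations above
    for a configuration with one additional master site; tune them for your
    own system), the additions to init.ora might look like:

    ```
    # Illustrative init.ora fragment based on the recommendations above
    compatible = 8.1.5.0.0
    global_names = true
    open_links = 6                        # 4 + 2 for the extra master site
    distributed_transactions = 7          # 5 + 2 for the extra master site
    replication_dependency_tracking = true
    job_queue_interval = 10
    job_queue_processes = 4               # 3 + 1 for the extra master site
    parallel_max_servers = 10
    parallel_min_servers = 2
    ```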
    3. Tablespace requirements
    =======================
    The following additional tablespace capacity is needed to use replication.
    Tablespace         Recommendation
    SYSTEM             at least 40 MB free space
    ROLLBACK SEGMENTS  at least 30 MB free space
    TEMPORARY          at least 20 MB free space
    4. Installing the Replication Catalogue
    ==============================
    If the replication catalog was not installed when the database was
    created, either use the Database Configuration Assistant to add the
    Advanced Replication option to the database, or proceed as follows:
    cd $ORACLE_HOME/rdbms/admin
    sqlplus internal
    SQL>spool rep.log
    SQL>@catrep
    SQL>spool off
    This script runs for about an hour; spool the output so that you can
    check afterwards whether any problems occurred during the run.
    [Caution] Running this script creates the replication-related tables and
    queues under the SYSTEM owner.
    Therefore, if replication work is later suspended and the deferred
    queue grows, it burdens the SYSTEM user's default tablespace, so
    temporarily set the SYSTEM owner's default tablespace to TOOLS or a
    separate tablespace dedicated to replication management.
    SQL>ALTER USER system DEFAULT TABLESPACE REPTBS;
    When catrep has finished, also check the spooled rep.log, then use the
    statement below to find objects in INVALID status and try to recompile
    them.
    SQL> SELECT OWNER, OBJECT_NAME FROM ALL_OBJECTS
    WHERE STATUS = 'INVALID';
    If the body of a SYS or SYSTEM package shows up as INVALID, try to
    recompile it as follows. If only the package body is invalid but the
    ALTER PACKAGE statement says COMPILE instead of COMPILE BODY, the
    package's definition is changed, which makes every other object that
    depends on this package become invalid as well.
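    The recompile statement itself might look like the following; the
    package name here is only an example:

    ```sql
    -- Recompile only the body so that dependent objects stay valid
    ALTER PACKAGE sys.dbms_defer COMPILE BODY;
    ```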
    When the work has finished successfully, set the SYSTEM user's default
    tablespace back to its original value, TOOLS or SYSTEM.
    SQL> alter user system default tablespace system;
    5. Setup NET8
    =============
    Every server in the replication environment must run a listener, and at
    the snapshot site a service alias pointing to the master site must be
    registered in the tnsnames.ora file.
    Clients that use Replication Manager must likewise register service
    aliases in their tnsnames.ora file, for example via Net8 Easy
    Configuration, so that they can connect to both the master and the
    snapshot sites.
    6. Replication Manager Setup Wizard
    ===================================
    Once all of the work above is finished, the remaining work can also be
    performed through Replication Manager.
    However, the steps covered in this document are needed only once, when
    the replication environment is set up, and even a small mistake in this
    part causes problems later, so it is recommended that at least these
    steps be carried out directly as SQL commands.
    Replication Manager is included in OEM from version 1.4 onwards; for
    Oracle 8.1, version 2.1 is the best fit, but earlier versions can also
    be used.
    7. Replication Users
    =====================
    In general, an updatable snapshot environment raises more security
    concerns than a multi-master environment. Multi-master sites are
    typically administered centrally by the same DB admin, whereas snapshot
    sites are often geographically separated and administered independently,
    so it is important to prevent a snapshot site from damaging the master
    site's database; as a result, the user setup is more complex than in a
    multi-master environment.
    Snapshot Site                           Master Site
                                            Replication administrator
    Snapshot replication administrator ---> Proxy snapshot administrator
    Propagator                         ---> Receiver
    Refresher                          ---> Proxy refresher
    Snapshot Owner(s)                  ---> Proxy refresher
    - Replication administrator: at the master site, responsible for the
    configuration and management of the master groups.
    - Snapshot administrators: responsible for configuring and managing the
    snapshot replication groups at the snapshot site; the proxy
    administrator at the master site permits only the minimum required
    access to the master group.
    - Propagators: deliver deferred transactions to the master site, where
    the receiver applies the delivered data to the master tables.
    - Refreshers: pull data changed at the master site over to the snapshot
    site; the proxy refresher at the master site permits only the minimum
    required access to the master site's tables.
    These users can be created and used in several different ways. The most
    commonly used approach is described below.
    8. Creating replication users and granting privileges
    =====================================
    From Oracle 8.1, two models are possible when building an updatable
    snapshot environment: trusted and untrusted. The untrusted model is for
    cases where the snapshot site is not under the same administration as
    the master site, so security must be tightened further.
    If a master site has several snapshot groups and each snapshot uses its
    own group rather than sharing them, the untrusted model must be used to
    keep the groups used by the different snapshots secure from one another;
    in practice such environments are rare, however, and given the extra
    work involved, this document covers only the trusted model.
    8.1 Master site users and privileges
    (1) Replication Administrator (REPADMIN)
    SQL>CONNECT system/<password>
    SQL>CREATE USER repadmin IDENTIFIED BY <password>
    DEFAULT TABLESPACE <tablespace name>
    TEMPORARY TABLESPACE <tablespace name>;
    SQL>GRANT connect, resource TO repadmin;
    SQL>EXECUTE dbms_repcat_admin.grant_admin_any_schema('repadmin');
    SQL>GRANT comment any table TO repadmin;
    SQL>GRANT lock any table TO repadmin;
    If the replication group is confined to one particular schema,
    grant_admin_schema may be used instead of grant_admin_any_schema.
    (2) Proxy snapshot administrator / Receiver / Proxy refresher (SNAPPROXY)
    The proxy snapshot administrator, receiver, and proxy refresher at the
    master site are usually managed as a single user. You can choose not to
    create this SNAPPROXY user and let the REPADMIN user play these roles as
    well, but that grants too many privileges to the snapshot site and is
    therefore undesirable.
    If you create this user with Replication Manager, a separate user of the
    form snapproxy_n is created for each snapshot site connected to the
    master site; creating separate users like this for the same snapshot
    group serves no purpose and is not recommended.
    SQL>CONNECT system/<password>
    SQL>CREATE USER snapproxy IDENTIFIED BY <password>
    DEFAULT TABLESPACE <tablespace name>
    TEMPORARY TABLESPACE <tablespace name>;
    (3) Proxy snapshot administrator privileges
    SQL>CONNECT system/<password>
    SQL>BEGIN
    dbms_repcat_admin.register_user_repgroup(
    username => 'snapproxy',
    privilege_type => 'proxy_snapadmin',
    list_of_gnames => NULL);
    SQL>END;
    SQL>/
    Privileges are granted through the register_user_repgroup procedure.
    This grants "create session" and "execute" on the dbmsobjgwrapper /
    dbms_repcat_untrusted packages.
    When a table is added to the replication group, "select" on that table
    is granted, but DML privileges on the master objects are not granted to
    this user.
    (4) Receiver privileges
    SQL>CONNECT system/<password>
    SQL>BEGIN
    dbms_repcat_admin.register_user_repgroup(
    username => 'snapproxy',
    privilege_type => 'receiver',
    list_of_gnames => NULL);
    SQL>END;
    SQL>/
    This statement grants "create session" and "execute" on the
    dbms_defer_internal and dbms_defer packages. As before, it does not
    grant DML privileges on the master objects.
    In addition, after a replication object has been added, the receiver is
    granted execute on the <schema>.<object name>$RP package ($RL for LOB
    tables) generated during replication support, and uses it to apply
    changes at the master site.
    (5) Proxy refresher privileges
    SQL>CONNECT system/<password>
    SQL>GRANT create session TO snapproxy;
    SQL>GRANT select any table TO snapproxy;
    (6) Schema Owner(s) (referred to here as REPUSER)
    SQL>CONNECT system/<password>
    SQL>CREATE USER repuser IDENTIFIED BY <password>
    DEFAULT TABLESPACE <tablespace name>
    TEMPORARY TABLESPACE <tablespace name>;
    SQL>GRANT connect, resource TO repuser;
    8.2 Snapshot site users and privileges
    (1) Snapshot administrator / Propagator / Refresher (SNAPADMIN)
    The snapshot administrator, propagator, and refresher are usually
    managed as a single user.
    SQL>CONNECT system/<password>
    SQL>CREATE USER snapadmin IDENTIFIED BY <password>
    DEFAULT TABLESPACE <tablespace name>
    TEMPORARY TABLESPACE <tablespace name>;
    (2) Snapshot administrator privileges
    SQL>CONNECT system/<password>
    SQL>EXECUTE dbms_repcat_admin.grant_admin_any_schema('snapadmin');
    SQL>GRANT comment any table TO snapadmin;
    SQL>GRANT lock any table TO snapadmin;
    (3) Propagator privileges
    SQL>EXECUTE DBMS_DEFER_SYS.REGISTER_PROPAGATOR('snapadmin');
    (4) Refresher privileges
    SQL>GRANT create any snapshot TO snapadmin;
    SQL>GRANT alter any snapshot TO snapadmin;
    Through the corresponding proxy refresher at the master site, the
    refresher gains the "select any table" privilege on the master site's
    tables.
    (5) Schema Owner(s) (referred to here as REPUSER)
    A schema with the same name as the master site schema that owns the
    tables to be replicated is also required at the snapshot site; the
    passwords need not be the same.
    If you create an updatable snapshot in a different schema, the snapshot
    itself is created without error, but running create_snapshot_repobject
    will fail with errors such as:
    ORA-23306: schema <schema name> does not exist
    ORA-23308: object <schema name>.<object name> does not exist or is invalid
    When creating the schema owner as shown below, note that these
    privileges must be granted directly, not through a role. That is, even
    though privileges such as CREATE TABLE have already been granted through
    CONNECT and RESOURCE, when a procedure or package later needs one of
    these privileges, a privilege granted through a role is not recognized
    and ORA-1031 is raised, so grant the privileges directly as shown below.
    SQL>CONNECT system/<password>
    SQL>CREATE USER repuser IDENTIFIED BY <password>
    DEFAULT TABLESPACE <tablespace name>
    TEMPORARY TABLESPACE <tablespace name>;
    SQL>GRANT connect, resource TO repuser;
    SQL>GRANT create table TO repuser;
    SQL>GRANT create snapshot TO repuser;
    What this user is allowed to do against the master site depends on which
    user the database link created from this user to the master site
    connects as; this is explained in section 9.4.
    (6) End Users
    There may be separate users that use the snapshots in the snapshot
    schema. In that case no special additional privileges are needed; just
    grant the necessary privileges on the snapshot objects.
    9. Database Link
    ================
    Before creating database links, it is important that every database in
    the replication environment has a unique global name. The default global
    name is the same as <db_name>, but it can be changed with the statement
    below, so even databases with the same db_name can be configured into a
    replication environment by changing their global names.
    SQL>ALTER DATABASE RENAME GLOBAL_NAME TO <new global name>;
    SQL>SELECT * FROM GLOBAL_NAME;
    In an updatable snapshot environment, if the database links are not set
    up correctly, problems will keep occurring later, so take care. The
    database links only need to be created from the snapshot site to the
    master site; there is no need to create links from the master site to
    the snapshot site.
    (1) Public Database Links
    Create a public database link using a Net8 connection alias, as shown
    below. The private links created afterwards will not include a USING
    clause, and must all have the same db link name as the public database
    link (because global_names=true).
    SQL>CONNECT system/<password>
    SQL>CREATE PUBLIC DATABASE LINK <remote databases global name.world>
    USING 'Net8 alias';
    The Net8 alias used here must be defined in the snapshot site's
    tnsnames.ora file.
    (2) Snapshot Administrator / Snapshot propagator / refresher
    The snapshot administrator needs a private database link to the proxy
    snapshot administrator, as follows:
    SQL>CONNECT snapadmin/<password>
    SQL>CREATE DATABASE LINK <remote databases global name.world>
    CONNECT TO snapproxy IDENTIFIED BY <password>;
    (3) Schema Owner (referred to here as REPUSER)
    Every schema that owns snapshots must create a private database link to
    the master site. Two approaches can be considered here: "schema to
    schema" and "schema to proxy refresher".
    If the link is created schema-to-schema, the snapshot owner can easily
    connect to the master site and perform DDL or DML on the master tables.
    This is generally undesirable, so creating the private database link to
    the master's proxy refresher is recommended. Since the proxy refresher
    and the proxy administrator at the master are usually the same user,
    connecting to the proxy administrator is sufficient.
    This way the snapshot schema owner gains the "select any table"
    privilege on the master tables via the proxy refresher user, but DDL and
    DML are not allowed, which is safer.
    SQL>CONNECT repuser/<password>
    SQL>CREATE DATABASE LINK <remote databases global name.world>
    CONNECT TO snapproxy IDENTIFIED BY <password>;
    (5) End User
    No database link is required.
    9.6 Testing Database Links
    Once the database links have been created, verify for each user that the
    link was created correctly:
    SQL>CONNECT user/<password>
    SQL>SELECT * FROM DUAL@<database link name>;
    10. Schedule configuration
    ================
    If steps 1 through 9 completed successfully, you can now add
    replication/snapshot groups and objects using Replication Manager.
    When using Replication Manager, connect to the master site as the
    repadmin user and to the snapshot site as the snapadmin user.
    Scheduling is needed so that changes made at the snapshot site are
    periodically pushed to the master site, and the pushed data is purged on
    another interval.
    To do this with Replication Manager, use the following menus:
    - Scheduling --> Scheduled Links
    - Database Information --> Purge Job Tab
    To perform this work with SQL statements instead, run the following at
    the snapshot site:
    SQL>CONNECT snapadmin/<password>
    SQL>BEGIN
    dbms_defer_sys.schedule_push(
    destination => '<destination databases global name>.WORLD',
    interval => '/*10:Mins*/ sysdate + 10/(60*24)',
    next_date => sysdate,
    stop_on_error => FALSE,
    delay_seconds => 0,
    parallelism => 1);
    SQL>END;
    SQL>/
    SQL>BEGIN
    dbms_defer_sys.schedule_purge(
    next_date => sysdate,
    interval => '/*1:Hr*/ sysdate + 1/24',
    delay_seconds => 0,
    rollback_segment => '');
    SQL>END;
    SQL>/
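    Once the push and purge jobs are running, the deferred transaction queue
    can be checked from the snapshot site; a minimal sketch using the
    standard DEFTRAN and DEFERROR views:

    ```sql
    -- Deferred transactions still waiting to be pushed
    SQL>SELECT deferred_tran_id, delivery_order FROM deftran;
    -- Transactions that failed to apply at the destination
    SQL>SELECT deferred_tran_id, destination, error_number FROM deferror;
    ```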

  • Can not refresh snapshot changes after importing data of master site

    Hello !
    I have two database machines, one as the master site and one as the snapshot site. Because of a hard disk error on the master machine, I used an export file to recover my database. After importing, I found I can't refresh the refresh group on the snapshot. Who can tell me why?
    Thanks in advance!
    (exp system/manager full=y inctype=complete file='/home/save/backdata/xhsdcomp.dat')
    (imp system/manager inctype=system full=Y file='/home/save/backdata/xhsdcomp.dat'
    imp system/manager inctype=restore full=Y file='/home/save/backdata/xhsdcomp.dat')

    You haven't listed the errors that you're receiving when attempting to refresh your refresh group, but if your snapshots are attempting to fast refresh, I suspect it's because the creation timestamp of the snapshot log on the master site is newer than the creation timestamp of the snapshot. In this case you will need to do a complete refresh of the snapshot (or drop and recreate the snapshot) before you will be able to fast refresh it again.
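    As a sketch of the complete refresh suggested above (the snapshot name
    is illustrative), using the DBMS_SNAPSHOT package:

    ```sql
    -- 'C' forces a complete refresh; once it succeeds, fast ('F')
    -- refreshes should work again
    EXECUTE DBMS_SNAPSHOT.REFRESH('scott.table_a_snap', 'C');
    ```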
    If this is not the case, please post the errors you are receiving when you attempt to refresh the refresh group.
    HTH,
    -- Anita
    Oracle Support Services

  • Cross-site-collection navigation with the publishing feature enabled in SharePoint 2010?

    Hi,
    Is it possible to do cross-site-collection navigation in SharePoint 2010 with the publishing feature enabled? Right now we have a site collection with all the departmental sites within it. We are trying to create a separate site collection with a separate content
    database for each department for better management. But there is a problem with global navigation, as OOB SharePoint does not provide cross-site-collection navigation, so we are looking for a way to navigate across multiple site collections under a single
    umbrella. I was able to get cross-site-collection navigation in my development environment without the publishing feature enabled using the link below. But the problem is with the production environment, as all the site collections and sites have the publishing feature enabled. How
    am I going to do cross-site navigation with the publishing feature enabled?
    http://www.itsolutionbraindumps.com/2011/10/sharepoint-2010-cross-site-collection.html
    Any link or suggest will be greatly appreciated !

    Hi,
    According to your description, my understanding is that you want to create cross-site-collection navigation with the publishing feature enabled in SharePoint 2010.
    Publishing sites (sites with the publishing infrastructure) have their own navigation API, and preserving navigation across publishing sites is a much more complicated task.
    We need to implement our own custom navigation provider.
    Please refer to the link below about the cross site collections navigation with publishing feature enabled:
    http://sadomovalex.blogspot.com/2010/12/cross-site-and-cross-site-collection.html
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support

  • Using attribute uniqueness with multi-master replication?

    Hi,
    I'm trying to use attribute uniqueness in an iDS 5.1 multi-master replication environment. I have created a plug-in instance for the attribute (memberID) on each directory instance (same installation on NT) and tested it: if I try to create a duplicate value under the same instance, I get a constraint error as expected. However, if I create an entry under one instance and then create a second entry (with a different DN) with the same attribute value on the second instance, the entry is written with no complaints. If I create the entries with an identical DN, the directory automatically adds nsuniqueID to the RDN of the second entry to maintain DN uniqueness, but it doesn't seem to mind duplicate values within the entry, despite the plug-in.
    BTW, I've tested MMR and it is working, and I'm using a subtree to enforce uniqueness.
    Regards
    Simon

    The attribute uniqueness plugin only ensures uniqueness on a single master before the entry is added. It doesn't check replicated operations, since they have already been accepted and a positive result was returned to the client. So in a multi-mastered environment it is still possible to add 2 identical attributes if, as you say, you add the entries at about the same time on both master servers.
    We're working on a solution to have attribute uniqueness work in a multi-mastered environment, but we're worried about the impact it may have on performance.
    Regards,
    Ludovic.

  • Problem with Multi Master Replication

    Hello All,
    I've set up multi-master replication with no consumers, i.e. I have 2 suppliers which should update each other. The setup seems to be fine, since the initialization of one supplier by the other works very well. But I couldn't get the synchronization between the suppliers to work. I noticed in the error log that the sync-scan request arrived but was ignored. What are the possible causes of this error?
    Please help me with this regard.
    Thanks in advance,
    Rajesh

    Hello All,
    Rich, you have been a support to most of us in the group (indeed a great help to me). It's splendid work.
    My problems disappeared after applying the service pack; the service pack is in fact mainly meant to sort out the replication issues.
    Advice from my experience: the patch may be more than enough for most of the replication issues.
    One observation: I had the replica busy error, but I didn't have to restart the replica as suggested by some of the previous threads. It seems the service pack included a fix for it.
    Thank you all,
    Best Regards,
    Rajesh

  • Multi Master Replication - Only works for some tables?

    I have a multi-master replication between a 9i and an 8.1.6 database.
    All the tables are in the same tablespace and have the same owner. The replication user has full privileges on all the tables.
    When setting up the replication, some tables create properly but others fail with a "table not found" error.
    Ideas, anyone?
    Andrew

    You said that you have a 9i replicating with an 8.1.6.
    I tried the same thing but with two 9i Enterprise Edition databases, downloaded free from www.oracle.com.
    When I ran
    exec dbms_repcat.add_master_database(gname=>'groupname', master=>'replica_link')
    this error appeared:
    ERROR at line 1:
    ORA-23375: feature is incompatible with database version at replica_link
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2159
    ORA-06512: at "SYS.DBMS_REPCAT", line 146
    ORA-06512: at line 1
    Please help me if you have any idea.

  • Procedure to create Multi Master Replication

    Hi,
    Can anybody help with this? How do I do multi-master replication from scratch? I used to do single-master; this is the first time I am trying multi-master replication. I am looking for a doc but am unable to find one. Can anybody tell me how to do multi-master in a little more detail? I installed my software and loaded my company schema and data on one server. I have another server ready for the software install and data load.

    With regard to the post "RTFM":
    in my experience the SunONE/iPlanet manuals are very badly written and in some cases just plain wrong.
    Sure, start by reading the manuals, then make sure you also read the release notes, and you can still be confused in some cases.
    The section on MMR in the manual had a number of mistakes last time I looked.
    That is one reason why I think community sites and generally available "support" are such prime requisites in choosing software.
    Just my 2 cents.

  • Multi-Master Replication or......?

    user requirement:
    1. 3 sites, separated by geographic distance.
    2. using privately owned wire for network connectivity.
    3. any breakage of connection between sites, users can still get to a database (i.e., database at each site).
    4. any breakage of database, users can still get to a database.
    5. databases must be within 30 minutes of in-sync with each other.
    The customer started out talking about a solution (multi-master
    replication) instead of the requirements (listed above). Now that I have the requirements, I'm looking at options for how to meet them. As part of that work, I thought I'd look to this community for ideas.
    thanks!

    user requirement:
    1. 3 sites, separated by geographic distance.
    2. using privately owned wire for network connectivity.
    3. any breakage of connection between sites, users can still get to a database (i.e., database at each site).
    4. any breakage of database, users can still get to a database.
    5. databases must be within 30 minutes of in-sync with each other.
    One question: by "still get to a database" do you mean updatable, or just queryable? The latter can be done with snapshots or other means, but for updates, multi-master replication is the only "easy" choice.

  • Missing Master Site in Multi Master Environment

    Hello,
    we are using MM-Replication on three Master sites with two replication groups.
    Replication support was installed and configured by the application vendor.
    I do not have experience with Multi Master Replication.
    The data of the RG (Readonlymastergroup) was never replicated to one Master
    (DEFS01). When I queried dba_repgroup on the failing site, I found the
    RG status to be quiesced. There were no pending administrative requests
    on this site or the Master Definition site, but I found that there is no definition
    of the failing site's membership to the RG on the MD site.
    Master Definition Site
    select gname, dblink, master, masterdef from dba_repsites;
    GNAME                DBLINK  MASTER  MASTERDEF
    MASTERGROUP          DEMD01  Y       Y
    MASTERGROUP          DEGS01  Y       N
    READONLYMASTERGROUP  DEMD01  Y       Y
    READONLYMASTERGROUP  DEGS01  Y       N
    MASTERGROUP          DEFS01  Y       N
    Failing Master Site
    select gname, dblink, master, masterdef from dba_repsites@defs01;
    GNAME                DBLINK  MASTER  MASTERDEF
    MASTERGROUP          DEFS01  Y       N
    MASTERGROUP          DEMD01  Y       Y
    READONLYMASTERGROUP  DEFS01  Y       N
    READONLYMASTERGROUP  DEMD01  Y       Y
    MASTERGROUP          DEGS01  Y       N
    READONLYMASTERGROUP  DEGS01  Y       N
    Can anybody explain how this could happen? AFAIK adding a master to a
    replication group is a distributed transaction that should be rolled
    back on all sites if it fails on one.
    To correct this situation, I am thinking of removing the RG from DEFS01
    with DBMS_REPCAT.DROP_MASTER_REPGROUP (on DEFS01) and then rejoining
    DEFS01 with DBMS_REPCAT.ADD_MASTER_DATABASE on the Master
    Definition Site.
    Will this work? Anything else I have to think of?
    Regards,
    uwe
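    A sketch of the repair sequence described in the question, assuming that
    dropping the group locally and re-adding the site from the master
    definition site is the chosen fix (group and site names taken from the
    posted output):

    ```sql
    -- On DEFS01: drop the local, never-populated copy of the group
    EXECUTE DBMS_REPCAT.DROP_MASTER_REPGROUP(gname => 'READONLYMASTERGROUP');
    -- On the master definition site: re-add DEFS01 to the group
    EXECUTE DBMS_REPCAT.ADD_MASTER_DATABASE(
      gname  => 'READONLYMASTERGROUP',
      master => 'DEFS01');
    ```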

    Hi Janos,
    I have tried the multi-language scenario. You need to have the following setup in your system:
    > Install the language pack
    > You need to upload content using the below CSV files:
    multistring.csv
    Material.csv
    Plant CSV
    Product_Plant_Relationship_template.csv
    If you can share your email id, I will email you a sample CSV file which you can use. Or else please follow the steps below to download the CSV file from the system:
    Click Setup
    Click System Administration tab
    Click Import data
    Click the New button
    Select the radio button Upload to Server and click next
    Select any dummy Excel file from the local server when browsing for the Upload Import file option and click next
    Select the Preview import check box, search for MultiString in the drop-down object type, and click next
    On that page you will see the template.csv link; click it and save the file. This is the file you are looking for.
    Create the content and upload the file. Hope this will help you!
    There is another way of importing master data in multiple languages. In this scenario the master data source would be ECC. You need to run the SAP-provided standard report to export the details. You can then import this XML file into the sourcing system. Please see the blog link below for more detail:
    http://scn.sap.com/community/sourcing/blog/2011/09/29/extracting-erp-master-data-for-sap-sourcing
    Regards,
    Deepak

  • Basic Multi Master Replication doesn't work!

    Hi all,
    I am studying Oracle replication and I tried to apply multi-master replication (MMR) between two databases: WinXp12 (master definition) and WinXp11.
    After successfully doing all the steps in the code below to replicate table SCOTT.DEPT, I inserted a row into the table on WinXp12. But I didn't see the inserted row at the remote site WinXp11.
    Where did I go wrong? Did I miss anything?
    By the way, the deferror table contains no rows and DBA_JOBS shows no failures.
    Thanks in advance.
    CREATE USER REPADMIN IDENTIFIED BY REPADMIN;
    GRANT CONNECT, RESOURCE, CREATE DATABASE LINK TO REPADMIN;
    EXECUTE DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_SCHEMA('REPADMIN');
    GRANT COMMENT ANY TABLE TO REPADMIN;
    GRANT LOCK ANY TABLE TO REPADMIN;
    EXECUTE DBMS_DEFER_SYS.REGISTER_PROPAGATOR('REPADMIN');
    CONN REPADMIN/REPADMIN
    CREATE DATABASE LINK WINXP11 CONNECT TO REPADMIN IDENTIFIED BY REPADMIN USING 'WINXP11';
    SELECT SYSDATE FROM DUAL@WINXP11 ;
    -- Add jobs to WINXP11
    CONNECT REPADMIN/REPADMIN@WINXP11
    BEGIN
      DBMS_DEFER_SYS.SCHEDULE_PUSH(
        DESTINATION => 'WINXP12',
        INTERVAL => 'SYSDATE + 1/(60*24)',
        NEXT_DATE => SYSDATE,
        STOP_ON_ERROR => FALSE,
        DELAY_SECONDS => 0,
        PARALLELISM => 1);
    END;
    BEGIN
    DBMS_DEFER_SYS.SCHEDULE_PURGE(
      NEXT_DATE => SYSDATE,
      INTERVAL => 'SYSDATE + 1/24',
      DELAY_SECONDS => 0,
      ROLLBACK_SEGMENT => '');
    END;
    -- ADD JOBS TO WinXP12
    CONNECT REPADMIN/REPADMIN@WINXP12
    BEGIN
      DBMS_DEFER_SYS.SCHEDULE_PUSH(
        DESTINATION => 'WINXP11',
        INTERVAL => 'SYSDATE + 1/(60*24)',
        NEXT_DATE => SYSDATE,
        STOP_ON_ERROR => FALSE,
        DELAY_SECONDS => 0,
        PARALLELISM => 1);
    END;
    BEGIN
    DBMS_DEFER_SYS.SCHEDULE_PURGE(
      NEXT_DATE => SYSDATE,
      INTERVAL => 'SYSDATE + 1/24',
      DELAY_SECONDS => 0,
      ROLLBACK_SEGMENT => '');
    END;
    BEGIN
       DBMS_REPCAT.CREATE_MASTER_REPGROUP(
         GNAME => '"MGROUP1"',
         QUALIFIER => '',
         GROUP_COMMENT => '');
    END;
    BEGIN
       DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
         GNAME => '"MGROUP1"',
         TYPE => 'TABLE',
         ONAME => '"DEPT"',
         SNAME => '"SCOTT"');
    END;
    -- Generate Replication Support
    BEGIN
       DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
         SNAME => '"SCOTT"',
         ONAME => '"DEPT"',
         TYPE => 'TABLE',
         MIN_COMMUNICATION => TRUE,
         GENERATE_80_COMPATIBLE => FALSE);
    END;
    SELECT * FROM DBA_REPCATLOG ;
    -- NO ERROR
    BEGIN
    DBMS_REPCAT.RESUME_MASTER_ACTIVITY(
    GNAME => '"MGROUP1"');
    END;
    /
    BEGIN
    DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY(
    GNAME => '"MGROUP1"');
    END;
    /
    BEGIN
    DBMS_REPCAT.ADD_MASTER_DATABASE(
    GNAME => '"MGROUP1"',
    MASTER => 'WINXP11');
    END;
    /
    -- Restart replication support:
    BEGIN
    DBMS_REPCAT.RESUME_MASTER_ACTIVITY(
    GNAME => '"MGROUP1"');
    END;
    /
    -- here I could see in WinXP11 the tables created in SCOTT schema with data
    -- in WinXP12 I successfully issued the command
    insert into dept values ( 44,'text',null);
    -- I don't see the data in WinXP11
    -- No rows in deferror
    -- dba_jobs shows that there is not broken job
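    Before digging further, it may be worth confirming that the insert on WinXP12 is actually being queued as a deferred transaction and that WINXP11 is registered as its destination. A diagnostic sketch, run as REPADMIN on the originating site (a manual push is also attempted instead of waiting for the scheduled job):

    CONNECT REPADMIN/REPADMIN@WINXP12
    -- Deferred transactions queued locally (should show the insert)
    SELECT DEFERRED_TRAN_ID, DELIVERY_ORDER FROM DEFTRAN;
    -- Destinations registered for each queued transaction
    SELECT DEFERRED_TRAN_ID, DEST_DBLINK FROM DEFTRANDEST;
    -- Errors raised by earlier push attempts, if any
    SELECT DEFERRED_TRAN_ID, DESTINATION, ERROR_MSG FROM DEFERROR;
    -- Try pushing manually to surface any connectivity or apply error
    EXECUTE DBMS_DEFER_SYS.PUSH(DESTINATION => 'WINXP11');

    If DEFTRAN is empty, the transaction was never deferred (replication support problem); if it is queued but DEFTRANDEST shows no destination, the master was not added correctly.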

    Hi!
    You will need to create a public database link that supplies the connect string, and a private database link for REPADMIN that picks it up:
    CONN / AS SYSDBA
    CREATE PUBLIC DATABASE LINK WINXP11 USING 'WINXP11';
    CONN REPADMIN/REPADMIN
    CREATE DATABASE LINK WINXP11 CONNECT TO REPADMIN IDENTIFIED BY REPADMIN;
    SELECT SYSDATE FROM DUAL@WINXP11;
    Regards,
    PP
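    For completeness, the same link arrangement presumably needs to exist at the other master as well, so pushes work in both directions — a mirrored sketch for WinXP11:

    -- On WINXP11: the public link supplies the connect string ...
    CONN / AS SYSDBA
    CREATE PUBLIC DATABASE LINK WINXP12 USING 'WINXP12';
    -- ... and the private REPADMIN link inherits it, adding credentials
    CONN REPADMIN/REPADMIN
    CREATE DATABASE LINK WINXP12 CONNECT TO REPADMIN IDENTIFIED BY REPADMIN;
    SELECT SYSDATE FROM DUAL@WINXP12;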

  • Cannot get multi master replication to work - TcpAcceptor error

    I am running Coherence 3.7.1 and WebLogic 10.3.5 on Solaris. I am packaging my Coherence override, cache-config, and POF config files inside a jar file and adding that jar, along with coherence.jar, to my WebLogic domain's /lib directory so they are added to the classpath on startup.
    When adding something to the cache the put is successful and I can retrieve the value from the cache but I get the error:
    2012-09-04 15:52:41.992/2186.560 Oracle Coherence GE 3.7.1.0 <Error> (thread=EventChannelController:Thread-17, member=1): Error while starting service "remote-scm1": com.tangosol.net.messaging.ConnectionException: could not establish a connection to one of the following addresses: [xxx.16.22.151:20001]; make sure the "remote-addresses" configuration element contains an address and port of a running TcpAcceptor
    Additionally I am not seeing the message "TcpAcceptor now listening for connections on . . ." on WebLogic or Coherence startup but I think it is supposed to be there. There is a tcp-acceptor defined in my cache-config.xml.
    So I think I am supposed to have a TcpAcceptor running and it is not starting, possibly because the configuration for that element is being overridden.
    I can't seem to track this one down; any insight would be appreciated.
    Thanks
    JG
    <h5>Cache put/get that is working with the exception of the push replication</h5>
    NamedCache cache = CacheFactory.getCache("scm-combiner-cache");
    cache.put(key, value);
    final Object cachedValue = cache.get(key); // local get succeeds; only the push replication fails
    <h5>My multi-master-pof-config.xml for both scm1 and scm2</h5>
    <pof-config>
    <user-type-list>
    <include>coherence-pof-config.xml</include>
    <include>coherence-common-pof-config.xml</include>
    <include>coherence-messagingpattern-pof-config.xml</include>
    <include>coherence-eventdistributionpattern-pof-config.xml</include>
    </user-type-list>
    </pof-config>
    <h5>tangosol-coherence-override.xml for scm1:</h5>
    <?xml version="1.0" encoding="UTF-8"?>
    <coherence>
    <cluster-config>
    <member-identity>
    <site-name system-property="tangosol.coherence.site">scm1</site-name>
    <cluster-name system-property="tangosol.coherence.cluster">multimaster</cluster-name>
    </member-identity>
    <multicast-listener>
    <address>224.3.6.0</address>
    <port>9001</port>
    <time-to-live>0</time-to-live>
    </multicast-listener>
    </cluster-config>
    <configurable-cache-factory-config>
    <class-name>com.oracle.coherence.environment.extensible.ExtensibleEnvironment</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value system-property="tangosol.coherence.cacheconfig">multimaster-cache-config.xml</param-value>
    </init-param>
    </init-params>
    </configurable-cache-factory-config>
    </coherence>
    <h5>tangosol-coherence-override.xml for scm2:</h5>
    <?xml version="1.0" encoding="UTF-8"?>
    <coherence>
    <cluster-config>
    <member-identity>
    <site-name system-property="tangosol.coherence.site">scm2</site-name>
    <cluster-name system-property="tangosol.coherence.cluster">multimaster</cluster-name>
    </member-identity>
    <multicast-listener>
    <address>224.3.6.0</address>
    <port>9002</port>
    <time-to-live>0</time-to-live>
    </multicast-listener>
    </cluster-config>
    <configurable-cache-factory-config>
    <class-name>com.oracle.coherence.environment.extensible.ExtensibleEnvironment</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value system-property="tangosol.coherence.cacheconfig">multimaster-cache-config.xml</param-value>
    </init-param>
    </init-params>
    </configurable-cache-factory-config>
    </coherence>
    <h5>multimaster-cache-config.xml for scm1:</h5>
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config xmlns:event="class://com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributionNamespaceContentHandler"
    xmlns:cr="class://com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler">
    <defaults>
    <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
    <init-param>
    <param-value>multi-master-pof-config.xml</param-value>
    <param-type>String</param-type>
    </init-param>
    </init-params>
    </serializer>
    </defaults>
    <caching-schemes>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>scm-combiner-cache</cache-name>
    <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
    <event:distributor>
    <event:distributor-name>{cache-name}</event:distributor-name>
    <event:distributor-external-name>{site-name}-{cluster-name}-{cache-name}</event:distributor-external-name>
    <event:distributor-scheme>
    <event:coherence-based-distributor-scheme/>
    </event:distributor-scheme>
    <event:distribution-channels>
    <event:distribution-channel>
    <event:channel-name>scm2-channel</event:channel-name>
    <event:starting-mode system-property="channel.starting.mode">enabled</event:starting-mode>
    <event:channel-scheme>
    <event:remote-cluster-channel-scheme>
    <event:remote-invocation-service-name>remote-scm2</event:remote-invocation-service-name>
    <event:remote-channel-scheme>
    <event:local-cache-channel-scheme>
    <event:target-cache-name>scm-combiner-cache</event:target-cache-name>
    </event:local-cache-channel-scheme>
    </event:remote-channel-scheme>
    </event:remote-cluster-channel-scheme>
    </event:channel-scheme>
    </event:distribution-channel>
    </event:distribution-channels>
    </event:distributor>
    </cache-mapping>
    </caching-scheme-mapping>
    <!--
    The following scheme is required for each remote-site when
    using a RemoteInvocationPublisher
    -->
    <remote-invocation-scheme>
    <service-name>remote-scm2</service-name>
    <initiator-config>
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address>xxx.16.22.152</address>
    <port>20002</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>2s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
    <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    </initiator-config>
    </remote-invocation-scheme>
    <distributed-scheme>
    <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
    <service-name>DistributedCacheWithPublishingCacheStore</service-name>
    <backing-map-scheme>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <local-scheme>
    </local-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>com.oracle.coherence.patterns.pushreplication.PublishingCacheStore</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
    </distributed-scheme>
    <proxy-scheme>
    <service-name>ExtendTcpProxyService</service-name>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address>xxx.16.22.151</address>
    <port>20001</port>
    </local-address>
    </tcp-acceptor>
    </acceptor-config>
    <autostart>true</autostart>
    </proxy-scheme>
    <near-scheme>
    <scheme-name>near-scheme-with-publishing-cachestore</scheme-name>
    <front-scheme>
    <local-scheme />
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>distributed-scheme-with-publishing-cachestore</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    </near-scheme>
    </caching-schemes>
    </cache-config>
    <h5>multimaster-cache-config.xml for scm2:</h5>
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config xmlns:event="class://com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributionNamespaceContentHandler"
    xmlns:cr="class://com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler">
    <defaults>
    <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
    <init-param>
    <param-value>multi-master-pof-config.xml</param-value>
    <param-type>String</param-type>
    </init-param>
    </init-params>
    </serializer>
    </defaults>
    <caching-schemes>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>scm-combiner-cache</cache-name>
    <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
    <event:distributor>
    <event:distributor-name>{cache-name}</event:distributor-name>
    <event:distributor-external-name>{site-name}-{cluster-name}-{cache-name}</event:distributor-external-name>
    <event:distributor-scheme>
    <event:coherence-based-distributor-scheme/>
    </event:distributor-scheme>
    <event:distribution-channels>
    <event:distribution-channel>
    <event:channel-name>scm1-channel</event:channel-name>
    <event:starting-mode system-property="channel.starting.mode">enabled</event:starting-mode>
    <event:channel-scheme>
    <event:remote-cluster-channel-scheme>
    <event:remote-invocation-service-name>remote-scm1</event:remote-invocation-service-name>
    <event:remote-channel-scheme>
    <event:local-cache-channel-scheme>
    <event:target-cache-name>scm-combiner-cache</event:target-cache-name>
    </event:local-cache-channel-scheme>
    </event:remote-channel-scheme>
    </event:remote-cluster-channel-scheme>
    </event:channel-scheme>
    </event:distribution-channel>
    </event:distribution-channels>
    </event:distributor>
    </cache-mapping>
    </caching-scheme-mapping>
    <!--
    The following scheme is required for each remote-site when
    using a RemoteInvocationPublisher
    -->
    <remote-invocation-scheme>
    <service-name>remote-scm1</service-name>
    <initiator-config>
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address>xxx.16.22.151</address>
    <port>20001</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>2s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
    <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    </initiator-config>
    </remote-invocation-scheme>
    <distributed-scheme>
    <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
    <service-name>DistributedCacheWithPublishingCacheStore</service-name>
    <backing-map-scheme>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <local-scheme>
    </local-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>com.oracle.coherence.patterns.pushreplication.PublishingCacheStore</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
    </distributed-scheme>
    <proxy-scheme>
    <service-name>ExtendTcpProxyService</service-name>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address>xxx.16.22.152</address>
    <port>20002</port>
    </local-address>
    </tcp-acceptor>
    </acceptor-config>
    <autostart>true</autostart>
    </proxy-scheme>
    <near-scheme>
    <scheme-name>near-scheme-with-publishing-cachestore</scheme-name>
    <front-scheme>
    <local-scheme />
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>distributed-scheme-with-publishing-cachestore</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    </near-scheme>
    </caching-schemes>
    </cache-config>

    <h5>Error message pt 2</h5>
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2012-09-04 15:52:26.016/2170.585 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "jar:file:/home/jg/weblogic/scm2/lib/CoherenceMultiMaster.jar!/multimaster-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
    Using the Incubator Extensible Environment for Coherence Cache Configuration
    Copyright (c) 2011, Oracle Corporation. All Rights Reserved.
    2012-09-04 15:52:26.908/2171.476 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Using the Coherence-based Event Distributor. Class:com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributorBuilder Method:<init>
    2012-09-04 15:52:27.348/2171.916 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-messagingpattern-2.8.4.32329.jar!/coherence-messagingpattern-cache-config.xml"
    2012-09-04 15:52:27.854/2172.423 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-common-2.2.0.32329.jar!/coherence-common-cache-config.xml"
    2012-09-04 15:52:31.734/2176.302 Oracle Coherence GE 3.7.1.0 <D4> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): TCMP bound to /xxx.16.22.152:8088 using SystemSocketProvider
    2012-09-04 15:52:35.674/2180.245 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): Created a new cluster "multimaster" with Member(Id=1, Timestamp=2012-09-04 15:52:32.088, Address=xxx.16.22.152:8088, MachineId=65086, Location=site:scm2,machine:scm-2,process:24982, Role=WeblogicServer, Edition=Grid Edition, Mode=Development, CpuCount=128, SocketCount=128) UID=0xAC1016980000013991FBA658FE3E1F98
    2012-09-04 15:52:35.704/2180.272 Oracle Coherence GE 3.7.1.0 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Started cluster Name=multimaster
    Group{Address=224.3.6.0, Port=9002, TTL=0}
    MasterMemberSet(
    ThisMember=Member(Id=1, Timestamp=2012-09-04 15:52:32.088, Address=xxx.16.22.152:8088, MachineId=65086, Location=site:scm2,machine:scm-2,process:24982, Role=WeblogicServer)
    OldestMember=Member(Id=1, Timestamp=2012-09-04 15:52:32.088, Address=xxx.16.22.152:8088, MachineId=65086, Location=site:scm2,machine:scm-2,process:24982, Role=WeblogicServer)
    ActualMemberSet=MemberSet(Size=1
    Member(Id=1, Timestamp=2012-09-04 15:52:32.088, Address=xxx.16.22.152:8088, MachineId=65086, Location=site:scm2,machine:scm-2,process:24982, Role=WeblogicServer)
    MemberId|ServiceVersion|ServiceJoined|MemberState
    1|3.7.1|2012-09-04 15:52:35.677|JOINED
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[]}
    IpMonitor{AddressListSize=0}
    2012-09-04 15:52:35.951/2180.519 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2012-09-04 15:52:37.525/2182.093 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded POF configuration from "jar:file:/home/jg/weblogic/scm2/lib/CoherenceMultiMaster.jar!/multi-master-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-09-04 15:52:37.717/2182.285 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded included POF configuration from "jar:file:/home/jg/weblogic/scm2/lib/coherence.jar!/coherence-pof-config.xml"
    2012-09-04 15:52:37.731/2182.299 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded included POF configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-common-2.2.0.32329.jar!/coherence-common-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-09-04 15:52:37.742/2182.310 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded included POF configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-messagingpattern-2.8.4.32329.jar!/coherence-messagingpattern-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-09-04 15:52:37.755/2182.323 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Loaded included POF configuration from "zip:/home/jg/weblogic/scm2/servers/AdminServer/tmp/_WL_user/ScmTest/ojgs72/APP-INF/lib/coherence-eventdistributionpattern-1.2.0.32329.jar!/coherence-eventdistributionpattern-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-09-04 15:52:38.988/2183.556 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Service DistributedCacheWithPublishingCacheStore joined the cluster with senior service member 1
    2012-09-04 15:52:40.058/2184.627 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Establising Event Distributors for the Cache [scm-combiner-cache]. Class:com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1 Method:ensureResource
    2012-09-04 15:52:40.081/2184.650 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Using Coherence-based Event Distributor [scm-combiner-cache] (scm2-multimaster-scm-combiner-cache). Class:com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributor Method:<init>
    2012-09-04 15:52:40.139/2184.707 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:DistributedCacheForDestinations, member=1): Service DistributedCacheForDestinations joined the cluster with senior service member 1
    2012-09-04 15:52:40.971/2185.539 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Establishing Event Channel [scm1-channel] for Event Distributor [scm-combiner-cache (scm2-multimaster-scm-combiner-cache)] based on [AbstractEventChannelController.Dependencies{channelName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel, eventChannelBuilder=com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannelBuilder@22e33b0f, transformerBuilder=null, startingMode=ENABLED, batchDistributionDelayMS=1000, batchSize=100, restartDelay=10000, totalConsecutiveFailuresBeforeSuspended=-1}]. Class:com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributorTemplate Method:realize
    2012-09-04 15:52:41.033/2185.601 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:DistributedCacheForSubscriptions, member=1): Service DistributedCacheForSubscriptions joined the cluster with senior service member 1
    2012-09-04 15:52:41.345/2185.914 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCacheForSubscriptionsWorker:0, member=1): Establishing the EventChannelController for CoherenceEventChannelSubscription{Subscription{subscriptionIdentifier=SubscriptionIdentifier{destinationIdentifier=Identifier{scm2-multimaster-scm-combiner-cache}, subscriberIdentifier=Identifier{scm2:multimaster:scm-combiner-cache:scm1-channel}}, status=ENABLED}, distributorIdentifier=EventDistributor.Identifier{symbolicName=scm-combiner-cache, externalName=scm2-multimaster-scm-combiner-cache}, controllerIdentifier=EventChannelController.Identifier{symbolicName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel}, controllerDependencies=AbstractEventChannelController.Dependencies{channelName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel, eventChannelBuilder=com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannelBuilder@61981853, transformerBuilder=null, startingMode=ENABLED, batchDistributionDelayMS=1000, batchSize=100, restartDelay=10000, totalConsecutiveFailuresBeforeSuspended=-1}, parameterProvider=ScopedParameterProvider{parameterProvider=SimpleParameterProvider{parameters={distributor-name=Parameter{name=distributor-name, type=java.lang.String, expression=Constant{value=Value{scm-combiner-cache}}}, distributor-external-name=Parameter{name=distributor-external-name, type=java.lang.String, expression=Constant{value=Value{scm2-multimaster-scm-combiner-cache}}}}}, innerParameterProvider=ScopedParameterProvider{parameterProvider=SimpleParameterProvider{parameters={cache-name=Parameter{name=cache-name, type=java.lang.String, expression=Constant{value=Value{scm-combiner-cache}}}, site-name=Parameter{name=site-name, type=java.lang.String, expression=Constant{value=Value{scm2}}}, cluster-name=Parameter{name=cluster-name, type=java.lang.String, expression=Constant{value=Value{multimaster}}}}}, 
innerParameterProvider=com.oracle.coherence.configuration.parameters.SystemPropertyParameterProvider@48652333}}, serializerBuilder=[email protected]67ea0e66}. Class:com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventChannelSubscription Method:onCacheEntryLifecycleEvent
    2012-09-04 15:52:41.485/2186.053 Oracle Coherence GE 3.7.1.0 <D5> (thread=EventChannelController:Thread-17, member=1): Attempting to connect to Remote Invocation Service remote-scm1 in EventDistributor.Identifier{symbolicName=scm-combiner-cache, externalName=scm2-multimaster-scm-combiner-cache} for EventChannelController.Identifier{symbolicName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel} Class:com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel Method:connect
    2012-09-04 15:52:41.969/2186.537 Oracle Coherence GE 3.7.1.0 <D5> (thread=remote-scm1:TcpInitiator, member=1): Started: TcpInitiator{Name=remote-scm1:TcpInitiator, State=(SERVICE_STARTED), ThreadCount=0, Codec=Codec(Format=POF), Serializer=com.tangosol.io.pof.ConfigurablePofContext, PingInterval=0, PingTimeout=5000, RequestTimeout=5000, ConnectTimeout=2000, SocketProvider=SystemSocketProvider, RemoteAddresses=[/xxx.16.22.151:20001], SocketOptions{LingerTimeout=0, KeepAliveEnabled=true, TcpDelayEnabled=false}}
    2012-09-04 15:52:41.981/2186.549 Oracle Coherence GE 3.7.1.0 <D5> (thread=EventChannelController:Thread-17, member=1): Connecting Socket to xxx.16.22.151:20001
    2012-09-04 15:52:41.986/2186.554 Oracle Coherence GE 3.7.1.0 <Info> (thread=EventChannelController:Thread-17, member=1): Error connecting Socket to xxx.16.22.151:20001: java.net.ConnectException: Connection refused
    2012-09-04 15:52:41.992/2186.560 Oracle Coherence GE 3.7.1.0 <Error> (thread=EventChannelController:Thread-17, member=1): Error while starting service "remote-scm1": com.tangosol.net.messaging.ConnectionException: could not establish a connection to one of the following addresses: [xxx.16.22.151:20001]; make sure the "remote-addresses" configuration element contains an address and port of a running TcpAcceptor
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.initiator.TcpInitiator.openConnection(TcpInitiator.CDB:120)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.Initiator.ensureConnection(Initiator.CDB:11)
    at com.tangosol.coherence.component.net.extend.remoteService.RemoteInvocationService.openChannel(RemoteInvocationService.CDB:5)
    at com.tangosol.coherence.component.net.extend.RemoteService.doStart(RemoteService.CDB:11)
    at com.tangosol.coherence.component.net.extend.RemoteService.start(RemoteService.CDB:5)
    at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:39)
    at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
    at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:1105)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:937)
    at com.oracle.coherence.environment.extensible.ExtensibleEnvironment.ensureService(ExtensibleEnvironment.java:525)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:337)
    at com.oracle.coherence.common.resourcing.InvocationServiceSupervisedResourceProvider.ensureResource(InvocationServiceSupervisedResourceProvider.java:61)
    at com.oracle.coherence.common.resourcing.InvocationServiceSupervisedResourceProvider.ensureResource(InvocationServiceSupervisedResourceProvider.java:34)
    at com.oracle.coherence.common.resourcing.AbstractSupervisedResourceProvider.getResource(AbstractSupervisedResourceProvider.java:81)
    at com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel.connect(RemoteClusterEventChannel.java:187)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventChannelController.internalStart(CoherenceEventChannelController.java:209)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController.onStart(AbstractEventChannelController.java:682)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController.access$000(AbstractEventChannelController.java:70)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController$1.run(AbstractEventChannelController.java:461)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    2012-09-04 15:52:41.996/2186.564 Oracle Coherence GE 3.7.1.0 <D5> (thread=remote-scm1:TcpInitiator, member=1): Stopped: TcpInitiator{Name=remote-scm1:TcpInitiator, State=(SERVICE_STOPPED), ThreadCount=0, Codec=Codec(Format=POF), Serializer=com.tangosol.io.pof.ConfigurablePofContext, PingInterval=0, PingTimeout=5000, RequestTimeout=5000, ConnectTimeout=2000, SocketProvider=SystemSocketProvider, RemoteAddresses=[/xxx.16.22.151:20001], SocketOptions{LingerTimeout=0, KeepAliveEnabled=true, TcpDelayEnabled=false}}
    2012-09-04 15:52:42.005/2186.573 Oracle Coherence GE 3.7.1.0 <Warning> (thread=EventChannelController:Thread-17, member=1): Failed to connect to Remote Invocation Service remote-scm1 in EventDistributor.Identifier{symbolicName=scm-combiner-cache, externalName=scm2-multimaster-scm-combiner-cache} for EventChannelController.Identifier{symbolicName=scm1-channel, externalName=scm2:multimaster:scm-combiner-cache:scm1-channel} Class:com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel Method:connect
    2012-09-04 15:52:42.007/2186.575 Oracle Coherence GE 3.7.1.0 <Warning> (thread=EventChannelController:Thread-17, member=1): Causing exception was: Class:com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel Method:connect
    com.tangosol.net.messaging.ConnectionException: could not establish a connection to one of the following addresses: [xxx.16.22.151:20001]; make sure the "remote-addresses" configuration element contains an address and port of a running TcpAcceptor
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.initiator.TcpInitiator.openConnection(TcpInitiator.CDB:120)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.Initiator.ensureConnection(Initiator.CDB:11)
    at com.tangosol.coherence.component.net.extend.remoteService.RemoteInvocationService.openChannel(RemoteInvocationService.CDB:5)
    at com.tangosol.coherence.component.net.extend.RemoteService.doStart(RemoteService.CDB:11)
    at com.tangosol.coherence.component.net.extend.RemoteService.start(RemoteService.CDB:5)
    at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:39)
    at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
    at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:1105)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:937)
    at com.oracle.coherence.environment.extensible.ExtensibleEnvironment.ensureService(ExtensibleEnvironment.java:525)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:337)
    at com.oracle.coherence.common.resourcing.InvocationServiceSupervisedResourceProvider.ensureResource(InvocationServiceSupervisedResourceProvider.java:61)
    at com.oracle.coherence.common.resourcing.InvocationServiceSupervisedResourceProvider.ensureResource(InvocationServiceSupervisedResourceProvider.java:34)
    at com.oracle.coherence.common.resourcing.AbstractSupervisedResourceProvider.getResource(AbstractSupervisedResourceProvider.java:81)
    at com.oracle.coherence.patterns.eventdistribution.channels.RemoteClusterEventChannel.connect(RemoteClusterEventChannel.java:187)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventChannelController.internalStart(CoherenceEventChannelController.java:209)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController.onStart(AbstractEventChannelController.java:682)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController.access$000(AbstractEventChannelController.java:70)
    at com.oracle.coherence.patterns.eventdistribution.distributors.AbstractEventChannelController$1.run(AbstractEventChannelController.java:461)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)

  • Latency at the apply side in multi master replication

    Hi Gurus,
    We are facing multi-master replication latency, always on one side. Example: Site 1 -> Site 3. Transactions from Site 1 are always applied on Site 3 with some latency, although there is no latency when the same transactions originating at Site 1 are applied from Site 1 to Site 2, in a three-site (3-way) multi-master replication environment.
    We have investigated the issue and are now looking at the system replication tables involved at the apply side (e.g. Site 3).
    Could someone please list all the system replication tables involved at the apply side that could impact latency? I know a few of them, but not all:
    System.def$_AQERROR
    System.def$_ERROR
    System.def$_ORIGIN
    Thanks

    I would say that 50, 50 and 75 are very large numbers of job queue processes. Do you really have that many jobs that need to run concurrently?
    Since the Advanced Replication queues are maintained in only a small set of tables, you might end up with "buffer busy waits", "read by other session" waits, or latch waits.
    BTW, what other factors did you eliminate before deciding to look at the replication tables?
    See the documentation on monitoring performance in replication :
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96568/rarmonit.htm#35535
    If you want to look at the "tables" start with the Replication Data Dictionary Reference at
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96568/rarpart4.htm#435986
    and then drill down through the View definitions to the underlying base tables.
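    As a starting point, the backlog and error state of the deferred-transaction queues can be checked with a few queries against the replication data dictionary views. This is a minimal sketch, assuming a replication administrator account (e.g. REPADMIN) on Oracle 9.2 as in the linked documentation; exact columns can vary between releases, so verify them against the Replication Data Dictionary Reference.

    ```sql
    -- Pending deferred transactions per destination: a growing backlog for the
    -- Site 3 dblink would explain the apply-side latency.
    SELECT d.dblink,
           COUNT(*)          AS pending_txns,
           MIN(t.start_time) AS oldest_txn
    FROM   deftrandest d
           JOIN deftran t ON d.deferred_tran_id = t.deferred_tran_id
    GROUP  BY d.dblink;

    -- Transactions that errored at the apply side; rows here sit until they are
    -- manually resolved and can hold up subsequent dependent transactions.
    SELECT deferred_tran_id, origin_tran_db, destination, error_number
    FROM   deferror;

    -- Push schedule per destination: a long push interval to one dblink shows
    -- up as "latency" even when the apply itself is fast.
    SELECT dblink, interval
    FROM   defschedule;
    ```

    If the DEFTRANDEST backlog for Site 3 is consistently larger than for Site 2, the problem is on the propagation/push side; if the backlog is small but DEFERROR fills up, the delay is in error handling at the apply side.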

Maybe you are looking for

  • How to create a view, maintenance view

    Hi all, I need to create a maintenance view but I do not know how to do this. I know I have to use SE11, but what then? Thanks.

  • Can I install Windows 7 on a Satellite C855-1WR?

    I've made a BIG mistake! After having Satellites for years, I recently bought a C855-1WR but didn't notice it came with Windows 8.....and I HATE it :( Can I install Windows 7 on this machine? If so, how do I do this, please? I have very little techn

  • OVM Repository and VM Guest Backups - Best Practice?

    Hey all, Does anybody out there have any tips/best practices on backing up the OVM Repository as well ( of course ) the VM's? We are using NFS exclusively and have the ability to take snapshots at the storage level. Some of the main points we'd like

  • Dynamic select statements

    PARAMETERS: p_table LIKE dd02t-tabname OBLIGATORY. SELECT *   FROM (p_table) CLIENT SPECIFIED   INTO CORRESPONDING FIELDS OF TABLE <dyn_table>   WHERE mandt = '800'. mandt is the system client number as 800. dyn_table: dynamic internal table p_table

  • TS3274 Digital copied movies downloaded on my iPad skip when playing movie

    I downloaded movie from digital copy and movie stops and starts playing during entire movie.