Multi-master replicated environment

hi,
What does "multi-master replicated environment" mean? How could I benefit from it?
Thanks in advance.

Hello,
A multi-master replicated environment is one in which inserts, updates and deletes on objects at any node included in the environment are replicated to all the remaining nodes defined as part of that environment.
Since changes made at any node are replicated to all other nodes, every node acts as a master, which is why it is called multi-master (or master-to-master) replication, known in Oracle as Advanced Replication. The benefit is that applications can read and write at whichever node is closest or still available, and the data survives the loss of any single node.
Tahir.
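As a loose illustration of the Oracle flavour Tahir mentions, the sketch below sets up a tiny two-site master group with DBMS_REPCAT. The group name, schema, table and the second site's database link (MASTER2.EXAMPLE.COM) are placeholders, so treat it as a minimal outline run by the replication administrator, not a production script.

    -- Run from the master definition site as the replication administrator (e.g. REPADMIN).
    BEGIN
      -- Create the group; a newly created group starts out quiesced.
      DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'APP_GROUP');

      -- Add an existing table to the group.
      DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
        gname               => 'APP_GROUP',
        sname               => 'APP',        -- schema (placeholder)
        oname               => 'CUSTOMERS',  -- table (placeholder)
        type                => 'TABLE',
        use_existing_object => TRUE,
        copy_rows           => TRUE);

      -- Generate the replication support packages for the table.
      DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
        sname => 'APP', oname => 'CUSTOMERS', type => 'TABLE');

      -- Add the second master; changes will then flow in both directions.
      DBMS_REPCAT.ADD_MASTER_DATABASE(
        gname  => 'APP_GROUP',
        master => 'MASTER2.EXAMPLE.COM');
    END;
    /
    -- Once the admin requests in DBA_REPCATLOG have completed, start replication:
    BEGIN
      DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'APP_GROUP');
    END;
    /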

Similar Messages

  • Multi-Master replica busy

    I have a multi-master LDAP server (5.1 Service Pack 2, build number 2003.028.2338). Counting the users on the two servers, there are about 56k users on the first server and 45k on the second. I also have a pure consumer which is working fine with the replica from the first server.
    But when I activated replication between the two multi-master machines, the secondary server logged the errors below; the replication process is very slow and CPU usage stays at 40% and above. Can anyone please help me with this issue? What is the replication process supposed to do? Why is it so slow? What happens when the replication queue holds about 11k entries? How can I speed it up?
    Thanks in advance for your help.
    ===error log====
    [29/Sep/2003:12:52:05 -0400] NSMMReplicationPlugin - Warning: timed out waiting 600 seconds for add operation result from replica 10.30.250.146:389
    [29/Sep/2003:12:52:05 -0400] NSMMReplicationPlugin - Failed to replay change (uniqueid e1181801-1dd111b2-8003d8a4-aa1f9c2b, CSN 3f729278000300030000) to replica "cn=PRI-2-SEC-USER-01, cn=replica, cn="o=sso", cn=mapping tree, cn=config (host 10.30.250.146, port 389)": Connection lost. Will retry later.
    [29/Sep/2003:12:52:05 -0400] NSMMReplicationPlugin - Warning: unable to send endReplication extended operation to consumer "cn=PRI-2-SEC-USER-01, cn=replica, cn="o=sso", cn=mapping tree, cn=config (host 10.30.250.146, port 389)" - error 2
    [29/Sep/2003:13:07:18 -0400] NSMMReplicationPlugin - Warning: timed out waiting 600 seconds for add operation result from replica 10.30.250.146:389
    [29/Sep/2003:13:07:18 -0400] NSMMReplicationPlugin - Failed to replay change (uniqueid e1181804-1dd111b2-8003d8a4-aa1f9c2b, CSN 3f729293000200030000) to replica "cn=PRI-2-SEC-USER-01, cn=replica, cn="o=sso", cn=mapping tree, cn=config (host 10.30.250.146, port 389)": Connection lost. Will retry later.
    [29/Sep/2003:13:07:19 -0400] NSMMReplicationPlugin - Warning: unable to send endReplication extended operation to consumer "cn=PRI-2-SEC-USER-01, cn=replica, cn="o=sso", cn=mapping tree, cn=config (host 10.30.250.146, port 389)" - error 2
    [29/Sep/2003:13:27:32 -0400] NSMMReplicationPlugin - Warning: timed out waiting 600 seconds for add operation result from replica 10.30.250.146:389
    ===access log===
    [29/Sep/2003:13:59:42 -0400] conn=292 op=1 RESULT err=0 tag=101 nentries=1 etime=0
    [29/Sep/2003:13:59:43 -0400] conn=292 op=2 SRCH base="cn=config" scope=0 filter="(|(objectClass=*)(objectClass=ldapsuben
    try))" attrs="nsslapd-instancedir nsslapd-security"
    [29/Sep/2003:13:59:43 -0400] conn=292 op=2 RESULT err=0 tag=101 nentries=1 etime=0
    [29/Sep/2003:13:59:43 -0400] conn=292 op=3 SRCH base="cn=options,cn=features,cn=config" scope=1 filter="(objectClass=dir
    ectoryServerFeature)" attrs=ALL
    [29/Sep/2003:13:59:43 -0400] conn=292 op=3 RESULT err=0 tag=101 nentries=0 etime=0
    [29/Sep/2003:13:59:48 -0400] conn=292 op=4 SRCH base="cn=config" scope=0 filter="(|(objectClass=*)(objectClass=ldapsuben
    try))" attrs="nsslapd-security"
    [29/Sep/2003:13:59:48 -0400] conn=292 op=4 RESULT err=0 tag=101 nentries=1 etime=0
    [29/Sep/2003:13:59:48 -0400] conn=292 op=5 SRCH base="cn=config" scope=0 filter="(|(objectClass=*)(objectClass=ldapsuben
    try))" attrs="nsslapd-port nsslapd-secureport nsslapd-lastmod nsslapd-readonly nsslapd-schemacheck nsslapd-referral"
    [29/Sep/2003:13:59:48 -0400] conn=292 op=5 RESULT err=0 tag=101 nentries=1 etime=0
    [29/Sep/2003:13:59:51 -0400] conn=292 op=6 SRCH base="cn=replication, cn=config" scope=0 filter="(|(objectClass=*)(objec
    tClass=ldapsubentry))" attrs=ALL
    [29/Sep/2003:13:59:51 -0400] conn=292 op=6 RESULT err=0 tag=101 nentries=1 etime=0
    [29/Sep/2003:13:59:51 -0400] conn=292 op=7 SRCH base="cn=replication,cn=config" scope=2 filter="(objectClass=*)" attrs=ALL

    I have had a similar problem. It was caused by recreating a replication agreement that pointed to a different server with the same ID, or to the same server with a different ID.
    The solution provided by Sun was to remove the references to the old replicas in dse.ldif and restart, but that didn't work out, not even with an ldapmodify on the replica DNs.
    I had to reinstall...
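    Before going as far as a reinstall, it can help to read the agreement's status attributes directly to see where the backlog sits; a minimal sketch, assuming the Sun/Netscape ldapsearch syntax and Directory Manager credentials (host name and password are placeholders):

      ldapsearch -h primary.example.com -p 389 \
        -D "cn=Directory Manager" -w secret \
        -b "cn=mapping tree,cn=config" \
        "(objectClass=nsds5replicationagreement)" \
        nsds5replicaLastUpdateStatus nsds5replicaUpdateInProgress \
        nsds5replicaLastUpdateStart nsds5replicaLastUpdateEnd \
        nsds5replicaChangesSentSinceStartup

    The status string usually repeats the same timeout seen in the error log, and watching nsds5replicaChangesSentSinceStartup over a few minutes gives a rough idea of whether the 11k-entry queue is draining at all.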

  • Partial transaction for multi-master asynchronous replication

    I have a fundamental question about multi-master asynchronous replication.
    Let's consider a situation where two servers participate in multi-master asynchronous replication.
    Three tables are part of an Oracle transaction. Suppose I mark one of these tables for replication while the other two are not part of any replication group, and as part of the transaction one record is inserted into each of the three tables.
    If I now start replicating, will the change made to the table marked for replication be replicated to the other server, given that the changes made to the other two tables are not propagated by the deferred queue?
    Please reply.

    Mr. Bradd Piontek is very much correct. If the tables involved are interdependent, you have to place them in one replication group, and all of them should exist at all sites in a multi-master replication.
    If data is updated (pushed) from a snapshot to a table at a master site, the table may get updated as long as it is not a child table in a relationship.
    But in a multi-master replication environment even this is not possible.
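    To see what is actually queued for the scenario described above, you can look at the deferred-transaction views right after the local commit; a minimal sketch, assuming the remote master's database link is MASTER2.EXAMPLE.COM (a placeholder):

      -- Only calls against objects that belong to a replication group are
      -- deferred, so the inserts into the two non-replicated tables never
      -- show up here.
      SELECT deferred_tran_id, packagename, procname FROM defcall;

      -- Destinations the queued transactions still have to reach.
      SELECT deferred_tran_id, dblink FROM deftrandest;

      -- Push the queue to the other master by hand (normally done by a job).
      BEGIN
        DBMS_DEFER_SYS.PUSH(destination => 'MASTER2.EXAMPLE.COM');
      END;
      /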

  • Latency at the apply side in multi master replication

    Hi Gurus,
    We keep seeing multi-master replication latency on one side only, e.g. Site 1 -> Site 3. In our three-site (three-way) multi-master replication environment, transactions from Site 1 are always applied on Site 3 with some latency, while the same transactions originating at Site 1 show no such latency when applied from Site 1 to Site 2.
    We have investigated this and are now looking at the system replication tables involved on the apply side (e.g. Site 3).
    Could someone please list all the system replication tables involved on the apply side that can impact latency? I know a few of them but not all.
    System.def$_AQERROR
    System.def$_ERROR
    System.def$_ORIGIN
    Thanks

    I would say that 50, 50 and 75 are a Very Large number of Job Queue processes. Do you really have that many jobs that need to run concurrently ?
    Since Advanced Replication Queues are maintained in only a small set of tables you might end up having "buffer busy waits" or "read by other session" waits or latch waits.
    BTW, what other factors did you eliminate before deciding to look at the Replication tables ?
    See the documentation on monitoring performance in replication:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96568/rarmonit.htm#35535
    If you want to look at the "tables" start with the Replication Data Dictionary Reference at
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96568/rarpart4.htm#435986
    and then drill down through the View definitions to the underlying base tables.
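    As a starting point, the user-level views over those DEF$_ tables are usually easier to read than the base tables themselves; a minimal sketch (the comments say at which site each query is meant to run), with no claim that this list is exhaustive:

      -- At the apply site (Site 3): transactions that errored during apply
      -- and now sit in the error queue.
      SELECT COUNT(*) FROM deferror;

      -- At the originating site (Site 1): backlog still queued per destination.
      SELECT dblink, COUNT(*) FROM deftrandest GROUP BY dblink;

      -- At any site: replication administrative requests that have not completed.
      SELECT id, request, status, errnum FROM dba_repcatlog;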

  • Missing Master Site in Multi Master Environment

    Hello,
    we are using MM-Replication on three Master sites with two replication groups.
    Replication support was installed and configured by the application vendor.
    I do not have experience with Multi Master Replication.
    The data of the RG (Readonlymastergroup) was never replicated to one Master
    (DEFS01). When I queried dba_repgroup on the failing site, I found the
    RG status to be quiesced. There were no pending administrative requests
    on this site or the Master Definition site, but I found that there is no definition
    of the failing site's membership to the RG on the MD site.
    Master Definition Site
    select gname,dblink,Master,masterdef from dba_repsites;
    GNAME                 DBLINK   MASTER  MASTERDEF
    --------------------  -------  ------  ---------
    MASTERGROUP           DEMD01   Y       Y
    MASTERGROUP           DEGS01   Y       N
    READONLYMASTERGROUP   DEMD01   Y       Y
    READONLYMASTERGROUP   DEGS01   Y       N
    MASTERGROUP           DEFS01   Y       N
    Failing Master Site
    select gname,dblink,Master,masterdef from dba_repsites@defs01;
    GNAME                 DBLINK   MASTER  MASTERDEF
    --------------------  -------  ------  ---------
    MASTERGROUP           DEFS01   Y       N
    MASTERGROUP           DEMD01   Y       Y
    READONLYMASTERGROUP   DEFS01   Y       N
    READONLYMASTERGROUP   DEMD01   Y       Y
    MASTERGROUP           DEGS01   Y       N
    READONLYMASTERGROUP   DEGS01   Y       N
    Can anybody explain how this could happen? AFAIK adding a master to a replication group is a distributed transaction that should be rolled back on all sites if it fails on one.
    To correct the situation, I am thinking of removing the RG from DEFS01 with DBMS_REPCAT.DROP_MASTER_REPGROUP (on DEFS01) and then rejoining DEFS01 from the Master Definition Site.
    Will this work? Anything else I have to think of?
    Regards,
    uwe
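    A minimal sketch of the sequence being considered, assuming DEMD01 is the master definition site and that the rejoin is done with ADD_MASTER_DATABASE rather than another DROP_MASTER_REPGROUP; the group and site names come from the post, everything else is illustrative:

      -- On DEFS01: drop the local (never-populated) copy of the group.
      BEGIN
        DBMS_REPCAT.DROP_MASTER_REPGROUP(gname => 'READONLYMASTERGROUP');
      END;
      /

      -- On the master definition site (DEMD01): quiesce the group and re-add DEFS01.
      BEGIN
        DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY(gname => 'READONLYMASTERGROUP');
        DBMS_REPCAT.ADD_MASTER_DATABASE(gname  => 'READONLYMASTERGROUP',
                                        master => 'DEFS01');
      END;
      /

      -- Wait until the admin requests for the group have drained, then resume.
      SELECT id, request, status FROM dba_repcatlog WHERE gname = 'READONLYMASTERGROUP';
      BEGIN
        DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'READONLYMASTERGROUP');
      END;
      /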

    Hi Janos,
    I have tried the multi-language scenario. You need the following setup in your system:
    > Install the language pack
    > Upload content using the CSV files below:
    multistring.csv
    Material.csv
    Plant CSV
    Product_Plant_Relationship_template.csv
    If you can share your email ID I will email you a sample CSV file which you can use. Otherwise, please follow the steps below to download the CSV file from the system:
    Click Setup
    Click the System Administration tab
    Click Import data
    Click the New button
    Select the radio button Upload to Server and click Next
    Select any dummy Excel file from the local server when browsing for the Upload Import file option and click Next
    Select the Preview import check box, choose MultiString from the object type drop-down, and click Next
    On that page you will see the template.csv link; click it and save the file. This is the file you are looking for.
    Create the content and upload the file. Hope this helps!
    There is another way of importing master data in multiple languages. In this scenario the master data source is ECC. You run the standard report provided by SAP to export the details and then import the resulting XML file into the sourcing system. Please see the blog link below for more detail:
    http://scn.sap.com/community/sourcing/blog/2011/09/29/extracting-erp-master-data-for-sap-sourcing
    Regards,
    Deepak

  • Multi master replication between 5.2 and 6.3.1

    I have a setup with a master running version 5.2 and about 15 consumers (slaves), all of which have been upgraded to 6.3.1. I now want to create a multi-master topology by promoting one of these consumers to be a master, while still keeping the 5.2 instance in use because a bunch of other applications depend on it. Our master has two suffixes. The master server is also the CA certificate authority for all the consumers. After reading the docs I narrowed the procedure down to:
    1. Promote one of the 6.3.1 consumers to hub and then to master using the dsconf promote-repl commands. The problem here is that I am not sure how I can create a single consumer that can slave both the suffixes. We currently have them being slaved to different consumers.
    Also, do I need to stop the existing replication between the 5.2 master and the would-be 6.3.1 master in order to promote it to hub and then master?
    2. Set the replication manager manually or using dsconf set-server-prop on the new 6.3.1 master .
    3. Create a new replication agreement from 5.2 to 6.3.1 master without initializing. (using java console)
    4. Create new replication agreement from 6.3.1 to 5.2 (using command line)
    5. Create new repl agreements between the new 6.3.1 master and all the other consumers. For this do I need to first disable all the agreements between 5.2 and 6.3 or can I create new agreements without disabling the old ones?
    6. Initialize 6.3.1 from the 5.2 master.
    My biggest concern at this point is surrounding the ssl certs and the existing trusts the consumers have with the 5.2 master. Currently my 5.2 server acts as the CA authority for our certificate management with the ldap slaves. How can I migrate this functionality to the new server and also will this affect how the slaves communicate to the new master server ?
    Thanks in advance.

    Thanks Marco and Chris for the replies.
    I was able to get around the message by first manually initializing the new slave using an LDIF of the ou from the master, using DSCC to change the default replication manager account used to connect, and finally editing dse.ldif to enter the correct crypt hash for the new replication manager password. After these steps I was able to successfully set up replication for the second ou and also promote the instance to hub and then master (I had to repeat the steps after promoting the slave to master, as that somehow reset the replication manager settings).
    So right now I have a 5.2 master with two ou's replicating to about 15 consumers.
    I promoted one of these to be a second master (from consumer to hub to master). Replication is set up from 5.2 to the 6.3 master but not the other way round.
    I am a little nervous about setting up replication the other way round, as this is our production environment and I do not want to end up blowing up my production instance. The steps I plan to take, from the new master server, are:
    1. dsconf create-repl-agmt -p 389 dc=xxxxx,dc=com <5.2-master>:389
    2. dsconf set-repl-agmt-prop -p 389 dc=xxxxx,dc=com <5.2-master>:389 auth-pwd-file:<passwd_file.txt>
    I am assuming I can do all of this while the instances are up. Also, does create-repl-agmt just create the agreement, or does it also initialize the consumer with the data? I want to be sure I do not initialize my 5.2 master with my 6.3 data.
    Thanks again
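    For what it is worth, the 6.3.1 side can be inspected before anything is initialized; a minimal sketch using the DSEE 6.x dsconf subcommands (spellings worth confirming against dsconf --help on your build), with <5.2-master> kept as the same placeholder used above:

      # List the agreements defined on this 6.3.1 instance and show their status.
      dsconf list-repl-agmts -p 389
      dsconf show-repl-agmt-status -p 389 dc=xxxxx,dc=com <5.2-master>:389

      # Initialization is a separate, explicit step; create-repl-agmt on its own
      # does not push any data. The command below would overwrite the destination
      # it names, so it is shown commented out deliberately.
      # dsconf init-repl-dest -p 389 dc=xxxxx,dc=com <5.2-master>:389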

  • Multi-master replication questions for iPlanet 5.0, gurus out there?

    hi:
    I'm using iPlanet Dir Server 5.0 and I note that many gurus out there has
    been able
    to get this to work, that's good, but I have yet to. I have several
    questions, maybe
    someone can spend a few minutes and save me hours...
    I have a suffix called dc=calient,dc=net. I followed the suggestions in
    the
    iPlanet install guide and created 2 directory servers
    a) suffix o=NetscapeRoot, at some arbitrary port, 4601
    b) suffix dc=calient,dc=net, at the usual port 389.
    All my searches/create/delete work fine. However, when I try to replicate
    with multi-master between 2 machines, I keep getting into problems.
    Here's one set of questions...
    Q1: do people out there really split their tree from the o=NetscapeRoot
    tree?
    Q2: The admin guide says the the unit of replication is a database, and
    that each replication can only have 1 suffix. Is this true? Can
    a replicated db have more than 1 suffix?
    Q3: If I also want to replicate the o=NetscapeRoot tree, I have to set
    up yet 2 more replication agreements. Isn't this more work? If
    I just lump the 2 suffixes together, wouldn't it be easier? But would
    it work?
    Q4: I followed the instructions to enable replicas on the masters.
    But then I tried to create this cn=Replication Manager, cn=config
    object.
    But what is the object class of this entry? An iPlanet user has uid
    as its RDN... I tried a person object class, and I added a password.
    But then I keep getting error code 32, object not found in the error
    log. What gives? such as
    WARNING: 'get_entry' can't find entry 'cn=replication
    manager,cn=config', err 32
    Q5: Also, are there any access control issues with this cn=Replication
    Manager,
    cn=config object? By this I mean, I cannot seem to see this object
    using
    ldapsearch, I can only see cn=SNMP, cn=config. Also, do I have
    to give all access via aci to my suffix dc=calient,dc=net? Also,
    given the fact that my o=NetscapeRoot tree is at a different port (say
    4601),
    not 389, could this be an issue?
    Q6: when replication fails, should the Dir Server still come up? Mine does
    not anymore
    which is strange. I keep getting things like this in my log file
    [08/Nov/2001:21:49:13 -0800] NSMMReplicationPlugin - Could not send consumer
    mufasa.chromisys.com:389 the bind request
    [08/Nov/2001:21:49:13 -0800] NSMMReplicationPlugin - Failed to connect to
    replication consumer mufasa.chromisys.com:389
    But why shouldn't the dir server itself come up even if replication
    fails?
    steve

    Hi Steve,
    First, please read the 'Deployment Guide'. I think that is easier to understand when you want to set up multi-master replication. The 'Administrator's Guide' gives you step-by-step instructions, but it may not help you to understand how to design your directory services.
    Stephen Tsun wrote:
    > I have a suffix called dc=calient,dc=net. I followed the suggestions in the iPlanet install guide and created 2 directory servers:
    > a) suffix o=NetscapeRoot, at some arbitrary port, 4601
    > b) suffix dc=calient,dc=net, at the usual port 389.
    > All my searches/creates/deletes work fine. However, when I try to replicate with multi-master between 2 machines, I keep getting into problems.
    I don't understand something: which backend do you want to replicate? The one holding 'o=NetscapeRoot' or the one holding 'dc=calient,dc=net'? Do you want to set up replication between these two instances of the directory server (i.e. between port 4601 and 389 in your example)?
    > Q1: Do people out there really split their tree from the o=NetscapeRoot tree?
    If you have multiple directory servers installed in your environment, it is probably worth dedicating (at least) one directory server to the o=netscaperoot tree.
    > Q2: The admin guide says that the unit of replication is a database, and that each replica can only have 1 suffix. Is this true? Can a replicated db have more than 1 suffix?
    Well, that is normal, since in iDS 5.x you have 1 suffix per database. You can, however, replicate multiple databases.
    > Q3: If I also want to replicate the o=NetscapeRoot tree, I have to set up yet 2 more replication agreements. Isn't this more work? If I just lump the 2 suffixes together, wouldn't it be easier? But would it work?
    You can't lump the 2 suffixes together, because each backend has 1 suffix associated with it.
    > Q4: I followed the instructions to enable replicas on the masters. But then I tried to create this cn=Replication Manager, cn=config object. What is the object class of this entry?
    Usually it is organizationalperson or inetorgperson. In most cases you want an objectclass which can have a userPassword attribute.
    > An iPlanet user has uid as its RDN... I tried a person object class, and I added a password. But then I keep getting error code 32, object not found, in the error log. What gives?
    You must have misconfigured something. Or perhaps it is not cn=replication manager,cn=config but 'uid=replication manager,cn=config'.
    > Q5: Also, are there any access control issues with this cn=Replication Manager,cn=config object? By this I mean, I cannot seem to see this object using ldapsearch; I can only see cn=SNMP,cn=config.
    The configuration tree is protected by ACIs, so you cannot see those entries using anonymous binds. Try binding as 'directory manager' and you will find your entry.
    > Also, do I have to give all access via an ACI to my suffix dc=calient,dc=net?
    For what purpose? For replication, it is enough to set the user DN in the replication agreement; this user can then update the replicated backend.
    > Q6: When replication fails, should the Dir Server still come up?
    Yes.
    Bertold
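    To make the Q4 answer concrete, here is a minimal sketch of the sort of entry usually created for the replication manager, following Bertold's objectclass advice; every value is a placeholder, and it is added with ldapmodify while bound as Directory Manager on each server that will receive replicated updates:

      dn: cn=replication manager,cn=config
      objectClass: top
      objectClass: person
      objectClass: organizationalPerson
      objectClass: inetOrgPerson
      cn: replication manager
      sn: manager
      userPassword: secret123

      # ldapmodify -a -h mufasa.chromisys.com -p 389 \
      #   -D "cn=Directory Manager" -w dmpassword -f replmgr.ldif

    The same DN then has to be spelled identically as the replica's bind DN in the replication settings and agreements; the err=32 in the question usually just means the entry was never actually created, or the DN is written differently in the two places.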

  • Using attribute uniqueness with multi-master replication?

    Hi,
    I'm trying to use attribute uniqueness in an iDS 5.1 multi-master replication environment. I have created a plug-in instance for the attribute (memberID) on each directory instance (same installation on NT) and tested it: if I try to create a duplicate value under the same instance I get a constraint violation, as expected. However, if I create an entry under one instance and then create a second entry (with a different DN) with the same attribute value on the second instance, the entry is written with no complaints. If I create the entries with an identical DN, the directory automatically adds nsuniqueid to the RDN of the second entry to maintain DN uniqueness, but it doesn't seem to mind duplicate values within the entry, despite the plug-in.
    BTW I've tested MMR and it is working, and I'm using a subtree to enforce uniqueness.
    Regards
    Simon

    The attribute uniqueness plugin only ensures uniqueness on a single master, before the entry is added. It doesn't check replicated operations, since those have already been accepted and a positive result was returned to the client. So in a multi-mastered environment it is still possible to add two identical attribute values if, as you describe, you add the entries at about the same time on both master servers.
    We're working on a solution to make attribute uniqueness work in a multi-mastered environment, but we're worried about the performance impact it may have.
    Regards,
    Ludovic.

  • Streams multi master replication problem

    Dear Stefan Menschel,
    Good morning
    DB:10.2.0.2
    OS:windowsXp
    This is what we are trying in a test environment.
    I downloaded the OSC and just configured the multi-master replication, i.e. just ran the scripts in folder 400. After all the scripts had executed I monitored the Streams activity through the OEM console.
    I got an error under Streams -> Propagation -> Topology:
    ERROR: ORA-25205: the QUEUE S1ADMIN.A_QN1_DBS2TEST does not exist
    and the link is shown in red with some error.
    hub : DBS1TEST
    streamsadmin: s1admin
    target1 : DBS2TEST
    streamsadmin: s2admin
    I might have made a mistake while preparing the scripts; if so, please let me know.
    I can send you the scripts as well.
    Thanks
    Hareesh.L
    DBA mailid: [email protected]

    I've never seen that error, but do make sure that your replica IDs are different (that's a requirement). Also, you might consider setting up one of the masters as a hub at first, then initializing it, and then promoting it. Once you've done that, you can create the replication agreement from the second host to the first one.
    Patrick
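    For an ORA-25205 like the one above, a quick check is to compare the queue the propagation expects with the queues that actually exist; a minimal sketch, run on the hub DBS1TEST as a DBA or the Streams administrator:

      -- Queues that really exist for the two Streams administrators.
      SELECT owner, name, queue_table
      FROM   dba_queues
      WHERE  owner IN ('S1ADMIN', 'S2ADMIN');

      -- What each propagation thinks its source and destination queues are.
      SELECT propagation_name, source_queue_owner, source_queue_name,
             destination_queue_owner, destination_queue_name, destination_dblink
      FROM   dba_propagation;

    If A_QN1_DBS2TEST appears only in dba_propagation and not in dba_queues, the queue was never created, or sits under a different owner than the propagation references, which would fit a slip in the folder-400 scripts.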

  • Unique identifier in multi-master replication scenario.

    Hi,
    I am trying to work out whether or not to use nsuniqueid as the unique identifier when pulling data out of a Sun ONE Directory Server multi-master replication environment. On another forum I read that nsuniqueid can change if the tree structure changes. Is there any unique identifier I can use that is guaranteed to have the same value for the same object on every master server? I know this is not the case for entryid values, as they are assigned different values on different master servers.
    Thank you in advance,
    Johan

    Well, depending on your DIT, the usual thing to do is to use some collection of naming attributes, typically the RDN. If all else fails each DN will certainly identify each entry. You should avoid using nsuniqueid for anything in your application logic, though it is certainly unique and identical across the topology for each entry. What nsuniqueid allows you to do that other attributes don't is to differentiate two entries that are otherwise identical, but which have been added at different times. So, for instance, if you added an entry, deleted it, then readded it, the nsuniqueids would be different.
    Just make sure you don't do anything silly like putting nsuniqueid in an LDIF template. Always let the server create the nsuniqueid. And when you create a replica initialization LDIF, don't change the nsuniqueids in it.
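    For completeness, nsuniqueid is an operational attribute, so it only comes back when requested by name; a minimal sketch, with host, credentials, suffix and uid all placeholders:

      ldapsearch -h master1.example.com -p 389 \
        -D "cn=Directory Manager" -w secret \
        -b "dc=example,dc=com" "(uid=jdoe)" nsuniqueid

      # Running the same search against each master should return the same
      # nsuniqueid for the entry, as described above.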

  • Multi-cluster CCE environment

    Does any one have a multi-cluster (PG/UCM) environment with CVP and CUPS?
    In my environment we experience random call delivery failures to agents.
    Looking at MTPs, transcoders and MRGLs, the behaviour isn't consistent.
    If someone else has an environment this expansive, can we start a discussion regarding the challenges and pitfalls? One of our services uses the "Follow the Sun" approach, which has agents located all over the world.
    5 UCM(7.1.3)/PG clusters
    1 Central control cluster (7.5.5)
    CVP (7.X)
    CUPS (7.X)
    Thanks in advance
    Troy


  • Multi Master Replication - Only works for some tables?

    I have a multi-master replication between a 9i and an 8.1.6 database.
    All the tables are in the same tablespace and have the same owner. The replication user has full privileges on all the tables.
    When setting up the replication, some tables are created properly, but others fail with a "table not found" error.
    Ideas, anyone?
    Andrew

    You said that you have 9i replicated with 8.1.6.
    I tried the same thing but with two 9i Enterprise Edition databases, downloaded free from www.oracle.com.
    When I ran
    exec dbms_repcat.add_master_database(gname=>'groupname', master=>'replica_link')
    this error appeared:
    ERROR at line 1:
    ORA-23375: feature is incompatible with database version at replica_link
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2159
    ORA-06512: at "SYS.DBMS_REPCAT", line 146
    ORA-06512: at line 1
    Please help me if you have any idea.
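    Two quick checks that may help with both errors in this thread; a minimal sketch, with replica_link and APPOWNER as placeholders:

      -- For the original "table not found": confirm the tables are visible under
      -- the expected owner at the remote site before adding them as repobjects.
      SELECT owner, object_name
      FROM   all_objects@replica_link
      WHERE  owner = 'APPOWNER' AND object_type = 'TABLE';

      -- For ORA-23375: at the remote site, check that its COMPATIBLE setting is
      -- high enough for the replication feature being used.
      SELECT value FROM v$parameter WHERE name = 'compatible';

      -- In both cases, errored administrative requests end up here.
      SELECT id, request, oname, errnum, message FROM dba_repcatlog;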

  • Dataguard in a replicated environment

    Folks,
    Has anyone implemented Data Guard (a standby database) in a replicated environment,
    or
    worked in an environment where replication (updateable snapshots) is already in place along with Data Guard?
    Are there any complications I need to be aware of while setting up Data Guard with replication enabled?
    Thanks
    Amit

    That is entirely due to the checkpoint delay. Depending on variations in hardware, I/O configuration and workload, it can take clients longer to flush their caches than the master. You can adjust the delay, which is 30 seconds by default, by calling the DB_ENV->rep_set_timeout API with the DB_REP_CHECKPOINT_DELAY flag. If you set it to 0, there will be no delay.
    Sue LoVerso
    Oracle

  • Multi-master Replication and Metadirectory 5.0

    The Metadirectory 5.0 documentation states that it cannot work with a directory server configured for multi-master replication. We need to use Metadirectory since we are integrating the Directory Server with other systems. Does this mean that we'll be forced away from MMR configuration? What are some of the alternatives? Does iPlanet have any plans for supporting MMR in future versions of Metadirectory?

    I think you can enable the retro changelog on a consumer replica. I'm pretty sure that works.
    You might be able to enable it on a hub. You also might get it to work on a master, but the changelog on the master will contain some replication housekeeping changes that may confuse Meta Directory. I'm not sure what they are but they are obvious if you look at the changelog. If you can figure out some way to screen those changes out from Meta Directory, you might be able to use it.
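    For reference, on DS 5.x the retro changelog is turned on by enabling its plug-in entry and restarting the server; a minimal sketch of the ldapmodify input, assuming the standard plug-in DN:

      dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
      changetype: modify
      replace: nsslapd-pluginEnabled
      nsslapd-pluginEnabled: on

    After the restart the change records appear under cn=changelog, which is where Metadirectory looks for them.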

  • Multi-master access from separate DSCC views

    Hi,
    I have two DS6.2 installations - one in test, one in production. They are both multi-master (two masters) environments.
    In test I can run DSCC on both masters, and can manage both from each DSCC.
    However, in production I can only access both masters from one DSCC at a time. If I enable access on the second DSCC, then the first DSCC shows them in red as Inaccessible until I "Enable Access" which works ok, but then the second DSCC shows them as inaccessible.
    I have been over the configs, and they appear the same in test & production.
    Does anyone know what could be stopping the production instances from being managed by more than one DSCC?
    Thanks.
    Terry.

    Replicating the ADS instance, i.e. cn=dscc, is not supported and not supposed to work, so what you are trying to do is futile.
