IDM in multi-master LDAP Replication

Hi,
We have two functional Sun Java Directory Servers in a multi-master replication setup. Each server has its own IDM instance.
When I change a password/uid from either IDM, the change lands on both LDAP servers straight away and I can see it in the other IDM.
The problem is that when I create a new user from one server's IDM, the user doesn't show up in the second server's IDM unless I manually run Accounts-->Load from Resource.
Even a full reconciliation doesn't pick up the new user on that IDM. What needs to be done so that IDM picks up new users straight away in a multi-master setup?
Thanks,
Farhan
Edited by: rozzx on May 5, 2009 11:32 PM

Any help, guys? Why is IDM not getting updated when I add or delete a user in Directory Server? I have to do Load from Resource to pick up new entries every time.
And if I delete a user from LDAP, it still stays in IDM.
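A note for anyone hitting the same behaviour: Sun IDM's ActiveSync for an LDAP resource generally detects adds and deletes by polling the directory's retro changelog, so if that plugin is disabled, nothing short of Load from Resource will surface them. A minimal sketch of enabling it, assuming the DS 5.x default plugin DN (verify against your version); apply with ldapmodify and restart the server:

```ldif
# Enable the Retro Changelog plugin (default DN assumed; verify on your server).
# After a restart, changes are published under cn=changelog for clients to poll.
dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
```

With the changelog populated on the master that the IDM resource polls, adds and deletes should be picked up without a manual load.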

Similar Messages

  • Multi Master Site replication with Snapshot features

    Dear,
I am implementing multi-master site replication. I want to replicate Table A on MASTER SITE A to Table A on MASTER SITE B.
However, I don't want all the data from MASTER SITE A to be propagated to MASTER SITE B. I may want only data that meets certain criteria to be replicated over.
I know I can achieve this with snapshot site replication, but can it be done in multi-master site replication too?

Hi,
As far as I have observed, the tables marked for replication from the MASTER SITE to a child site are an exact replica of the data. Your case can be achieved only through SNAPSHOT VIEWS/GROUPS.
    - [email protected]

  • Partial transaction for multi-master asynchronous replication

I have a fundamental question about multi-master asynchronous replication.
Let's consider a situation where we have 2 servers participating in multi-master asynchronous replication.
3 tables are part of an Oracle transaction. Now suppose I mark one of these tables for replication, while the other 2 are not part of any replication group.
Say that as part of the Oracle transaction, one record is inserted into each of the 3 tables.
Now if I start replicating, will the change made in the table marked for replication be replicated to the other server, given that the changes made to the other 2 tables are not propagated by the deferred queue?
    Please reply.

Mr. Bradd Piontek is very much correct. If the tables involved are interdependent, you have to place them in a group, and all of them must exist at all sites in a multi-master replication.
If the data is updated (pushed) from a snapshot to a table at a master site, it may get updated if it is not a child table in a relationship.
But in a multi-master replication environment, even this is not possible.

  • Problem configure Ldap realm with multi master Ldap server

I have a multi-master Directory Server (LDAP) setup, e.g. LdapMaster01 & LdapMaster02.
I configured the LDAP realm:
    realm= myLdapRealm
    class name =com.sun.enterprise.security.auth.realm.ldap.LDAPRealm
    jaas-context = myLdapRealm
    directory = ldap://LdapMaster01:389
    base-dn = ou=my_APP, ou=Applications, dc=devinc, dc=com
    search-bind-dn = cn=Directory Manager
search-bind-password = 99999999
So how can I configure the realm to automatically switch to LdapMaster02 when LdapMaster01 is down?
    Thanks in advance

Probably you need an external intelligent load-balancer unit that receives all requests for a DNS name like 'LdapMaster' and reroutes the traffic to LdapMaster01 or LdapMaster02.
If one LdapMaster is not available, then the load balancer is responsible for routing all requests only to the available server.

Thank you very much. :)
I found other posts on the internet about this, and yes, probably the only way is a load balancer.
Another way is to write a custom realm implementation that receives the server list and tries to connect until an available server is found.
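The custom-realm idea can be prototyped outside the app server first. A rough shell sketch (hostnames hypothetical) that walks an ordered list of masters and prints the first one that answers; the probe command is kept in a variable so the default anonymous rootDSE search can be swapped out:

```shell
#!/bin/sh
# Sketch only: pick the first reachable LDAP master from an ordered list.
# PROBE is the reachability check; %h is replaced by each hostname.
# The default probes the rootDSE anonymously (assumes ldapsearch on PATH).
PROBE=${PROBE:-'ldapsearch -h %h -p 389 -b "" -s base "objectclass=*" >/dev/null 2>&1'}

pick_master() {
    for host in "$@"; do
        cmd=$(printf '%s' "$PROBE" | sed "s/%h/$host/g")
        if eval "$cmd"; then
            printf '%s\n' "$host"
            return 0
        fi
    done
    return 1    # no master reachable
}

# Example: SERVER=$(pick_master LdapMaster01 LdapMaster02) || exit 1
```

A real custom LDAPRealm would do the same walk in Java at connect time, but this is enough to sanity-check which master a client-side failover would land on.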

  • Multi-Master replica busy

I have a multi-master LDAP server setup (5.1 Service Pack 2, build number 2003.028.2338). There are about 56k users on the 1st server and 45k on the 2nd when counting the users on the 2 servers. In addition, I have a pure consumer which is working fine with the replica from the 1st server.
But when I activated replication between the two multi-master machines, the secondary server got the error messages below. The replication process is very slow, and CPU usage is consistently 40% and up. Can anyone please help me with this issue? What is the replication process supposed to do? Why is it slow? What happens if the replication queue holds about 11k entries? How can I speed replication up?
    Thanks in advance for your help.
    ===error log====
    [29/Sep/2003:12:52:05 -0400] NSMMReplicationPlugin - Warning: timed out waiting 600 seconds for add operation result from replica 10.30.250.146:389
    [29/Sep/2003:12:52:05 -0400] NSMMReplicationPlugin - Failed to replay change (uniqueid e1181801-1dd111b2-8003d8a4-aa1f9c2b, CSN 3f729278000300030000) to replica "cn=PRI-2-SEC-USER-01, cn=replica, cn="o=sso", cn=mapping tree, cn=config (host 10.30.250.146, port 389)": Connection lost. Will retry later.
    [29/Sep/2003:12:52:05 -0400] NSMMReplicationPlugin - Warning: unable to send endReplication extended operation to consumer "cn=PRI-2-SEC-USER-01, cn=replica, cn="o=sso", cn=mapping tree, cn=config (host 10.30.250.146, port 389)" - error 2
    [29/Sep/2003:13:07:18 -0400] NSMMReplicationPlugin - Warning: timed out waiting 600 seconds for add operation result from replica 10.30.250.146:389
    [29/Sep/2003:13:07:18 -0400] NSMMReplicationPlugin - Failed to replay change (uniqueid e1181804-1dd111b2-8003d8a4-aa1f9c2b, CSN 3f729293000200030000) to replica "cn=PRI-2-SEC-USER-01, cn=replica, cn="o=sso", cn=mapping tree, cn=config (host 10.30.250.146, port 389)": Connection lost. Will retry later.
    [29/Sep/2003:13:07:19 -0400] NSMMReplicationPlugin - Warning: unable to send endReplication extended operation to consumer "cn=PRI-2-SEC-USER-01, cn=replica, cn="o=sso", cn=mapping tree, cn=config (host 10.30.250.146, port 389)" - error 2
    [29/Sep/2003:13:27:32 -0400] NSMMReplicationPlugin - Warning: timed out waiting 600 seconds for add operation result from replica 10.30.250.146:389
    ===access log===
    [29/Sep/2003:13:59:42 -0400] conn=292 op=1 RESULT err=0 tag=101 nentries=1 etime=0
[29/Sep/2003:13:59:43 -0400] conn=292 op=2 SRCH base="cn=config" scope=0 filter="(|(objectClass=*)(objectClass=ldapsubentry))" attrs="nsslapd-instancedir nsslapd-security"
    [29/Sep/2003:13:59:43 -0400] conn=292 op=2 RESULT err=0 tag=101 nentries=1 etime=0
[29/Sep/2003:13:59:43 -0400] conn=292 op=3 SRCH base="cn=options,cn=features,cn=config" scope=1 filter="(objectClass=directoryServerFeature)" attrs=ALL
    [29/Sep/2003:13:59:43 -0400] conn=292 op=3 RESULT err=0 tag=101 nentries=0 etime=0
[29/Sep/2003:13:59:48 -0400] conn=292 op=4 SRCH base="cn=config" scope=0 filter="(|(objectClass=*)(objectClass=ldapsubentry))" attrs="nsslapd-security"
    [29/Sep/2003:13:59:48 -0400] conn=292 op=4 RESULT err=0 tag=101 nentries=1 etime=0
[29/Sep/2003:13:59:48 -0400] conn=292 op=5 SRCH base="cn=config" scope=0 filter="(|(objectClass=*)(objectClass=ldapsubentry))" attrs="nsslapd-port nsslapd-secureport nsslapd-lastmod nsslapd-readonly nsslapd-schemacheck nsslapd-referral"
    [29/Sep/2003:13:59:48 -0400] conn=292 op=5 RESULT err=0 tag=101 nentries=1 etime=0
[29/Sep/2003:13:59:51 -0400] conn=292 op=6 SRCH base="cn=replication, cn=config" scope=0 filter="(|(objectClass=*)(objectClass=ldapsubentry))" attrs=ALL
    [29/Sep/2003:13:59:51 -0400] conn=292 op=6 RESULT err=0 tag=101 nentries=1 etime=0
    [29/Sep/2003:13:59:51 -0400] conn=292 op=7 SRCH base="cn=replication,cn=config" scope=2 filter="(objectClass=*)" attrs=ALL

I have had a similar problem, and it was because I recreated a replication agreement which pointed to a different server with the same ID, or to the same server with a different ID.
The solution provided by Sun was to remove the references to the old replicas in dse.ldif and restart, but it didn't work out, not even by ldapmodify on the replica DNs.
I had to reinstall...
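Before resorting to a reinstall, it can help to see which replica IDs a server's RUV still references. The RUV shows up as nsds50ruv values in dse.ldif (stop the server before reading or editing it). A small sketch, assuming the DS 5.x attribute layout; the example path is hypothetical:

```shell
#!/bin/sh
# Sketch: list the replica IDs a server's RUV still references, from dse.ldif.
# Values look like:  nsds50ruv: {replica 3 ldap://host:389} <mincsn> <maxcsn>
# (DS 5.x layout assumed; the {replicageneration} value is skipped.)
list_ruv_ids() {
    grep -i '^nsds50ruv: {replica ' "$1" | awk '{print $3}' | sort -un
}

# Example (path hypothetical):
# list_ruv_ids /usr/iplanet/servers/slapd-master1/config/dse.ldif
```

Any ID listed that no longer corresponds to a live master is a candidate stale reference to clean up.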

  • Multi master replication between 5.2 and 6.3.1

I have a setup with a master running version 5.2 and about 15 consumers (slaves), all of which have been upgraded to 6.3.1. I now want to create a multi-master topology by promoting one of these consumers to be a master, while still keeping the 5.2 instance in use, as we have a bunch of other applications that depend on it. Our master has two suffixes. The master server is also the CA cert authority for all the consumers. After reading the docs, I narrowed the procedure down to:
1. Promote one of the 6.3.1 consumers to hub and then to master using the dsconf promote-repl commands. The problem here is that I am not sure how I can create a single consumer that can slave both suffixes. We currently have them slaved to different consumers.
Also, do I need to stop the existing replication between the 5.2 master and the would-be 6.3.1 master before promoting it to hub and master?
    2. Set the replication manager manually or using dsconf set-server-prop on the new 6.3.1 master .
    3. Create a new replication agreement from 5.2 to 6.3.1 master without initializing. (using java console)
    4. Create new replication agreement from 6.3.1 to 5.2 (using command line)
    5. Create new repl agreements between the new 6.3.1 master and all the other consumers. For this do I need to first disable all the agreements between 5.2 and 6.3 or can I create new agreements without disabling the old ones?
    6. Initialize 6.3.1 from the 5.2 master.
My biggest concern at this point is the SSL certs and the existing trusts the consumers have with the 5.2 master. Currently my 5.2 server acts as the CA for our certificate management with the LDAP slaves. How can I migrate this functionality to the new server, and will this affect how the slaves communicate with the new master server?
    Thanks in advance.

    Thanks Marco and Chris for the replies.
I was able to get around the message by first manually initializing the new slave using an LDIF of the ou from the master, using DSCC to change the default replication manager account to connect, and finally editing dse.ldif to enter the correct crypt hash for the new replication manager password. After these steps I was able to successfully set up replication for the second ou and also promote the server to hub and master (I had to repeat the steps after promoting the slave to master, as that somehow reset the replication manager settings).
    So right now, I have a 5.2 master with two ou's replicating to about 15 consumers.
    I promoted one of these to be a second master (from consumer to hub to master). Replication is setup from 5.2 to 6.3 master but not the other way round.
I am a little nervous about setting up replication the other way round, as this is our production environment and I do not want to end up blowing up my production instance. The steps I plan on taking, from the new master server, are:
    1. dsconf create-repl-agmt -p 389 dc=xxxxx,dc=com <5.2-master>:389
    2. dsconf set-repl-agmt-prop -p 389 dc=xxxxx,dc=com <5.2-master>:389 auth-pwd-file:<passwd_file.txt>
I am assuming I can do all of this while the instances are up. Also, in the above, does create-repl-agmt just create the agreement, or does it also initialize the consumer with the data? I want to ensure I do not initialize my 5.2 master with my 6.3 data.
    Thanks again

  • DS 5 SP1 multi-master replication

    I am running DS5 SP1 on Solaris 8 with the recommended patch set. I am
    using replica id 1 for master1 and replica id 2 for master2.
    I am replicating 7 nodes in a multi-master scenario. I start with about
    1000 base entries in each db node, and add approximately 1000 to each
    node via a meta directory interface to master1. I run 9 passes, so I am
    adding approximately 10,000 entries to each node over the course of the
    passes.
    Occasionally, inconsistently, and without explanation, slapd stops on my
    master1 (sometimes master2). I have successfully run through all passes
    at differing times, and then it happens. There is no specific error
    message indicating it has died, but a ps reveals that it is no longer
    running. Once I restart it, the error log indicates that the directory
    had a disorderly shutdown. It has the nasty effect of resetting my
    changelogs to 0 and requiring me to reinitialize my replication
agreements. Below is a relevant excerpt from my error log; hopefully
someone can give me some insight as to why this is happening.
    [10/Dec/2001:17:13:55 +0000] NSMMReplicationPlugin - Failed to connect
    to replication consumer master2.domain.com:636
    [10/Dec/2001:17:13:57 +0000] NSMMReplicationPlugin - Could not send
    consumer master2.domain.com:636 the bind request
    [10/Dec/2001:17:13:57 +0000] NSMMReplicationPlugin - Failed to connect
    to replication consumer master2.domain.com:636
    [10/Dec/2001:17:14:02 +0000] NSMMReplicationPlugin - Could not send
    consumer master2.domain.com:636 the bind request
    [10/Dec/2001:17:14:02 +0000] NSMMReplicationPlugin - Failed to connect
    to replication consumer master2.domain.com:636
    [10/Dec/2001:17:14:08 +0000] NSMMReplicationPlugin - Could not send
    consumer master2.domain.com:636 the bind request
    [10/Dec/2001:17:14:08 +0000] NSMMReplicationPlugin - Failed to connect
    to replication consumer master2.domain.com:636
    [10/Dec/2001:20:29:03 +0000] NSMMReplicationPlugin - Could not send
    consumer master2.domain.com:636 the bind request
    [10/Dec/2001:20:29:03 +0000] NSMMReplicationPlugin - Failed to connect
    to replication consumer master2.domain.com:636
    [10/Dec/2001:20:29:03 +0000] NSMMReplicationPlugin - received error 81:
    NULL from replica master2.domain.com:636 for add operation
    [10/Dec/2001:20:29:03 +0000] NSMMReplicationPlugin - Failed to replay
    change (uniqueid 45e39dd9-1dd211b2-80a8c58f-66391b25, CSN
    3c151ad00001000
    10000) to replica "cn=to master2, cn=replica,
    cn="ou=group,o=company,c=us", cn=mapping tree, cn=config (host
    master2.domain.com, port 636)"
    : Local error. Will retry later.
    [10/Dec/2001:20:29:03 +0000] NSMMReplicationPlugin - failed to send
    extended operation to replica master2.domain.com:636; LDAP error - 81
    [10/Dec/2001:20:29:03 +0000] NSMMReplicationPlugin - Warning: unable to
    send endReplication extended operation to consumer "cn=to master2,
    cn=replica, cn="ou=group,o=company,c=us", cn=mapping tree, cn=config
    (host master2.domain.com, port 636)" - error 1
    [10/Dec/2001:20:29:13 +0000] NSMMReplicationPlugin - failed to send
    extended operation to replica master2.domain.com:636; LDAP error - 81
    [10/Dec/2001:20:29:13 +0000] NSMMReplicationPlugin - Unable to send a
    startReplication extended operation to consumer "cn=to master2,
    cn=replica, cn="ou=group,o=company,c=us", cn=mapping tree, cn=config
    (host master2.domain.com, port 636)". Will retry later.
    [10/Dec/2001:20:29:13 +0000] NSMMReplicationPlugin - Could not send
    consumer master2.domain.com:636 the bind request
    [10/Dec/2001:20:29:13 +0000] NSMMReplicationPlugin - Failed to connect
    to replication consumer master2.domain.com:636
    [10/Dec/2001:20:29:17 +0000] NSMMReplicationPlugin - Could not send
    consumer master2.domain.com:636 the bind request
    [10/Dec/2001:20:29:17 +0000] NSMMReplicationPlugin - Failed to connect
    to replication consumer master2.domain.com:636
    [10/Dec/2001:20:29:33 +0000] NSMMReplicationPlugin - failed to send
    extended operation to replica master2.domain.com:636; LDAP error - 81
    [10/Dec/2001:20:29:33 +0000] NSMMReplicationPlugin - Unable to send a
    startReplication extended operation to consumer "cn=to master2,
    cn=replica, cn="ou=group,o=company,c=us", cn=mapping tree, cn=config
    (host master2.domain.com, port 636)". Will retry later.
    [10/Dec/2001:20:30:13 +0000] NSMMReplicationPlugin - failed to send
    extended operation to replica master2.domain.com:636; LDAP error - 81
    [10/Dec/2001:20:30:13 +0000] NSMMReplicationPlugin - Unable to send a
    startReplication extended operation to consumer "cn=to master2,
    cn=replica, cn="ou=group,o=company,c=us", cn=mapping tree, cn=config
    (host master2.domain.com, port 636)". Will retry later.
    [10/Dec/2001:20:31:54 +0000] - iPlanet-Directory/5.0 ServicePack 1
    B2001.264.1425 starting up
    [10/Dec/2001:20:31:56 +0000] - Detected Disorderly Shutdown last time
    Directory Server was running, recovering database.
    [10/Dec/2001:20:34:38 +0000] NSMMReplicationPlugin - replica_reload_ruv:
    Warning: data for replica ou=node3,ou=group,o=company,c=us was reloaded
    and it no longer matches the data in the changelog. Recreating the
    changlog file. This could effect replication with replica's consumers in
    which case the consumers should be reinitialized.
    [10/Dec/2001:20:34:38 +0000] NSMMReplicationPlugin - replica_reload_ruv:
    Warning: data for replica ou=node1,ou=group,o=company,c=us
    was reloaded and it no longer matches the data in the changelog.
    Recreating the changlog file. This could effect replication with
    replica's consumers in which case the consumers should be reinitialized.
    [10/Dec/2001:20:34:38 +0000] NSMMReplicationPlugin - replica_reload_ruv:
    Warning: data for replica ou=node4,ou=group,o=company,c=us was reloaded
    and it no longer matches the data in the changelog. Recreating the
    changlog file. This could effect replication with replica's consumers in
    which case the consumers should be reinitialized.
    [10/Dec/2001:20:34:38 +0000] NSMMReplicationPlugin - replica_reload_ruv:
    Warning: data for replica ou=group,o=company,c=us was reloaded a
    nd it no longer matches the data in the changelog. Recreating the
    changlog file. This could effect replication with replica's consumers in
    which case the consumers should be reinitialized.
    [10/Dec/2001:20:34:38 +0000] NSMMReplicationPlugin - replica_reload_ruv:
    Warning: data for replica ou=node2,ou=group,o=company,c=us was reloaded
    and it no longer matches the data in the changelog. Recreating the
    changlog file. This could effect replication with replica's consumers in
    which case the consumers should be reinitialized.
    [10/Dec/2001:20:34:38 +0000] NSMMReplicationPlugin - replica_reload_ruv:
    Warning: data for replica ou=node5,ou=group,o=company,c=us was reloaded
    and it no longer matches the data in the changelog. Recreating the
    changlog file. This could effect replication with replica's consumers in
    which case the consumers should be reinitialized.
    [10/Dec/2001:20:34:38 +0000] NSMMReplicationPlugin - replica_reload_ruv:
    Warning: data for replica ou=node6,ou=group,o=company,c=us was reloaded
    and it no longer matches the data in the changelog. Recreating the
    changlog file. This could effect replication with replica's consumers in
    which case the consumers should be reinitialized.
    [10/Dec/2001:20:34:39 +0000] - slapd started. Listening on all
    interfaces port 389 for LDAP requests
    [10/Dec/2001:20:34:39 +0000] - Listening on all interfaces port 636 for
    LDAPS requests
    [10/Dec/2001:20:34:39 +0000] - cos_cache_getref: no cos cache created
    [10/Dec/2001:20:34:44 +0000] NSMMReplicationPlugin - Data required to
    update replica "cn=to master2, cn=replica,
    cn="ou=node1,ou=group,o=company,c=us", cn=mapping tree, cn=config (host
    master2.domain.com, port 636)" has been purged. The replica must be
    reinitialized.
    [10/Dec/2001:20:34:44 +0000] NSMMReplicationPlugin - Replication:
    Incremental update failed and requires administrator action
    [10/Dec/2001:20:35:02 +0000] NSMMReplicationPlugin - Data required to
    update replica "cn=to master2, cn=replica, cn="ou=node2,ou=group,o=
    company,c=us", cn=mapping tree, cn=config (host master2.domain.com, port
    636)" has been purged. The replica must be reinitialized.
    [10/Dec/2001:20:35:03 +0000] NSMMReplicationPlugin - Replication:
    Incremental update failed and requires administrator action
    [10/Dec/2001:20:36:50 +0000] NSMMReplicationPlugin - Data required to
    update replica "cn=to master2, cn=replica,
    cn="ou=node3,ou=group,o=company,c=us", cn=mapping tree, cn=config (host
    master2.domain.com, port 636)" has been purged. The replica must be
    reinitialized.
    [10/Dec/2001:20:36:51 +0000] NSMMReplicationPlugin - Replication:
    Incremental update failed and requires administrator action
    [10/Dec/2001:20:38:34 +0000] NSMMReplicationPlugin - Data required to
    update replica "cn=to master2, cn=replica, cn="ou=node4,ou=group,o=
    company,c=us", cn=mapping tree, cn=config (host master2.domain.com, port
    636)" has been purged. The replica must be reinitialized.
    [10/Dec/2001:20:38:34 +0000] NSMMReplicationPlugin - Replication:
    Incremental update failed and requires administrator action
    [10/Dec/2001:20:42:31 +0000] NSMMReplicationPlugin - Data required to
    update replica "cn=to master2, cn=replica, cn="ou=node5,ou=group,o=
    company,c=us", cn=mapping tree, cn=config (host master2.domain.com, port
    636)" has been purged. The replica must be reinitialized.
    [10/Dec/2001:20:42:31 +0000] NSMMReplicationPlugin - Replication:
    Incremental update failed and requires administrator action
    [10/Dec/2001:20:44:18 +0000] NSMMReplicationPlugin - Data required to
    update replica "cn=to master2, cn=replica,
    cn="ou=node6,ou=group,o=company,c=us", cn=mapping tree, cn=config (host
    master2.domain.com, port 636)" has been purged. The replica must be
    reinitialized.
    [10/Dec/2001:20:44:19 +0000] NSMMReplicationPlugin - Replication:
    Incremental update failed and requires administrator action
    [10/Dec/2001:20:46:06 +0000] NSMMReplicationPlugin - Data required to
    update replica "cn=to master2, cn=replica, cn="ou=group,o=company,c=us",
    cn=mapping tree, cn=config (host master2.domain.com, port 636)" has been
    purged. The replica must be reinitialized.
    [10/Dec/2001:20:46:06 +0000] NSMMReplicationPlugin - Replication:
    Incremental update failed and requires administrator action

    Hi Jennifer,
There's nothing in the logs that explains why your server is disappearing.
I think the resetting of the changelog is a known problem that should be fixed in 5.0 SP2 and is fixed in iDS 5.1, which is now available.
Just one quick question: when you say that you have 7 nodes, do you mean 7 container entries, or 7 suffixes that are separate replication areas?
    Regards,
    Ludovic.

  • DSCC multi-master replication issue

    Hello All,
I am trying to set up 2 DSCC consoles with multi-master replication enabled (cn=dscc). I am facing an issue with the directory server list in both DSCC consoles: I see the 2 DSCC instances below, which should not be there (since they are ADS instances, they should be hidden). Also, changes do not reflect immediately; they take around 30 minutes or so.
Please note I am running the 2 ADS instances on one box, on ports 3998 and 4000, and both are masters. I seek your guidance on how to fix this issue.
         localhost:3998 (server not registered)      -      Started                -
         localhost:4000 (server not registered)      -      Started                -
    Below are the steps I carried out to setup multi-master replication-
    On instance 1
    Check the DSCC port no of instance 1
    D:\ldap_server\ds6\bin>dsadm info d:\ldap_server\var\dscc6\dcc\ads
    Instance Path: d:/ldap_server/var/dscc6/dcc/ads
    Owner: AT0094060
    Non-secure port: 3998
    Secure port: 3999
    Bit format: 32-bit
    State: Running
    Server PID: 2820
    DSCC url: -
    Windows service registration: Disabled
    Instance version: D-A00
    Enable replication-
    D:\ldap_server\ds6\bin>dsconf enable-repl -h localhost -p 3998 -e -d 10 master cn=dscc
    Enter "cn=Directory Manager" password:
    Use "dsconf create-repl-agmt" to create replication agreements on "cn=dscc".
    Setup repl agmt
    D:\ldap_server\ds6\bin>dsconf create-repl-agmt -h localhost -p 3998 -e cn=dscc localhost:4000
    Enter "cn=Directory Manager" password:
    Use "dsconf init-repl-dest cn=dscc localhost:3998" to start replication of "cn=dscc" data.
    Setup rep password
    D:\ldap_server\ds6\bin>dsconf set-server-prop -h localhost -p 3998 -D "cn=directory manager" -e def-repl-manager-pwd-file:d:\rmpassword.txt
    Enter "cn=Directory Manager" password:
    Check the password
    D:\ldap2_server\ds6\bin>dsconf get-server-prop -h localhost -p 3998 -e def-repl-manager-pwd
    Enter "cn=Directory Manager" password:
    def-repl-manager-pwd : {SSHA}g9OpeO2H57MH2Eq4xV5gbxVqHGzEG2VpdBSuIA==
    Restart ADS to read new changes
    D:\ldap_server\ds6\bin>dsadm restart d:\ldap-server\var\dscc\dcc\ads
    Check suffix prop-
    D:\ldap_server\ds6\bin>dsconf get-suffix-prop -h localhost -p 3998 -e cn=dscc
    Enter "cn=Directory Manager" password:
    all-ids-threshold : inherited (4000)
    db-name : bellatonus
    db-path : D:/ldap_server/var/dscc6/dcc/ads/db/bellatonus
    enabled : on
    entry-cache-count : unlimited
    entry-cache-size : 10M
    entry-count : 12
    moddn-enabled : inherited (off)
    parent-suffix-dn : undefined
    referral-mode : disabled
    referral-url : ldap://machine1:4000/cn%3Ddscc
    repl-accept-client-update-enabled : on
    repl-cl-max-age : 1w
    repl-cl-max-entry-count : 0
    repl-id : 10
    repl-manager-bind-dn : cn=replication manager,cn=replication,cn=config
    repl-purge-delay : 1w
    repl-rewrite-referrals-enabled : off
    repl-role : master
    require-index-enabled : off
    Run accord-
    D:\ldap_server\ds6\bin>dsconf accord-repl-agmt -h localhost -p 3998 -e cn=dscc localhost:4000
    To test replication manager password use-
ldapsearch -h localhost -p 3998 -D "cn=replication manager,cn=replication,cn=config" -q -b "" -s base "objectclass=*" namingContexts
    Please enter bind password:
    check the replication status
    D:\ldap2_server\ds6\bin>dsconf show-repl-agmt-status -h localhost -p 3998 -e cn=dscc localhost:4000
    Enter "cn=Directory Manager" password:
    Configuration Status : OK
    Authentication Status : OK
    Initialization Status : OK
    Status : Enabled
    Last Update Date : Jun 13, 2012 4:04:22 PM
    On instance 2
    Check the DSCC port no-
    D:\ldap_server\ds6\bin>dsadm info d:\ldap2_server\var\dscc6\dcc\ads
    Instance Path: d:/ldap2_server/var/dscc6/dcc/ads
    Owner: AT0094060
    Non-secure port: 4000
    Secure port: 4001
    Bit format: 32-bit
    State: Running
    Server PID: 4264
    DSCC url: -
    Windows service registration: Disabled
    Instance version: D-A00
    Enable replication
    D:\ldap_server\ds6\bin>dsconf enable-repl -h localhost -p 4000 -e -d 10 master cn=dscc
    Enter "cn=Directory Manager" password:
    Use "dsconf create-repl-agmt" to create replication agreements
    on "cn=dscc".
    Setup repl agmt
    D:\ldap_server\ds6\bin>dsconf create-repl-agmt -h localhost -p 4000 -e cn=dscc localhost:3998
    Enter "cn=Directory Manager" password:
    Use "dsconf init-repl-dest cn=dscc localhost:3998" to start replication of "cn=dscc" data.
    Setup repl password
    D:\ldap_server\ds6\bin>dsconf set-server-prop -h localhost -p 4000 -D "cn=directory manager" -e def-repl-manager-pwd-file:d:\rmpassword.txt
    Enter "cn=Directory Manager" password:
    Check the password
    D:\ldap2_server\ds6\bin>dsconf get-server-prop -h localhost -p 4000 -e def-repl-manager-pwd
    Enter "cn=Directory Manager" password:
    def-repl-manager-pwd : {SSHA}g9OpeO2H57MH2Eq4xV5gbxVqHGzEG2VpdBSuIA==
    Restart ADS
    D:\ldap_server\ds6\bin>dsadm restart d:\ldap2-server\var\dscc\dcc\ads
    test replication manager password with
ldapsearch -h localhost -p 4000 -D "cn=replication manager,cn=replication,cn=config" -q -b "" -s base "objectclass=*" namingContexts
    Please enter bind password:
    D:\ldap2_server\ds6\bin>dsconf get-suffix-prop -h localhost -p 4000 -e cn=dscc
    Enter "cn=Directory Manager" password:
    all-ids-threshold : inherited (4000)
    db-name : bellatonus
    db-path : D:/ldap2_server/var/dscc6/dcc/ads/db/bellatonus
    enabled : on
    entry-cache-count : unlimited
    entry-cache-size : 10M
    entry-count : 12
    moddn-enabled : inherited (off)
    parent-suffix-dn : undefined
    referral-mode : disabled
    referral-url : ldap://machine1:3998/cn%3Ddscc
    repl-accept-client-update-enabled : on
    repl-cl-max-age : 1w
    repl-cl-max-entry-count : 0
    repl-id : 20
    repl-manager-bind-dn : cn=replication manager,cn=replication,cn=config
    repl-purge-delay : 1w
    repl-rewrite-referrals-enabled : off
    repl-role : master
    require-index-enabled : off
    Initialize ADS2 from ADS1 using the replication agreement:
    dsconf init-repl-dest -e -i -h localhost -p 3998 cn=dscc localhost:4000
    Check the replication status
    D:\ldap2_server\ds6\bin>dsconf show-repl-agmt-status -h localhost -p 4000 -e cn=dscc localhost:3998
    Enter "cn=Directory Manager" password:
    Configuration Status : OK
    Authentication Status : OK
    Initialization Status : OK
    Status : Enabled
    Last Update Date : Jun 13, 2012 4:07:36 PM
    Run insync
    D:\ldap2_server\ds6\bin>insync -D "cn=directory manager" -j d:\dmpw.txt -s localhost:3998 -c localhost:4000 20
    ReplicaDn Consumer Supplier Delay
    cn=dscc localhost:4000 localhost:3998 0
    cn=dscc localhost:4000 localhost:3998 0
    cn=dscc localhost:4000 localhost:3998 0
    ^C
    D:\ldap_server\ds6\bin>insync -D "cn=directory manager" -j d:\dmpw.txt -s localhost:4000 -c localhost:3998 20
    ReplicaDn Consumer Supplier Delay
    cn=dscc localhost:3998 localhost:4000 0
    cn=dscc localhost:3998 localhost:4000 0
    cn=dscc localhost:3998 localhost:4000 0

Replicating the ADS instance, i.e. cn=dscc, is not supported and not supposed to work, so what you are trying to do is futile.

  • Multi-master replication - indefinite referrals

    Hi,
I am trying to set up two LDAP masters in multi-master mode on a college campus:
LDAP1 and LDAP2.
I get this in the error log of LDAP1:
Configuration warning: This server will be referring client updates for replica o=comms-config indefinitely
WARNING<20805> - Backend Database - .. - search is not indexed
I guess the warning says the search is not indexed, but still, that's not the main problem...
In the server logs on LDAP2, I get:
ldap://LDAP2:389; Partial results and referral received|#]
...0400|SEVERE|sun-appserver|javax.enterprise.system.container.web|_ThreadID=24|SomeJSP:Servlet.service() for servlet jsp threw exception java.lang.Exception: The Error has occured due to :netscape.ldap.LDAPReferralException: referral (9); Referral: LDAP2; Partial results and referral received at...
    dse.ldif from LDAP1:
    nsslapd-referral: ...
    numsubordinates: 1
    nsslapd-state: referral on update
    from LDAP2:
    nsslapd-referral:...
    nsslapd-state: backend
Should LDAP1 and LDAP2 both be set to "process write and read requests"?
Please advise.

The state of LDAP1 is probably the result of the way you initialized both servers and replication.
    What to do in this situation is precisely explained in the Administration Guide, Replication chapter.
    Regards,
    Ludovic
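For reference, the nsslapd-state shown in the dse.ldif excerpts lives on the suffix's mapping-tree entry, and "referral on update" means LDAP1 is still referring client writes elsewhere. The Admin Guide procedure normally resets this as part of configuring replication; if a manual fix is ever needed, it is a one-attribute change, sketched here with the suffix DN taken from the post (verify before applying):

```ldif
# Sketch only: put the suffix back in backend mode so the server accepts
# writes directly. Suffix DN taken from the post above; verify first.
dn: cn="o=comms-config",cn=mapping tree,cn=config
changetype: modify
replace: nsslapd-state
nsslapd-state: backend
```

Valid states in DS 5.x are backend, referral, "referral on update", and disabled; both multi-master servers should end up in backend mode.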

  • Multi-master replication on iDS 5.0 sp2

    Hi,
    I've configured multi-master replication on iDS 5.0 sp2 as per the admin guide, but have encountered a strange problem.
    I shut down Master B and add a user in Master A. The change is replicated to Master B when it starts up again. However, when I do the reverse, i.e. shut down Master A and add a user in Master B, the change doesn't get replicated to Master A when it comes back online.
    The Error Log shows the following:
    [10/Sep/2002:11:35:29 +0800] - iPlanet-Directory/5.0 ServicePack 2 B2002.033.1618 starting up
    [10/Sep/2002:11:35:33 +0800] NSMMReplicationPlugin - replica_check_for_data_reload: Warning: data for replica dc=company.net was reloaded and it no longer matches the data in the changelog. Recreating the changlog file. This could effect replication with replica's consumers in which case the consumers should be reinitialized.
    [10/Sep/2002:11:35:34 +0800] - slapd started. Listening on all interfaces port 389 for LDAP requests
    [10/Sep/2002:12:24:24 +0800] - cos_cache_getref: no cos cache created
    The above content appeared in both logs, even in the first case, where the replication was successful.
    Anybody know what's wrong? Any advice will be greatly appreciated.
    Thanks,
    Soon

    Here's the patch info for ids 5.1:
    Problem solved by this hot-fix (533850):
    nsTombstone entries not purged and get following error messages :
    [14/Dec/2001:15:05:33 -0500] NSMMReplicationPlugin - deletetombstone: unable to delete tombstone
    nsuniqueid=b40f9681-1dd111b2-805af370-0daeb637, uid=repuser7,dc=decisionone,dc=com,
    uniqueid b40f9681-1dd111b2-805af370-0daeb637: Operations error.
    [14/Dec/2001:16:05:33 -0500] - ancestorid BAD 13120, err=-666 Unknown error: -666
    Version impacted: iPlanet Directory Server 5.1
    Shared library delivered : libback-ldbm.so
    This library is built for Solaris 8.
    This library should be installed in <directory_server_install_root>/lib
    (e.g : /usr/iplanet/servers/lib/libback-ldbm.so)
    Checksums:
    47962 1309 libback-ldbm.so
    8606 396 libback-ldbm.so.gz
    I picked it up at this URL: ftp://icnc-cte.france/pub/ESCS/533850
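    Independently of the tombstone hot-fix, the replica_check_for_data_reload warning in the original post means the changelog was recreated, so the other master should be reinitialized from the one holding the good data. A sketch using the DS 5.x database tools (instance paths, backend name, and output file are assumptions):

    ```
    # On the good master: export with replication metadata
    ./db2ldif -r -n userRoot -a /tmp/company.ldif

    # On the stale master: stop slapd, then import the export
    ./ldif2db -n userRoot -i /tmp/company.ldif
    ```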

  • Problem in multi master replication creation using DBA Studio -- replication

    Hi,
    I am trying to create multi-master replication using DBA Studio but am facing the following problem at master group creation time.
    ORA-04052 error occurred when looking up remote object SYS.SYS@CBOLDATA
    ORA-00604 error occurred at recursive SQL level 2
    ORA-01017 invalid username/password, logon denied
    ORA-02063 preceding line from CBOLDATA
    If you want to know how I am trying the whole thing, here is the way I am doing the configuration.
    First I have created master site which is created successfully.
    Here I have used two database named UPP817 & CBOLDATA
    The master site creation steps were as follows:
    1. Master Site addition (Added both UPP817 & CBOLDATA using SYSTEM username)
    2. Default User (No change is done, Default schema and password taken)
    3. Master Site Schema (Added SCOTT/tiger)
    4. Scheduled Link (No change is done, Default values taken)
    5. Purge (No change is done, Default values taken)
    6. Finished successfully
    For master group creation there are three options available:
    1. General (Only value of Name is given)
    2. Object (No object added)
    3. Master Sites (It takes UPP817 by default as the master definition site; then as a master site I added CBOLDATA)
    When I press Create, it gives me the error listed above.
    I really appreciate your help.
    Mukesh

    Create a public database link for CBOLDATA at the master site database.
    Also make sure you are using the SNMP protocol and Oracle Pipe at both
    databases. This will be useful when replicating from a remote
    site.
    I hope this will work.
    regards
    Avinash Jadhav
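    The ORA-01017 inside that error stack usually means the fixed user embedded in the database link cannot authenticate at CBOLDATA. A hedged sketch of the link creation plus a quick sanity check (the user name, password, and TNS alias are placeholders, not taken from the post):

    ```
    -- On UPP817, as a suitably privileged user:
    CREATE PUBLIC DATABASE LINK CBOLDATA
      CONNECT TO repadmin IDENTIFIED BY repadmin_pw
      USING 'CBOLDATA';

    -- Verify connectivity before retrying the master group creation:
    SELECT * FROM dual@CBOLDATA;
    ```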

  • Multi Master Replication - Only works for some tables?

    I have a multi master replication between a 9i and an 816 database.
    All the tables are in the same tablespace and have the same owner. The replication user has full privs on all the tables.
    When setting up the replication some tables create properly, but others fail with a "table not found" error.
    Ideas anyone ?
    Andrew

    You said that you have a 9i replicated with a 816.
    I tried the same thing but with two 9i Enterprise Edition databases, downloaded free from www.oracle.com.
    When I ran
    exec dbms_repcat.add_master_database(gname=>'groupname', master=>'replica_link')
    this error appeared:
    ERROR at line 1:
    ORA-23375: feature is incompatible with database version at replica_link
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2159
    ORA-06512: at "SYS.DBMS_REPCAT", line 146
    ORA-06512: at line 1
    Please help me if you have any idea.
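    ORA-23375 is typically raised when the two databases are not at the same replication compatibility level. A quick check worth running before add_master_database is to compare the COMPATIBLE setting on both sides (replica_link is the link name from the post):

    ```
    -- Run on the master definition site; the two values should match
    SELECT value FROM v$parameter WHERE name = 'compatible';
    SELECT value FROM v$parameter@replica_link WHERE name = 'compatible';
    ```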

  • Multi-master Replication and Metadirectory 5.0

    The Metadirectory 5.0 documentation states that it cannot work with a directory server configured for multi-master replication. We need to use Metadirectory since we are integrating the Directory Server with other systems. Does this mean that we'll be forced away from MMR configuration? What are some of the alternatives? Does iPlanet have any plans for supporting MMR in future versions of Metadirectory?

    I think you can enable the retro changelog on a consumer replica. I'm pretty sure that works.
    You might be able to enable it on a hub. You also might get it to work on a master, but the changelog on the master will contain some replication housekeeping changes that may confuse Meta Directory. I'm not sure what they are but they are obvious if you look at the changelog. If you can figure out some way to screen those changes out from Meta Directory, you might be able to use it.
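    Enabling the retro changelog on such a replica is a single attribute change followed by a restart. A sketch of the LDIF for DS 5.x (the plugin entry DN shown is the standard one):

    ```
    dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-pluginEnabled
    nsslapd-pluginEnabled: on
    ```

    After the restart, changes appear under cn=changelog, which is what Meta Directory reads.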

  • Multi-Master Replication Limits

    I'm starting a new project and would like to ask your opinion. They would like to have multi-master replication of a single table between Chicago and Denver. A record could be written in Denver, then the same row read in Chicago; the opposite is also true. The replication delay should be less than 100 ms, and the load is 100 TPS.
    I'm not even sure it's possible to replicate that fast. What technology should I look at? I've heard multi-master replication is not recommended any more.
    Many Thanks,
    Ed Jimenez

    Which database version are you using?

  • Multi-Master Replication over the WAN

    DS version: 5.1 sp1
    Has anyone implemented multi-master replication across the WAN or across different IP subnets?

    Sun does mention that DS 5.2 is better than DS 5.1 at WAN-based multi-master replication with respect to replication performance. I wanted to see if anyone out there had implemented it (or even played with this topology in their labs) without any major hiccups.
    Thank you!
