Replication question

Hi All,
Oracle database version: 8.1.7
Could you help with the following problem?
There is a table (table A) that is replicated and which has a cascade delete ('before delete') trigger (which is also replicated). In the trigger code, dbms_reputil.replication_off is called at the beginning to disable replication, and replication is re-enabled at the end.
This cascade delete trigger deletes a few rows from other tables (which also have cascade delete triggers... and replication is turned off in those triggers as well).
The problem is, when a row is deleted from table A, all the rows that were deleted from the other tables in the trigger code also become part of the transaction and are sent over to the replicated site.
If dbms_reputil.replication_off is being called, why do these deletes (issued from the trigger code) become part of the replicated transaction?
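The trigger looks roughly like this (a simplified sketch; table_a, table_b, table_c and the column names are made up):
CREATE OR REPLACE TRIGGER table_a_cascade_del
BEFORE DELETE ON table_a
FOR EACH ROW
BEGIN
  -- turn replication off before issuing the cascaded deletes
  dbms_reputil.replication_off;
  -- child tables; each has a similar 'before delete' trigger of its own
  DELETE FROM table_b WHERE a_id = :old.id;
  DELETE FROM table_c WHERE a_id = :old.id;
  -- turn replication back on at the end of the trigger
  dbms_reputil.replication_on;
END;
/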
Any help would be appreciated!

Hi Justin,
Thanks much for your reply...
Yes, there is a 'BEFORE DELETE' trigger on the destination system...that's why I was hoping to see only one transaction go across, so that the trigger on the destination system can do the job...
Now, because all the deletes issued from the trigger code on the source system (which are NOT supposed to be sent over) are being propagated anyway, I am seeing a bunch of 'no data found' errors on the destination...
I don't understand how all the deletes that are part of the trigger code are being sent across (since replication is turned off in the trigger code)...
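If it helps, the session flag can be checked with DBMS_REPUTIL.REPLICATION_IS_ON; a quick standalone check (which could also be adapted inside the trigger body for logging) would be something like this sketch:
SET SERVEROUTPUT ON
BEGIN
  IF dbms_reputil.replication_is_on THEN
    dbms_output.put_line('replication is ON in this session');
  ELSE
    dbms_output.put_line('replication is OFF in this session');
  END IF;
END;
/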
Any help would be highly appreciated!
Srini

Similar Messages

  • Multi-master replication questions for iPlanet 5.0, gurus out there?

hi:
I'm using iPlanet Dir Server 5.0 and I note that many gurus out there have been able to get this to work; that's good, but I have yet to. I have several questions; maybe someone can spend a few minutes and save me hours...
I have a suffix called dc=calient,dc=net. I followed the suggestions in the iPlanet install guide and created 2 directory servers:
a) suffix o=NetscapeRoot, at some arbitrary port, 4601
b) suffix dc=calient,dc=net, at the usual port 389.
All my searches/create/delete work fine. However, when I try to replicate with multi-master between 2 machines, I keep getting into problems.
    Here's one set of questions...
Q1: do people out there really split their tree from the o=NetscapeRoot tree?
Q2: The admin guide says that the unit of replication is a database, and that each replication can only have 1 suffix. Is this true? Can a replicated db have more than 1 suffix?
Q3: If I also want to replicate the o=NetscapeRoot tree, I have to set up yet 2 more replication agreements. Isn't this more work? If I just lump the 2 suffixes together, wouldn't it be easier? But would it work?
Q4: I followed the instructions to enable replicas on the masters. But then I tried to create this cn=Replication Manager, cn=config object. But what is the object class of this entry? An iPlanet user has uid as its RDN... I tried a person object class, and I added a password. But then I keep getting error code 32, object not found, in the error log. What gives? Such as:
WARNING: 'get_entry' can't find entry 'cn=replication manager,cn=config', err 32
Q5: Also, are there any access control issues with this cn=Replication Manager,cn=config object? By this I mean, I cannot seem to see this object using ldapsearch; I can only see cn=SNMP,cn=config. Also, do I have to give all access via ACI to my suffix dc=calient,dc=net? Also, given the fact that my o=NetscapeRoot tree is at a different port (say 4601), not 389, could this be an issue?
Q6: when replication fails, should the Dir Server still come up? Mine does not anymore, which is strange. I keep getting things like this in my log file:
[08/Nov/2001:21:49:13 -0800] NSMMReplicationPlugin - Could not send consumer mufasa.chromisys.com:389 the bind request
[08/Nov/2001:21:49:13 -0800] NSMMReplicationPlugin - Failed to connect to replication consumer mufasa.chromisys.com:389
But why shouldn't the dir server itself come up even if replication fails?
    steve

    Hi Steve,
    First, please read the 'Deployment Guide'. I think that is easier to
    understand when you want to setup multi-master replication. The
    'Administrator's Guide' gives you step-by-step instructions, but it may
    not help you to understand how to design your directory services.
    Stephen Tsun wrote:
I have a suffix called dc=calient,dc=net. I followed the suggestions in the iPlanet install guide and created 2 directory servers:
a) suffix o=NetscapeRoot, at some arbitrary port, 4601
b) suffix dc=calient,dc=net, at the usual port 389.
All my searches/create/delete work fine. However, when I try to replicate with multi-master between 2 machines, I keep getting into problems.
I don't understand something: which backend do you want to replicate? The one holding 'o=NetscapeRoot' or the one holding 'dc=calient,dc=net'? Do you want to set up replication between these two instances of the directory server (i.e. between port 4601 and 389 in your example)?
Q1: do people out there really split their tree from the o=NetscapeRoot tree?
If you have multiple directory servers installed in your environment, it is probably worth dedicating (at least) one directory server to the o=NetscapeRoot tree.
Q2: The admin guide says that the unit of replication is a database, and that each replication can only have 1 suffix. Is this true? Can a replicated db have more than 1 suffix?
Well, it is normal, since in iDS 5.x you have 1 suffix per database. You can, however, replicate multiple databases.
Q3: If I also want to replicate the o=NetscapeRoot tree, I have to set up yet 2 more replication agreements. Isn't this more work? If I just lump the 2 suffixes together, wouldn't it be easier? But would it work?
You can't lump the 2 suffixes together, because each backend has 1 suffix associated with it.
Q4: I followed the instructions to enable replicas on the masters. But then I tried to create this cn=Replication Manager, cn=config object. But what is the object class of this entry?
Usually, it is organizationalperson or inetorgperson. In most cases you want an objectclass which can have the userPassword attribute.
An iPlanet user has uid as its RDN... I tried a person object class, and I added a password. But then I keep getting error code 32, object not found, in the error log. What gives?
You must have misconfigured something. Or perhaps it is not 'cn=replication manager,cn=config' but 'uid=replication manager,cn=config'.
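For reference, the kind of entry usually created for this looks something like the following LDIF (the DN, sn and password here are only placeholders; use whatever bind DN you put in the replication agreement):
dn: cn=replication manager,cn=config
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: replication manager
sn: manager
userPassword: secret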
Q5: Also, are there any access control issues with this cn=Replication Manager,cn=config object? By this I mean, I cannot seem to see this object using ldapsearch; I can only see cn=SNMP,cn=config.
The configuration tree is protected by ACIs, so you cannot see those entries using anonymous binds. Try binding as 'directory manager' and you will find your entry.
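For example, something along these lines should show the entry (hostname, port and password are placeholders):
ldapsearch -h ldaphost -p 389 -D "cn=Directory Manager" -w password \
  -b "cn=config" -s sub "(cn=replication manager)"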
Also, do I have to give all access via ACI to my suffix dc=calient,dc=net?
For what purpose? For replication, it is enough to set the user DN in the replication agreement, and this user can then update the replicated backend.
Q6: when replication fails, should the Dir Server still come up?
Yes.
    Bertold

  • Bi-directional replication question..

    I configured database A & database B for two-way replication.
Basically, I configured from A to B without any problem; before running "Start_Capture", I exported & imported the tables with rows=n.
The question is, now I need to configure from B to A. Do I need to export from B and import to A with rows=n before running "Start_Capture" on B, or can I just simply run "Start_Capture" on B? Thanks.

You can do this on B to instantiate the table on A:
DECLARE
  iscn NUMBER;
BEGIN
  -- take the current SCN on B and record it at A (over the database link)
  -- as the instantiation SCN for the table
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@A(
    source_object_name   => 'schema.table',
    source_database_name => 'B',
    instantiation_scn    => iscn);
END;
/
BEGIN
  -- prepare the table on B for instantiation so that capture collects
  -- the required supplemental information
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'schema.table');
END;
/

  • CAD replication question- IPCC enterprise

IPCC Enterprise, 2 CAD servers, version 7.2. One CAD server was down for more than four days, causing replication between the primary and secondary CADs to stop. Ran postinstall.exe to restore the LDAP replication, and the same for recording and statistics; both showed successful, but only LDAP seems to have taken. When looking at the SQL database in CAD (using Enterprise Manager), the publication/subscription shows as deactivated for the cra_agent_pub publication.
Question: Can someone look at their server and let me know if the subscription is supposed to show as deactivated? (Using Enterprise Manager, it is listed as the publication on the primary CAD server.) If not, does anyone know if it is as easy as reinitializing the subscription?
    thanks

    Possibly you have two separate issues, but let's address the replication issue first.
    1. Make sure that both RASS databases are well under the 2GB limit imposed by MS SQL Desktop Engine. Hit the 2GB limit and the DB will be deactivated. Go too far and it can even appear as "Unknown".
    2. Login as the local administrator and remove replication using PostInstall
    3. Use Enterprise Manager to ensure that all aspects of replication have been removed. All aspects!
    4. Again, using PostInstall apply replication again
5. Check with Enterprise Manager that replication has been applied - all those red Xs should be gone, obviously.
    Regards,
    Geoff

  • Hyper-V Replication Question

    Hello,
I just finished watching some YouTube videos and reading some helpful websites but still have a quick question. I have a .110 network in LocationA and a .60 network in LocationB.
Both locations use static IP addresses, and I can ping each location from the other. If I want to replicate a VM from the .110 network (LocationA) to the .60 network (LocationB), is this possible, given that the two locations are on different networks? If I want to replicate from the .110 network, would my .110 network settings still be configured on the VM in the .60 network? If so, when I turned on the replicated VM it would not be able to join the network without modifying the NIC settings. I was hoping someone could explain this process a little better or more in-depth so I have an idea of what I'm doing.
On a different note: as long as your two host servers are Server 2012 R2, can you replicate Server 2003 and Server 2008 systems, or is it only for 2012 R2 VMs? Any and all information is helpful. Thank you!
    Pat

    Hello Pgrantland,
    I was checking some documents about this wonderful feature and I found this:
    2.3 To enable replication for a virtual machine
    9. A dialog box appears indicating that replication was successfully enabled. In this dialog, click the Settings button
    and provide settings to configure the network that the virtual machine will connect to on the Replica server. The Replica virtual machine does not automatically connect to any network on the Replica server (after a failover) by default,
    so these settings are important. You can configure the network settings so that the virtual machine will connect to a different network after a failover to the Replica server than it used when it was on the primary server.
    Step 2: Enable Replication
    https://technet.microsoft.com/en-us/library/d5d9e9f2-0f21-4d82-aa90-45e194de5ac9#BKMK_2_3
    These steps are configured before you finish the wizard to enable Hyper-V replica on a VM.
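If you prefer to set the failover network from PowerShell on the Replica host, something along these lines should work (a sketch only; the VM name and the 172.16.0.x addresses are placeholders for your LocationB network, and the exact parameter names should be checked with Get-Help):
Set-VMNetworkAdapterFailoverConfiguration -VMName "MyReplicaVM" `
    -IPv4Address 172.16.0.50 -IPv4SubnetMask 255.255.255.0 `
    -IPv4DefaultGateway 172.16.0.1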
Regarding the other question: as far as I know, a VM is just a VM from the Hyper-V side; the replica does not look at what OS you are running.
The only thing is that if you are planning for a VM to be replicated, check that the hardware elements configured in the settings of your VM are able to be replicated.
There are scenarios in which hardware configured on a VM means the VM is not supported for replica, and you cannot replicate it as long as those elements are configured on the VM.
For example, "Fibre Channel adapter" (Virtual SAN) and "Virtual hard disk sharing" are not supported for replication.
Hope this information helps you reach your goal. :D
    5ALU2!

  • Geo Replication question?

I have a quick question about geo replication: is it possible to use the same data center, or does it have to be a different DC, when we set up Active Geo Replication for the Azure DB?

    Yes, you can choose the same region as the source database. I verified the scenario on my personal account.

  • Employee and Org Unit BP Replication Question - how to stop replication

    Hi,
       We are replicating business partners using BUPA_MAIN for IS_U Scenario.  This is working fine from R/3 to CRM, and changes are replicating back properly (CRM to R/3).  We have a filter on BUPA_MAIN  for the business partners in a particular number range that we want replicating back and forth.
       We are also using an ALE scenario from HR to CRM and this seems to be correctly creating our Org Unit BP's and Employee BP's in a specific number range.
Our problem is that we are seeing failed BDocs in SMW01 when the system creates new BPs in the Org Unit and Employee roles. The error occurs because we have not set up a corresponding number range in R/3. We have not set this number range up in R/3 because we do not want these types of BPs to replicate back to R/3. Why are these types of BPs trying to replicate back to R/3 when we have a filter on BUPA_MAIN that does not include these number ranges? Is there something else we need to set somewhere so that these types of BPs will not try to replicate back to R/3?
    Thanks,
    Pam Cirssman

    Hi
First you need to do the filter settings in object BUPA_MAIN to stop the employee replication to R/3.
When you create the number ranges in R/3 you can remove the filter so that the process will continue.
    Regards
    Manohar

  • Replication Questions

    I have a few questions on my plate. To start, I was thinking of getting my disc replicated through ProActionMedia.com. Has anyone heard anything about this company? I haven't been able to find any unbiased reviews.
    Second, the person I spoke to through this company said that if I give them a DVD-R master that I wouldn't be able to include CSS encryption or macrovision because they create a DLT from my DVD-R and would have to include it themselves for extra $$$. Does this sound accurate?
    Third, if I choose to create a DLT myself, what would be the best way? I know practically nothing about DLTs. What would be the best drive to buy? How do I learn how to create my master from it? What important things do I need to know when buying one? What are the best tapes?
    Any help will be great. Thanks

You'll need a SCSI card from Atto:
    http://www.attotech.com/ultra5D.html
    They're the only cards I would use although others have used cards from Adaptec with good results.
    Also look on Ebay for both the SCSI and DLT. Make sure you get a 40/80 that writes type IV tapes or you'll be sitting for long periods of time waiting for your tape to finish.
    http://search.ebay.com/search/search.dll?sofocus=bs&sbrftog=1&from=R10&_trksid=m 37&satitle=dlt40%2F80external&sacat=-1%26catref%3DC6&sargn=-1%26saslc%3D2&sadis=200&fpos=91106&sabfmt s=1&saobfmts=insif&ftrt=1&ftrv=1&saprclo=&saprchi=&fsop=1%26fsoo%3D1&coaction=co mpare&copagenum=1&coentrypage=search
Also consider buying AfterEdit if you plan on doing a lot of work. I wouldn't dream of using anything else for pre-mastering; DVDSP does a sucky job in this department. AfterEdit allows you to precisely choose your layer break and can correct several errors on a disc before the writing process. Many people have been burned by DVDSP in the past in the pre-mastering phase (myself included).
    http://www.dvdafteredit.com/
    Message was edited by: Eric Pautsch1

  • TimesTen Replication Question

    We have a customer that is experiencing some odd behavior with their TimesTen 7.0.3.0 install. Once in a while (every 3 or 4 days) the replication state (as shown by the ttRepAdmin command) goes into PAUSE state momentarily and at the same time the machine experiences a CPU spike. As far as I know the only way the replication state will change is via the ttRepAdmin command, meaning it would have to be done manually or by some process calling the ttRepAdmin command. They are certain that this is not being done manually and I’m certain that none of our processes is doing it.
    Does anyone know if there is any way that replication state goes from START to PAUSE and back to START on its own? As for the CPU spikes, my guess is that this is caused by the state transition from PAUSE to START.
    Thanks.

There is nothing within TimesTen that will set the replication state to PAUSE, but it could be a bug in the reporting of the state. Do these events coincide with any other events happening on the system? Are there any messages reported in the TimesTen daemon log when this occurs? Have you observed what process(es) use more CPU when the CPU spike occurs? We've never heard of anything like this before, so investigation will be needed to determine what is going on.
    Chris

  • ASA HTTP connection replication question

    I'm assessing the potential service impact of failing over from one ASA to another with HTTP replication disabled.
    There is some concern that HTTP flows may be broken or disrupted when we failover
    Surely HTTP is just an application running over TCP and the connection table is replicated by default in a stateful failover pair so I'm struggling to see how HTTP would be affected.
    Is HTTP replication only relevant if you have HTTP inspection enabled and all that inspection info can be replicated?
    Cheers, Dom

    Hi,
    From the command reference:-
    "By default, the ASA does not replicate HTTP session information when Stateful Failover is enabled. Because HTTP sessions are typically short-lived, and because HTTP clients typically retry failed connection attempts, not replicating HTTP sessions increases system performance without causing serious data or connection loss. The failover replication http command enables the stateful replication of HTTP sessions in a Stateful Failover environment, but could have a negative affect on system performance."
    Refer:-
    http://www.cisco.com/c/en/us/td/docs/security/asa/asa-command-reference/A-H/cmdref1/f1.html#pgfId-2014541
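For completeness, if you did want HTTP sessions replicated, it is a single command in global configuration mode (the prompt shown is just illustrative):
ciscoasa(config)# failover replication http
and "no failover replication http" should return you to the default behaviour.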
Also, HTTP inspection would not have any effect on stateful connection replication during failover.
    I hope this answers your query. If you have any other query , please let me know.
    Thanks and Regards,
    Vibhor Amrodia

  • Lync Edge Server Replication Question

    Hi guys
    I hope someone can help me sort this out.
    We have 2 Lync server, 1 front-end and 1 edge server.
    Everything is working (IM, A/V, etc) but replication doesn't seem to be happening.
The thing is, I rebuilt the Edge server a few days ago.
    Before the rebuild, the second network interface on the Edge server is in the same network as the Lync front-end server (in the 192.168.0.0/16 network).
    After the rebuild, the second interface of the Edge server is in the 172.16.0.0/24 network and the Lync server remain on the 192.168.0.0/16.
    I then edited the topology and published it, and used the ZIP file to install the Edge server. The DNS names remained the same, only the IP address changed.
    Routing works, firewall ports are open, and IM + point-to-point AV are working.
    From the Lync server, I can access https://lync-edge:4443/ReplicationWebService via IE although I am getting a "Metadata publishing for this service is currently disabled" page.
The other thing: Get-CsManagementStoreReplicationStatus returns False for the Edge server, and the last update shown is from before I rebuilt the Edge server.
Running Invoke-CsManagementStoreReplication didn't log anything to the Event Log either.
Apart from that, everything seems OK. Is this normal?

    Hi guys
    If anyone has their Lync Edge replication working, can you share with me what you get when you use your browser to visit https://lync-edge.address:4443/ReplicationWebService ?
    I managed to connect to that page from my Lync internal server and locally from the edge server, but what I get is a page with:
    This is a Windows© Communication Foundation service.
    Metadata publishing for this service is currently disabled.
    If you have access to the service, you can enable metadata publishing by completing the following steps to modify your web or application configuration file:
    1. Create the following service behavior configuration, or add the <serviceMetadata> element to an existing service behavior configuration:
    <behaviors>
    <serviceBehaviors>
    <behavior name="MyServiceTypeBehaviors" >
    <serviceMetadata httpGetEnabled="true" />
    </behavior>
    </serviceBehaviors>
    </behaviors>
    2. Add the behavior configuration to the service:
    <service name="MyNamespace.MyServiceType" behaviorConfiguration="MyServiceTypeBehaviors" >
    Note: the service name must match the configuration name for the service implementation.
    3. Add the following endpoint to your service configuration:
    <endpoint contract="IMetadataExchange" binding="mexHttpBinding" address="mex" />
    Note: your service must have an http base address to add this endpoint.
    The following is an example service configuration file with metadata publishing enabled:
    <configuration>
    <system.serviceModel>
    <services>
    <!-- Note: the service name must match the configuration name for the service implementation. -->
    <service name="MyNamespace.MyServiceType" behaviorConfiguration="MyServiceTypeBehaviors" >
    <!-- Add the following endpoint. -->
    <!-- Note: your service must have an http base address to add this endpoint. -->
    <endpoint contract="IMetadataExchange" binding="mexHttpBinding" address="mex" />
    </service>
    </services>
    <behaviors>
    <serviceBehaviors>
    <behavior name="MyServiceTypeBehaviors" >
    <!-- Add the following element to your service behavior configuration. -->
    <serviceMetadata httpGetEnabled="true" />
    </behavior>
    </serviceBehaviors>
    </behaviors>
    </system.serviceModel>
    </configuration>
    For more information on publishing metadata please see the following documentation: http://go.microsoft.com/fwlink/?LinkId=65455.
    Step 1: Go to C:\Program Files\Microsoft Lync Server 2010\Server\Replica Replicator Agent
    Step 2: Open the ReplicaReplicatorAgent.exe.config
Step 3: Change enabled="false" to enabled="true":
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <runtime>
        <generatePublisherEvidence enabled="true"/>
      </runtime>
    </configuration>
    Step 4: Revert any registry changes that you did.
    Step 5: Reboot the Server.
AND IT WORKS, RATHER THAN EDITING THE REGISTRY......

  • ODSEE 7 - 11.1.1.3.0 Replication from multiple masters(not multimaster) into one consumer (into same DIT)

    Hi All,
    Would you be able to help me regarding a replication question?
    We have an existing LDAP topology where we maintain masters and consumers.
We have a request to expose (if it is possible) additional suffixes in the current DIT on the consumer side.
Here is the situation:
What do you think? Is it possible to do it this way?
The goal is to get the objects from ou=europe and ou=us, and from ou=company as well, when the search is on ou=company,dc=example,dc=com with scope=2 (subtree).
    Thank you for your help
    regards
    Laszlo

    Hi Laszlo,
thank you for the additional clarification; in that scenario, adding the two sub-suffixes and creating the replication from the other masters (ou=europe and ou=us) shouldn't be an issue, as long as you have created the same structure on the other masters as well.
Basically, you could have on all the masters (company, europe and us) the root suffix, which will always be ou=company,dc=example,dc=com; on the "europe" and "us" directories it will be just a kind of 'empty placeholder', whereas on the "company" directories it will be fully populated:
    Master "Company" 1 - root suffix: ou=company,dc=example,dc=com                [This sub-suffix will contain the data and will be replicated]
    Master "Company" 2 - root suffix: ou=company,dc=example,dc=com                [This sub-suffix will contain the data and will be replicated]
    Master "Europe" 1 - root suffix: ou=company,dc=example,dc=com                    [This suffix will remain mostly empty and not replicated]
    Master "Europe" 1 - sub-suffix: ou=europe,ou=company,dc=example,dc=com    [This sub-suffix will contain the data and will be replicated]
    Master "Europe" 2 - root suffix: ou=company,dc=example,dc=com                    [This suffix will remain mostly empty and not replicated]
    Master "Europe" 2 - sub-suffix: ou=europe,ou=company,dc=example,dc=com    [This sub-suffix will contain the data and will be replicated]
    Master "US" 1 - root suffix: ou=company,dc=example,dc=com                   [This suffix will remain mostly empty and not replicated]
    Master "US" 1 - sub-suffix: ou=us,ou=company,dc=example,dc=com          [This sub-suffix will contain the data and will be replicated]
    Master "US" 2 - root suffix: ou=company,dc=example,dc=com                   [This suffix will remain mostly empty and not replicated]
    Master "US" 2 - sub-suffix: ou=us,ou=company,dc=example,dc=com          [This sub-suffix will contain the data and will be replicated]
    Replication:
    ou=company,dc=example,dc=com:
    msco1 <---MMR--> msco2
    msco1 ---> cons01, 02, ... 16
    msco2 ---> cons01, 02, ... 16
    ou=europe,ou=company,dc=example,dc=com
    mseu1 <---MMR--> mseu2
    mseu1 ---> cons01, 02, ... 16
    mseu2 ---> cons01, 02, ... 16
    ou=us,ou=company,dc=example,dc=com
    msus1 <---MMR--> msus2
    msus1 ---> cons01, 02, ... 16
    msus2 ---> cons01, 02, ... 16
    HTH,
    marco

  • I need help on Config Master Master Replication

    Hi :
I failed to configure Master-Master Replication on Directory Server 5.2sp4. Can anyone give me some advice on configuring MMR?
    Error message I got as follows:
    [25/Jul/2006:17:11:17 +0800] - import userRoot: Processed 489736 entries -- average rate 191.8/sec, recent rate 104.7/sec, hit ratio 97%
    [25/Jul/2006:17:11:38 +0800] - import userRoot: Processed 492001 entries -- average rate 191.1/sec, recent rate 105.8/sec, hit ratio 97%
    [25/Jul/2006:17:12:00 +0800] - import userRoot: Processed 494072 entries -- average rate 190.3/sec, recent rate 100.8/sec, hit ratio 97%
    [25/Jul/2006:17:12:22 +0800] - import userRoot: Processed 496657 entries -- average rate 189.7/sec, recent rate 105.8/sec, hit ratio 97%
    [25/Jul/2006:17:12:43 +0800] - import userRoot: Processed 499113 entries -- average rate 189.1/sec, recent rate 114.6/sec, hit ratio 97%
    [25/Jul/2006:17:13:05 +0800] - import userRoot: Processed 501254 entries -- average rate 188.4/sec, recent rate 106.9/sec, hit ratio 97%
    [25/Jul/2006:17:13:08 +0800] - import userRoot: Workers finished; cleaning up...
    [25/Jul/2006:17:13:25 +0800] - import userRoot: Workers cleaned up.
    [25/Jul/2006:17:13:25 +0800] - import userRoot: Indexing complete. Post-processing...
    [25/Jul/2006:17:13:26 +0800] - import userRoot: Flushing caches...
    [25/Jul/2006:17:13:26 +0800] - import userRoot: Closing files...
    [25/Jul/2006:17:13:35 +0800] - import userRoot: Import complete. Processed 501537 entries in 2691 seconds. (186.38 entries/sec)
    [25/Jul/2006:17:13:35 +0800] - INFORMATION - NSMMReplicationPlugin - conn=-1 op=-1 msgId=-1 - multimaster_be_state_change: replica o=tfn.net.tw is coming online; enabling replication
    [25/Jul/2006:17:13:35 +0800] - INFORMATION - NSMMReplicationPlugin - conn=-1 op=-1 msgId=-1 - replica_reload_ruv: Warning: new data for replica o=tfn.net.tw does not match the data in the changelog.
    Recreating the changelog file. This could affect replication with replica's consumers in which case the consumers should be reinitialized.
    [25/Jul/2006:17:13:36 +0800] - INFORMATION - NSMMReplicationPlugin - conn=-1 op=-1 msgId=-1 - This supplier for replica o=tfn.net.tw will immediately start accepting client updates
    [25/Jul/2006:17:13:36 +0800] - INFORMATION - NSMMReplicationPlugin - conn=-1 op=-1 msgId=-1 - Replica (o=tfn.net.tw) has been initialized by total protocol as full replica
    [25/Jul/2006:17:13:36 +0800] - WARNING<10276> - Incremental Protocol - conn=-1 op=-1 msgId=-1 - Replication inconsistency Consumer Replica "ldap1.tfn.net.tw:389/o=tfn.net.tw" has a different data version. It may have not been initialized yet.
    The procedure I did as follows:
    Two LDAP , LDAP1, LDAP2
    1. Install LDAP1 and LDAP2
    2. Migrate Data from old LDAP Server to New LDAP1
    then
    On LDAP1:
    3. Enable Change Log and some parameter
    4. Enable Replication, select "Master"
    5. Create Replication Agreement to LDAP2
    and then
    On LDAP2:
    6. Enable Change Log and some parameter
    7. Enable Replication, select "Master"
    8. Create Replication Agreement to LDAP1
Now go back to LDAP1:
9. Select the replication agreement and start initializing LDAP2 now.
10. Wait for it to finish; LDAP1 will receive an "initialization completed" message in the console.
But on LDAP2:
11. Check the error log. I got errors:
    [25/Jul/2006:17:13:36 +0800] - WARNING<10276> - Incremental Protocol - conn=-1 op=-1 msgId=-1 - Replication inconsistency Consumer Replica "ldap1.tfn.net.tw:389/o=tfn.net.tw" has a different data version. It may have not been initialized yet.
Can anyone point out which steps I got wrong?
PS: I also can't find the button from procedure 3 mentioned in the admin manual ==>
    "To Begin Accepting Updates Through the Console"
    Follow these steps to explicitly allow update operations after the initialization of a multi-master replica:
    3. Click the button to the right of the message to start accepting update
    operations immediately.
    Victor

1. Answer to your last question:
Data server -- Configuration -- Data -- your suffix -- Replication.
In the right panel, click on the SIR agreement, click on "Action" in the lower right corner, and choose "Send Updates now ...".
2. Answer to your replication question:
Usually I would move step 2 down to just before step 9. That means setting up replication first, then feeding your master server (ldap1) from your LDIF file, then initializing ldap2 with data from ldap1, either through the Console or from the command line (if using the command line, you have to use db2ldif -r to dump the data from ldap1 and ldif2db to initialize ldap2).
If at any time you see "different version of data" in your log, try to initialize again.
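For example, the command-line version would look roughly like this (a sketch; the backend name userRoot and the file path are placeholders, and the scripts are run from the slapd instance directory on ldap1 and ldap2 respectively):
On ldap1 (dump, including replication metadata):
./db2ldif -n userRoot -r -a /tmp/userRoot-repl.ldif
On ldap2 (initialize from that file):
./ldif2db -n userRoot -i /tmp/userRoot-repl.ldif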

  • Calculating the size of the database in memory

    Hi,
    I am using BDB 5.1.19 with BTREE as an access method.
    I would like to calculate the memory footprint that would be needed for that.
    I followed the following doc:
    useful-bytes-per-page = (page-size - page-overhead) * page-fill-factor
bytes-of-data = n-records * (bytes-per-entry + page-overhead-for-two-entries)
    n-pages-of-data = bytes-of-data / useful-bytes-per-page
    total-bytes-on-disk = n-pages-of-data * page-size
    I am not interested in calculating the size on disk but just in memory.
    Would the following be enough:
    bytes-of-data = n-records *(bytes-per-entry + page-overhead-for-two-entries)
Since the rest is only for space on disk, I don't need it. Is that correct?
    Would this calculation help me also to calculate the cache size or is the cache size influenced by the page size?
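As a concrete example, here is the calculation with placeholder numbers plugged into the formulas above (all the values below are made up, just to show the arithmetic):
# sizing sketch in Python; every constant here is an assumption
page_size = 4096
page_overhead = 26            # placeholder; check the BDB reference for the real value
page_fill_factor = 0.75       # placeholder
n_records = 1_000_000
bytes_per_entry = 100         # key + data for one record (placeholder)
overhead_two_entries = 20     # placeholder

useful_bytes_per_page = (page_size - page_overhead) * page_fill_factor
bytes_of_data = n_records * (bytes_per_entry + overhead_two_entries)
n_pages_of_data = bytes_of_data / useful_bytes_per_page
total_bytes_on_disk = n_pages_of_data * page_size
print(bytes_of_data, total_bytes_on_disk)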
    Thx.

    This is not a replication question and those of us monitoring this forum are not the best people to answer this question. You'll probably get a better and faster answer if you post it to the general Berkeley DB forum:
    Berkeley DB
    Paula Bingham
    Oracle

  • Howto load balancing

    Hi
Currently using Dell 2U servers running FreeBSD 6, we are very excited to get some new Xserves for our web needs.
We plan to buy 2 Xserves to share the load for a huge website, plus an Xserve RAID.
As MySQL can be master-master replicated on its own, we only want to balance network load coming from the internet. What do you suggest we buy in front of the 2 Xserves? And how can we sync files between the 2 Xserves, other than with manual rsyncs?
    Thank you for your tips
    PowerBook 12" + MacBook rev1   Mac OS X (10.4.8)   Airport Express / 23" Cinema Display / Freebox

    What kind of traffic levels are you planning for?
    There are various load balancing techniques around ranging from the free to the very expensive, and the inefficient to the highly effective.
At the lowest end of the scale is simple round-robin DNS. You configure your site's address with two IP addresses and the DNS server alternates between the answers. This gives you a crude load balancing option: there's no direct control over which server gets the traffic, levels may be uneven and, worst of all, there's no redundancy in case one server is down, since the DNS server will continue to hand out its IP address. Its advantage, though, is that it's free.
Moving up the scale a little, there are various Linux-based solutions that can do simple load balancing through iptables (or ipchains in older distributions).
    I've never used them, so I don't know how effective they are.
    At the top end of the scale are load balancing appliances such as those from F5, Cisco, NetScaler and others.
    These move up the price chain a fair way but offer far more features, server health monitoring (to make sure the server is able to service the request), advanced load balancing rules to decide which server should handle the request, and multi-gigabit per second throughput.
    If you just have a couple of servers, the appliance path may be overkill, although if you expect to grow then it may be something worth considering.
    As for the replication question, there are many ways of doing that. At its simplest level, rsync can replicate a directory or filesystem using an efficient protocol that just transfers the differences. It's included in Mac OS X and the man page gives examples of its use.
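For example, a one-way push from one Xserve to the other could be as simple as the following (the paths and hostname are placeholders; try it with --dry-run first):
rsync -az --delete /Library/WebServer/Documents/ web2.example.com:/Library/WebServer/Documents/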
