DSEE7 replication changelog delete

Hello there,
How do I delete the replication changelog DB in DSEE 7? I know that in 5.2 I can put the DB in read-only mode and run an ldapmodify command...
dn: cn=changelog5,cn=config
changetype: delete
and then add the changelog back, which creates a new changelog DB. But in DSEE 7 I am not sure how to do this, since there is no separate DN like the one above.
Has anyone done this, or does anyone know how?
Thanks!

AFAIK that's pretty much how you do it in DS7 as well. You could try a couple of other things, like:
1) Just stop the server and manually remove the changelog DB file
2) Save off the omega tombstone (nsuniqueid=ffffffff-ffffffff-...) and then re-add it after you have lost the replica state
Though I am fairly confident neither of these is supported. Deleting the changelog file may even make your DB unusable and force you to reinitialize from LDIF, since the latest Berkeley DB is much less forgiving about tinkering with its files.
If you are trying to shrink the changelog to a more manageable size, and you know it is mostly empty, have you tried the DB compaction tool? It is supposed to reclaim space in the DB files, including the changelog.
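A rough sketch of the two routes described above (the instance path, suffix, and the repack syntax are assumptions — check `dsadm --help` on your system, and back up the instance first since the manual removal is unsupported):

```shell
# 1) Unsupported: stop the server and remove the changelog files by hand
dsadm stop /local/ds7-instance
ls /local/ds7-instance/db        # locate the replication changelog files here before removing them
dsadm start /local/ds7-instance

# 2) Supported alternative: reclaim space in the DB files instead of deleting them
dsadm repack /local/ds7-instance dc=example,dc=com
```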

Similar Messages

  • Table for Changelog Deletion

    Hi experts,
    I would like to know whether there is any table that stores the changelog deletion entries.
    In production, the output of the InfoSpoke and the output of the ODS differ for the same selections. I think this problem is caused by the change log deletion in the ODS.
    So please provide me with the table name so that I can clarify the same to my client.
    Thanks
    Bulli

    Bulli,
    Go to Manage on your ODS. On the main menu, go to 'Environment', then choose 'Delete change log data'. In the ensuing window, enter your selections and your change log entries will be gone. On the Contents tab strip, if you open the change log 'button', you will see your change log table's technical name at the top. Use it and go to SE11 or SE16 to open the table.
    Hope this helps.

  • Replication prevents deletion of nnn files by Cleaner

    Hello,
    we are implementing JE HA for the following case in our application: in the same process we have a master and a replication node. With that we want to "duplicate" the logs of JE.
    Now we are performing some performance tests that look as follows:
    - inserting approx. 5 million records, approx. 5 records per transaction. As durability for these transactions we use NO_SYNC = new Durability(SyncPolicy.NO_SYNC, SyncPolicy.NO_SYNC, ReplicaAckPolicy.NONE);
    - after the insertion of the 5 million records we commit a transaction with the durability SYNC_ALL = new Durability(SyncPolicy.SYNC, SyncPolicy.SYNC, ReplicaAckPolicy.ALL);
    This test is repeated 18 times, both without replication and with replication. The db dirs of the master node and the replication node are on different hard drives.
    Results without replication:
    run    size of db dir
    1    1,82 GB       
    2    3,14 GB       
    3    4,76 GB       
    4    6,39 GB       
    5    8,35 GB       
    6    9,41 GB       
    7    10,98 GB   
    8    12,29 GB   
    9    14,87 GB   
    10    15,91 GB   
    11    17,14 GB   
    12    18,40 GB   
    13    19,85 GB   
    14    21,34 GB   
    15    22,82 GB   
    16    23,95 GB   
    17    30,33 GB   
    18    29,09 GB   
    Results with replication:
    run    size of db dir 
    1    2,90 GB  
    2    5,13 GB  
    3    8,04 GB  
    4    9,99 GB  
    5    13,98 GB 
    6    15,90 GB 
    7    18,11 GB 
    8    20,56 GB 
    9    24,84 GB 
    10    26,70 GB 
    11    29,52 GB 
    12    31,78 GB 
    13    35,29 GB 
    14    37,84 GB 
    15    41,75 GB 
    16    44,17 GB 
    17    58,30 GB 
    18    62,45 GB 
    So the db dir of the master node with replication is about twice as large as without replication. The replication node's dir is nearly the same size as the master's db dir.
    In the log I see the following warnings:
    15214087 [Checkpointer] WARN com.sleepycat.je.rep.impl.node.RepNode  - Replication prevents deletion of 271 files by Cleaner. Start file=0x69 holds CBVLSN 37.910.724, end file=0x30a holds last VLSN 203.831.541
    15214087 [Checkpointer] WARN com.sleepycat.je.cleaner.Cleaner  - Cleaner has 271 files not deleted because they are protected by replication.
    So I think the root of the problem is the cleaner (271 log files are approx. 27 GB, at 100 MB per log file).
    I have the following settings:
    final ReplicationConfig repConfig = new ReplicationConfig();
    repConfig.setGroupName(buildReplicationGroupName(config));
    repConfig.setNodeName(nodeName);
    repConfig.setNodeHostPort(buildNodeHostPort(config, isMaster));
    repConfig.setHelperHosts(buildNodeHostPort(config, true));
    repConfig.setDesignatedPrimary(isMaster);
    if (!isMaster) {
        repConfig.setConfigParam(ReplicationConfig.TXN_ROLLBACK_LIMIT, "0"); //$NON-NLS-1$
    }
    final EnvironmentConfig ec = new EnvironmentConfig();
    ec.setAllowCreate(allowCreate);
    ec.setTransactional(true);
    ec.setConfigParam(EnvironmentConfig.STATS_COLLECT, "false"); //$NON-NLS-1$
    ec.setConfigParam(EnvironmentConfig.LOG_FAULT_READ_SIZE, "5120"); //in kbytes //$NON-NLS-1$
    ec.setDurability(new Durability(SyncPolicy.NO_SYNC, SyncPolicy.NO_SYNC, ReplicaAckPolicy.NONE));
    ec.setConfigParam(EnvironmentConfig.LOG_FILE_MAX, String.valueOf(logFileSize));
    ec.setCacheSize(cacheSize);
    So what is the reason that the log files are not cleaned by the cleaner? A backup is not running, and there is only one master and one replication node (both active), so I don't see any reason why the cleaner is not able to clean up the log files.
    Thanks and regards
    Arthur

    Hi Mark,
    > If JE does not delete a cleaned file, it shouldn't interfere with the file system cache.
    Good to know. In the test where we observed it we also had a hard drive failure, so the secondary node was not active the whole time. While the secondary node was down we saw the following periodic "jumps" (here is an extract):
    test iteration    size of master node    duration of iteration
    16                41,75 GB               238,95 s
    17                44,17 GB               1.401,79 s
    18                58,30 GB               3.645,19 s
    19                62,45 GB               4.012,21 s
    20                65,63 GB               258,41 s
    21                68,26 GB               2.857,59 s
    22                72,20 GB               4.308,58 s
    23                76,71 GB               257,48 s
    24                79,43 GB               2.402,81 s
    25                83,35 GB               3.909,78 s
    26                87,53 GB               264,53 s
    27                90,18 GB               2.998,03 s
    28                94,38 GB               4.577,57 s
    29                97,86 GB               348,04 s
    30                101,18 GB              3.791,21 s
    Now we have repeated the test with REPLAY_FREE_DISK_PERCENT turned off, and we could also observe such "jumps", but less periodically. Also, the secondary node was active the whole time:
    test iteration    size of master node    duration of iteration
    16                23,24254608 GB         210,873 s
    17                24,40531349 GB         1039,509 s
    18                30,64518356 GB         474,456 s
    19                29,79491043 GB         388,555 s
    20                30,99015427 GB         381,95 s
    21                32,11663818 GB         534,577 s
    22                33,74309921 GB         525,605 s
    23                34,45024109 GB         374,743 s
    24                35,47125244 GB         329,264 s
    25                36,43193817 GB         306,769 s
    26                38,03277588 GB         382,974 s
    27                39,30369949 GB         429,551 s
    28                40,77051926 GB         480,432 s
    29                42,08839798 GB         466,209 s
    30                43,61213684 GB         562,23 s
    31                45,43471527 GB         555,052 s
    32                46,59885025 GB         464,182 s
    33                47,82064056 GB         2725,801 s
    34                64,02706909 GB         2705,762 s
    35                54,47792435 GB         662,78 s
    36                55,66398621 GB         574,658 s
    37                56,98921967 GB         532,457 s
    38                58,36198425 GB         605,702 s
    39                59,97261047 GB         616,732 s
    40                61,31098557 GB         668,84 s
    41                63,08939362 GB         877,646 s
    42                64,31282043 GB         984,774 s
    43                65,94416046 GB         1741,718 s
    44                66,21831512 GB         1099,786 s
    45                66,77852631 GB         1096,8 s
    In the log files I can also see that, e.g., in iteration 16 the checkpointer was active ~150 times, in iteration 17 ~800 times, in iteration 23 ~130 times, and in iteration 33 ~1300 times; there are log statements like:
    10549968 [Checkpointer] DEBUG com.sleepycat.je.recovery.Checkpointer  - Checkpoint 6329: source=daemon success=true nFullINFlushThisRun=80 nDeltaINFlushThisRun=1136
    10549983 [Checkpointer] DEBUG com.sleepycat.je.recovery.Checkpointer  - Checkpoint 6424: source=daemon success=true nFullINFlushThisRun=51 nDeltaINFlushThisRun=521
    10550577 [Checkpointer] DEBUG com.sleepycat.je.recovery.Checkpointer  - Checkpoint 6330: source=daemon success=true nFullINFlushThisRun=90 nDeltaINFlushThisRun=1574
    10550593 [Checkpointer] DEBUG com.sleepycat.je.recovery.Checkpointer  - Checkpoint 6425: source=daemon success=true nFullINFlushThisRun=91 nDeltaINFlushThisRun=1563
    10551388 [Checkpointer] DEBUG com.sleepycat.je.recovery.Checkpointer  - Checkpoint 6331: source=daemon success=true nFullINFlushThisRun=99 nDeltaINFlushThisRun=1557
    10551592 [Checkpointer] DEBUG com.sleepycat.je.recovery.Checkpointer  - Checkpoint 6426: source=daemon success=true nFullINFlushThisRun=84 nDeltaINFlushThisRun=1587
    10552138 [Checkpointer] DEBUG com.sleepycat.je.recovery.Checkpointer  - Checkpoint 6332: source=daemon success=true nFullINFlushThisRun=85 nDeltaINFlushThisRun=1613
    So the number of checkpoint runs corresponds to the duration. What might be the reason for that? I could imagine that the JE cache is too small (100 MB for each node).
    Regards
    Arthur
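    For reference, the JE cache size and the cleaner/checkpointer behavior discussed above can also be tuned through a je.properties file in the environment home directory. The values below are purely illustrative assumptions, not recommendations:

    ```properties
    # Illustrative je.properties sketch (values are assumptions; tune for your load)
    je.maxMemory=524288000              # in bytes; raises the 100 MB JE cache discussed above
    je.cleaner.minUtilization=50        # target log utilization before the cleaner runs
    je.checkpointer.bytesInterval=20000000
    ```

    Settings in je.properties take effect without code changes, which makes them convenient for this kind of experiment.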

  • Empty Log files not deleted by Cleaner

    Hi,
    we have a NoSQL database installed on 3 nodes with a replication factor of 3 (see the exact topology below).
    We ran a test consisting of the following operations repeated in a loop: store a LOB, read it, delete it.
    store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    During the test, the space occupied by the database keeps growing!
    Cleaner threads are running but log these warnings:
    2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
    2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
    Log files are not deleted even when empty, as seen with the DbSpace utility:
    java -cp /mam2g/kv-3.2.5/lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env
      File    Size (KB)  % Used
    00000000      12743       0
    00000001      12785       0
    00000002      12725       0
    00000003      12719       0
    00000004      12703       0
    00000005      12751       0
    00000006      12795       0
    00000007      12725       0
    00000008      12752       0
    00000009      12720       0
    0000000a      12723       0
    0000000b      12764       0
    0000000c      12715       0
    0000000d      12799       0
    0000000e      12724       1
    0000000f       5717       0
    TOTALS      196867       0
    Here is the configured topology:
    kv-> show topology
    store=MMS-KVstore  numPartitions=90 sequence=106
      zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
      sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
        [rg1-rn1] RUNNING
                 single-op avg latency=4.414467 ms   multi-op avg latency=0.0 ms
        [rg2-rn1] RUNNING
                 single-op avg latency=1.5962526 ms   multi-op avg latency=0.0 ms
        [rg3-rn1] RUNNING
                 single-op avg latency=1.3068943 ms   multi-op avg latency=0.0 ms
      sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
        [rg1-rn2] RUNNING
                 single-op avg latency=1.5670061 ms   multi-op avg latency=0.0 ms
        [rg2-rn2] RUNNING
                 single-op avg latency=8.637241 ms   multi-op avg latency=0.0 ms
        [rg3-rn2] RUNNING
                 single-op avg latency=1.370075 ms   multi-op avg latency=0.0 ms
      sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
        [rg1-rn3] RUNNING
                 single-op avg latency=1.4707285 ms   multi-op avg latency=0.0 ms
        [rg2-rn3] RUNNING
                 single-op avg latency=1.5334034 ms   multi-op avg latency=0.0 ms
        [rg3-rn3] RUNNING
                 single-op avg latency=9.05199 ms   multi-op avg latency=0.0 ms
      shard=[rg1] num partitions=30
        [rg1-rn1] sn=sn1
        [rg1-rn2] sn=sn2
        [rg1-rn3] sn=sn3
      shard=[rg2] num partitions=30
        [rg2-rn1] sn=sn1
        [rg2-rn2] sn=sn2
        [rg2-rn3] sn=sn3
      shard=[rg3] num partitions=30
        [rg3-rn1] sn=sn1
        [rg3-rn2] sn=sn2
        [rg3-rn3] sn=sn3
    Why are empty files not deleted by the cleaner? Why are empty log files protected by replicas if all the replicas seem to be aligned with the master?
    java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
    Pinging components of store MMS-KVstore based upon topology sequence #106
    Time: 2015-02-03 13:44:57 UTC
    MMS-KVstore comprises 90 partitions and 3 Storage Nodes
    Storage Node [sn1] on 192.168.144.11:5000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
            Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
            Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
    Storage Node [sn2] on 192.168.144.12:6000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
            Rep Node [rg2-rn2]      Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
            Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
    Storage Node [sn3] on 192.168.144.35:7000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
            Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
            Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

    Solved by setting an undocumented parameter, "je.rep.minRetainedVLSNs".
    The solution is described in the NoSQL forum thread: Store cleaning policy
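    For anyone landing here, a parameter like that can be set in a je.properties file (or via EnvironmentConfig.setConfigParam). The value below is an illustrative assumption only — since the knob is undocumented, test it carefully before relying on it:

    ```properties
    # Undocumented JE HA knob named in the post above; the value is an assumption
    je.rep.minRetainedVLSNs=100000
    ```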

  • Property auth-protocol does not accept "ssl-simple" as a valid value

    hi...
    I use DSEE 7.0 and I have configured replication between two instances...
    By default the auth-protocol is "clear", but now I want to change it to "ssl-simple".
    To do that, I execute the following command:
         # /opt/dsee7/bin/dsconf set-repl-agmt-prop dc=maximatt,dc=org:389 auth-protocol:ssl-simple
         Enter the password for "cn=Directory Manager": *****
         The value of the property "auth-protocol" is "clear".
         "ssl-simple" is not an allowed value for "auth-protocol".
         The operation "set-repl-agmt-prop" could not be performed on "localhost:389".
    But if I check the valid values for auth-protocol with the following command, I get:
         # /opt/dsee7/bin/dsconf help-properties | grep auth-protocol
    RAG auth-protocol rw clear|ssl-simple|ssl-client Authentication protocol used for the bind to the consumer (Default: CLEAR)
    So... I'm somewhat confused....
    - Is "ssl-simple" a valid value? or
    - Is there a bug in dsconf? or
    - Can I change the auth-protocol after the replication agreement is created, or must I set it when I create it?
    thanks in advance!!!

    hi...
    finally I was able to set up replication over SSL... to do that I executed the following commands on both servers (I show only those executed on one of them - maximatt -):
    1) create a replication agreement
    # /opt/dsee7/bin/dsconf create-repl-agmt -A "ssl-simple" dc=maximatt,dc=org relay.maximatt.org:636
    2) set the replication password
    # echo ***** > /home/mdsee7/dsee70-maximatt/rep.pwd
    # chown mdsee7:mdsee7 /home/mdsee7/dsee70-maximatt/rep.pwd
    # /opt/dsee7/bin/dsconf set-repl-agmt-prop -h maximatt.org -p 389 dc=maximatt,dc=org relay.maximatt.org:636 auth-pwd-file:/home/mdsee7/dsee70-maximatt/rep.pwd
    3) restart the instance
    # /opt/dsee7/bin/dsadm restart /home/mdsee7/dsee70-maximatt
    4) initialize the suffixes
    # /opt/dsee7/bin/dsconf init-repl-dest dc=maximatt,dc=org relay.maximatt.org:636
    5) delete the non-SSL replication agreements
    # /opt/dsee7/bin/dsconf delete-repl-agmt dc=maximatt,dc=org relay.maximatt.org:389
    so.. we can't modify a non-SSL replication agreement into an SSL one... it is necessary to create a new replication agreement...
    thanks for your help!!
    Salu2!

  • Moving Hyper-V VM to another disk

    Hi all, I have read several forums about moving Hyper-V VMs to another disk, but I still don't understand the replication side. I have two physical servers running Hyper-V, HA between each other, with all VMs replicating. I have installed
    a new disk on both physical machines and need to move all the VMs to the new disk.
    Please let me know if my steps are correct.
    In this scenario, I have 2 hour downtime so I could power off all VMs:
    1) Pause all replication for all VMs. 
    2) Export the replicated powered-off VMs on the inactive server.
    3) Copy it to the new disk.
    4) Import the VMs from the new location.
    5) Power off the VMs on the active server
    6) Export the VMs.
    7) Copy it to the new disk location
    8) Import the VMs from the new location
    9) Power up the VMs and start the replication
    10) Delete the old vhd on the old location
    Qns 1: Will the replication know the new location on the other server, or as long as the VMs on both servers are in the exact same location will it automatically replicate over?
    In another scenario let's say I don't have any downtime and just for my knowledge:
    1) Pause all replication for all VMs. 
    2) Export the replicated powered-off VMs on the inactive server.
    3) Copy it to the new disk.
    4) Import the VMs from the new location.
    5) Failover the VMs to the inactive server
    6) Power off the VMs on the other server
    7) Export the VMs.
    8) Copy it to the new disk location
    9) Import the VMs from the new location
    10) Power up the VMs
    11) Failover the VMs
    12) Delete the old vhd files at the old location
    Qns 2: Will the failover still work after I have changed the location on the other server?

    1) What Operating System are you running on the HyperV Server?
    Win 2012
    2) You are talking about HA, but I would assume that you don't have a cluster here. Right?
    Yes, no cluster here. HA means that if one of my Hyper-V hosts goes down, the VMs on the other Hyper-V host take over
    3) You are talking about Replication, but I would assume that you don't mean HyperV Replica. Right?
    But if you have Hyper-V Replica running, you don't have to export / delete / copy / import / etc. your VMs. This is all done by Hyper-V Replica...
    I don't understand this part. You mean Hyper-V Replica could help me move the VHDs to another disk?

  • Rollback in a trigger

    Using a trigger on update.
    In the code for the trigger I want to do the following:
    Update a table in another database on the same server using a database link.
    Update a table in another database on a remote server using a database link.
    Delete the row that is currently being updated.
    If any errors occur, I have:
    EXCEPTION
         WHEN OTHERS
         THEN ROLLBACK
    The idea is that if any of the three actions fails, an exception will occur and the previous actions will be rolled back.
    If I take down the remote server, I would expect the error handling to come into play. It does not: the changes on the local database still occur. Am I doing something wrong, or is my logic flawed?

    Thanks for the help Kamal.
    I'm not really familiar with autonomous transactions, but you have to define that in the trigger, right?
    I'm including the code below. If I'm not using an autonomous transaction, is the ROLLBACK in the error handler causing my problem? Should I just remove the whole error-handling portion?
    CREATE OR REPLACE TRIGGER F_SW.TR_CENTERA_REPL
    AFTER INSERT
    ON F_SW.REPLICATION_VALUES
    DECLARE
      rowRV f_sw.replication_values%ROWTYPE;
      CURSOR c1 IS
        SELECT f_docnumber, fnp_clipid, fnp_archive
        FROM f_sw.replication_values;
    BEGIN
      OPEN c1;
      LOOP
        FETCH c1 INTO rowRV;
        EXIT WHEN c1%NOTFOUND;
        -- first, update the remote DOCTABA table
        UPDATE DOCTABA@IDBDEV2
        SET A60 = rowRV.fnp_archive, A61 = rowRV.fnp_clipid
        WHERE F_DOCNUMBER = rowRV.f_docnumber;
        -- second, update the local FOCUS FIIV table
        UPDATE FIIV_INDEX_VAL@FIT1
        SET CENTERA_ARCH_D = TO_DATE('1/1/1970', 'mm/dd/yyyy') + rowRV.fnp_archive,
            CENTERA_ADDR_X = rowRV.fnp_clipid
        WHERE DOC_ID_N = rowRV.f_docnumber;
        -- finally, delete this row from replication values
        DELETE FROM REPLICATION_VALUES
        WHERE F_DOCNUMBER = rowRV.f_docnumber;
      END LOOP;
      -- clean up
      CLOSE c1;
    EXCEPTION
      WHEN OTHERS
      THEN ROLLBACK;
    END;
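    For reference, a minimal sketch of how an autonomous transaction is declared in a trigger (the table and trigger names here are hypothetical). The pragma isolates the trigger's COMMIT/ROLLBACK from the firing transaction — whether that isolation is actually what you want here is a separate design question:

    ```sql
    CREATE OR REPLACE TRIGGER tr_example
    AFTER INSERT ON some_table
    DECLARE
      PRAGMA AUTONOMOUS_TRANSACTION;  -- this trigger body runs in its own transaction
    BEGIN
      INSERT INTO audit_log (msg) VALUES ('row inserted');
      COMMIT;  -- legal inside a trigger only because of the pragma
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;  -- rolls back only the autonomous transaction's own work
    END;
    /
    ```

    Note that without the pragma, a ROLLBACK in a trigger raises an error, and the trigger cannot undo the triggering statement this way.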

  • How to stop BDOC?

    Hi Gurus,
    A business partner with the consumer role is created in CRM; after BP creation a BDoc is generated, which needs to be stopped.
    Please let me know how I can stop this BDoc generation.
    Vinay

    Hi Vinay,
    You can convert BUPA_MAIN replication object from simple bulk into simple intelligent.
    In this way you will be able to filter the partners that are going to be replicated from CRM into R/3.
    The steps are basically:
    1 - If you already have some subscription in SMOEAC for BUPA_MAIN replication object --> Delete all subscriptions from all clients. ( read note 650569 to help you on this task).
    2 - Delete publication that uses BUPA_MAIN replication object (SMOEAC transaction)
    3 - Eliminate replication object BUPA_MAIN (SMOEAC transaction)
    4 - Recreate replication object BUPA_MAIN but as simple intelligent. (SMOEAC transaction)
    Set all the possible fields to filter.
    5 - Recreate the publication for replication object BUPA_MAIN (SMOEAC transaction)
    Create an "All BPs except consumers" publication to be used instead of the existing one, which is probably "All Business Partners".
    Set the filter to exclude consumers: IS_CONSUMER NE X.
    6 - Recreate the subscription using the filter above. (SMOEAC transaction)
    Kind regards,
    Susana Messias

  • Error replicating materials from R/3 to SRM

    I had to delete all materials from the SRM master record in order to replicate the MATERIAL object again.
    For my country there is a table, CRMM_PR_TAX, containing tax data that comes with the MATERIAL object during replication. After the deletion I started the new replication and an error occurred:
    Data cannot be maintained for set type CRMM_PR_TAX
    Diagnosis
    The data for a set can be maintained only if the product is assigned to a category to which the set type is assigned
    Procedure
    Check whether the product is assigned to a category that allows the set
    Please, does someone know how I can check this and configure it again?
    Thanks
    Nilson

    Hi
    Please go through the links below:
    Note 1039863 - COD Performance : Problem with table CRMM_PR_TAX
    Set_types - Replication from Backend to SRM
    /people/marcin.gajewski/blog/2007/02/05/how-to-replicate-material-master-from-r3-to-srm
    Set type 380BDF7B502D63F7E10000009B38FA0B does not exist
    Note 381051 - Leasing: Category not found financing
    Re: Mandatory set types
    importance of set type in comm_hierarchy transaction
    Hope this will definitely help. Do let me know.
    Regards
    - Atul

  • Removing .db files

    Hi,
    We want to write a script that removes all the .db files whenever the user wishes. We noticed that besides the .db files that contain the plain stored data, there are some files with the same .db extension that are internal files. I don't know what those files contain, but I'm afraid the metadata in them goes beyond the content of the DB, and removing them could damage the system; the user just wants to remove the data stored in the database, not mess up the system.
    The problem is that we haven't found any way to distinguish those files from the DB content files, as both have the same extension.
    Any idea?
    Edited by: 975807 on Dec 17, 2012 12:06 AM

    I just want to make sure: can I count on this format? I mean, isn't it possible that internal files with a prefix other than __db will exist?
    Your original question was about internal database files. Replication is the only part of Berkeley DB that uses internal databases. If you exclude files matching __db*.db, you will not delete our internal database files.
    If you are using replication and you are planning to delete all user databases, are you planning to do so on all sites or just an individual client? Are you planning to continue replication after deleting all user databases? Why do you want to use the file system to do the deletes instead of using the Berkeley DB remove or dbremove call to remove each database?
    I am less familiar with all the other internal files for all the other parts of Berkeley DB. I believe that we use the __db prefix for all of them, but I am not 100% sure. You may want to consider posting this more general question to the Berkeley DB forum:
    Berkeley DB
    We can make no guarantees about internal file names in future releases.
    And one more question: assuming we erase only files that are not internal files, is it acceptable to remove just some of the .db files, or might there be dependencies between files such that erasing a few of them damages the others' functionality?
    If you are using Berkeley DB Replication, I would not recommend deleting a subset of your user databases via the file system. This will compromise the consistency of your replicated sites and you will have to reinitialize them from scratch. While using replication, if you want to remove individual databases you should use the Berkeley DB remove or dbremove calls on your master so that these changes can be properly replicated to clients.
    I am less certain of the answer if you are not using replication. I believe that there are some cases where databases do have consistency relationships with each other. For example, the Berkeley DB secondary and foreign index features require consistency between more than one database. And, of course, an application could easily have its own consistency requirements between databases. This sounds risky to me, but you may want to get another opinion on the general Berkeley DB forum.
    Paula Bingham
    Oracle
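    The __db*.db filtering rule described in this reply can be sketched in a few lines (the prefix convention is taken from the reply above — verify it for your Berkeley DB release before actually deleting anything):

    ```python
    import fnmatch

    def user_db_files(names):
        """Return the .db files that are NOT Berkeley DB internal (__db*.db) files.

        The __db prefix convention comes from the reply above; confirm it for
        your BDB release before deleting files based on this filter.
        """
        return [n for n in names
                if n.endswith(".db") and not fnmatch.fnmatch(n, "__db*.db")]

    # Example with hypothetical file names:
    listing = ["accounts.db", "orders.db", "__db.rep.system.db", "__db.001", "log.0000000001"]
    print(user_db_files(listing))  # -> ['accounts.db', 'orders.db']
    ```

    As noted above, prefer the Berkeley DB remove/dbremove calls over file-system deletion when replication is in use.
    
    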

  • ERROR 8261

    Dear All,
    I am using DS 6.3, and in recent days I have been getting the following error in the error log:
    [04/Oct/2012:15:02:16 +1000] - ERROR<8261> - Replication - conn=-1 op=-1 msgId=-1 - Internal error Failed to retrieve change while trimming changelog, DB error -30989 - DB_NOTFOUND: No matching key/data pair found
    [04/Oct/2012:15:02:17 +1000] - ERROR<8261> - Replication - conn=-1 op=-1 msgId=-1 - Internal error Failed to retrieve change while trimming changelog, DB error -30989 - DB_NOTFOUND: No matching key/data pair found
    [04/Oct/2012:15:02:18 +1000] - ERROR<8261> - Replication - conn=-1 op=-1 msgId=-1 - Internal error Failed to retrieve change while trimming changelog, DB error -30989 - DB_NOTFOUND: No matching key/data pair found
    Could someone tell me how to stop this error from appearing in the error log? thanks!
    Regards,
    Karthik

    It looks like your replication changelog database has been damaged. You may need to recreate it and possibly reinitialize replication. You may also want to investigate whether this system is trying to replay changes to a remote replica that have been trimmed from the replication changelog, though the error message is saying something slightly different from that.
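    One commonly cited way to recreate a damaged replication changelog in DS 6.x is to disable and re-enable replication on the suffix, then re-initialize the consumers. The exact options below are assumptions — verify them against `dsconf --help` before running anything in production:

    ```shell
    # Assumed dsconf invocations for DS 6.x -- verify the syntax before use
    dsconf disable-repl -p 389 dc=example,dc=com
    dsconf enable-repl -p 389 -d 1 master dc=example,dc=com
    # Then re-initialize each consumer from this master:
    dsconf init-repl-dest dc=example,dc=com consumer.example.com:389
    ```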

  • UserCertificate updates in DS7.0

    Hello there,
    I see different behavior in DS7.0 when we update the 'userCertificate' attribute through our Certificate Authority software. It removes the 'binary' string from the 'userCertificate' attribute while updating it. Please see below both the 5.2 and 7.0 behavior...
    If our CA software updates 5.2 LDAP directly, I see the following...
    5.2 Audit log:
    dn: uid= certTest,ou=customer,o=domain.com
    changetype: modify
    replace: userCertificate
    userCertificate;binary:: MIIEJDCCA42gAwIBAgIQBwKuvF5DDvvywxUuzAMFMDANBgkqhkiG9w0BAQU
    If our CA software updates 7.0 LDAP directly, I see the following... which has no 'binary' string attached.
    7.0 Audit log:
    dn: uid= certTest,ou=customer,o=domain.com
    changetype: modify
    replace: userCertificate
    userCertificate:: MIIEJDCCA42gAwIBAgIQBwKuvF5DDvvywxUuzAMFMDANBgkqhkiG9w0BAQU
    Can anyone tell me whether anything needs to be changed, since this breaks our app?
    Thanks

    That is for a search within DS7.0, but my problem is updating the userCertificate attribute in 5.2 from DS7. Anyway, I got the response below from a Sun engineer...
    In 5.2, userCertificate and userCertificate;binary refer to TWO different values. What happens is :
    1) a MOD replace is performed on the DS7.0 instance on userCertificate;binary
    2) the ;binary subtype is not put in the [7.0] replication changelog so the DS5.2 instance receives a replicated MOD on userCertificate (and not on userCertificate;binary)
    3) Since these two refer to two different values, we have a problem afterwards
    A fix was introduced in 5.2 2004 Q2 to change the default behaviour (see #4819710 in the release notes http://docs.sun.com/source/817-5216/index.html#wp52562).
    A new config parameter, nsslapd-binary-mode, has been introduced. If set to auto, the behaviour changes so that userCertificate and userCertificate;binary are treated as the same value by DS5.2 (the default value of the parameter is compat51, which keeps the old behaviour)
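    A minimal LDIF sketch of flipping that 5.2 parameter with ldapmodify (that the attribute lives directly on cn=config is an assumption here — check the 5.2 release notes linked above for the exact entry):

    ```
    dn: cn=config
    changetype: modify
    replace: nsslapd-binary-mode
    nsslapd-binary-mode: auto
    ```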

  • Replication - Max Changelog Records and Age

    hi,
    could anyone explain, when setting the Max Changelog Records and Changelog Age parameters for the changelogs, what exactly happens when these thresholds are reached?
    Does the log get trimmed/deleted when its max entries or age is reached, and a new one established?
    Are there any guidelines on how to set these given the number of changes happening to the directory?
    Does a large number of allowed changelog entries have an impact on performance, depending on how it scans the changelog file to determine updates to replicate?
    Thanks

    At first sight there is no performance impact directly related to the changelog age and size limitation parameters.
    The problem seems to appear when one of these limits is reached. The changelog DB starts to be trimmed (you can see it by enabling the "replication" log level) and the slapd process can consume up to 25% CPU (on a 440 with 4 CPUs), with intensive disk usage.
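    For completeness, the two trim limits being discussed map to suffix properties in DSEE 6's dsconf. The property names below are assumptions from memory — confirm them on your version before use:

    ```shell
    # Confirm the exact names first with: dsconf help-properties | grep repl-cl
    dsconf set-suffix-prop dc=example,dc=com repl-cl-max-entry-count:500000
    dsconf set-suffix-prop dc=example,dc=com repl-cl-max-age:7d
    ```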

  • How to completely delete a DFSR replication group in Windows 2008 R2?

    I just deleted a DFSR replication group from the DFS Management console. Now I need to recreate it, but it says the replication group already exists.
    So there must still be a trace of that replication group. How can I delete it completely (while keeping the namespace)?

    Like John said, it's possible the deletion hadn't replicated yet.
    You could also go into ADSI Edit and see if there are still objects under CN=Topology,CN=(Your Namespace),CN=DFSR-GlobalSettings,CN=System,DC=domain,DC=com.

  • Deleting cn=changelog suffix

    Does anybody know of any way to delete the cn=changelog suffix from the command line? dsconf delete-suffix does not work. I am using DSEE 6.3.1.
    Thanks!
    Enrique.
    Edited by: enriquec on Jul 22, 2009 9:14 AM

    You can turn off the retro changelog using dsconf, then stop the instance and delete the backend if you're sure you don't want it anymore.
    $ dsconf set-server-prop retro-cl-enabled:off
    However, the backend directory will get recreated at the next restart. I've logged a bug about this. Let's see what engineering has to say about it.
    6863425: cn=changelog backend gets recreated even after RCL plugin is turned off
