Cache Insert for an already existing key is firing two update events

Using an INamedCache, I perform an insert to update the values of an item that is already in the cache. I have listeners on the cache; in particular, I have a DelegatingCacheListener that listens on EntryUpdated.
On doing this insert once (and only once, I think!) my update event handler gets invoked twice. I have put breakpoints in all conceivable places that might raise this event, perform an insert, or update this key, but execution only ever hits my insertion point once.
I have looked at the args of the event and seen that on the first invocation my old value is A and my new value is B. For the second invocation my old value is B and my new value is C. The second invocation's new-value object differs only by a single member, a time value that I set, and the difference is only a second. This kind of implies that somehow my own application/code is inserting twice into the cache for the same key.
Has anyone seen this symptom before?
Is there some way I can see what is inserting into the cache?
Let me know if I can provide more information to help.
Edited by: Kunal on Feb 9, 2011 8:08 PM
Changed the title grammar

Hi Kunal -
First, just to narrow this down, it appears that your code is running in .NET. Is there other logic deployed as part of this application, e.g. application logic on the server side (the "back end") that might be doing something? This would include anything from the incubator (e.g. push replication).
Second, what does your listener do when it receives the event?
Third, are there other listeners for the event that may be doing something when the first change appears?
The event originates from within the cluster, from the server that "owns" the specific piece of data that you are inserting, so unless the value is changing there twice, you should not get two events. I would start by tracking down all of the places that change the time value, since that is the member you can see changing.
Peace,
Cameron Purdy | Oracle Coherence
http://coherence.oracle.com/
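
As a concrete way to watch what the cluster is emitting, the following tracing listener is a minimal sketch against the Coherence Java API (the cache name "my-cache" is a placeholder; a .NET DelegatingCacheListener subscription would be analogous). It timestamps every event with its old and new values so the two updates can be correlated with your own put/insert call sites:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;

public class UpdateTracer {
    public static void main(String[] args) throws Exception {
        NamedCache cache = CacheFactory.getCache("my-cache"); // placeholder cache name
        cache.addMapListener(new MapListener() {
            public void entryInserted(MapEvent e) { log("INSERTED", e); }
            public void entryUpdated(MapEvent e)  { log("UPDATED ", e); }
            public void entryDeleted(MapEvent e)  { log("DELETED ", e); }
        });
        System.in.read(); // keep the process alive while events arrive
    }

    private static void log(String type, MapEvent e) {
        // Events are dispatched asynchronously, so a stack trace here would
        // show the event dispatcher, not the inserter; the timestamp is what
        // lets you correlate each event with your own put() call sites.
        System.out.println(System.currentTimeMillis() + " " + type
            + " key=" + e.getKey()
            + " old=" + e.getOldValue()
            + " new=" + e.getNewValue());
    }
}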

Similar Messages

  • Mess SR 053 - Creation not possible (Mapping for product already exist..

    There is an error occurring when the CIF is executed; I can see this message in transaction SMQ1: "SR 053 - Creation not possible (Mapping for <product> already exists <product>)". The product really does exist in APO, but I don't know why this error occurs, because in this case the material should simply be updated. Does anyone know why this error occurs, and how can I fix it?
      When I try to execute only this material in CFM2, no errors occur.
      Regards,
    Edson Suzuki

    We faced a similar problem. The reason was that the external material number was different in both cases:
        i.e. the already existing record has the external material number padded with '0000', while the incoming one doesn't have the leading '0000'.
    If this suggestion helps you, please reward with points.
    Shibu

  • Generate a Number for an already existing table with records

    Hi All,
    Is it possible to generate numbers for a column in a table? The table already has 50,000 records. I have an empty column in which I want to generate sequential numbers for reference purposes.
    Can it be done?
    I was thinking of MERGE/ROWNUM but didn't get any possible solution out of it. Any suggestions?
    Thanks
    Ananda

    >> I have a empty column in which I want to generate number sequentially for reference purpose <<
    The following table content:
    Oracle
    DB2
    MSSQL
    Now, you have to put a kind of ID, but which row will get the 1st? Which row will get the 2nd? etc.
    If you don't care, then a sequence, as suggested, is your friend. Then, you could also use the sequence for further inserts.
    Nicolas.
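
    A hedged sketch of the sequence-based fill Nicolas describes, via JDBC (the table and column names my_table/ref_no, the sequence ref_seq, and the connection details are all invented for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class FillReferenceNumbers {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string and credentials.
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger");
            con.setAutoCommit(false);
            Statement st = con.createStatement();
            st.executeUpdate("CREATE SEQUENCE ref_seq"); // starts at 1 by default
            // Each row gets the next value; row order is arbitrary,
            // exactly as Nicolas warns above.
            st.executeUpdate("UPDATE my_table SET ref_no = ref_seq.NEXTVAL");
            con.commit();
            st.close();
            con.close();
        }
    }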

  • Note 947091 - Persistance Exception on Client-"Entity key already exists."

    Persistance Exception on Client-"Entity key already exists."
    I am using SP18.
    Tell me what the correction steps are for this PROGRAM ERROR.
    Did SP19 already solve this problem?
    Message was edited by:
            yzme yzme

    Step 1) When I sync the application, I get this:
    Exception while proccessing method SMARTSYNC : java.lang.IllegalStateException: No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E : No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E 
    • Exception while proccessing method SMARTSYNC : java.lang.IllegalStateException: No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E : No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E 
    • Exception while proccessing method SMARTSYNC : java.lang.IllegalStateException: No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E : No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E 
    Step 2) I modify the values a first time and sync:
    Persistance Exception on Client-"Entity key already exists."
    Step 3) After that, I modify the same row with different values:
    Creation on Field cZFLIGHT2_010_PAYMENTSUM with value 1600 is not allowed because SyncBo of Row 0001213668 is CHANGED_GLOBAL_INSYNC and CreateInputQualifyType is FORBIDDEN
    Step 4) Checking merep_mon, there are consecutive rows with action "M" and no "D".
    Checking the worklist with seq no 8 gives:
    msg: No download data from R/3 found in downloader
    Step 5) Try to sync again:
    • Synchronization started 
    • Connection set up (without proxy) to: http://emi-sap:50100/meSync/servlet/meSync?~sysid=N01& 
    • Successfully connected with server. 
    • Processing of inbound data began. 
    • Exception while proccessing method SMARTSYNC : java.lang.IllegalStateException: No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E : No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E 
    • Exception while proccessing method SMARTSYNC : java.lang.IllegalStateException: No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E : No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E 
    • Exception while proccessing method SMARTSYNC : java.lang.IllegalStateException: No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E : No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E 
    • Exception while proccessing method SMARTSYNC : java.lang.IllegalStateException: No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E : No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E 
    • Exception while proccessing method SMARTSYNC : java.lang.IllegalStateException: No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E : No Context available for ConversationId 47AF7ABD0FEF894BBF49E61E6732223E 
    • Exception while proccessing method SMARTSYNC : java.lang.RuntimeException: Inbound processing of container with index 7 failed: Cannot insert as entity already exists for IClassDescriptor/Key: sZFLIGHT2_010/1213683 : r0001213682 : Inbound processing of container with index 7 failed: Cannot insert as entity already exists for IClassDescriptor/Key: sZFLIGHT2_010/1213683 : r0001213682
    Can anyone please tell me whether my code has a problem? Do I just have to modify the row values and upload them, or do I need to do anything in the uploader or to the SyncBoDelta?
    Here is my code for modifying values.
    I have changed the value of "STREET" to "ANG MO KIO":
    http://i192.photobucket.com/albums/z231/yzme/d1.gif
    but the data on the server is still "HEAVEN ST":
    http://i192.photobucket.com/albums/z231/yzme/d1.gif
    public String modifyRecordAmt(String eventName, boolean didNavigate) {
        String syncBoName = "ZFLIGHT2";
        String syncKey = "0001213682";
        tableViewBean.setString(syncBoName + " " + syncKey);
        System.out.println("SyncBoName: " + syncBoName + " syncKey: " + syncKey);
        tcp = TableContentProvider.instance(syncBoName);
        tcp.modifyTable(syncBoName, syncKey);
        return JSP_DETAIL_SYNCBOINSTANCE;
    }

    public void modifyRecordAmt(String syncBoName, String syncKey) {
        System.out.println("modifyRecordAmt");
        SyncBoDescriptor sbd = descriptorFacade.getSyncBoDescriptor(syncBoName);
        SyncBo sb = null;
        try {
            sb = dataFacade.getSyncBo(sbd, syncKey);
        } catch (PersistenceException pex) {
            System.out.println("Exception in modifyRecordLoc:" + pex.getMessage());
        }
        SmartSyncTransactionManager transactionManager;
        try {
            transactionManager = dataFacade.getSmartSyncTransactionManager();
            if (!transactionManager.isTransactionStarted()) {
                transactionManager.beginTransaction();
                // Change only the payment sum on the "010" row.
                boolean b8 = setHeaderFieldValue2(sb, "PAYMENTSUM", "1600");
                System.out.println("b8=" + b8);
                transactionManager.commit();
            }
        } catch (Exception e) {
            System.out.println("Exception in modifyRecordAmt2:" + e.getMessage());
        }
    }

    public boolean setHeaderFieldValue2(SyncBo sb, String headerFieldName, Object value) {
        SyncBoDescriptor sbd = sb.getSyncBoDescriptor();
        RowDescriptor trd = sbd.getRowDescriptor("010");
        FieldDescriptor fd = trd.getFieldDescriptor(headerFieldName);
        if (fd == null) {
            return false;
        }
        BasisFieldType bft = fd.getFieldType();
        // getItemInstances is a local helper that returns the "010" rows.
        Row[] header = getItemInstances(sb, "010");
        if (header == null) {
            System.out.println("header rows are null");
            return false;
        }
        try {
            if (bft == BasisFieldType.N) { // numeric field
                NumericField nf = header[0].getNumericField(fd);
                if (nf == null) {
                    return false;
                }
                nf.setValue(new BigInteger(value.toString()));
                return true;
            }
            if (bft == BasisFieldType.C) { // character field
                CharacterField cf = header[0].getCharacterField(fd);
                if (cf == null) {
                    return false;
                }
                cf.setValue(value.toString());
                return true;
            }
            if (bft == BasisFieldType.P) { // decimal field
                DecimalField df = header[0].getDecimalField(fd);
                if (df == null) {
                    return false;
                }
                df.setValue(new BigDecimal(value.toString()));
                return true;
            }
            if (bft == BasisFieldType.D) { // date field (time fields are similar)
                DateField df = header[0].getDateField(fd);
                if (df == null) {
                    return false;
                }
                if (value.toString().equals("0")) {
                    df.setValue(Date.valueOf("0000-00-00"));
                } else if (!value.toString().equals("")) {
                    df.setValue(Date.valueOf(value.toString()));
                } else {
                    Calendar cal = Calendar.getInstance();
                    df.setValue(new java.sql.Date(cal.getTime().getTime()));
                }
                return true;
            }
        } catch (SmartSyncException ex) {
            System.out.println(ex.getMessage());
        } catch (PersistenceException e) {
            System.out.println(e.getMessage());
        }
        return false;
    }
    How can I check in the table whether the data was uploaded to the middleware RDB?
    Anyone?

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backup, and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
    Obviously each JE environment uses its own cache. Within an environment with a dynamic number of active projects this causes a problem, because the optimal cache configuration within a given memory frame depends on the JE environments in use, BUT there is no way to define a global JE cache for ALL JE environments.
    Our "plan of attack" is to implement a Global-Cache-Manager that dynamically configures the cache sizes of all active BDB environments depending on the given global cache size.
    Like Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
    If cache memory is getting tight, loading another BDB environment means decreasing the cache sizes of the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine whether there are any BDB environments that do not use their cache, one could query each cache's utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
    Are there any comments on this plan? Is there perhaps a better solution or even an implementation?
    Do you think a global cache manager is something worth back-donating?
    Related Postings: Multiple envs in one process?
    Stefan Walgenbach
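
    A minimal sketch of the rebalancing step described above, assuming all open environments are tracked in a single list (the GlobalCacheManager class and the proportional-share policy are illustrative assumptions, not part of the JE API):

    import java.util.List;

    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentMutableConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.StatsConfig;

    // Illustrative only: JE has no global cache, so this class re-divides a
    // fixed byte budget across all open environments, proportional to each
    // environment's current cache usage.
    public class GlobalCacheManager {

        private final long globalBudgetBytes;

        public GlobalCacheManager(long globalBudgetBytes) {
            this.globalBudgetBytes = globalBudgetBytes;
        }

        public void rebalance(List envs) throws DatabaseException {
            long[] used = new long[envs.size()];
            long totalUsed = 0;
            for (int i = 0; i < envs.size(); i++) {
                Environment env = (Environment) envs.get(i);
                EnvironmentStats stats = env.getStats(new StatsConfig());
                used[i] = stats.getCacheTotalBytes();
                totalUsed += used[i];
            }
            for (int i = 0; i < envs.size(); i++) {
                Environment env = (Environment) envs.get(i);
                long share = (totalUsed == 0)
                    ? globalBudgetBytes / envs.size()
                    : (long) (globalBudgetBytes * ((double) used[i] / totalUsed));
                EnvironmentMutableConfig mc = env.getMutableConfig();
                mc.setCacheSize(share); // takes effect as the evictor runs
                env.setMutableConfig(mc);
            }
        }
    }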

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*
     * See the file LICENSE for redistribution information.
     *
     * Copyright (c) 2005-2006 Oracle Corporation.  All rights reserved.
     *
     * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
     */
    package com.sleepycat.je.util;
    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;
    /**
     * Estimating JE in-memory sizes as a function of key and data size is not
     * straightforward for two reasons. There is some fixed overhead for each
     * btree internal node, so tree fanout and degree of node sparseness impacts
     * memory consumption. In addition, JE compresses some of the internal nodes
     * where possible, but compression depends on on-disk layouts.
     *
     * DbCacheSize is an aid for estimating cache sizes. To get an estimate of
     * the in-memory footprint for a given database, specify the number of
     * records and record characteristics and DbCacheSize will return a minimum
     * and maximum estimate of the cache size required for holding the database
     * in memory. If the user specifies the record's data size, the utility will
     * return both values for holding just the internal nodes of the btree, and
     * for holding the entire database in cache.
     *
     * Note that "cache size" is a percentage more than "btree size", to cover
     * general environment resources like log buffers. Each invocation of the
     * utility returns an estimate for a single database in an environment.  For
     * an environment with multiple databases, run the utility for each
     * database, add up the btree sizes, and then add 10 percent.
     *
     * Note that the utility does not yet cover duplicate records and the API is
     * subject to change release to release.
     *
     * The only required parameters are the number of records and key size.
     * Data size, non-tree cache overhead, btree fanout, and other parameters
     * can also be provided. For example:
     *
     * $ java DbCacheSize -records 554719 -key 16 -data 100
     *
     * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
     * overhead=10%
     *
     *    Cache Size      Btree Size  Description
     *    30,547,440      27,492,696  Minimum, internal nodes only
     *    41,460,720      37,314,648  Maximum, internal nodes only
     *   114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
     *   125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
     *
     * Btree levels: 3
     *
     * This says that the minimum cache size to hold only the internal nodes of
     * the btree in cache is approximately 30MB. The maximum size to hold the
     * entire database in cache, both internal nodes and data records, is 125MB.
     */
    public class DbCacheSize {
        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();
        private static final String HEADER =
            "    Cache Size      Btree Size  Description";
        //   12345678901234  12345678901234
        //                 12
        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;
        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;
        public DbCacheSize (long records,
                   int keySize,
                   int dataSize,
                   int nodeMax,
                   int density,
                   long overhead) {
         this.records = records;
         this.keySize = keySize;
         this.dataSize = dataSize;
         this.nodeMax = nodeMax;
         this.density = density;
         this.overhead = overhead;
        public long getMinCacheSizeInternalNodesOnly() {
         return minInCacheSize;
        public long getMaxCacheSizeInternalNodesOnly() {
         return maxInCacheSize;
        public long getMinBtreeSizeInternalNodesOnly() {
         return minInBtreeSize;
        public long getMaxBtreeSizeInternalNodesOnly() {
         return maxInBtreeSize;
        public long getMinCacheSizeWithData() {
         return minInCacheSizeWithData;
        public long getMaxCacheSizeWithData() {
         return maxInCacheSizeWithData;
        public long getMinBtreeSizeWithData() {
         return minInBtreeSizeWithData;
        public long getMaxBtreeSizeWithData() {
         return maxInBtreeSizeWithData;
        public int getNLevels() {
         return nLevels;
        public static void main(String[] args) {
            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;
                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
    String val = null;
    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
    i += 1;
    val = args[i];
    if (name.equals("-records")) {
    if (val == null) {
    usage("No value after -records");
    try {
    records = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (records <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-key")) {
    if (val == null) {
    usage("No value after -key");
    try {
    keySize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (keySize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-data")) {
    if (val == null) {
    usage("No value after -data");
    try {
    dataSize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (dataSize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-nodemax")) {
    if (val == null) {
    usage("No value after -nodemax");
    try {
    nodeMax = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (nodeMax <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-density")) {
    if (val == null) {
    usage("No value after -density");
    try {
    density = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (density < 1 || density > 100) {
    usage(val + " is not betwen 1 and 100");
    } else if (name.equals("-overhead")) {
    if (val == null) {
    usage("No value after -overhead");
    try {
    overhead = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (overhead < 0) {
    usage(val + " is not a non-negative integer");
    } else if (name.equals("-measure")) {
    if (val == null) {
    usage("No value after -measure");
    measureDir = new File(val);
    } else if (name.equals("-measurerandom")) {
    measureRandom = true;
    } else {
    usage("Unknown arg: " + name);
    if (records == 0) {
    usage("-records not specified");
    if (keySize == 0) {
    usage("-key not specified");
         DbCacheSize dbCacheSize = new DbCacheSize
              (records, keySize, dataSize, nodeMax, density, overhead);
         dbCacheSize.caclulateCacheSizes();
         dbCacheSize.printCacheSizes(System.out);
    if (measureDir != null) {
    measure(System.out, measureDir, records, keySize, dataSize,
    nodeMax, measureRandom);
    } catch (Throwable e) {
    e.printStackTrace(System.out);
    private static void usage(String msg) {
    if (msg != null) {
    System.out.println(msg);
    System.out.println
    ("usage:" +
    "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
    "\n -records <count>" +
    "\n # Total records (key/data pairs); required" +
    "\n -key <bytes> " +
    "\n # Average key bytes per record; required" +
    "\n [-data <bytes>]" +
    "\n # Average data bytes per record; if omitted no leaf" +
    "\n # node sizes are included in the output" +
    "\n [-nodemax <entries>]" +
    "\n # Number of entries per Btree node; default: 128" +
    "\n [-density <percentage>]" +
    "\n # Percentage of node entries occupied; default: 80" +
    "\n [-overhead <bytes>]" +
    "\n # Overhead of non-Btree objects (log buffers, locks," +
    "\n # etc); default: 10% of total cache size" +
    "\n [-measure <environmentHomeDirectory>]" +
    "\n # An empty directory used to write a database to find" +
    "\n # the actual cache size; default: do not measure" +
    "\n [-measurerandom" +
    "\n # With -measure insert randomly generated keys;" +
    "\n # default: insert sequential keys");
    System.exit(2);
    private void caclulateCacheSizes() {
    int nodeAvg = (nodeMax * density) / 100;
    long nBinEntries = (records * nodeMax) / nodeAvg;
    long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
    long nInNodes = 0;
         long lnSize = 0;
    for (long n = nBinNodes; n > 0; n /= nodeMax) {
    nInNodes += n;
    nLevels += 1;
    minInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, true);
    maxInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, false);
         minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
         maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);
    if (dataSize > 0) {
    lnSize = records * calcLnSize(dataSize);
         maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
         maxInCacheSizeWithData = calculateOverhead(maxInBtreeSizeWithData,
                                  overhead);
         minInBtreeSizeWithData = minInBtreeSize + lnSize;
         minInCacheSizeWithData = calculateOverhead(minInBtreeSizeWithData,
                                  overhead);
    private void printCacheSizes(PrintStream out) {
    out.println("Inputs:" +
    " records=" + records +
    " keySize=" + keySize +
    " dataSize=" + dataSize +
    " nodeMax=" + nodeMax +
    " density=" + density + '%' +
    " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
    out.println();
    out.println(HEADER);
    out.println(line(minInBtreeSize, minInCacheSize,
                   "Minimum, internal nodes only"));
    out.println(line(maxInBtreeSize, maxInCacheSize,
                   "Maximum, internal nodes only"));
    if (dataSize > 0) {
    out.println(line(minInBtreeSizeWithData,
                   minInCacheSizeWithData,
                   "Minimum, internal nodes and leaf nodes"));
    out.println(line(maxInBtreeSizeWithData,
                   maxInCacheSizeWithData,
    "Maximum, internal nodes and leaf nodes"));
    } else {
    out.println("\nTo get leaf node sizing specify -data");
    out.println("\nBtree levels: " + nLevels);
    private int calcInSize(int nodeMax,
                   int nodeAvg,
                   int keySize,
                   boolean lsnCompression) {
    /* Fixed overhead */
    int size = MemoryBudget.IN_FIXED_OVERHEAD;
    /* Byte state array plus keys and nodes arrays */
    size += MemoryBudget.byteArraySize(nodeMax) +
    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));
    /* LSN array */
         if (lsnCompression) {
         size += MemoryBudget.byteArraySize(nodeMax * 2);
         } else {
         size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
    (nodeMax * MemoryBudget.LONG_OVERHEAD);
    /* Keys for populated entries plus the identifier key */
    size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);
    return size;
    private int calcLnSize(int dataSize) {
    return MemoryBudget.LN_OVERHEAD +
    MemoryBudget.byteArraySize(dataSize);
    private long calculateOverhead(long btreeSize, long overhead) {
    long cacheSize;
    if (overhead == 0) {
    cacheSize = (100 * btreeSize) / 90;
    } else {
    cacheSize = btreeSize + overhead;
         return cacheSize;
    private String line(long btreeSize,
                   long cacheSize,
                   String comment) {
    StringBuffer buf = new StringBuffer(100);
    column(buf, INT_FORMAT.format(cacheSize));
    column(buf, INT_FORMAT.format(btreeSize));
    column(buf, comment);
    return buf.toString();
    private void column(StringBuffer buf, String str) {
    int start = buf.length();
    while (buf.length() - start + str.length() < COLUMN_WIDTH) {
    buf.append(' ');
    buf.append(str);
    for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
    buf.append(' ');
    private static void measure(PrintStream out,
    File dir,
    long records,
    int keySize,
    int dataSize,
    int nodeMax,
    boolean randomKeys)
    throws DatabaseException {
    String[] fileNames = dir.list();
    if (fileNames != null && fileNames.length > 0) {
    usage("Directory is not empty: " + dir);
    Environment env = openEnvironment(dir, true);
    Database db = openDatabase(env, nodeMax, true);
    try {
    out.println("\nMeasuring with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    insertRecords(out, env, db, records, keySize, dataSize, randomKeys);
    printStats(out, env,
    "Stats for internal and leaf nodes (after insert)");
    db.close();
    env.close();
    env = openEnvironment(dir, false);
    db = openDatabase(env, nodeMax, false);
    out.println("\nPreloading with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    preloadRecords(out, db);
    printStats(out, env,
    "Stats for internal nodes only (after preload)");
    } finally {
    try {
    db.close();
    env.close();
    } catch (Exception e) {
    out.println("During close: " + e);
    private static Environment openEnvironment(File dir, boolean allowCreate)
    throws DatabaseException {
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(allowCreate);
    envConfig.setCachePercent(90);
    return new Environment(dir, envConfig);
    private static Database openDatabase(Environment env, int nodeMax,
    boolean allowCreate)
    throws DatabaseException {
    DatabaseConfig dbConfig = new DatabaseConfig();
    dbConfig.setAllowCreate(allowCreate);
    dbConfig.setNodeMaxEntries(nodeMax);
    return env.openDatabase(null, "foo", dbConfig);
    private static void insertRecords(PrintStream out,
    Environment env,
    Database db,
    long records,
    int keySize,
    int dataSize,
    boolean randomKeys)
    throws DatabaseException {
    DatabaseEntry key = new DatabaseEntry();
    DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
    BigInteger bigInt = BigInteger.ZERO;
    Random rnd = new Random(123);
    for (int i = 0; i < records; i += 1) {
    if (randomKeys) {
    byte[] a = new byte[keySize];
    rnd.nextBytes(a);
    key.setData(a);
    } else {
    bigInt = bigInt.add(BigInteger.ONE);
    byte[] a = bigInt.toByteArray();
    if (a.length < keySize) {
    byte[] a2 = new byte[keySize];
    System.arraycopy(a, 0, a2, a2.length - a.length, a.length);
    a = a2;
    } else if (a.length > keySize) {
    out.println("*** Key doesn't fit value=" + bigInt +
    " byte length=" + a.length);
    return;
    key.setData(a);
    OperationStatus status = db.putNoOverwrite(null, key, data);
    if (status == OperationStatus.KEYEXIST && randomKeys) {
    i -= 1;
    out.println("Random key already exists -- retrying");
    continue;
    if (status != OperationStatus.SUCCESS) {
    out.println("*** " + status);
    return;
    if (i % 10000 == 0) {
    EnvironmentStats stats = env.getStats(null);
    if (stats.getNNodesScanned() > 0) {
    out.println("*** Ran out of cache memory at record " + i +
    " -- try increasing the Java heap size ***");
    return;
    out.print(".");
    out.flush();
    private static void preloadRecords(final PrintStream out,
    final Database db)
    throws DatabaseException {
    Thread thread = new Thread() {
    public void run() {
    while (true) {
    try {
    out.print(".");
    out.flush();
    Thread.sleep(5 * 1000);
    } catch (InterruptedException e) {
    break;
    thread.start();
    db.preload(0);
    thread.interrupt();
    try {
    thread.join();
    } catch (InterruptedException e) {
    e.printStackTrace(out);
    private static void printStats(PrintStream out,
    Environment env,
    String msg)
    throws DatabaseException {
    out.println();
    out.println(msg + ':');
    EnvironmentStats stats = env.getStats(null);
    out.println("CacheSize=" +
    INT_FORMAT.format(stats.getCacheTotalBytes()) +
    " BtreeSize=" +
    INT_FORMAT.format(stats.getCacheDataBytes()));
    if (stats.getNNodesScanned() > 0) {
    out.println("*** All records did not fit in the cache ***");

  • Update Time where Date already exists

    Hi,
    When I try to update the time in a column where a date already exists, using the update query below, the time is not getting updated.
    UPDATE ABC SET ATTRIBUTE=TO_DATE('09/30/2011 12:00:00 AM','MM/DD/YYYY HH:MI:SS')
    Attribute = 9/30/2011 (as per the current record in the DB), and I want to add the time as well.
    So when I try updating the record, I don't get any errors, but the time does not get updated. Can anyone help me with this issue?
    Datatype of Attribute is Varchar2(150)
    Thanks

    Is there any way to update a record through the TO_CHAR function?
    SQL> @test
    SQL> create table abc (Attribute Varchar2(150));
    Table created.
    SQL> insert into abc values('09/30/2011');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from abc;
    ATTRIBUTE
    09/30/2011
    SQL> UPDATE ABC SET ATTRIBUTE=to_char(sysdate,'YYYY_MM_DD HH24:MI:SS');
    1 row updated.
    SQL> COMMIT;
    Commit complete.
    SQL> select * from abc;
    ATTRIBUTE
    2011_03_06 19:59:37
    SQL> drop table abc;
    Table dropped.
    SQL>
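
    Incidentally, if ATTRIBUTE were a real DATE column rather than VARCHAR2(150), the time portion could be set directly with no format-mask pitfalls. A hedged JDBC sketch (the abc/attribute names come from the thread; the connection string and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public class UpdateDateTime {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger");
            con.setAutoCommit(false);
            // An Oracle DATE holds both date and time; binding a Timestamp
            // sets them in one go.
            PreparedStatement ps = con.prepareStatement(
                "UPDATE abc SET attribute = ?");
            ps.setTimestamp(1, Timestamp.valueOf("2011-09-30 14:30:00"));
            ps.executeUpdate();
            con.commit();
            ps.close();
            con.close();
        }
    }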

  • Folio Upgrade: Error #3115: table 'folioFolderCache' already exists

    I just upgraded the Folio Builder to a recent version for InDesign CS5.
    It worked OK for a few hours after the initial install, until I closed InDesign. When I re-opened InDesign, I started getting this error message:
    An error has occurred while saving your changes to local cache [Error #3115: table 'folioFolderCache' already exists]
    I cannot use Folio Builder. Does anyone know how to get it working again?
    :-S
    Thanks in advance.
    B

    Hi,
    have a look at this thread: http://forums.adobe.com/message/4265514#4265514
    See if the suggested solution helps to resolve the problem.

  • I am trying to install iTunes 12.0 on my Windows 7 PC to sync my iPhone6. I am getting error message, "Item already exists" on installation of iTunes 12.0.  Apple has no fix. Help???

    I am trying to install iTunes 12.0 on my Windows 7 PC to sync my iPhone6. I am getting error message, "Item already exists" on installation of iTunes 12.0.  Apple has no fix. Help???

    Item or object?
    For "Object already exists" or "Access denied" errors when installing try opening Control Panel > Programs and Features > View installed updates then checking for Security Update for Microsoft Windows (KB2918614). Uninstall if you have it, then reboot and try installing again.
    For general advice see Troubleshooting issues with iTunes for Windows updates.
    The steps in the second box are a guide to removing everything related to iTunes and then rebuilding it which is often a good starting point unless the symptoms indicate a more specific approach. Review the other boxes and the list of support documents further down the page in case one of them applies.
    Your library should be unaffected by these steps but there is backup and recovery advice elsewhere in the user tip.
    tt2

  • Function module in Infotype which checks whether identical data already exists

    I want to create new HR master data (tcode PA30) for Hans Müller in, e.g., Company X and fill out the necessary fields in Infotypes 0001 and 0002 (name, birth date, etc.) - done!
    After creating the new HR master data for Hans Müller, I'd like to fill out a special infotype (0032), namely Internal Data.
    PROBLEM:
    While filling out IT 0032, I do not know whether the person I've created new master data for already exists in the system (but in another company, e.g. Company Z).
    So what I need is a function module in Infotype 0032 which checks whether a person with the same data already exists in any company (the whole database), using 3 characteristics from Infotype 0002 (Personal Data), namely first name, last name, and birth date.
    And if a person with the same first name, last name, and birthday already exists, the function module should link to that one or just show me the information.
    I have also tried to illustrate my problem in a graphic:
    Function_Module_Infotyp0032.png
    I hope you can help me. I would be very thankful for any hints, information, or programming approaches.
    Regards
    Sean

    >> the problem with your solution approach is that there could be different pers. no. for the same person in my issue. <<
    If I understood you well, that won't be a problem, since you already know which PERNR to use (the one you are using to update IT0032). In that case you only have to check whether, for that PERNR, more than one record in PA0001 with a different company code is present.
    If your problem is that another person with the same name and same date of birth is active in the system under different company codes, it would be similar.
    For instance: let's say that you find 2 different PERNRs for a given name and date of birth in PA0002; then you can check IT0001 and look at the BUKRS value for those 2 personnel numbers. If the values found are different, you can raise the error message or whatever you need.

  • I'm getting an 'objects already exist' error when trying to download itunes

    My iPhone 6 wouldn't connect to iTunes since it needed an update. So I uninstalled iTunes and all its contents. I then went to download iTunes again, but when I did I received an 'Object already exists' error. I looked at more information on the Apple website and tried everything I could, but I keep receiving the error. I uninstalled anything related to Apple on my computer, and I looked in the Program Files folder and found nothing related to Apple, so I'm pretty sure I deleted all the data already. I really need help with this. I have a Windows 7 computer.

    For "Object already exists" or "Access denied" errors when installing try opening Control Panel > Programs and Features > View installed updates then checking for Security Update for Microsoft Windows (KB2918614). Uninstall if you have it, then reboot and try installing again.
    For general advice see Troubleshooting issues with iTunes for Windows updates.
    The steps in the second box are a guide to removing everything related to iTunes and then rebuilding it which is often a good starting point unless the symptoms indicate a more specific approach. Review the other boxes and the list of support documents further down the page in case one of them applies.
    The further information area has direct links to the current and recent builds in case you have problems downloading, need to revert an older version or want to try the iTunes for Windows (64-bit - for older video cards) release as a workaround for performance issues.
    Your library should be unaffected by these steps but there is backup and recovery advice elsewhere in the user tip.
    tt2

  • Primary key constraint firing when there's no need

    Hi All,
    I've got a very weird problem.
    I've written a PL/SQL procedure to insert addresses into a table. The addresses are assigned a unique number by means of a sequence. This unique number is the primary key of the table and has to be unique.
    The following thing occurs: the primary key constraint fires whenever I try to insert a record, and I can't figure out why.
    I use the following code:
    PROCEDURE mk_adr(klant NUMBER, receiver NUMBER) IS
    oidtje NUMBER DEFAULT 0;
    BEGIN
    BEGIN
    SELECT lea_adr_seq.NEXTVAL INTO oidtje
    FROM dual;
    INSERT INTO lea_adr (
    oid, object, streetname,
    housenumber, housealpha, ponumber,
    postcode, cityname, locationdesc,
    province, kind, country_id,
    relation_id, offerrec_id,
    h_trans_van,
    h_geldig_van,
    h_gebruiker
    )
    SELECT oidtje, oidtje, streetname,
    housenumber, housealpha, ponumber,
    postcode, cityname, locationdesc,
    province, kind, country_id,
    null, receiver,
    To_Date('01-01-2000'),
    To_Date('01-01-2000'),
    'conv'
    FROM lea_adr
    WHERE relation_id = klant;
    EXCEPTION
    WHEN Others THEN
    conv_algemeen.debugMessage('mk_adr :: '||SQLERRM);
    END;
    END;
    As you can see, the very first thing I do is select a new unique number into oidtje from the sequence which provides the numbers. Then I use it to insert the record. The insert fails, with the primary key constraint firing and saying that oid is not filled with a unique number.
    When I do "select max(oid) from lea_adr;" I get a number lower than the one I get when doing "select lea_adr_seq.nextval from dual;"
    So am I overlooking something, or is something very obvious going wrong?
    Patrick

    Write your procedure like the following and it will work.
    PROCEDURE mk_adr(klant NUMBER, receiver NUMBER) IS
    BEGIN
    INSERT INTO lea_adr (
    oid, object, streetname,
    housenumber, housealpha, ponumber,
    postcode, cityname, locationdesc,
    province, kind, country_id,
    relation_id, offerrec_id,
    h_trans_van,
    h_geldig_van,
    h_gebruiker
    )
    SELECT lea_adr_seq.NEXTVAL, lea_adr_seq.NEXTVAL, streetname,
    housenumber, housealpha, ponumber,
    postcode, cityname, locationdesc,
    province, kind, country_id,
    null, receiver,
    To_Date('01-01-2000'),
    To_Date('01-01-2000'),
    'conv'
    FROM lea_adr
    WHERE relation_id = klant;
    EXCEPTION
    WHEN Others THEN
    conv_algemeen.debugMessage('mk_adr :: '||SQLERRM);
    END;
    Using lea_adr_seq.NEXTVAL more than once in the SAME select will return the same value for both calls, while each selected row still gets its own new sequence value. That is also why your original version failed: it fetched NEXTVAL once into oidtje and then reused that single value for every row returned by the SELECT ... FROM lea_adr WHERE relation_id = klant, so any customer with more than one address violated the primary key.
    SQL> create sequence test
    2 ;
    Sequence created.
    SQL> select test.nextval,test.nextval from dual;
    NEXTVAL NEXTVAL
    1 1
    SQL>

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint over two key columns.  I insert data into this table from a daily capture table, which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table.  I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns).  Currently, what I do is select * into a #temp table from the join of the daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just inserting directly from the daily capture table?  How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
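
    To make the MERGE suggestion concrete, here is a hedged sketch via JDBC. The table and column names (final_data, daily_capture, k1, k2, payload) are invented, since no DDL was posted, and the SQL Server connection string is a placeholder:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DailyCaptureMerge {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string; requires the SQL Server JDBC driver.
            Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=mydb;integratedSecurity=true");
            Statement st = con.createStatement();
            // Insert only key pairs not already present; DISTINCT guards
            // against duplicates inside daily_capture itself.
            st.executeUpdate(
                "MERGE final_data AS f " +
                "USING (SELECT DISTINCT k1, k2, payload FROM daily_capture) AS d " +
                "  ON f.k1 = d.k1 AND f.k2 = d.k2 " +
                "WHEN NOT MATCHED THEN " +
                "  INSERT (k1, k2, payload) VALUES (d.k1, d.k2, d.payload);");
            st.close();
            con.close();
        }
    }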

  • How to insert data from a table into itself if they don't already exist meeting a certain condition

    First, let me apologize, as I did not write/design this mess; I just inherited this terrible design and legacy application!
    I have a table of companies and a table of suppliers with various attributes, one of them being a bit column, is_public.  The public suppliers then need to be inserted into the suppliers table as suppliers for the other companies.
    As an example, in the Company table, company 'PUB01' is flagged as a public supplier (Company.IS_PUBLIC_SUPPLIER = 1).  Therefore, in the Suppliers table I need to insert a row for each of PUB01's suppliers for all the other companies in that table, if they do not already exist, as follows:
    Here is a sample of the table structure:
    CREATE TABLE [dbo].[COMPANY](
    [COMPANY_ID] [nvarchar](15) NOT NULL,
    [COMPANY_NAME] [nvarchar](100) NOT NULL,
    [IS_PUBLIC_SUPPLIER] bit NOT NULL,
     CONSTRAINT [PK_COMPANIES] PRIMARY KEY CLUSTERED 
    (
    [COMPANY_ID] ASC
    )
    ) ON [PRIMARY]
    GO
    CREATE TABLE [dbo].[SUPPLIERS](
    [SUPPLIER_ID] [int] IDENTITY(1,1) NOT NULL,
    [COMPANY_ID] [nvarchar](15) NOT NULL,
    [SUPPLIER_NAME] [nvarchar](100) NOT NULL,
    [PUBLIC_SUPPLIER_ID] [int] NULL,
     CONSTRAINT [PK_SUPPLIER_MASTER] PRIMARY KEY CLUSTERED 
    (
    [SUPPLIER_ID] ASC
    )
    ) ON [PRIMARY]
    GO
    Thanks!

    Thanks Visakh!  Will that work if some have already been inserted previously?
    Yes
    but if you want to exclude them use this
    INSERT INTO Suppliers(Company_ID,Supplier_Name,PUblic_Supplier_ID)
    SELECT s1.Company_ID,
    s2.Supplier_Name,
    s2.Supplier_ID
    FROM Suppliers s1
    CROSS JOIN Suppliers s2
    WHERE EXISTS(
    SELECT 1
    FROM Company
    WHERE Company_ID = s2.Company_ID
    AND IS_PUBLIC_SUPPLIER = 1
    )
    AND s1.COMPANY_ID <> s2.COMPANY_ID
    AND NOT EXISTS (SELECT 1
    FROM Suppliers
    WHERE Company_ID = s1.Company_ID
    AND Supplier_Name = s2.Supplier_Name
    )
    Please Mark This As Answer if it solved your issue
    Please Vote This As Helpful if it helps to solve your issue
    Visakh
    My Wiki User Page
    My MSDN Page
    My Personal Blog
    My Facebook Page

  • A key/value pair with this key already exists in the store

    Hi all,
    During the installation of the PI components on AS ABAP + AS Java, at the finalization of the Java Add-in, I am facing this problem.
    In short, it reports that "A key/value pair with this key already exists in the store." Please see below for more information:
    CJSlibModule::writeWarning_impl()
    Execution of the command "/opt/java1.4/bin/java -classpath
    /oracle/stage/PIDtemp/sapinst_instdir/NW04S/LM/AS-JAVA/ADDIN/ORA/CENTRAL/CI/install/sharedli
    b/launcher.jar -Xmx256m -d64 com.sap.engine.offline.OfflineToolStart
    com.sap.security.core.server.secstorefs.SecStoreFS
    /usr/sap/PID/SYS/global/security/lib/tools/iaik_jce.jar:/usr/sap/PID/SYS/global/security/lib
    /tools/iaik_jsse.jar:/usr/sap/PID/SYS/global/security/lib/tools/iaik_smime.jar:/usr/sap/PID/
    SYS/global/security/lib/tools/iaik_ssl.jar:/usr/sap/PID/SYS/global/security/lib/tools/w3c_ht
    tp.jar:/oracle/stage/PIDtemp/sapinst_instdir/NW04S/LM/AS-JAVA/ADDIN/ORA/CENTRAL/CI/install/l
    ib:/oracle/stage/PIDtemp/sapinst_instdir/NW04S/LM/AS-JAVA/ADDIN/ORA/CENTRAL/CI/install/share
    dlib:/oracle/client/10x_64/instantclient/ojdbc14.jar insert -s PID -f
    /usr/sap/PID/SYS/global/security/data/SecStore.properties -k
    /usr/sap/PID/SYS/global/security/data/SecStore.key admin/host/PID tern" finished with return
    code 2. Output:
    SAP Secure Store in the File System - Copyright (c) 2003 SAP AG
    A key/value pair with this key already exists in the store.
    ERROR      2007-02-15 18:35:29
               CJSlibModule::writeError_impl()
    CJS-30051  Cannot insert a key value pair into the secure store fails; see output of log
    file SecureStoreInsert.log:
    SAP Secure Store in the File System - Copyright (c) 2003 SAP AG
    A key/value pair with this key already exists in the store..
    ERROR      2007-02-15 18:35:29 [iaxxgenimp.cpp:731]
               showDialog()
    FCO-00011  The step insertAdminDataInSecStore with step key
    |NW_Addin_CI|ind|ind|ind|ind|0|0|NW_CI_Instance|ind|ind|ind|ind|8|0|NW_CI_Instance_Configure
    _Java|ind|ind|ind|ind|4|0|insertAdminDataInSecStore was executed with status ERROR

    Hi Kamalakar,
    I think this problem happens due to wrong content of SDMKIT.JAR - an incorrect crypto SDA is deployed. I don't know which installation procedure you followed, because normally this error won't come up.
    Seeing your problem, I would rather suggest a workaround:
    Edit the keydb.xml, exchanging [ERROR] with [OK].
    Follow the above step and restart your installation.
    Do not forget to reward points.

  • Inserting a new row in a child table referencing an already existing parent

    I have two tables, PARENT & CHILD (one to many), both of which are populated at different times.
    In our TopLink mappings, the parent contains a collection of child domain objects, and the child domain object contains a one-to-one to the parent.
    How can I insert a new row in the child table with a reference to an already existing row in the parent?
    When I fetch the parent domain object, set it in the child domain object, and use unitOfWork.registerObject(), it goes into a circular loop of selecting from 2 other tables.
    Please suggest.

    Odd. Have you disabled caching or indirection (NoIdentityMap, dontUseIndirection, or alwaysRefresh/disableCacheHits)? If so, this could be the issue.
    Otherwise please include the sample code you use to perform this, and verify that you do not have any unusual code in your set/get methods or in descriptor events. Also turn TopLink logging on and include a sample. Also ensure that you do not modify your objects until after registering them in the unit of work, and only modify the unit of work clones.
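
    For reference, the usual unit-of-work shape for this pattern looks something like the sketch below. Parent and Child are placeholder classes standing in for your mapped domain objects, and the "id" query attribute is an assumption:

    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.sessions.Session;
    import oracle.toplink.sessions.UnitOfWork;

    public class AddChildToExistingParent {
        public static void addChild(Session session, Object parentId) {
            UnitOfWork uow = session.acquireUnitOfWork();

            // Read the parent through the unit of work so we work on its clone.
            ExpressionBuilder eb = new ExpressionBuilder();
            Parent parent = (Parent) uow.readObject(Parent.class,
                    eb.get("id").equal(parentId));

            // Register the new child first, then set BOTH sides of the
            // relationship on the clones only.
            Child child = (Child) uow.registerObject(new Child());
            child.setParent(parent);
            parent.getChildren().add(child);

            uow.commit(); // inserts the child row referencing the existing parent
        }
    }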

Maybe you are looking for

  • Problem while using read_text fm

    Hi, iam having one problem if i call read_text fm only one line is coming in the output. eg: material(Text) plant (Text) my progaram write only one line ie. material i want to consider f all the line in single line. opt should be: materialplant. plea

  • Guest Portal - untrusted certificate

    All, My ISE integration is on our local domain,for example  company.local. I created a rule in the authorization policy that used a static IP address, say guest.company.com for our guests to use for the redirection. When guests get the web auth redir

  • Save syslog to local USB HD

    Does anyone know if there is a way to save the Airport Extreme 'N' syslog to an attached USB drive?

  • Re: The Forte Stopwatch

    We had a similar problem. We reported the problem to Forte technical support and they determined that it is a bug. I don't know if this has been fixed in the 3.0.F release. The Stopwatch seems to be accurate for long (several second) intervals, but i

  • Displaying album art and bio info

    I know how to get thumbnail sized album art to appear in the bottom left-hand of iTunes. Is there a more souped-up way to get album art and band bio to appear?