Create with duplicate key throws BaseEJBException instead of DuplicateKeyException

WAS640 SP9: I have implemented an entity EJB with a primary key field that is a String. The database table has the mapped column as its key. create(keyfield) works fine (I can read the data back either with sqlcli or through the EJB). If I try to add a second bean with the same primary key, I don't get the javax.ejb.DuplicateKeyException that I would expect; instead I get a com.sap.engine.services.ejb.exceptions.BaseEJBException (trace at the end of this message).
Any idea what I am doing incorrectly?
com.sap.engine.services.ejb.exceptions.BaseEJBException: Transaction system failure in method com.whirlpool.stock.ejb.StockLocalHomeImpl0.create(java.lang.String,java.lang.String,java.lang.String,java.lang.String,boolean).
     at com.whirlpool.stock.ejb.StockLocalHomeImpl0.create(StockLocalHomeImpl0.java:479)
     at com.whirlpool.stock.ejb.StockProcessorBean.createStock(StockProcessorBean.java:86)
     at com.whirlpool.stock.ejb.StockProcessorObjectImpl0.createStock(StockProcessorObjectImpl0.java:119)
     at com.whirlpool.stock.ejb.StockProcessor_Stub.createStock(StockProcessor_Stub.java:64)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:324)
     at com.sap.engine.services.ejb.session.stateless_sp5.ObjectStubProxyImpl.invoke(ObjectStubProxyImpl.java:187)
     at $Proxy67.createStock(Unknown Source)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:324)
     at com.sap.engine.services.webservices.runtime.EJBImplementationContainer.invokeMethod(EJBImplementationContainer.java:126)
     at com.sap.engine.services.webservices.runtime.RuntimeProcessor.process(RuntimeProcessor.java:146)
     at com.sap.engine.services.webservices.runtime.RuntimeProcessor.process(RuntimeProcessor.java:68)
     at com.sap.engine.services.webservices.runtime.servlet.ServletDispatcherImpl.doPost(ServletDispatcherImpl.java:92)
     at SoapServlet.doPost(SoapServlet.java:51)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
     at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:385)
     at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:263)
     at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:340)
     at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:318)
     at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:821)
     at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:239)
     at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
     at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:147)
     at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:37)
     at com.sap.engine.core.cluster.impl6.session.UnorderedChannel$MessageRunner.run(UnorderedChannel.java:71)
     at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
     at java.security.AccessController.doPrivileged(Native Method)
     at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:94)
     at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:162)
Caused by: com.sap.engine.services.ts.exceptions.BaseRollbackException: Exception in beforeCompletition of ( SAP J2EE Engine JTA Transaction : [075ffffffd56e000e] ).
     at com.sap.engine.services.ts.jta.impl.TransactionImpl.commit(TransactionImpl.java:217)
     at com.whirlpool.stock.ejb.StockLocalHomeImpl0.create(StockLocalHomeImpl0.java:452)
     ... 34 more
Caused by: com.sap.engine.services.ejb.exceptions.BaseEJBException: SQLException while the data is being flushed. The persistent object is com.whirlpool.stock.ejb.StockBean0Persistent.
     at com.sap.engine.services.ejb.entity.pm.UpdatablePersistent.ejbFlush(UpdatablePersistent.java:101)
     at com.sap.engine.services.ejb.entity.pm.TransactionContext.flushAll(TransactionContext.java:429)
     at com.sap.engine.services.ejb.entity.pm.TransactionContext.flush(TransactionContext.java:378)
     at com.sap.engine.services.ejb.entity.pm.TransactionContext.beforeCompletion(TransactionContext.java:506)
     at com.sap.engine.services.ejb.entity.SynchronizationList.beforeCompletion(SynchronizationList.java:136)
     at com.sap.engine.services.ts.jta.impl.TransactionImpl.commit(TransactionImpl.java:205)
     ... 35 more
Caused by: com.sap.sql.DuplicateKeyException: [200]: Duplicate key
     at com.sap.sql.jdbc.common.CommonPreparedStatement.executeUpdate(CommonPreparedStatement.java:326)
     at com.sap.engine.services.dbpool.wrappers.PreparedStatementWrapper.executeUpdate(PreparedStatementWrapper.java:240)
     at com.whirlpool.stock.ejb.StockBean0Persistent.ejb_iInsert(StockBean0Persistent.java:443)
     at com.sap.engine.services.ejb.entity.pm.UpdatablePersistent.ejbFlush(UpdatablePersistent.java:92)
     ... 40 more
com.sap.engine.services.ts.exceptions.BaseRollbackException: Exception in beforeCompletition of ( SAP J2EE Engine JTA Transaction : [075ffffffd56e000e] ).
     at com.sap.engine.services.ts.jta.impl.TransactionImpl.commit(TransactionImpl.java:217)
     at com.whirlpool.stock.ejb.StockLocalHomeImpl0.create(StockLocalHomeImpl0.java:452)
     at com.whirlpool.stock.ejb.StockProcessorBean.createStock(StockProcessorBean.java:86)
     at com.whirlpool.stock.ejb.StockProcessorObjectImpl0.createStock(StockProcessorObjectImpl0.java:119)
     at com.whirlpool.stock.ejb.StockProcessor_Stub.createStock(StockProcessor_Stub.java:64)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:324)
     at com.sap.engine.services.ejb.session.stateless_sp5.ObjectStubProxyImpl.invoke(ObjectStubProxyImpl.java:187)
     at $Proxy67.createStock(Unknown Source)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:324)
     at com.sap.engine.services.webservices.runtime.EJBImplementationContainer.invokeMethod(EJBImplementationContainer.java:126)
     at com.sap.engine.services.webservices.runtime.RuntimeProcessor.process(RuntimeProcessor.java:146)
     at com.sap.engine.services.webservices.runtime.RuntimeProcessor.process(RuntimeProcessor.java:68)
     at com.sap.engine.services.webservices.runtime.servlet.ServletDispatcherImpl.doPost(ServletDispatcherImpl.java:92)
     at SoapServlet.doPost(SoapServlet.java:51)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
     at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:385)
     at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:263)
     at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:340)
     at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:318)
     at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:821)
     at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:239)
     at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
     at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:147)
     at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:37)
     at com.sap.engine.core.cluster.impl6.session.UnorderedChannel$MessageRunner.run(UnorderedChannel.java:71)
     at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
     at java.security.AccessController.doPrivileged(Native Method)
     at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:94)
     at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:162)
Caused by: com.sap.engine.services.ejb.exceptions.BaseEJBException: SQLException while the data is being flushed. The persistent object is com.whirlpool.stock.ejb.StockBean0Persistent.
     at com.sap.engine.services.ejb.entity.pm.UpdatablePersistent.ejbFlush(UpdatablePersistent.java:101)
     at com.sap.engine.services.ejb.entity.pm.TransactionContext.flushAll(TransactionContext.java:429)
     at com.sap.engine.services.ejb.entity.pm.TransactionContext.flush(TransactionContext.java:378)
     at com.sap.engine.services.ejb.entity.pm.TransactionContext.beforeCompletion(TransactionContext.java:506)
     at com.sap.engine.services.ejb.entity.SynchronizationList.beforeCompletion(SynchronizationList.java:136)
     at com.sap.engine.services.ts.jta.impl.TransactionImpl.commit(TransactionImpl.java:205)
     ... 35 more
Caused by: com.sap.sql.DuplicateKeyException: [200]: Duplicate key
     at com.sap.sql.jdbc.common.CommonPreparedStatement.executeUpdate(CommonPreparedStatement.java:326)
     at com.sap.engine.services.dbpool.wrappers.PreparedStatementWrapper.executeUpdate(PreparedStatementWrapper.java:240)
     at com.whirlpool.stock.ejb.StockBean0Persistent.ejb_iInsert(StockBean0Persistent.java:443)
     at com.sap.engine.services.ejb.entity.pm.UpdatablePersistent.ejbFlush(UpdatablePersistent.java:92)
     ... 40 more

Hi Douglas,
The answer is that the EJB container does not have to throw DuplicateKeyException on an attempt to create an already existing entity object. The EJB 2.0 specification, chapter 10.5.3, says: "The container may, but is not required to, throw the DuplicateKeyException on the Bean Provider’s attempt to create an entity object with a duplicate primary key[15]." The explanation follows in the footnote: "[15] Containers using optimistic caching strategies, for example, may rollback the transaction at a later point."
In fact this is what happens in your case: the EJB container postpones the insert database operation until the end of the transaction, thus reducing the number of database access calls. The primary key constraint violation is then detected by the database, which leads to a transaction rollback, but the exception thrown is not a DuplicateKeyException.
The EJB container can detect a primary key constraint violation when two entities with the same primary key are created in one transaction, and in that case it does throw DuplicateKeyException (you can check this).
The conclusion is that EJB applications cannot rely on DuplicateKeyException to detect a primary key constraint violation. The EJB container could improve this handling by offering an additional option to store a newly created entity object immediately, which would check whether an entity with the same primary key already exists and throw DuplicateKeyException. Such an option is planned for later versions of the EJB container, but even then it would not cover all possible scenarios. So my advice is simply not to rely on DuplicateKeyException to distinguish a primary key constraint violation from other transaction failures (one way for client code to cope with this is sketched below).
Best regards,
Viliana
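As an illustration of that advice: since the duplicate-key condition may only surface as a wrapper exception at flush/commit time, client code can walk the cause chain of whatever create() (or the surrounding transaction) throws and look for either javax.ejb.DuplicateKeyException or an integrity-constraint SQLException. The helper below is only a sketch; the class name is made up, and the SQLState test uses the generic SQL-standard class 23, not anything SAP-specific.

    import java.sql.SQLException;
    import javax.ejb.DuplicateKeyException;

    // Illustrative helper: reports whether a duplicate-key condition is buried
    // anywhere in the cause chain of an exception thrown by create()/commit.
    public final class DuplicateKeyDetector {

        private DuplicateKeyDetector() { }

        public static boolean isDuplicateKey(Throwable t) {
            for (Throwable cur = t; cur != null; cur = cur.getCause()) {
                if (cur instanceof DuplicateKeyException) {
                    return true;                          // container threw the spec exception
                }
                if (cur instanceof SQLException) {
                    // SQLState class "23" = integrity-constraint violation (SQL standard);
                    // individual drivers may report vendor codes instead.
                    String state = ((SQLException) cur).getSQLState();
                    if (state != null && state.startsWith("23")) {
                        return true;
                    }
                }
            }
            return false;
        }
    }

A calling session bean can catch the exception from create(), pass it to isDuplicateKey(), and map a positive result to its own application exception, instead of depending on which wrapper the container happens to throw.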

Similar Messages

  • Optimal read write performance for data with duplicate keys

    Hi,
    I am constructing a database that will store data with duplicate keys.
    For each key (a String) there will be multiple data objects, there is no upper limit to the number of data objects, but let's say there could be a million.
    Data objects have a time-stamp (Long) field and a message (String) field.
    At the moment I write these data objects into the database in chronological order, as I receive them, for any given key.
    When I retrieve data for a key, and iterate across the duplicates for any given primary key using a cursor they are fetched in ascending chronological order.
    What I would like to do is start fetching these records in reverse order, say just the last 10 records that were written to the database for a given key, and was wondering if anyone had some suggestions on the optimal way to do this.
    I have considered writing the data out in the order that I want to retrieve it, by supplying the database with a custom duplicate comparator. If I were to do this, the query above would return the latest data first, and I would be able to iterate over the most recent inserts quickly. But is there a performance penalty on writes to the database if I do this?
    I have also considered using the time-stamp field as the unique primary key for the primary database instead of the String, and creating a secondary database for the String, this would allow me to index into the data using a cursor join, but I'm not certain it would be any more performant, at least not on writing to the database, since it would result in a very flat b-tree.
    Is there a fundamental choice that I will have to make between write versus read performance? Any suggestions on tackling this much appreciated.
    Many Thanks,
    Joel

    Hi Joel,
    Using a duplicate comparator will slow down Btree access (writes and reads) to
    some degree because the comparator is called a lot during searching. But
    whether this is a problem depends on whether your app is CPU bound and how much
    CPU time your comparator uses. If you can avoid de-serializing the object in
    the comparator, that will help. For example, if you keep the timestamp at the
    beginning of the data and only read the one long timestamp field in your
    comparator, that should be pretty fast.
    Another approach is to store the negation of the timestamp so that records
    are sorted naturally in reverse timestamp order.
    Another approach is to read backwards using a cursor. This takes a couple
    steps:
    1) Find the last duplicate for the primary key you're interested in:
      cursor.getSearchKey(keyOfInterest, ...)
      status = cursor.getNextNoDup(...)
      if (status == SUCCESS) {
          // Found the next primary key, now back up one record.
          status = cursor.getPrev(...)
      } else {
          // This is the last primary key, find the last record.
          status = cursor.getLast(...)
      }
    2) Scan backwards over the duplicates:
      while (status == SUCCESS) {
          // Process one record
          // Move backwards
          status = cursor.getPrev(...)
      }
    Finally, another approach is to use a two-part primary key: {string, timestamp}.
    Duplicates are not configured because every key is unique. I mention this
    because using duplicates in JE has more overhead than using a unique primary
    key. You can combine this with any of the above approaches -- using a
    comparator, negating the timestamp, or scanning backwards (a rough sketch of
    the backward scan follows below).
    --mark
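    A rough, self-contained JE version of the backward scan described above, assuming the Database was opened with sorted duplicates; the class name is made up, and it simply prints each duplicate's data from newest to oldest:

      import java.util.Arrays;
      import com.sleepycat.je.Cursor;
      import com.sleepycat.je.Database;
      import com.sleepycat.je.DatabaseEntry;
      import com.sleepycat.je.LockMode;
      import com.sleepycat.je.OperationStatus;

      public class ReverseDupScan {
          /** Prints the duplicates for keyBytes from last to first, i.e. newest
           *  to oldest when records are written in chronological order. */
          public static void scanNewestFirst(Database db, byte[] keyBytes)
                  throws Exception {
              DatabaseEntry key = new DatabaseEntry(keyBytes);
              DatabaseEntry data = new DatabaseEntry();
              Cursor cursor = db.openCursor(null, null);
              try {
                  if (cursor.getSearchKey(key, data, LockMode.DEFAULT)
                          != OperationStatus.SUCCESS) {
                      return;                               // key not present
                  }
                  // Step 1: position on the last duplicate of this key.
                  OperationStatus status =
                      cursor.getNextNoDup(key, data, LockMode.DEFAULT);
                  if (status == OperationStatus.SUCCESS) {
                      status = cursor.getPrev(key, data, LockMode.DEFAULT);
                  } else {
                      status = cursor.getLast(key, data, LockMode.DEFAULT);
                  }
                  // Step 2: walk backwards until we leave the key of interest.
                  while (status == OperationStatus.SUCCESS
                          && Arrays.equals(key.getData(), keyBytes)) {
                      System.out.println(new String(data.getData()));
                      status = cursor.getPrev(key, data, LockMode.DEFAULT);
                  }
              } finally {
                  cursor.close();
              }
          }
      }

    The byte-level Arrays.equals() check is just the shortest way to show the stopping condition; with an entity binding you would compare the deserialized key instead.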

  • Updating database table with DUPLICATE keys

    I have an internal table with data as follows:
    emp_id   name    date         proj_id   activity_id   Hours   Remarks
    101      Pavan   09.10.2007   123       1             2.00    Coding
    101      Pavan   09.10.2007   124       2             1.00    Documentation
    102      Raj     09.10.2007   123       3             6.00    Testing
    Now I need to update a Z-table with the above data.
    The structure of the Ztable is as follows.
    Mandt     emp_id  name    date    proj_id   activity_id   Hours   Remarks
    Note: I have ticked both check boxes for the field MANDT in the table;
    for the rest I did not select the check boxes.
    I believe the field MANDT alone is therefore the primary key of the Z-table.
    I have tried UPDATE/INSERT statements to update the database table,
    but instead of inserting all the rows, the system overwrites the same emp_id row.
    I even tried statements like INSERT INTO <Ztable> VALUES <Internal table> ACCEPTING DUPLICATE KEYS,
    but the rows are not inserted as separate rows in the table.
    The requirement is to insert the multiple rows into the database table without any overwriting.
    Can anyone give me the code to do this?
    Regards
    Pavan

    Hi Pavan,
        Please let me know what the key fields in your Z-table are. Try the code below; it may help you. Change 'ZTABLENAME' to your database table name and I_INTERNALTABLE to your internal table name. If you still face the problem, please let me know.
    Lock the custom table before updating it:
      CALL FUNCTION 'ENQUEUE_E_TABLE'
        EXPORTING
          MODE_RSTABLE         = 'E'
          TABNAME              = 'ZTABLENAME'
*         VARKEY               =
*         X_TABNAME            = ' '
*         X_VARKEY             = ' '
          _SCOPE               = '2'
*         _WAIT                = ' '
*         _COLLECT             = ' '
        EXCEPTIONS
          FOREIGN_LOCK         = 1
          SYSTEM_FAILURE       = 2
          OTHERS               = 3.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ELSE.
        INSERT ZTABLENAME FROM TABLE I_INTERNALTABLE ACCEPTING DUPLICATE KEYS.
        COMMIT WORK.
      ENDIF.
    Unlock the custom table after the update is done:
      CALL FUNCTION 'DEQUEUE_E_TABLE'
        EXPORTING
          MODE_RSTABLE       = 'E'
          TABNAME            = 'ZTABLENAME'
*         VARKEY             =
*         X_TABNAME          = ' '
*         X_VARKEY           = ' '
          _SCOPE             = '3'
*         _SYNCHRON          = ' '
*         _COLLECT           = ' '
          .

  • Records with duplicate key in Infospoke

    Hello
    I have a question with regard to the output produced by a Delta Extraction Mode infospoke.
    I have an Infospoke running across a single cube that pulls a number of characteristics and key figures from said cube.  The infospoke uses delta extraction and is run as part of a process chain after the successful update of the said infocube.
    The output file contains two records for each record that exists in the cube .....
    The first record contains all the characteristics and the correct values for the key figures - this record corresponds with the associated record in the cube.
    The second record contains the same characteristics as the previous record, but all the key figures contain '0' values.
    The two records in the InfoSpoke output file are always sequential. Every valid record (i.e. every record I would expect to see) has a duplicate with '0'-value key figures.
    Has anyone experienced this? As the key figure values in the output always correspond when totalled, I am not too concerned at this stage, but I just wondered if anyone has seen this before.
    Thanks in advance for your help.
    Sally

    Hi Sally,
      Info spoke first extract data from F fact table and then from E fact table. If you are extracting data into an table you can choose request id and package id as key on top of characteristics. But when you are extracting data into flat file you cant. so you will get multiple records.
      Try to extract the data into a table and check the difference. If you have same characteristic combination records available in E & F fact table you will get 2 records in 2 different request nos(Check in the Open Hub Monitor).
    For more information see the how to papers on open hub...
    https://websmp201.sap-ag.de/bi
    right side "Info Index".
      seach for the papers
    · BW 3.0B Open Hub - Positioning (zip)
    · BW Open Hub (ppt)
    · BW Open Hub - Transcript (doc)
    · BW Open Hub Service - Update (zip)
    Hope this Helps
    Srini

  • Getting Multiple Records with Same Key throws Access Violation

    Are there any known issues with the P/Invoke signatures for Get? I am consistently getting access violations when attempting to read multiple values under a single key.
    The error doesn't happen while debugging in VS 2007; I also tried turning off optimizations with no luck.
    I'm using VS 2010 and .NET 4.0 (Windows XP). I do have a repro. (I'm using the latest bits from the Oracle site as of 9/10/2010.)
    Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    at BerkeleyDB.Internal.libdb_csharpPINVOKE.db_strerror(Int32 jarg1)
    at BerkeleyDB.DatabaseException..ctor(Int32 err) in C:\Users\gmf\db\db-5.0.26\csharp\DatabaseException.cs:line 78
    at BerkeleyDB.DatabaseException.ThrowException(Int32 err) in C:\Users\gmf\db\db-5.0.26\csharp\DatabaseException.cs:line 34
    at BerkeleyDB.Internal.DB.get(DB_TXN txn, DatabaseEntry key, DatabaseEntry data, UInt32 flags) in C:\Users\gmf\db\db-5.0.26\csharp\Internal\DB.cs:line 187
    at BerkeleyDB.BaseDatabase.Get(DatabaseEntry key, DatabaseEntry data, Transaction txn, LockingInfo info, UInt32 flags) in C:\Users\gmf\db\db-5.0.26\csharp\BaseDatabase.cs:line 900
    at BerkeleyDB.Database.GetMultiple(DatabaseEntry key, Int32 BufferSize, Transaction txn, LockingInfo info) in C:\Users\gmf\db\db-5.0.26\csharp\Database.cs:line 494
    at helloworld.Program.Main(String[] args) in C:\bdm\helloworld\helloworld\Program.cs:line 53
    Appreciate any help.
    Thanks
    Nirmal
    Edited by: user8050299 on Sep 10, 2010 3:29 PM
    Updated with OS

    It seems the Oracle .NET bindings don't work with the new CLR; they work fine with .NET 2.0.
    Maybe the Oracle folks will release a new version for 4.0.

  • Read statement with duplicate key fields

    Hello,
    I have an internal table (EKBE) with 8 fields:
    ebeln ebelp vgabe gjahr belnr budat wrbtr refkey
    I populate the first 7 fields from the EKBE table with VGABE = 2 for the PO, concatenate WRBTR and GJAHR to get the refkey, and pass this to the 8th field.
    Using this refkey, I get the document numbers from BKPF.
    I loop another internal table into a work area and read the above internal table by passing the ebeln field, and I do some calculations, etc.
    A custom table is getting updated from the above work area.
    The issue is:
    The internal table EKBE can have the same PO numbers with the same line item numbers but with different ref keys.
    So, when I read the internal table, it reads only one record, since I have no key to read with other than EBELN. For one PO number it therefore reads only one ref key. I cannot use the line item number as a key, as the line item numbers never match; even if they match, it could be the same PO number with the same line item number and a different ref key.
    Any inputs highly appreciated.
    Regards,
    Kiran

    Loop on the EKBE internal table as well to fetch all entries of the PO. Use a parallel cursor to optimize performance.
    http://wiki.sdn.sap.com/wiki/display/Snippets/ABAPCodeforParallelCursor-Loop+Processing

  • Java.util.set: problem with duplicate keys

    Hello, everybody,
    I have implemented a custom class C with its own equals() method. I have two instances I1 and I2 of C with
    I1.equals(I2) == true.
    If I add I1 and I2 to a set:
    Set S = new HashSet();
    S.add(I1);
    S.add(I2);
    there are two elements I1 and I2 in the set. How is this possible?

    You must override the hashCode() method of your class C so that it is consistent with the comparison performed by the equals() method (a minimal example follows below).
    http://java.sun.com/j2se/1.4.1/docs/api/java/lang/Object.html#hashCode()
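    For illustration, a minimal version of such a class (the name field is made up); once hashCode() is overridden consistently with equals(), the second add() is rejected and the set keeps a single element:

      import java.util.HashSet;
      import java.util.Set;

      public class C {
          private final String name;            // illustrative identity field

          public C(String name) { this.name = name; }

          public boolean equals(Object o) {
              if (this == o) return true;
              if (!(o instanceof C)) return false;
              return name.equals(((C) o).name);
          }

          // Equal objects must return equal hash codes; otherwise HashSet
          // may put them into different buckets and keep both of them.
          public int hashCode() {
              return name.hashCode();
          }

          public static void main(String[] args) {
              Set s = new HashSet();
              s.add(new C("x"));
              s.add(new C("x"));
              System.out.println(s.size());      // prints 1
          }
      }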

  • Bug in cursor behaviour with duplicates

    Dear Oracle guys and girls,
    first of all: it sucks that I HAVE to provide business information (company name, address, even phone number) even if I just want to participate in this forum for private reasons. I even have to "unsubscribe" from the newsletters although I never subscribed. Then I have to re-enter my timezone information and email address for the forum, because the settings in my profile are ignored. I think there's some room for improvement in this registration process.
    OK, back to the topic. I think I found a bug in the cursor behaviour with duplicate keys. But the behaviour is very consistent, so maybe it's not a bug but a bad design. (I call it bad because it's totally unexpected and not logical to me.)
    I insert some dupes with DB_KEYFIRST; then I create a cursor and iterate over all items in the reverse order (!) with DB_PREV (I also tried DB_PREV|DB_NEXT_DUP) - no keys are shown.
    Alternatively:
    I insert some dupes with DB_KEYLAST; then I create a cursor and iterate over all items in the reverse order (!) with DB_NEXT (I also tried DB_NEXT|DB_NEXT_DUP) - no keys are shown.
    cursor->c_get returns the error code -30989 (DB_NOTFOUND).
    Why is it not possible to traverse duplicates in the reverse order? To me it looks like a bug.
    I tested against db 4.5.20.
    Regards
    Chris
    PS: I would love to hear if the bug I reported here: http://groups.google.com/group/comp.databases.berkeley-db/browse_thread/thread/ed471cf6837cb2a6/dd9cda0ad105f401#dd9cda0ad105f401
    will be fixed in the next version.
    Here's a test program:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <db.h>

    /* LOOPS and error() were not shown in the original post; minimal
       versions are added here so the program compiles. */
    #define LOOPS 10

    static void
    error(const char *call, int st)
    {
        fprintf(stderr, "%s: %s\n", call, db_strerror(st));
        exit(1);
    }

    int
    main(int argc, char **argv)
    {
        unsigned i;
        int st;
        DB *db;
        DBT key, record;
        DBC *cursor, *cursor2;

        unlink("test.bdb");
        st = db_create(&db, 0, 0);
        if (st)
            error("db_create", st);
        st = db->set_flags(db, DB_DUP);
        if (st)
            error("db->set_flags", st);
        st = db->open(db, 0, "test.bdb", 0, DB_BTREE, DB_CREATE, 0);
        if (st)
            error("db->open", st);
        memset(&key, 0, sizeof(key));
        memset(&record, 0, sizeof(record));
        st = db->cursor(db, 0, &cursor, 0);
        if (st)
            error("db->cursor", st);
        st = db->cursor(db, 0, &cursor2, 0);
        if (st)
            error("db->cursor", st);
        for (i = 0; i < LOOPS; i++) {
            record.data = &i;
            record.size = sizeof(i);
            st = cursor->c_put(cursor, &key, &record, DB_KEYFIRST);
            st = cursor->c_put(cursor, &key, &record, DB_KEYLAST);
            if (st)
                error("cursor->c_put", st);
        }
        while (!(st = cursor2->c_get(cursor2, &key, &record, DB_NEXT)))
            printf("%d\n", *(int *)record.data);
        st = cursor->c_close(cursor);
        if (st)
            error("cursor->c_close", st);
        st = cursor2->c_close(cursor2);
        if (st)
            error("cursor2->c_close", st);
        st = db->close(db, 0);
        if (st)
            error("db->close", st);
        return (0);
    }

    st=cursor->c_put(cursor, &key, &record, DB_KEYFIRST);
    st=cursor->c_put(cursor, &key, &record, DB_KEYLAST);
    if (st)
        error("cursor->c_put", st);
    Please delete the first line; it was a cut-and-paste error. As I said earlier: insert with KEYLAST, query with NEXT.

  • Credit memo to be created with reference to return delivery

    Dear Gurus,
    We have a requirement for mapping return process
    Credit memo to be created with reference to return delivery (instead of with reference to return order)
    Following issue faced in mapping the same:
    Billing document no. 1234, dated 25.03.2010, where there is 4% VAT.
    When we create the return order with reference to billing document 1234,
    4% VAT is calculated, since the service rendered date is 25.03.2010 (copied from the billing document).
    The return delivery is created today, 08.04.2010 (VAT revised to 5%).
    When the credit memo is created with reference to the delivery document,
    VAT is calculated at 5%, which is applicable from 01.04.2010,
    since the service rendered date appearing in the billing document is
    08.04.2010 (the date of the post goods receipt).
    The VAT should actually be 4% in the credit memo.
    regards,
    Rajesh T

    In the copy control from delivery to invoice, at item level, try using pricing type C and pricing source blank (blank means the order).
    If that doesn't work, you might have to go for a requirement routine for this tax: based on the document category being processed, read the tax from the originating order. Assign the routine to the tax condition in the pricing procedure.
    Regards
    Sai

  • Change master with release key

    Hello,
    When a change master is created with a release key, changes to the material master are not possible;
    however, changes to the BOM are possible. If someone wants to make changes to the material master possible with a release key, how can one do that?
    Thanks in advance,
    jagruti

    Dear,
    You can create a change master using transaction CC01 and maintain the revision level in the objects. Then, using this change master, you can change the material master, and the system will automatically create a new revision level.
    Using transaction CC11 (Create Revision Level), we can maintain the revision levels.
    One more way of doing this: using transaction MM02, after going into the Basic Data 1 view, just below the material field you can find the creation button for the revision level.
    You can go to transaction CC02, enter the change number, and set the release key value to 01.
    Hope this is clear to you.
    Regards,
    R.Brahmankar

  • OEM12C BP1 Default preferred credentials with SSH key credentials ?

    Is it possible to configure Host Default Preferred Credentials to use a named credential created with SSH Key credentials ?
    The drop down list only lists credentials configured with host credential types.

    The host target type has two out-of-the-box credential sets, HostCredsNormal and HostCredsPriv.
    Both of these are of type HostCreds (username and password),
    and there is code which assumes them to be of type HostCreds and processes the password.
    Hence these are left as is.
    What customers can do is create an additional credential set (emcli create_credential_set) and use the HostSSHCreds type for it.

  • W_PARTY_D_U1 = ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found.

    Hi,
    We are implementing OBIA 11.1.1.7.1, which comes with ODI, for Finance and Procurement analytics. When we do a full load from EBS, the load plan fails and throws the error below:
    Caused By: java.sql.SQLException: ORA-20000: Error creating Index/Constraint : W_PARTY_D_U1 => ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found.
    Please help us to resolve the same.
    Thanks
    Rama

    This might need Patch 10402735.
    If this helps, please mark the reply as helpful.

  • BizTalk 2006 Event Log Warnings - Cannot insert duplicate key row in object 'dta_MessageFieldValues' with unique index 'IX_MessageFieldValues'.

    We have been seeing the following warnings in the event log of our BizTalk machine since upgrading to BTS 2006. They seem to occur randomly, 6 or 8 times per day.
    Does anyone know what this means and what needs to be done to clear it up? We have only one BizTalk server, which is running on only one machine.
    I am new to BizTalk, so I am unable to find out how many tracking host instances are running for the BizTalk server. Also, can you please let me know whether we can configure only one instance for one server/machine?
    Source: BAM EventBus Service
    Event: 5
    Warning Details: Execute batch error. Exception information: TDDS failed to batch execution of streams. SQLServer: bizprod, Database: BizTalkDTADb.Cannot insert duplicate key row in object 'dta_MessageFieldValues'
    with unique index 'IX_MessageFieldValues'. The statement has been terminated..

    Other than ensuring that there is a single, separate tracking host instance: you are getting an error about duplicate keys, which implies that you are trying to create a BAM activity twice with the same data.
    I suggest an in-depth examination of the BAM (TPE or API) associated with the orchestration. In TPE, ensure that the first binding you select is the "Instance Id" or "Message Id" before going ahead to map the ports or other items.
    Regards.

  • JVM issue with applet - Duplicate Key in Parameter Table - Bad Magic Number

    Hey, I have Googled this one to death and have seen a few vague references to this problem, but nothing I can relate back to as a solution.
    I wonder if I need to tell the customer to reinstall the OS and, ultimately, the JRE. I'm just looking for a little guidance on what any of you may think. Am I missing a setting or something?
    The user is trying to download an applet with IE using 1.3.1_16 and with Firefox 1.0.7 using the Java Embedded Plug-in.
    Even though JRE 1.3.1_16, 1.4.2_09 and 1.5 are all installed and Firefox also has the JEP installed, Firefox still wants to use 1.3.1_16, as does IE. I'm guessing that the JEP didn't work.
    They can't use Safari, which does seem to be using 1.4.2_09, because their RSA ID won't authenticate through it.
    On my machine, when I run with the same OS, browsers, and Java 2 plug-ins, I can successfully load the applet.
    This is the error that the user gets in the java console is:
    Duplicate key in parameter table: code using the htmlAttribute.
    first: com.ibm.eNetwork.HOD.HostOnDemand
    second: com.ibm.eNetwork.HOD.HostOnDemand.class
    java.lang.ClassFormatError: com/ibm/eNetwork/HOD/HostOnDemand (Bad magic number) at java.lang.ClassLoader.defineClass0(NativeMethod)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:488)...
    Would you reinstall the OS and Java? or look at another setting? or look at the JEP installation again?
    Thank you.

    Without reading the full post (sorry for that), my guess is that the applet needed to be fetched through a proxy, and the proxy corrupts the applet because it is signed (Finjan is one that does that).
    The reason a proxy would do this is that, with the default settings of the Sun JRE, the user is asked the "do you trust" question, which could result in the user loading an applet that is allowed to do anything the programmer wants (viruses, spyware, nasty stuff).
    Try the following:
    1. Create an HTML file locally with the following content: right-click and save target as
    2. Open the page, right-click the link and save the file.
    3. Open the file with an unzip program (WinZip) and check if the content has changed; a quick programmatic version of this check is sketched below.
    4. If the content has changed, the proxy might have done this; contact the system administrator.
    It might also have been done by a firewall installed locally.
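    One way to do the check in step 3 programmatically is to look at the first bytes of the saved file. The sketch below is illustrative (the file name comes from the command line; nothing here is specific to the HOD applet):

      import java.io.DataInputStream;
      import java.io.FileInputStream;
      import java.io.IOException;

      // An intact .class file starts with 0xCAFEBABE and an intact .jar/.zip
      // archive starts with the bytes 'P' 'K'; anything else suggests the file
      // was altered in transit (for example by a proxy or a firewall).
      public class MagicNumberCheck {
          public static void main(String[] args) throws IOException {
              DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
              int magic = in.readInt();
              in.close();
              if (magic == 0xCAFEBABE) {
                  System.out.println("Looks like an intact class file");
              } else if ((magic >>> 16) == 0x504B) {        // 'P' 'K'
                  System.out.println("Looks like a jar/zip archive");
              } else {
                  System.out.println("Unexpected magic number: 0x"
                          + Integer.toHexString(magic));
              }
          }
      }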

  • Sybase Error 2601 Attempt to insert duplicate key row in object with unique

    RE: Sybase Error 2601 Attempt to insert duplicate key row in object with unique index.
    Hi Folks,
    I'm getting the following error whilst executing a stored procedure in Sybase.
    ERROR: Sybase Error 2601 Attempt to insert duplicate key row in object with unique index.
    I understand that duplicate values have been inserted into a column that has a unique constraint.
    I just can't figure out how to rectify the problem.
    Your help will be greatly appreciated!
    Many thanks in advance.

    If the value I'm trying to insert (using update) already exists in the unique-value field, then the DB refuses to update the field. If the value is different, it will update.

    Are you trying to insert or update in SQL (identified by the keyword INSERT or UPDATE respectively)?
    Even in the case of an UPDATE query, if you are going to update the values of some columns so that they violate the unique constraints, the update will not succeed and you will get the error message.

    Or is it: it tries to create a new row, but can't because there is another row with the same unique values. If this is the case, I am only trying to update and not create a new item.

    To put it in simpler words, if you have a set of values defining the uniqueness of a record, you cannot insert another record with the same set of unique values. Similarly, you cannot update an existing record by modifying that set so that it conflicts with another set of unique values which already exists in the database.
    Suppose there are two columns A and B defining the uniqueness of the record and you have only two records at the moment:
    A B
    ========
    1 1
    2 1
    If you try to insert a record with A = 1 and B = 1, it will fail because such a record already exists. You cannot violate uniqueness because the database has already been told that there will be only one record for any given combination of A and B.
    Similarly, if you try to update the second record from A = 2 to A = 1, the end result would be A = 1 and B = 1. There is already a record with that set of values, so this update would also violate uniqueness and will be disallowed. On the other hand, if you try to update B to some other value, say 3, there is no problem in doing so (a small JDBC sketch of this follows below).
    For convenience, you can imagine an UPDATE operation to be equivalent to DELETE + INSERT, though it doesn't necessarily work the same way internally.
    I hope I was clear enough.
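    A small JDBC sketch of the A/B example above; the table, constraint name, and connection URL are made up, and the behaviour is the same for Sybase (which typically surfaces error 2601 via SQLException.getErrorCode()) as for any other database with a unique index:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.SQLException;
      import java.sql.Statement;

      public class UniqueConstraintDemo {
          public static void main(String[] args) throws SQLException {
              // args[0] is a JDBC URL for whatever database/driver you use.
              Connection con = DriverManager.getConnection(args[0]);
              Statement st = con.createStatement();
              st.executeUpdate("CREATE TABLE t (a INT, b INT, CONSTRAINT u UNIQUE (a, b))");
              st.executeUpdate("INSERT INTO t VALUES (1, 1)");
              st.executeUpdate("INSERT INTO t VALUES (2, 1)");
              try {
                  // Would make the second row identical to the first -> rejected.
                  st.executeUpdate("UPDATE t SET a = 1 WHERE a = 2 AND b = 1");
              } catch (SQLException e) {
                  System.out.println("Rejected: " + e.getErrorCode() + " " + e.getMessage());
              }
              // Updating a column to a non-conflicting value succeeds.
              st.executeUpdate("UPDATE t SET b = 3 WHERE a = 2 AND b = 1");
              st.close();
              con.close();
          }
      }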
