Batch writing

My application currently has batch writing turned off. I'm implementing a new feature that requires creating and updating a large number of objects, so batch writing would be very beneficial. I'm using the latest Oracle JDBC driver, which supports batch writing.
Unfortunately, TopLink doesn't allow turning batch writing on/off on a per-UnitOfWork basis, so my only option is to turn it on globally by setting batch-writing and jdbc-batch-writing to true in sessions.xml. I'm wondering whether doing that will have any negative effects on pre-existing code that does not need batch writing. Are there any scenarios where using batch writing is not a good idea? Or should I just turn it on and not worry about it?
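For reference, the same settings can also be applied programmatically on the session login before the session connects; a minimal sketch using the DatabaseLogin setters that appear elsewhere on this page (the batch size value is an assumption, not from the original post):
// Enable batch writing globally on the session's login; equivalent to the
// batch-writing / jdbc-batch-writing elements in sessions.xml.
DatabaseLogin login = (DatabaseLogin) session.getLogin();
login.setUsesBatchWriting(true);      // turn batch writing on
login.setUsesJDBCBatchWriting(true);  // let the Oracle JDBC driver do the batching
login.setMaxBatchWritingSize(100);    // assumption: statements per JDBC batch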

I've come up with this code. How does it look?
uow.getEventManager().addListener(new SessionEventAdapter() {
    private DatabaseAccessor writeConnection;
    private DatabasePlatform bkupPlatform;
    public void preCommitUnitOfWork(SessionEvent event) {
        UnitOfWork uow = (UnitOfWork) event.getSession();
        ClientSession clientSession = (ClientSession) uow.getParent();
        // Force the write connection to be acquired so its platform can be swapped.
        clientSession.getParent().acquireClientConnection(clientSession);
        writeConnection = (DatabaseAccessor) clientSession.getWriteConnection();
        bkupPlatform = (DatabasePlatform) writeConnection.getPlatform();
        // Clone the platform and enable batch writing on the clone only.
        DatabasePlatform modifiedPlatform = (DatabasePlatform) bkupPlatform.clone();
        modifiedPlatform.setUsesBatchWriting(true);
        modifiedPlatform.setUsesJDBCBatchWriting(true);
        writeConnection.setDatasourcePlatform(modifiedPlatform);
    }
    public void postReleaseUnitOfWork(SessionEvent event) {
        // Restore the original platform once the unit of work is released.
        if (writeConnection != null) {
            writeConnection.setDatasourcePlatform(bkupPlatform);
        }
    }
});

Similar Messages

  • Private-owned deletes and batch writing test case

    Has someone dealt with the following scenario ?
    I have an order containing order items as a private-owned one-to-many relationship.
    When I delete many order lines, TopLink generates the delete commands for the order items and the orders. When using batch writing, the delete commands are generated in a way that makes the batch writing useless (each delete command is issued in its own batch), as the following log shows:
    [TopLink Finer]: 2008.03.03 05:57:10.343--ClientSession(22130853)--Connection(27081530)--Thread(Thread[main,5,main])--Begin batch statements
    [TopLink Fine]: 2008.03.03 05:57:10.343--ClientSession(22130853)--Connection(27081530)--Thread(Thread[main,5,main])--DELETE FROM TL_ORDER_ITEM WHERE (ORDER_ID = ?)
    [TopLink Fine]: 2008.03.03 05:57:10.343--ClientSession(22130853)--Connection(27081530)--Thread(Thread[main,5,main])--     bind => [1]
    [TopLink Finer]: 2008.03.03 05:57:10.343--ClientSession(22130853)--Connection(27081530)--Thread(Thread[main,5,main])--End Batch Statements
    [TopLink Finest]: 2008.03.03 05:57:10.359--UnitOfWork(8066625)--Thread(Thread[main,5,main])--Execute query DeleteObjectQuery( Customer Order #26 Date Ordered: java.util.GregorianCalendar[time=?,areFieldsSet=false,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Europe/Paris",offset=3600000,dstSavings=3600000,useDaylight=true,transitions=184,lastRule=java.util.SimpleTimeZone[id=Europe/Paris,offset=3600000,dstSavings=3600000,useDaylight=true,startYear=0,startMode=2,startMonth=2,startDay=-1,startDayOfWeek=1,startTime=3600000,startTimeMode=2,endMode=2,endMonth=9,endDay=-1,endDayOfWeek=1,endTime=3600000,endTimeMode=2]],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2004,MONTH=9,WEEK_OF_YEAR=43,WEEK_OF_MONTH=4,DAY_OF_MONTH=22,DAY_OF_YEAR=296,DAY_OF_WEEK=6,DAY_OF_WEEK_IN_MONTH=4,AM_PM=0,HOUR=0,HOUR_OF_DAY=0,MINUTE=0,SECOND=0,MILLISECOND=0,ZONE_OFFSET=3600000,DST_OFFSET=3600000] Order Total: 2550.0)
    [TopLink Finest]: 2008.03.03 05:57:10.359--UnitOfWork(8066625)--Thread(Thread[main,5,main])--Register the existing object Order Item: #5 Quantity: 1
    [TopLink Finer]: 2008.03.03 05:57:10.359--ClientSession(22130853)--Connection(27081530)--Thread(Thread[main,5,main])--Begin batch statements
    [TopLink Fine]: 2008.03.03 05:57:10.359--ClientSession(22130853)--Connection(27081530)--Thread(Thread[main,5,main])--DELETE FROM TL_CUSTOMER_ORDER WHERE (ORDER_ID = ?)
    [TopLink Fine]: 2008.03.03 05:57:10.359--ClientSession(22130853)--Connection(27081530)--Thread(Thread[main,5,main])--     bind => [1]
    [TopLink Finer]: 2008.03.03 05:57:10.359--ClientSession(22130853)--Connection(27081530)--Thread(Thread[main,5,main])--End Batch Statements
    If the test case does not define a private-owned relationship and all the deletes are done in the TopLink application (first the order items, then the orders), then the batch contains all the deletes.
    Any hint on getting batch writing and private-owned deletes to share the same batch chunk on TopLink 10.1.3.3?

    Ok, I'll try to post some incomplete code with the example of the problem in comments.
    class studenttest {
        // this is the messed-up part, all the way at the bottom of the class
        case 6:
            ts.write(NodeList);
            break;
    }
    public class TStream {
        private StudentNodeList NodeList = new StudentNodeList();
        // calls this method
        // had it as public void write(StudentNodeList NodeList)
        public void write(Student NodeList) {
            try {
                output = new PrintWriter("database.txt");
                for (int i = 0; i < 5; i++) {
                    // had it as output.print(NodeList.equals(LastName) + "\t");
                    output.print(NodeList.getFirstname() + "\t");
                    output.print(NodeList.getLastname() + "\t");
                    output.print(NodeList.getID() + "\t");
                    output.print(NodeList.getyear() + "\t");
                    output.print(NodeList.getgpa() + "\t");
                    output.println();
                }
            }
        }
    }

  • Sequencing problem with Batch Writing

    I'm using TopLink - 9.0.3.5 and I want to use the registerAllObjects method from the UnitOfWork instead of registering each individually to batch write records to the db. Here is a snippet of what I'm doing:
    Session session = ToplinkUtils.getSession();
    session.getLogin().useBatchWriting();
    session.getLogin().dontUseJDBCBatchWriting();
    session.getLogin().setSequencePreallocationSize(200);
    session.getLogin().setMaxBatchWritingSize(200);
    session.getLogin().bindAllParameters();
    session.getLogin().cacheAllStatements();
    session.getLogin().useNativeSequencing();
    UnitOfWork uow = ToplinkUtils.getActiveUnitOfWork(userId, ip);
    for (...) { // loop building the batch
        Notification dao = (Notification)
            ToplinkUtils.createObject(Notification.class.getName());
        dao.setName("someName");
        dao.setAddress("someAddress");
        allObjects.add(dao);
    }
    uow.registerAllObjects(allObjects);
    This is the error I'm getting:
    2007-03-06 15:28:40,482 DEBUG (11776:9:127.0.0.1) TOPLINK - JTS#beforeCompletion()
    2007-03-06 15:28:40,482 DEBUG (11776:9:127.0.0.1) TOPLINK - SELECT GMS_NOTIFICATION_SEQ.NEXTVAL FROM DUAL
    2007-03-06 15:28:40,497 DEBUG (11776:9:127.0.0.1) TOPLINK - INSERT INTO GMS_NOTIFICATION ...
    2007-03-06 15:28:40,716 DEBUG (11776:9:127.0.0.1) TOPLINK - EXCEPTION [TOPLINK-4002] (TopLink - 9.0.3.5 (Build 436)): oracle.toplink.exceptions.DatabaseException
    EXCEPTION DESCRIPTION: java.sql.SQLException: ORA-00001: unique constraint (GMSG2K.GMS_NOTIFICATION_PK) violated
    It appears that the next sequence number is acquired for the primary key and a record is added, but when TopLink tries to add the next record, either the sequence is not acquired or the same sequence number is being used.
    Do I need to set up a table in memory to acquire the sequences? Can anyone give me some guidance?
    thanks

    registerAllObjects() does not do anything special; it just calls registerObject() and does not affect batching.
    In 9.0.3, batch writing was not supported with parameter binding, which is why you are not seeing batching. This support was added in 9.0.4.
    In 9.0.4 you should use,
    Session session = ToplinkUtils.getSession();
    session.getLogin().useBatchWriting();
    session.getLogin().setSequencePreallocationSize(200);
    session.getLogin().setMaxBatchWritingSize(200);
    session.getLogin().bindAllParameters();
    session.getLogin().cacheAllStatements();
    session.getLogin().useNativeSequencing();
    where setMaxBatchWritingSize(200) means 200 statements.
    In 9.0.3 you can only use dynamic batch writing, which may not perform well. For dynamic batch writing, setMaxBatchWritingSize(200) means 200 characters (the size of the SQL buffer), so 200 is far too small; the default is 32,000.
    If you are concerned about performance, enabling sequence preallocation would also be beneficial.
    If you still get timeouts, you may consider splitting your inserts into smaller batches (100 per unit of work) instead of doing them all in a single transaction.
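    To make the chunking suggestion concrete, here is a rough sketch (it assumes a server session is at hand; the list name follows the original post):
    // Commit the inserts in chunks of 100 per UnitOfWork instead of one huge transaction.
    int chunkSize = 100;
    for (int start = 0; start < allObjects.size(); start += chunkSize) {
        UnitOfWork chunkUow = session.acquireUnitOfWork();
        chunkUow.registerAllObjects(
            allObjects.subList(start, Math.min(start + chunkSize, allObjects.size())));
        chunkUow.commit(); // each chunk is its own short transaction
    }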

  • JPA Batch Writing

    Hi All,
    In my project I have used JPA with EclipseLink, and I want to know whether it is possible to use batch writing with JPA and a named query.
    If it is possible, please give me examples.
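    Batch writing in EclipseLink is configured on the persistence unit rather than on individual (named) queries; it changes how the provider groups its generated INSERT/UPDATE/DELETE statements at commit. A minimal sketch using the same property keys that appear in the persistence.xml snippets elsewhere on this page ("my.unit" is a placeholder unit name):
    // Enable JDBC batch writing for a persistence unit; the same keys can go
    // in persistence.xml as <property> elements.
    Map<String, Object> props = new HashMap<String, Object>();
    props.put("eclipselink.jdbc.batch-writing", "JDBC");     // or "Oracle-JDBC" on Oracle
    props.put("eclipselink.jdbc.batch-writing.size", "100"); // statements per batch
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("my.unit", props);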


  • JPA Batch writing

    Hi all
    I have used JPA and named queries in my project.
    I want to know whether it is possible to use batch writing with JPA and named queries.
    If it is possible, please give me some examples.

    You may want to ask in a more general Java question forum--this one's for issues specific to solving your problem in Eclipse.

  • CRIO9074 RT jitter every 10 sec. when writing data to SD card through 9802 module

    Hi, 
    I am trying to use a cRIO 9074 to collect data from different sensors at 50 Hz and log the data to an SD card plugged into a 9802 module. I use FPGA/scan-interface hybrid mode because some sensors are based on analog signals and others (e.g., IMUs) are based on RS232. This structure worked well: I used to transfer all the data to a host VI on my desktop and log it there.
    Recently, since I have more and more sensors, I decided to log all the data locally to the SD card on the cRIO 9074. However, I have a problem.
    I found that my data are not continuous: they always miss one second after every other 10 seconds. So I tried to insert a FIFO to buffer the data between two timed loops, with the high-priority loop in charge of collecting data and the low-priority one in charge of writing. But the problem still exists. Actually, the problem has nothing to do with the FIFO, because 50 Hz is slow and I have validated that the 9074 can collect and write all the data within 10 ms (100 Hz). Even without the FIFO, the program should work.
    Finally, I found that both iteration variables "i" in the two timed loops periodically stop for one second after every 10 seconds (I checked them through the front panel and a probe). At the same time, the CPU usage increases dramatically, and after one second it goes back to its normal level. That means the RT target jitters every 10 seconds and cannot work at that instant. If I completely delete the SD writing blocks, the problem is gone and the system works very smoothly. So most likely the problem is due to the SD card module.
    I tried different things, i.e., changing the loop priority, the clock resources, and the size of the FIFOs. Nothing works. I even reduced the system frequency by a factor of two to 25 Hz, and even by a factor of five to 10 Hz. The problem still exists.
    Attached are two CPU usage checks at 50 Hz and 25 Hz. Usually it is below 80% at 50 Hz, so the capability of the cRIO 9074 is not a problem at all.
    I also attach my RT VI, which is messy, because I have almost gone crazy these days due to this problem. Does anybody have a chance to check whether there is anything wrong in my VI? Or do I need to change the configuration of the cRIO 9074 or update the RT engine? It seems like a hardware problem.
    Has anybody faced the same problem before?
    Yizhai
    Rutgers University

    Hi Zach-H,
    Thanks. Actually, my problem was already solved after I contacted an NI engineer online.
    I shouldn't use a timed loop to write to the SD card, because SD card batch writing is time-consuming and its priority can't be high. A while loop is good for SD card writing.
    Yizhai

  • Re: Batch updates & deletes

    John,
    I suspect that you could accomplish what you want like this:
    Pass the values needed in your WHERE clause to the SQL method, and use them
    as host variables in the SQL statement. Something like this:
    .....deleteTableBlah(p_One, p_Two, ...., p_N) ;
    deleteTableBlah(p_One: Textdata, p_Two: TextData, ...., p_N: Textdata)
    SQL
    delete from tableBlah
    where ColumnOne = :p_One
    and ColumnTwo = :p_Two
    and ColumnN = :p_N .... and so on
    If there is a large number of values to be passed you could pass an object
    and parse the values in the SQL method.
    I hope this helps.
    /\/\ark /\/ichols
    Forte Technical Consultant
    [email protected] on 03/16/99 11:12:03 PM
    Please respond to [email protected]


  • Batch Updates with Toplink/JPA

    Hi All,
    I am new to JPA/TopLink and I wonder whether there is a way to do batch updates/insertions using JPA. If yes, please tell me how to do it.
    My requirement is this: I need to fetch n records from the database, compare them with the records from a file, and insert/update/delete as required.
    I don't want to do it record by record but as a batch. Is there a way that I can do batch updates/inserts/deletes?
    Any suggestion would be appreciated.
    Thank you.

    Hello,
    I'm not sure how you are going to do the comparison as a batch and not record by record. A JPA query will give you objects back which you can use to compare and make changes to as necessary. When you commit, the JPA provider will commit those changes to the database for you. TopLink/EclipseLink has options to efficiently commit those changes, even combining them into batches where possible. See http://wiki.eclipse.org/Optimizing_the_EclipseLink_Application_(ELUG)#How_to_Use_Batch_Writing_for_Optimization for details on using batch writing in EclipseLink.
    Best Regards,
    Chris
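    As a rough illustration of the compare-and-commit pattern Chris describes (every entity, field, and helper name here is hypothetical):
    // Load existing rows, reconcile them against the file records, and let the
    // commit flush the changes; with batch writing enabled, the provider
    // groups the resulting SQL into batches.
    EntityManager em = emf.createEntityManager();
    em.getTransaction().begin();
    for (FileRecord rec : fileRecords) {              // hypothetical parsed file records
        MyEntity existing = em.find(MyEntity.class, rec.getId());
        if (existing == null) {
            em.persist(rec.toEntity());               // new row: INSERT at commit
        } else {
            existing.applyChanges(rec);               // managed entity: UPDATE at commit
        }
    }
    em.getTransaction().commit();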

  • Help with writing optimization

    In my application, I have to create 3600 records. Here is my code (uow is my UnitOfWork):
    for (int k = 1; k <= 10; k++) {
        for (int i = 1; i <= 12; i++) {
            for (int j = 1; j <= 30; j++) {
                MyRecord clone = (MyRecord) uow.registerObject(new MyRecord());
                clone.setField1(k);
                clone.setField2(i);
                clone.setField3(j); // the original post had setField1(j) here, presumably a typo
            }
        }
    }
    uow.commit();
    Inserting the 3600 records into my database takes between 1 and 1.5 minutes. That seems very long.
    What can I do to optimize this? I know batch writing exists, but I do not know how it works.
    Help please!

    I'm no SQL expert, but if your insert is really this straightforward, you could probably just write an ad hoc SQL statement that does all this directly.
    For the sake of argument, I assume there is more to your business domain. There is no bulk-update/insert feature in TopLink (yet), so your only real option is to turn on batch writing, which should help minimize the number of database round trips in your app (see the sketch after the links below).
    See these threads for more info on batch writing:
    Batch Writing
    Re: Performance Optimization for large batch updates
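    To illustrate the batch-writing option from the reply above: it is a login-level switch, after which a single commit writes the rows in batches (a sketch only, using the DatabaseLogin methods quoted elsewhere on this page; it assumes the login is still configurable, i.e. the session has not connected yet):
    // Turn on batch writing, then commit the unit of work once.
    DatabaseLogin login = (DatabaseLogin) session.getLogin();
    login.setUsesBatchWriting(true);     // group the INSERTs into batches
    login.setUsesJDBCBatchWriting(true); // use the JDBC driver's batching API
    UnitOfWork uow = session.acquireUnitOfWork();
    // ...registerObject() calls as in the loop above...
    uow.commit(); // one commit; TopLink sends the inserts in batches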

  • Batch processing - How to Bates Number?

    I have an Adobe file that is apparently created from a batch sequence. The file contains several hundred emails, and some of the emails have attachments. Is it possible to create a new file with all of the emails and their attachments opened so that I can Bates number each page? If so, where can I find instructions?


  • How to control number of records in batch insert by eclipselink

    Hi,
    We are using EclipseLink (2.2) to persist objects, and we use the following configuration:
    <property name="eclipselink.jdbc.batch-writing" value="Oracle-JDBC" />
    <property name="eclipselink.jdbc.batch-writing.size" value="5" />
    However, the number of records inserted is much more than 5 (I have seen 5000 records being inserted). How can we control the number of records inserted at once?
    Thanks.

    Binding can be configured using the "eclipselink.jdbc.bind-parameters" property, and is on by default - it should be on for JDBC batch writing.
    Batch writing defaults to 100 statements, so I am not sure why it would include all statements in one batch unless it is not batching at all. If you set the logs to finest or all, it should print out the values it is using for each property, and also show the SQL and statements it is executing. Can you turn on logging and post portions of the logs, particularly the part showing the transaction in question (though maybe only 6 lines of consecutive inserts)?
    Logging is controlled through the "eclipselink.logging.level" property.
    Best Regards,
    Chris
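    For reference, a minimal sketch of enabling that logging next to the batch settings (property keys as named above; the unit name is a placeholder):
    // Log at FINEST to see the property values and the SQL/batches being executed.
    Map<String, Object> props = new HashMap<String, Object>();
    props.put("eclipselink.logging.level", "FINEST");
    props.put("eclipselink.jdbc.batch-writing", "Oracle-JDBC");
    props.put("eclipselink.jdbc.batch-writing.size", "5");
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("my.unit", props);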

  • NullPointerException was thrown while extracting a value from an instance

    Dear all,
    We have got a null pointer exception during the commit call. According to the stack trace shown below, it seems to be a problem with the instance variable accessor. As we know, TopLink uses reflection to access the instance variable value. I am curious whether the exception we got is related to any class-loader setting. Thanks a lot.
    =================
    Stack Trace:
    Exception [TOPLINK-69] (OracleAS TopLink - 10g (9.0.4.5) (Build 040930)): oracle.toplink.exceptions.DescriptorException
    Exception Description: A NullPointerException was thrown while extracting a value from the instance variable [versionID] in the object [com.oocl.csc.frm.pom.test.model.Company].
    Internal Exception: java.lang.NullPointerException
    Mapping: oracle.toplink.mappings.DirectToFieldMapping[versionID-->COMPANY.VERSIONID]
    Descriptor: Descriptor(com.oocl.csc.frm.pom.test.model.Company --> [DatabaseTable(COMPANY)])
    at oracle.toplink.exceptions.DescriptorException.nullPointerWhileGettingValueThruInstanceVariableAccessor(DescriptorException.java:1022)
    at oracle.toplink.internal.descriptors.InstanceVariableAttributeAccessor.getAttributeValueFromObject(InstanceVariableAttributeAccessor.java:68)
    at oracle.toplink.mappings.DatabaseMapping.getAttributeValueFromObject(DatabaseMapping.java:304)
    at oracle.toplink.mappings.DirectToFieldMapping.iterate(DirectToFieldMapping.java:355)
    at oracle.toplink.internal.descriptors.ObjectBuilder.iterate(ObjectBuilder.java:1438)
    at oracle.toplink.internal.descriptors.DescriptorIterator.iterateReferenceObjects(DescriptorIterator.java:258)
    at oracle.toplink.internal.descriptors.DescriptorIterator.startIterationOn(DescriptorIterator.java:407)
    at oracle.toplink.publicinterface.UnitOfWork.discoverUnregisteredNewObjects(UnitOfWork.java:1368)
    at oracle.toplink.publicinterface.UnitOfWork.discoverAllUnregisteredNewObjects(UnitOfWork.java:1290)
    at oracle.toplink.publicinterface.UnitOfWork.assignSequenceNumbers(UnitOfWork.java:326)
    at oracle.toplink.publicinterface.UnitOfWork.collectAndPrepareObjectsForCommit(UnitOfWork.java:664)
    at oracle.toplink.publicinterface.UnitOfWork.commitToDatabaseWithChangeSet(UnitOfWork.java:1130)
    at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(UnitOfWork.java:956)
    at oracle.toplink.publicinterface.UnitOfWork.commit(UnitOfWork.java:771)
    ====================
    ====================
    Mapping Description:
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <project>
    <project-name>POM-TEST</project-name>
    <login>
    <database-login>
    <platform>oracle.toplink.oraclespecific.Oracle9Platform</platform>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <connection-url>jdbc:oracle:thin:@sjcngdb2:1521:cdrfrmdv</connection-url>
    <user-name>pomowner</user-name>
    <password>BB742416276274A47F360CCDD2711570</password>
    <uses-native-sequencing>false</uses-native-sequencing>
    <sequence-preallocation-size>50</sequence-preallocation-size>
    <sequence-table>SEQUENCE</sequence-table>
    <sequence-name-field>SEQ_NAME</sequence-name-field>
    <sequence-counter-field>SEQ_COUNT</sequence-counter-field>
    <should-bind-all-parameters>false</should-bind-all-parameters>
    <should-cache-all-statements>false</should-cache-all-statements>
    <uses-byte-array-binding>true</uses-byte-array-binding>
    <uses-string-binding>false</uses-string-binding>
    <uses-streams-for-binding>false</uses-streams-for-binding>
    <should-force-field-names-to-upper-case>false</should-force-field-names-to-upper-case>
    <should-optimize-data-conversion>true</should-optimize-data-conversion>
    <should-trim-strings>true</should-trim-strings>
    <uses-batch-writing>false</uses-batch-writing>
    <uses-jdbc-batch-writing>true</uses-jdbc-batch-writing>
    <uses-external-connection-pooling>false</uses-external-connection-pooling>
    <uses-external-transaction-controller>false</uses-external-transaction-controller>
    <type>oracle.toplink.sessions.DatabaseLogin</type>
    </database-login>
    </login>
    <descriptors>
    <descriptor>
    <java-class>com.oocl.csc.frm.pom.test.model.Company</java-class>
    <tables>
    <table>COMPANY</table>
    </tables>
    <primary-key-fields>
    <field>COMPANY.COMPANY_KEY</field>
    </primary-key-fields>
    <descriptor-type-value>Normal</descriptor-type-value>
    <identity-map-class>oracle.toplink.internal.identitymaps.SoftCacheWeakIdentityMap</identity-map-class>
    <remote-identity-map-class>oracle.toplink.internal.identitymaps.SoftCacheWeakIdentityMap</remote-identity-map-class>
    <identity-map-size>100</identity-map-size>
    <remote-identity-map-size>100</remote-identity-map-size>
    <should-always-refresh-cache>false</should-always-refresh-cache>
    <should-always-refresh-cache-on-remote>false</should-always-refresh-cache-on-remote>
    <should-only-refresh-cache-if-newer-version>false</should-only-refresh-cache-if-newer-version>
    <should-disable-cache-hits>false</should-disable-cache-hits>
    <should-disable-cache-hits-on-remote>false</should-disable-cache-hits-on-remote>
    <alias>Company</alias>
    <copy-policy>
    <descriptor-copy-policy>
    <type>oracle.toplink.internal.descriptors.CopyPolicy</type>
    </descriptor-copy-policy>
    </copy-policy>
    <instantiation-policy>
    <descriptor-instantiation-policy>
    <type>oracle.toplink.internal.descriptors.InstantiationPolicy</type>
    </descriptor-instantiation-policy>
    </instantiation-policy>
    <query-manager>
    <descriptor-query-manager>
    <existence-check>Check cache</existence-check>
    </descriptor-query-manager>
    </query-manager>
    <event-manager>
    <descriptor-event-manager empty-aggregate="true"/>
    </event-manager>
    <mappings>
    <database-mapping>
    <attribute-name>companyKey</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.COMPANY_KEY</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>contact</attribute-name>
    <read-only>false</read-only>
    <reference-class>com.oocl.csc.frm.pom.test.model.Contact</reference-class>
    <is-private-owned>false</is-private-owned>
    <uses-batch-reading>false</uses-batch-reading>
    <indirection-policy>
    <mapping-indirection-policy>
    <type>oracle.toplink.internal.indirection.NoIndirectionPolicy</type>
    </mapping-indirection-policy>
    </indirection-policy>
    <uses-joining>false</uses-joining>
    <foreign-key-fields>
    <field>COMPANY.CONTACT_OID</field>
    </foreign-key-fields>
    <source-to-target-key-field-associations>
    <association>
    <association-key>COMPANY.CONTACT_OID</association-key>
    <association-value>CONTACT.POID</association-value>
    </association>
    </source-to-target-key-field-associations>
    <type>oracle.toplink.mappings.OneToOneMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>createdBy</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.CREATED_BY</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>creationClientID</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.CREATION_CLIENTID</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>creationTime</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.CREATION_TIME</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>employeeList</attribute-name>
    <read-only>false</read-only>
    <reference-class>com.oocl.csc.frm.pom.test.model.Person</reference-class>
    <is-private-owned>false</is-private-owned>
    <uses-batch-reading>false</uses-batch-reading>
    <indirection-policy>
    <mapping-indirection-policy>
    <type>oracle.toplink.internal.indirection.NoIndirectionPolicy</type>
    </mapping-indirection-policy>
    </indirection-policy>
    <container-policy>
    <mapping-container-policy>
    <container-class>com.oocl.csc.frm.pom.impl.FWPersistentArrayList</container-class>
    <type>oracle.toplink.internal.queryframework.ListContainerPolicy</type>
    </mapping-container-policy>
    </container-policy>
    <relation-table>EMPLOYEMENT</relation-table>
    <source-key-fields>
    <field>COMPANY.COMPANY_KEY</field>
    </source-key-fields>
    <source-relation-key-fields>
    <field>EMPLOYEMENT.EMPLOYER_KEY</field>
    </source-relation-key-fields>
    <target-key-fields>
    <field>PERSON.POID</field>
    </target-key-fields>
    <target-relation-key-fields>
    <field>EMPLOYEMENT.EMPLOYEE_ID</field>
    </target-relation-key-fields>
    <type>oracle.toplink.mappings.ManyToManyMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>lastUpdateClientID</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.LAST_UPDATE_CLIENTID</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>lastUpdatedBy</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.LAST_UPDATED_BY</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>lastUpdateTime</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.LAST_UPDATE_TIME</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>name</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.NAME</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>partner</attribute-name>
    <read-only>false</read-only>
    <reference-class>com.oocl.csc.frm.pom.test.model.Company</reference-class>
    <is-private-owned>false</is-private-owned>
    <uses-batch-reading>false</uses-batch-reading>
    <indirection-policy>
    <mapping-indirection-policy>
    <type>oracle.toplink.internal.indirection.NoIndirectionPolicy</type>
    </mapping-indirection-policy>
    </indirection-policy>
    <uses-joining>false</uses-joining>
    <foreign-key-fields>
    <field>COMPANY.PARTNER</field>
    </foreign-key-fields>
    <source-to-target-key-field-associations>
    <association>
    <association-key>COMPANY.PARTNER</association-key>
    <association-value>COMPANY.COMPANY_KEY</association-value>
    </association>
    </source-to-target-key-field-associations>
    <type>oracle.toplink.mappings.OneToOneMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>persistentCtxt</attribute-name>
    <read-only>false</read-only>
    <reference-class>com.oocl.csc.frm.pom.impl.FWPOMPersistentContext</reference-class>
    <is-null-allowed>false</is-null-allowed>
    <aggregate-to-source-field-name-associations>
    <association>
    <association-key>OWNERID</association-key>
    <association-value>COMPANY.OWNERID</association-value>
    </association>
    <association>
    <association-key>ROOTID</association-key>
    <association-value>COMPANY.ROOTID</association-value>
    </association>
    </aggregate-to-source-field-name-associations>
    <type>oracle.toplink.mappings.AggregateObjectMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>poid</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.POID</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>version</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.VERSION</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    <database-mapping>
    <attribute-name>versionID</attribute-name>
    <read-only>false</read-only>
    <field-name>COMPANY.VERSIONID</field-name>
    <type>oracle.toplink.mappings.DirectToFieldMapping</type>
    </database-mapping>
    </mappings>
    <type>oracle.toplink.publicinterface.Descriptor</type>
    </descriptor>
    </descriptors>
    </project>
    ====================
    ====================
    Session Creation Method:
    SessionBroker broker = (SessionBroker) manager.getSession(brokerName, Thread.currentThread().getContextClassLoader());
    broker.getLogin().getPlatform().getConversionManager().setLoader(Thread.currentThread().getContextClassLoader());
    ====================
    ===================
    Class Hierarchy:
    Object extends> ..xxx.. extends> FWObject extends> Company
    The problematic attribute -- versionID -- is defined at "FWObject" level.
    ===================
    ===================
    Environment Configuration:
    Application Server version: 10.1.2
    TopLink version : 9.0.4.5
    TopLink classpath: specified at container level
    FWObject classpath: specified at container level
    Company classpath: specified at application level
    ===================
    Thanks and regards,
    William

    Dear All,
    We have loaded toplink.jar at the container level instead of the application level. We don't know whether that is a possible source of error. Moreover, what is the purpose of loading antlr.jar? What is this jar for?
    Thanks and regards,
    William

  • Org.xml.sax.SAXParseException in sessions.xml

    Hello,
    Recently I migrated a 10.1.3.4 project to 11.1.1.3 and then to 11.1.2.4. When I deploy the project to the IntegratedWeblogicServer, org.xml.sax.SAXParseException exceptions are thrown regarding elements in the sessions.xml.
    session.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <toplink-sessions version="11g Release 1 (11.1.1.5.0)" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <session xsi:type="server-session">
    <name>default</name>
    <primary-project xsi:type="xml">META-INF/kiMap.xml</primary-project>
    <login xsi:type="database-login">
    <platform-class>oracle.toplink.platform.database.oracle.Oracle11Platform</platform-class>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <datasource>jdbc/AlfaDS</datasource>
    <bind-all-parameters>false</bind-all-parameters>
    <byte-array-binding>false</byte-array-binding>
    <optimize-data-conversion>false</optimize-data-conversion>
    <trim-strings>false</trim-strings>
    <jdbc-batch-writing>false</jdbc-batch-writing>
    </login>
    </session>
    </toplink-sessions>
    Exceptions
    org.xml.sax.SAXParseException: <Line 9, Column 22>: XML-24534: (Fout) Element 'datasource' is niet verwacht.
    org.xml.sax.SAXParseException: <Line 10, Column 31>: XML-24534: (Fout) Element 'bind-all-parameters' is niet verwacht.
    org.xml.sax.SAXParseException: <Line 11, Column 30>: XML-24534: (Fout) Element 'byte-array-binding' is niet verwacht.
    org.xml.sax.SAXParseException: <Line 12, Column 36>: XML-24534: (Fout) Element 'optimize-data-conversion' is niet verwacht.
    org.xml.sax.SAXParseException: <Line 13, Column 24>: XML-24534: (Fout) Element 'trim-strings' is niet verwacht.
    org.xml.sax.SAXParseException: <Line 14, Column 30>: XML-24534: (Fout) Element 'jdbc-batch-writing' is niet verwacht.
    org.xml.sax.SAXParseException: <Line 15, Column 15>: XML-24521: (Fout) Element is niet voltooid: 'login'
    Translation
    is niet verwacht = not expected
    is niet voltooid = not complete
    Please help me with this configuration.
    With kind regards
    Martin
    Edited by: Martin Schaap on May 17, 2013 2:52 AM
    Edited by: Martin Schaap on May 20, 2013 10:31 PM

    In the sessions.xml schema it is a choice between driver-class/url and datasource, so you need to remove the driver-class tag since you are using a datasource. Replace
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <datasource>jdbc/AlfaDS</datasource>
    with
    <datasource>jdbc/AlfaDS</datasource>

  • Doubts on nonxa oracle datasources in Weblogic JTA transaction

    I am doing some studying on XA transaction handling in WebLogic 10.3.6. I read a lot of materials on the web saying that you can't enlist more than one non-XA datasource inside a single XA transaction, so I am doing a simple test: trying to update one record in one Oracle database and insert one record in another.
    The test code is below:
    @Stateless(mappedName = "nativeQueryTest")
    @TransactionManagement(TransactionManagementType.CONTAINER)
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public class DaoEjb implements .... {
        @PersistenceContext(unitName="nonxa.unit")
        private EntityManager nonXAPC;
        @PersistenceContext(unitName="another.nonxa.unit")
        private EntityManager anotherNonXAPC;
        @Override
        public void doUpdateWithNonXaDss() {
            Employee l_entity = nonXAPC.find(Employee.class, "tom");
            l_entity.setAge(new Random().nextInt());
            Department l_dep = new Department();
            l_dep.setName("dept" + new Random().nextInt());
            l_dep.setEmployeeNum(new Random().nextInt());
            anotherNonXAPC.persist(l_dep);
        }
    }
    The persistence unit definitions are:
         <persistence-unit name="nonxa.unit" transaction-type="JTA">
              <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
              <jta-data-source>nonxa.ds</jta-data-source>
              <class>entity.Employee</class>
              <exclude-unlisted-classes>true</exclude-unlisted-classes>
              <properties>
                   <property name="eclipselink.target-database" value="Oracle" />
    <property name="eclipselink.jdbc.batch-writing" value="Oracle-JDBC" />
    <property name="eclipselink.target-server" value="WebLogic_10" />
                   <property name="eclipselink.logging.parameters" value="true" />
                   <property name="eclipselink.logging.logger" value="ServerLogger" />
              </properties>
         </persistence-unit>
         <persistence-unit name="another.nonxa.unit" transaction-type="JTA">
              <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
              <jta-data-source>another.nonxa.ds</jta-data-source>
              <class>entity.Department</class>
              <exclude-unlisted-classes>true</exclude-unlisted-classes>
              <properties>
                   <property name="eclipselink.target-database" value="Oracle" />
                   <property name="eclipselink.jdbc.batch-writing" value="Oracle-JDBC" />
                   <property name="eclipselink.target-server" value="WebLogic_10" />
                   <property name="eclipselink.logging.parameters" value="true" />
                   <property name="eclipselink.logging.logger" value="ServerLogger" />
              </properties>
         </persistence-unit>
    The two data sources point to two different Oracle databases on two different physical servers. I use "oracle.jdbc.OracleDriver" as the driver, and I also made sure that the "Support Global Transactions" option is unselected.
    I used a simple client to invoke the EJB and saw the method complete without any error: the first record is updated and the second record is inserted! Therefore I am really confused:
    1) Is the JTA transaction in my small EJB XA or not?
    2) If it's XA, why can I use two non-XA datasources inside this XA transaction?

    Why do you think this should fail? Looks to me like you have two isolated transactions going on that have no relation to each other.

  • One to one Mapping Issue

    TopLink Version: 10.1.3.5.0.090715
    JBOSS: 5.1.0
    JDK: 1.6.0.17
    Database: Microsoft Sql Server 2005
    Problem Description: I have a table (TestTable1). This table has a PK that is a number from a sequence table, as well as a 'SystemAssignedKey' that uses the MS SQL Server Identity generator. I can save this object by itself with no problem.
    My Java object for TestTable1 would look like:
    Integer systemAssignedKey; --> MS Identity
    Integer tableGeneratedID; --> generated through a key table
    TestTable2 testTable2; --> object with foreign key to the above values.
    I have a second table that has 2 foreign keys to the table described above. When I save a new TestTable1 object, I also want to save this object. Since I do not have the keys until the transaction commits, I wanted to utilize the OneToOneMapping.
    I used the example 3-10 in the following document:
    http://download.oracle.com/docs/cd/B10464_01/web.904/b10313.pdf
    When I attempt to save the TestTable1 object, the FKs in the TestTable2 object are all NULL at the time of insert, causing an error.
    First, I am curious whether this is possible using the MS Identity key. Secondly, can you help identify whether I have an issue with my mappings?
    TestTable1:
    //descripter and set up code left out...
    OneToOneMapping testMap= new OneToOneMapping();
    testMap.setAttributeName("TestTable2");
    testMap.setReferenceClass(com.xxx.TestTable2.class);
    testMap.useBatchReading();
    testMap.setCascadeAll(true);
    testMap.dontUseIndirection();
    testMap.addTargetForeignKeyFieldName("TestTable2.ID1", "TestTable1.systemAssignedKey");
    testMap.addTargetForeignKeyFieldName("TestTable2.ID2", "TestTable1.tableGeneratedID");
    testMap.useJoining();
    descriptor.addMapping(testMap);
    In the TestTable2 table I have the following. Based on this mapping setup, I would expect the values to be populated at commit time:
    OneToOneMapping testMap2= new OneToOneMapping();
    testMap2.setAttributeName("TestTable1");
    testMap2.setReferenceClass(com.rbs.common.etspii.impl.LESLegalEntityImpl.class);
    testMap2.useBatchReading();
    testMap2.setCascadeAll(true);
    testMap2.dontUseIndirection();
    testMap2.addForeignKeyFieldName("TestTable2.ID1", "TestTable1.systemAssignedKey");
    testMap2.addForeignKeyFieldName("TestTable2.ID2", "TestTable1.tableGeneratedID");
    testMap2.useJoining();
    descriptor.addMapping(testMap2);
    With this configuration, TestTable1 saves with the correct values (the IDENTITY and sequence number); however, the TestTable2 object inserts NULL values. I would expect those to be populated with the values from TestTable1.
    Any help would be greatly appreciated!
    thanks,
    Jeremy

    Chris,
    Thanks for the quick response.
    Here is how I have the sequencing set up for one of the IDs. The other, the Identity on MS SQL Server, is done at the table level.
    public void applyLogin() {
        DatabaseLogin login = new DatabaseLogin();
        login.usePlatform(new SQLServerPlatform());
        login.setDriverClassName("net.sourceforge.jtds.jdbc.Driver");
        login.setUserName("toplink_user");
        login.setEncryptedPassword("");
        // Configuration properties.
        login.setUsesNativeSequencing(false);
        login.setSequencePreallocationSize(2000);
        login.setSequenceTableName("LESKeyAssigner");
        login.setSequenceNameFieldName("attributeName");
        login.setSequenceCounterFieldName("nextNumber");
        login.setShouldBindAllParameters(false);
        login.setShouldCacheAllStatements(false);
        login.setUsesByteArrayBinding(true);
        login.setUsesStringBinding(false);
        if (login.shouldUseByteArrayBinding()) { // Can only be used with binding.
            login.setUsesStreamsForBinding(false);
        }
        login.setShouldForceFieldNamesToUpperCase(false);
        login.setShouldOptimizeDataConversion(true);
        login.setShouldTrimStrings(true);
        login.setUsesBatchWriting(false);
        if (login.shouldUseBatchWriting()) { // Can only be used with batch writing.
            login.setUsesJDBCBatchWriting(true);
        }
        login.setUsesExternalConnectionPooling(false);
        login.setUsesExternalTransactionController(false);
        setLogin(login);
    }
    Sequencing for TestTable1
              Descriptor descriptor = new Descriptor();
              descriptor.setJavaClass(com.xxxl.TestTable1.class);
              descriptor.addTableName("dbo.TestTable1");
              descriptor.addPrimaryKeyFieldName("TestTable1.tableGeneratedID");
              // Descriptor properties.
              descriptor.useSoftCacheWeakIdentityMap();
              descriptor.setIdentityMapSize(100);
              descriptor.useRemoteSoftCacheWeakIdentityMap();
              descriptor.setRemoteIdentityMapSize(100);
              descriptor.setAlias("LESLegalEntity");
              descriptor.setSequenceNumberFieldName("TestTable1.tableGeneratedID");
              descriptor.setSequenceNumberName("legalEntityNumber");
    So, as you will notice, I have two IDs, one of which is done via a sequence table and the other via an MS Identity column.
    Now, what I have done is get the assigned key from the table prior to the save and set it on the object. So this value is actually populated in the TestTable1 object prior to the submit and commit. However, when it is committed, both IDs are NULL for the TestTable2 object. This makes me think I have not set up the mapping correctly, or I am not using it as it is supposed to be used.
    Thanks for the pointer; I will make that change.
    Jeremy
