Regarding Partial Commit

Can ODI do a partial commit? Suppose some rows error out in a large batch of data; can ODI commit the valid rows while skipping the erroneous ones? If this can be done, how is it achieved?
This feature would be especially helpful in a batch run where the entire transaction gets rolled back because of just a few erroneous rows.

You can do this using the flow control facilities in your interface.
You can then specify a commit or a rollback depending on the number of errors.
The invalid data is loaded into an error table and can be recycled.
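The flow described above (commit the valid rows, divert the failures into an error table for later recycling) can be sketched outside ODI as well. Below is a minimal Python/SQLite sketch with made-up table and column names; it only illustrates the pattern that ODI's error tables implement, not ODI itself:

```python
import sqlite3

def load_with_error_table(rows):
    # Load a batch row by row: valid rows are committed to the target,
    # invalid rows are diverted to an error table for later recycling.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE target (id INTEGER NOT NULL, name TEXT)")
    con.execute("CREATE TABLE err_target (id TEXT, name TEXT, err TEXT)")
    for raw_id, name in rows:
        try:
            if not str(raw_id).isdigit():      # the "constraint" being checked
                raise ValueError("id is not numeric")
            con.execute("INSERT INTO target VALUES (?, ?)", (int(raw_id), name))
        except ValueError as exc:
            con.execute("INSERT INTO err_target VALUES (?, ?, ?)",
                        (str(raw_id), name, str(exc)))
    con.commit()   # partial commit: good rows land, bad rows wait in err_target
    return con

con = load_with_error_table([("1", "ok"), ("x", "bad"), ("3", "ok")])
good = con.execute("SELECT COUNT(*) FROM target").fetchone()[0]
bad = con.execute("SELECT COUNT(*) FROM err_target").fetchone()[0]
```

After a run, the error table can be corrected and re-fed into the load, which is exactly the recycling step ODI automates.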

Similar Messages

  • SAP FS-CD - VPVA Partial commit issue

    When we run VPVA with "start current run", dunning is escalated, and this can be viewed in the dunning history in transaction VYM10. We can also view the current dunning level in transaction FPL9 by choosing the menu path below:
    FPL9 -> SETTINGS -> ADDITIONAL FIELD -> SHOW -> OK.
    The issue is that after running VPVA with "start current run" we can see the escalation only in VYM10; it is not reflected in the FPL9 additional items. The dunning level still shows as 00.
    How can we avoid this partial commit issue? FS-CD experts, please advise.

    If FS-CD items contain multiple business areas, there is a possibility that the business area in FI gets populated with a blank space.
    If this issue occurs with all postings to FI where the business area in the BSEG table (or, in the case of the new GL, the RBUSA field in table FAGLFLEXT for totals and FAGLFLEXA for line items) is not populated, there may be reconciliation issues depending on the GL (new or old) in use. If you are using the new GL, check whether the document splitting functionality is activated (mySAP ERP, i.e., ECC 6.00 with support pack SAPKH6001 and above); you will have to maintain Business Area as a splitting rule. Additionally, refer to note 990612 and check function module FI_DOCUMENT_POST.

  • Partial Commit in ODI

    Can ODI do a partial commit of a large batch of data, skipping the erroneous rows? If so, how is this achieved? It would be very helpful, especially in a batch transaction, if instead of rolling back the entire batch because of erroneous records, at least the correct records could be committed to the target.

    Hi,
    Adding some more points:
    As I suggested earlier, you can use a CKM to capture the error records. Using a CKM is also a permanent solution for the data flow.
    Below are the steps for CKM processing:
    1. In your target datastore, declare a constraint (right-click on Constraints and choose, say, INSERT CONDITION).
    2. Let's assume you need to capture records whose field value is not numeric. In the condition window, select a Sunopsis condition and enter REGEXP_LIKE(<col_name>, '^[[:digit:]]*$') in the Where box. All records that fail this condition will be pulled into the E$ table.
    3. Add this datastore to your target, set FLOW_CONTROL to YES and RECYCLE_ERROR to YES, and in the Control tab of your interface select the CKM and the constraints.
    Your error records will be moved to the E$ table, and on the second run, once you have corrected the error records in the E$ table, those records will be moved to the target table again (error recycling).
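    The condition in step 2 can be tried out outside the database as well. Here is a small Python sketch (the sample values are made up) of what REGEXP_LIKE(<col_name>, '^[[:digit:]]*$') accepts, and therefore which records a CKM check based on it would divert to the E$ table:

```python
import re

# Values accepted by Oracle's REGEXP_LIKE(col, '^[[:digit:]]*$'):
# strings made up entirely of digits (the empty string included).
def is_numeric(value):
    return re.fullmatch(r"[0-9]*", value) is not None

rows = ["123", "45a", "", "9"]
# Rows that violate the constraint are the ones a CKM would move to E$.
errors = [v for v in rows if not is_numeric(v)]
```

    Note that the pattern also accepts the empty string; if empty values should be rejected too, use '^[[:digit:]]+$' instead.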
    Please explore the below link for more information on REGEXP.
    http://www.oracle.com/technology/oramag/webcolumns/2003/techarticles/rischert_regexp_pt1.html
    All the best.
    Thanks,
    Guru

  • Interesting scenario on Partial Commit - Deletion of EO

    Hi all,
    Jdev Version: Studio Edition Version 11.1.1.7.0
    Business Requirement: We need to partially commit the deletion of one row (say Row-A) in an EO and serialize another row (an addition, say Row-B) for committing after approval.
    How we achieve this:
    Step 1 - Make the changes to Row-A and Row-B on my main AM (AM1).
    Step 2 - Create a parallel new root AM (AM2), make the changes that need to be partially committed on the VO/EO in this new root AM (AM2), and issue a commit on this AM (AM2).
    Step 3 - Now, after the partial commit, I am back on AM1, where I would like to serialize only the change to Row-B. So I call remove() on Row-A and passivate.
    Step 4 - On my offline approval page, I invoke activate to show the new addition Row-B for approval, after which I can invoke commit.
    Issue we face: When we passivate in Step 3, the deletion of Row-A also gets passivated. As a result, this row shows up on my approval screen. Only Row-B should be displayed there.
    Appreciate your inputs on this issue.
    Thanks,
    Srini

    Hi,
    Row A will be deleted with the next commit the way you have it. My interpretation is that you want to remove it from the collection, in which case you would call removeRowFromCollection on the VO. Instead of what you are doing here, I would consider a soft delete, in which you set a delete flag on a row instead of deleting or committing it. You can then use a custom RowMatcher on a VO to filter those entities out, so you only see the rows for approval that you need to see.
    For partial commits, I would use a different strategy. Create a headless task flow (no view activity, just a method activity and a return activity) and configure it not to share the Data Control (isolated). You then pass in the information for the row to commit, query it in the method activity (through a VO), update the VO, and commit the change using the commit method from the DataControl frame (which you get through the BindingContext class).
    This is more declarative and reusable than creating an AM just for this purpose. However, keep in mind that the calling task flow still thinks Row A is changed (as it doesn't participate in the commit). So what you do is call row.refresh() with the "forget changes from database" option as an argument, so Row A is reset to the state that is in the database.
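    The soft-delete idea is framework-independent and can be sketched generically (the field names below are invented; in ADF the filtering would be done declaratively by the RowMatcher):

```python
# Soft delete: instead of removing Row A, flag it as deleted and filter
# flagged rows out of the approval view. Field names are invented.
rows = [
    {"id": "A", "deleted": True},   # deletion already partially committed
    {"id": "B", "deleted": False},  # addition pending approval
]

approval_view = [row for row in rows if not row["deleted"]]
ids = [row["id"] for row in approval_view]
```

    The approval view then contains only Row B, which is exactly what the original poster wanted on the approval screen.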
    Frank

  • Issue regarding partial payment & cleared items.

    Hi Techies,
    I want to develop a report that shows all cleared items (accounting document number, corresponding basic amount). The end user uses the residual clearing procedure for partial payments, i.e.:
    Suppose a vendor has to pay Rs 10,000 as service tax to the company; it is stored in the open items against one accounting document number:
    BELNR 000101 -> Amt 10,000
    Suppose he pays 3,000.
    The system creates one clearing document against this 3,000 payment.
    Now, for the remaining 7,000, the earlier clearing document becomes an accounting document. This cycle continues until the whole amount is cleared.
    Initial state:  000101 -> 10,000 -> CL1 (paid 3,000)
    Second state:   CL1    ->  7,000 -> CL2 (paid 3,000)
    Third state:    CL2    ->  4,000 -> CL3 (paid 3,000)
    Fourth state:   CL3    ->  1,000 -> CL4 (paid 1,000)
    Fifth state:    CL4    ->  1,000 -> CL5 (clears all the amount)
    He wants to display the accounting document numbers which are closed (i.e., cleared).
    Only at the end of the fifth state should this accounting document number appear in the report, otherwise not (i.e., if the user runs the report at the second or third state, the accounting document number should not be displayed).
    There is no limit on partial payments, i.e., the vendor can pay his amount in as many installments as he wishes.
    Is there any solution for this report?
    Regards,
    Raju

    Hi,
    Yes, it is possible.
    Check the document types of the documents; you should find the answer there.
    Regards,
    Prakash Pandey

  • Tran. FBL1N - problem with showing correct amount regarding partial payments

    Hello,
    Every day at my workplace I use transaction FBL1N which I use to see how much money I own to vendors and also to se how much is payed. However I have problem whit showing payed amounts for partial paymants (transaction F-59). For example:
    I own some vendor EUR 20.000,00 and I decide to pay him EUR 15.000,00 using partial paymant option (transaction F-59) because that's amount I have at the moment on my bank account. After that, when I choose that vendor using transaction FBL1N (with enabled option Open items on current date) so I coiuld see how much it is left to pay him, amount EUR 15.000,00 is not subtracted and reprt is still showing me that I own him EUR 20.000,00 which ofcourse is not correct.
    I also must note that in case I payed that vendor full amount I wouldn't have this problem and transaction FBL1N would show correct amount.
    I hope someone can advise me about this and help me solve my issue.
    Any help is appreciated and many thanks in advance for prompt replys.
    Cheers;)
    Adi
    Edited by: samnovice on Jul 18, 2011 1:22 PM

    Thanks loky46, I understand it better now. I have one more important question regarding this:
    Is there any way I can find the total amount I owe to one or more vendors (regardless of whether it is a full payment, a partial payment, or a combination of both)? For example:
    Line 1 || VENDOR I || EUR 20.000,00 RE
    Line 2 || VENDOR I || EUR 15.000,00 AB
    TOTAL               EUR  5.000,00
    but without using this option: adding the field "invoice reference" to the line layout and making a subtotal on it.
    Also, is it possible for the EUR 15.000,00 line to become a cleared item, because that part is paid, and for the remaining EUR 5.000,00 to stay an open item because it is still not paid? I assume this can't be done, but that way I could know the exact total amount of money I owe to my vendor(s), and it would be of great help to me in everyday work.
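    For what it's worth, the total asked for above is just the invoice lines netted against the payment lines. A tiny Python sketch using the example's figures (treating RE lines as invoices and AB lines as partial payments is an assumption drawn from the example itself):

```python
# Net amount owed per vendor: invoice lines (RE) increase the balance,
# partial-payment lines (AB) reduce it. Figures from the example above.
lines = [
    ("VENDOR I", 20000.00, "RE"),  # invoice
    ("VENDOR I", 15000.00, "AB"),  # partial payment
]

totals = {}
for vendor, amount, doc_type in lines:
    sign = 1 if doc_type == "RE" else -1
    totals[vendor] = totals.get(vendor, 0.0) + sign * amount
```

    This is what the "subtotal on invoice reference" layout trick computes inside FBL1N.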
    Thank you.
    Edited by: samnovice on Jul 19, 2011 3:26 PM

  • Keep the scroll position on partial commit in af:table

    Hi,
    We have an ADF 11.1.1.1.0 application that uses a lot of editable tables. In those tables, we have <tt>autoSubmit</tt> enabled for all fields, since some combo boxes and LOVs depend on values entered in the same row. The problem is that every time a partial submit occurs, the table reloads itself, and in the process the position of the scrollbar is lost. This means the user has to scroll back down to the record he was editing and continue. By setting <tt>displayRow="selected"</tt> on the <tt><af:table></tt>, we can at least make the table scroll to the record the user was on after a refresh. But if the record was not at the top when the user started editing, the record still "jumps" from the user's point of view. We would like the table to keep the exact position of the scrollbar while partial submits occur. Isn't it possible to refresh just the selected record, instead of refreshing all rows?
    Best regards,
    Bart Kummel

    You can programmatically achieve this, check the following threads.
    How can I programmatically select row to edit in ADF - 11g
    Select Table Row

  • EJB 3.0 (MDB) CMT does partial Tx commit in hibernate

    It seems like CMT (MDB) does a partial commit.
    The scenario is as follows; we are using EJB 3.0 in combination with Hibernate:
    1. Some message sent to queue.
    2. The MDB listener on the queue has its onMessage method invoked. See below for the MDB configuration:
    @TransactionManagement(TransactionManagementType.CONTAINER)
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    3. Some Handler is called from MDB say ResponseHandler.
    4. We call two update operations from this handler:
    ResponseHandler#updateTable1() with the following execution:
    |__ em.merge(table1Entity);
    |__ em.flush()
    ResponseHandler#updateTable2() with the following execution:
    |__ em.merge(table2Entity);
    |__ em.flush()
    Problem:
    I can see that only table1 gets updated. table2 is not updated with the latest value, even though the MDB class is container-managed.
    Hibernate query logs: in the logs I can see update operations for both tables:
    Hibernate: update table1...
    Hibernate: update table2...
    PS: We cannot reproduce this problem when we put a debug breakpoint on the first line of the onMessage() method of the MDB:
    class ResponseListner implements MessageListener {
        public void onMessage(final Message message) {
            Logger.info("ResponseListner.onMessage() : Entered"); // breakpoint is here
    WebLogic transaction logs:
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <> <> <1333538349228> <BEA-000000> <BEA1-0012DE77F7FE1343F773: null: XA.prepare(rm=WLStore_application_cluster_domain2_applicationServerJDBCStore, xar=WLStore_application_cluster_domain2_applicationServerJDBCStore234212480>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <> <> <1333538349228> <BEA-000000> <startResourceUse, Number of active requests:1, last alive time:0 ms ago.>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349233> <BEA-000000> <BEA1-0012DE77F7FE1343F773: null: XA.prepare DONE:ok>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349233> <BEA-000000> <endResourceUse, Number of active requests:0>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAJDBC> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349233> <BEA-000000> <JDBC LLR pool='com.application.ds' xid='BEA1-0012DE77F7FE1343F773' tbl='WL_LLR_applicationSERVER': begin write XA record table=WL_LLR_applicationSERVER recLen=529>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAJDBC> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349234> <BEA-000000> <JDBC LLR pool='com.application.ds' xid='BEA1-0012DE77F7FE1343F773' tbl='WL_LLR_applicationSERVER': after write XA record>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAJDBC> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349234> <BEA-000000> <JDBC LLR pool='com.application.ds' xid='BEA1-0012DE77F7FE1343F773' tbl='WL_LLR_applicationSERVER': commit one-phase=false>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAJDBC> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349239> <BEA-000000> <JDBC LLR pool='com.application.ds' xid='BEA1-0012DE77F7FE1343F773' tbl='WL_LLR_applicationSERVER': commit complete>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <> <> <1333538349240> <BEA-000000> <startResourceUse, Number of active requests:1, last alive time:0 ms ago.>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349243> <BEA-000000> <BEA1-0012DE77F7FE1343F773: null: XA.commit DONE (rm=WLStore_application_cluster_domain2_applicationServerJDBCStore, xar=WLStore_application_cluster_domain2_applicationServerJDBCStore234212480>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349243> <BEA-000000> <endResourceUse, Number of active requests:0>
    ####<Apr 4, 2012 4:49:10 PM IST> <Info> <Health> <inlinapplication001> <applicationServer> <weblogic.GCMonitor> <<anonymous>> <> <> <1333538350589> <BEA-310002> <39% of the total memory in the server is free>
    ####<Apr 4, 2012 4:49:26 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0014DE77F7FE1343F773> <> <1333538366645> <BEA-000000> <ResourceDescriptor[WLStore_application_cluster_domain2__WLS_applicationServer]: getOrCreate gets rd: name = WLStore_application_cluster_domain2__WLS_applicationServer
    resourceType = 2
    registered = true
    scUrls = applicationServer+10.19.216.10:5003+application_cluster_domain2+t3+
    xar = WLStore_application_cluster_domain2__WLS_applicationServer1926833320
    healthy = true
    lastAliveTimeMillis = 1333538336966
    numActiveRequests = 0
    >

    You have to add the property toplink.ddl-generation.output-mode to your persistence.xml file, for example:
    <?xml version="1.0" encoding="windows-1252" ?>
    <persistence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
    version="1.0" xmlns="http://java.sun.com/xml/ns/persistence">
    <persistence-unit name="model">
    <jta-data-source>jdbc/jdm-akoDS</jta-data-source>
    <properties>
    <property name="toplink.logging.level" value="INFO"/>
    <property name="toplink.target-database" value="Oracle"/>
    <property name="toplink.ddl-generation" value="drop-and-create-tables"/>
    <property name="toplink.ddl-generation.output-mode" value="database"/>
    </properties>
    </persistence-unit>
    </persistence>

  • Is COMMIT; required in Procedure body?

    Hi,
    How should the COMMIT statement be used within a procedure or function? I have seen procedures where some developers end the DML statement with COMMIT, while others never issue a COMMIT.
    Can you please advise on good practice regarding the COMMIT statement?
    Thanks in advance for any help.

    asharma23 wrote:
    hi,
    it's better to use commit inside a stored procedure, since commit saves the changes in the database; the changes you require will only be reflected if you have used commit. The database then points to (refers to) its last committed state. Without commit, no changes will be made to the database and its objects. Experienced developers know the essence of commit.
    Well, you've explained why it's necessary to commit data on a database, but you haven't explained or justified why you believe a commit should be inside a stored procedure.
    Would you consider that every stored procedure that performs DML should issue a commit afterwards? Or would it be better for the data to be committed when the business logic dictates it should be?
    e.g.
    I have some stored procedures that add a new employee details to my database:-
    . procedure add_employee_name
    . procedure add_employee_address
    . procedure add_employee_job
    . procedure add_employee_salary
    and one final procedure
    . procedure add_employee
    which calls all the previous four procedures.
    Should each of the individual procedures commit what they do or would it be more logical for the overall add_employee procedure to issue the commit once all of the procedures have been called?
    Imagine if each procedure committed and we were trying to add an employee, but somebody had typed the salary in incorrectly so it was not valid. The employee's name, address, and job would be added and committed to the database, and then the salary procedure would fail, meaning the database now contains only 3/4 of the data required for a valid employee.
    Now imagine if only the outer procedure committed. When the salary procedure fails, we would roll back to the start of our logical business unit, i.e., before this employee was added, and report the error. The database would not end up with partial employee data, and our reports and other applications that rely on it would not break. Once the error is corrected, the data could be entered completely and committed, meeting the requirements of our reports and applications.
    Moral of the story... commit when it is logical to commit, not when you feel like it. ;)
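    The moral can be sketched with a toy database (Python with SQLite standing in for PL/SQL; all table, column, and procedure names are illustrative): the step procedures perform DML only, and the outer add_employee alone decides whether to commit or roll back.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (name TEXT, salary REAL)")

def add_employee_name(cur, name):
    cur.execute("INSERT INTO emp (name) VALUES (?)", (name,))

def add_employee_salary(cur, name, salary):
    # the step that can fail on bad input
    if salary <= 0:
        raise ValueError("invalid salary")
    cur.execute("UPDATE emp SET salary = ? WHERE name = ?", (salary, name))

def add_employee(name, salary):
    # commit/rollback at the logical unit of work, not per step
    cur = con.cursor()
    try:
        add_employee_name(cur, name)
        add_employee_salary(cur, name, salary)
        con.commit()
    except ValueError:
        con.rollback()   # no partial employee is left behind

add_employee("alice", 1000)  # succeeds and is committed
add_employee("bob", -5)      # salary step fails; the name insert is rolled back
count = con.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
```

    If each step committed for itself, the failed run would leave a half-created "bob" row behind; with the single outer commit, only the complete employee survives.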

  • Partial delivery issue

    Hi All,
    I have an issue regarding partial delivery. I will explain with an example.
    I have a customer, say 10002, who has ordered products 'X' and 'Y'.
    For product 'X' the quantity is 20 and for product 'Y' the quantity is 10. I created an order with these products and quantities. During order creation, product 'X' was available but product 'Y' was not. The first time, I successfully created a delivery for product 'X' with quantity 20.
    After this delivery, I received stock of product 'Y'. So the second time I created a delivery for product 'Y', where in picking I picked quantity 5 instead of 10, and then I did PGI. But I am getting the system message "Delivery has not yet been put away / picked (completely)".
    I have checked my customer master: in the shipping tab, partial delivery is allowed. I don't have a customer-material info record, and I don't have any value in the order item line's shipping tab for "Max. partial deliveries".
    So where is it going wrong? Please help me in this regard.
    regards
    srini

    In VL02N, change the delivery quantity from 10 to 5 and then do PGI.
    The error comes from the incompletion procedure that you have defined in your system.

  • Reg. Commit Work Response time

    Hi All,
    Could you please let me know which factors the response time of COMMIT WORK depends on.
    Regards,
    Sen

    Hi,
    COMMIT WORK commits the changes to the database tables.
    Regards,
    Naresh.

  • AniServer Campus Manager problem

    Hi, I recently ran a device patch update, and ever since I have had major problems with Campus Manager.
    I figured it was ANIServer related, so I have done the following so far:
    1 - I reinitialised the ANI database.
    2 - I replaced aniserv.properties with the .orig file.
    The problems still exist: even when I managed to get the ANIServer started, Campus Manager data collection would have problems, with intermittent error messages when trying to access the Campus Manager admin pages.
    3 - I decided to upgrade to LMS 3.2 SP1.
    It seemed to be OK with data collection, but when I came in to work today it hadn't run any scheduled acquisitions, and when I went into the admin pages I had the error messages again: unable to get properties from server.
    4 - I decided to try a complete restore of the database to a date before the initial device package update.
    But I still have problems with the ANIServer failing to run.
    Please Help!!
    Kind regards Dean
    Here is the Ani log
    2011/06/23 13:11:10 main ani ERROR ServiceModule: Failed to instantiate com.cisco.nm.ani.server.dfmpoller.NoServiceModule
    2011/06/23 13:11:10 main ani MESSAGE DBConnection: Created new Database connection [hashCode = 6164599]
    2011/06/23 13:11:12 main ani MESSAGE PartialSMFCommit: Partial Commit is enabled for DataCollection
    2011/06/23 13:11:16 main ani ERROR POStore: Exception caught: com.cisco.nm.ani.server.frontend.AniStaticConfigException: AniStaticConfigException: Failed (sql) to verify/construct schema for PO com.cisco.nm.ani.server.topo.PagpPortChannel because: com.sybase.jdbc2.jdbc.SybSQLException: SQL Anywhere Error -116: Table must be empty
    2011/06/23 13:11:16 main ani ERROR AniLoadConfiguration: AniStaticConfigException: Failed to construct or verify database

    It seems to be working fine now. I used the info in the following link to reinitialise ani.db: https://supportforums.cisco.com/thread/2083513
    Using this alone did not fix it: /opt/CSCOpx/bin/perl /opt/CSCOpx/bin/dbRestoreOrig.pl dsn=ani dmprefix=ANI
    But doing this as well seemed to sort it:
    First, set the CiscoWorks Daemon Manager service to Manual and reboot the server. When the server reboots, delete NMSROOT\databases\ani\ani.log if it exists. Then run:
    NMSROOT\objects\db\win32\dbsrv9 -f NMSROOT\databases\ani\ani.db
    After that, set the Daemon Manager service back to Automatic and reboot again.
    Regards Dean

  • Pushreplication Example failing

    Hi all,
    I have been struggling with the Push Replication Active-Active example for a while now, and I cannot get it to work.
    I am able to run both active cache servers, but as soon as I run the ActiveActiveUpdater, I get errors in both the client and the server.
    I am using Coherence 12.1.3 and the Coherence Incubator 12.3 on Windows 7 64-bit.
    Here are the logs - ActiveActiveUpdater:
    Updating the cache with running sum object key = [0] and value [130]
    Exception in thread "main" Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for DistributedCacheWithPublishingCacheStore service on Member(Id=2, Timestamp=2015-03-08 11:03:03.758, Address=10.162.62.78:8090, MachineId=23146, Location=site:Site1,machine:ANIWCZIN-IL,process:1016, Role=CoherenceServer) (Wrapped: Failed to store key="0") poll() is a blocking call and cannot be called on the Service thread) poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:289)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:50)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPartialCommit(PartitionedCache.CDB:7)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:82)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:3)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:376)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.responseMessage.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    Caused by: Portable(com.tangosol.util.AssertionException): poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.coherence.Component._assertFailed(Component.CDB:12)
    at com.tangosol.coherence.Component._assert(Component.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:24)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:26)
    at com.tangosol.coherence.config.scheme.AbstractCachingScheme.realizeCache(AbstractCachingScheme.java:63)
    at com.tangosol.net.ExtensibleConfigurableCacheFactory.ensureCache(ExtensibleConfigurableCacheFactory.java:242)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:205)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:182)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:108)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:58)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributor.establishEventChannelController(CoherenceEventDistributor.java:149)
    at com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributorTemplate.realize(EventDistributorTemplate.java:263)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:208)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:1)
    at com.oracle.coherence.common.resourcing.AbstractDeferredSingletonResourceProvider.getResource(AbstractDeferredSingletonResourceProvider.java:85)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.distribute(PublishingCacheStore.java:327)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.store(PublishingCacheStore.java:523)
    at com.tangosol.net.cache.ReadWriteBackingMap$BinaryEntryStoreWrapper.storeInternal(ReadWriteBackingMap.java:6221)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:5003)
    at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1438)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:758)
    at java.util.AbstractMap.putAll(AbstractMap.java:273)
    at com.tangosol.net.cache.ReadWriteBackingMap.putAll(ReadWriteBackingMap.java:801)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putPrimaryResource(PartitionedCache.CDB:63)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postInvoke(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvokeAll(PartitionedCache.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvoke(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:3)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:376)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.responseMessage.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    CacheServer:
    2015-03-08 11:03:40.054/40.096 Oracle Coherence GE 12.1.3.0.0 <Warning> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=2): Application code running on "DistributedCacheWithPublishingCacheStore" service thread(s) should not call ensureCache as this may result in deadlock. The most common case is a CacheFactory call from a custom CacheStore implementation.
    2015-03-08 11:03:40.056/40.098 Oracle Coherence GE 12.1.3.0.0 <Error> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=2): Assertion failed: poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:24)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:26)
    at com.tangosol.coherence.config.scheme.AbstractCachingScheme.realizeCache(AbstractCachingScheme.java:63)
    at com.tangosol.net.ExtensibleConfigurableCacheFactory.ensureCache(ExtensibleConfigurableCacheFactory.java:242)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:205)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:182)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:108)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:58)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributor.establishEventChannelController(CoherenceEventDistributor.java:149)
    at com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributorTemplate.realize(EventDistributorTemplate.java:263)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:208)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:1)
    at com.oracle.coherence.common.resourcing.AbstractDeferredSingletonResourceProvider.getResource(AbstractDeferredSingletonResourceProvider.java:85)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.distribute(PublishingCacheStore.java:327)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.store(PublishingCacheStore.java:523)
    at com.tangosol.net.cache.ReadWriteBackingMap$BinaryEntryStoreWrapper.storeInternal(ReadWriteBackingMap.java:6221)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:5003)
    at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1438)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:758)
    at java.util.AbstractMap.putAll(AbstractMap.java:273)
    at com.tangosol.net.cache.ReadWriteBackingMap.putAll(ReadWriteBackingMap.java:801)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putPrimaryResource(PartitionedCache.CDB:63)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postInvoke(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvokeAll(PartitionedCache.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvoke(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:3)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    2015-03-08 11:03:40.058/40.100 Oracle Coherence GE 12.1.3.0.0 <Warning> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=2): Partial commit due to the backing map exception com.tangosol.internal.util.HeuristicCommitException
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postInvoke(PartitionedCache.CDB:42)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvokeAll(PartitionedCache.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvoke(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:3)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    Caused by: (Wrapped: Failed to store key="0") com.tangosol.util.AssertionException: poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:289)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.onStoreFailure(ReadWriteBackingMap.java:5344)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:5009)
    at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1438)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:758)
    at java.util.AbstractMap.putAll(AbstractMap.java:273)
    at com.tangosol.net.cache.ReadWriteBackingMap.putAll(ReadWriteBackingMap.java:801)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putPrimaryResource(PartitionedCache.CDB:63)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postInvoke(PartitionedCache.CDB:36)
    ... 14 more
    Caused by: com.tangosol.util.AssertionException: poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.coherence.Component._assertFailed(Component.CDB:12)
    at com.tangosol.coherence.Component._assert(Component.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:24)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:26)
    at com.tangosol.coherence.config.scheme.AbstractCachingScheme.realizeCache(AbstractCachingScheme.java:63)
    at com.tangosol.net.ExtensibleConfigurableCacheFactory.ensureCache(ExtensibleConfigurableCacheFactory.java:242)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:205)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:182)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:108)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:58)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributor.establishEventChannelController(CoherenceEventDistributor.java:149)
    at com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributorTemplate.realize(EventDistributorTemplate.java:263)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:208)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:1)
    at com.oracle.coherence.common.resourcing.AbstractDeferredSingletonResourceProvider.getResource(AbstractDeferredSingletonResourceProvider.java:85)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.distribute(PublishingCacheStore.java:327)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.store(PublishingCacheStore.java:523)
    at com.tangosol.net.cache.ReadWriteBackingMap$BinaryEntryStoreWrapper.storeInternal(ReadWriteBackingMap.java:6221)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:5003)
    ... 20 more
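The warning and the assertion above both point at the same root cause: the custom CacheStore triggers a blocking cache lookup (CacheFactory.getCache, ultimately poll()) on the partitioned service thread itself. A common way out is to resolve the downstream resource on a separate thread, so the service thread never performs the blocking call. The sketch below illustrates that idea in plain Java only; the class and method names (SafeStore, acquireResourceBlocking) are illustrative assumptions, not Coherence APIs.

```java
import java.util.concurrent.*;

public class DeferredResourceDemo {
    // Stand-in for an expensive, blocking lookup such as CacheFactory.getCache(...).
    static String acquireResourceBlocking() throws InterruptedException {
        Thread.sleep(50); // simulate a blocking remote call
        return "publishing-cache";
    }

    // The resource is resolved on a background thread at construction time,
    // so by the time store() runs on the "service thread" the blocking work
    // has already happened elsewhere.
    static class SafeStore {
        private final Future<String> resource;

        SafeStore(ExecutorService initExecutor) {
            this.resource = initExecutor.submit(DeferredResourceDemo::acquireResourceBlocking);
        }

        String store(String key) throws Exception {
            // Real code would fail fast or queue the write if initialization
            // is not finished; a bounded wait is used here for demonstration.
            String cache = resource.get(5, TimeUnit.SECONDS);
            return "stored " + key + " via " + cache;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService init = Executors.newSingleThreadExecutor();
        SafeStore store = new SafeStore(init);
        System.out.println(store.store("0"));
        init.shutdown();
    }
}
```

The key design point is that the service thread only ever touches an already-initialized reference; all blocking acquisition is pushed onto a thread the cache service does not own.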
    Many thanks in advance!!

    Hi Subba,
       Have you done any binding? If yes, can you send the details please.
    Regards
    AnujN

  • Changing row selection of a af:table component in a managed bean

    Hi,
    How can I programmatically change the selected row of an <af:table> component within a managed bean class?
    I have a table which depends on the date settings of a <dvt:timeSelector> of a <dvt:lineGraph> component. When the timeSelector is moved to new dates, the table should be refreshed by executing its query again with the new dates. The problem is that when the query of the table's view object is executed again, the first row is automatically selected.
    Now I want the row I last selected to be selected again after moving the time selector.
    I already searched the OTN Discussion Forum but didn't find a fitting solution.
    Thanks in advance!

    The problem is that executing the query moves the current row to the first one.
    What you can do is save the current row's key (its PK), execute the query, and then set the current row back to the saved key. Set the table attribute displayRow="selected" and set the selected row of the table to the now-current row.
    One caveat: you have to be sure the last selected row is still in the result set of the new query.
    Here are some pointers with code:
    Keep the scroll position on partial commit in <af:table>
    Jdev 11G ADF BC: rollback and keeping current row problem
    How can I programmatically select row to edit in ADF - 11g
    Timo
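Framework specifics aside, the save-and-restore logic Timo describes can be sketched in plain Java. The RowSet class below is a minimal stand-in, not the ADF API (in a real managed bean you would go through the iterator binding to read the current row's key and to set the current row by key); it only demonstrates the three-step pattern: save the key, re-execute the query, restore the selection if the key survived.

```java
import java.util.*;

public class ReselectAfterRequery {
    // Minimal stand-in for a view object's row set; NOT the ADF API.
    static class RowSet {
        List<String> keys = new ArrayList<>();
        int currentIndex = 0;

        String getCurrentKey() { return keys.get(currentIndex); }

        // Re-executing the query resets the current row to the first one,
        // which is exactly the behavior the poster observed.
        void executeQuery(List<String> newKeys) {
            keys = new ArrayList<>(newKeys);
            currentIndex = 0;
        }

        // Restore selection only if the saved key survived the re-query.
        boolean setCurrentRowWithKey(String key) {
            int i = keys.indexOf(key);
            if (i < 0) return false; // row no longer in the result set
            currentIndex = i;
            return true;
        }
    }

    public static void main(String[] args) {
        RowSet rs = new RowSet();
        rs.executeQuery(Arrays.asList("A", "B", "C"));
        rs.setCurrentRowWithKey("B");

        String saved = rs.getCurrentKey();                  // 1. save the key
        rs.executeQuery(Arrays.asList("B", "C", "D"));      // 2. re-execute the query
        boolean restored = rs.setCurrentRowWithKey(saved);  // 3. restore selection

        System.out.println(restored + " " + rs.getCurrentKey()); // true B
    }
}
```

Note the boolean return in step 3: it models Timo's caveat that the previously selected row may no longer exist in the new result set, in which case you fall back to the default first-row selection.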

  • Weblogic Explicit Transaction Management

    Hi,
    I need to manage multiple transactions that access two different non-Oracle (non-XA) databases, but I still want to implement two-phase commit.
    I read documents about JTA and tried to use TransactionManager, but I am not able to create an XAResource from my WebLogic JNDI lookup. It just returns a type-cast exception saying it is not able to convert from RmiDataSource to XaDataSource, and the Transaction class's enlistResource() method only accepts an XAResource. Can you suggest a solution for getting multiple database transactions committed or rolled back as needed? In short, how do I manage transactions explicitly in WebLogic 10?

    OK, now I'm catching up. Do you see that exception thrown in your calling
    code, i.e. does your code as posted catch that exception?
    If so, you most probably are not using the proper connection pool -- you
    should be using the JTS pool not just a WL pool -- and so the connection is
    not participating in the transaction.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
    "Sunil Naik" <[email protected]> wrote in message
    news:3c26a505$[email protected]..
    >
    "Cameron Purdy" <[email protected]> wrote:
    Why do you say it is not working? Do you get a compile error? An
    exception?
    A partial commit? Deadlock? Horse head in your bed? Out of memory?
    Peace,
    Hi Cameron,
    Actually, while testing the method, I am deliberately making the 3rd method call throw an Exception and exit. In that case what I expect is that the work done in the first two method calls should be rolled back, i.e. the rows should not be inserted in the database. This is not happening: the inserts made in the earlier two methods are being committed.
    Hope it is clear now.
    Thanx,
    sunil
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
    "Sunil Naik" <[email protected]> wrote in message
    news:3c230f17$[email protected]..
    Hi,
    I have written a Stateless Session Bean. There is a method in this bean which calls methods of other BMP Beans. All these methods have to be part of one Transaction. Below I have shown in pseudocode how I am handling it.
    // This is the SessionBean method
    public void processDocument() {
        UserTransaction utrx = sessionContext.getUserTransaction();
        try {
            utrx.begin();
            bean1.method1(); // this method inserts a row in the database
            bean2.method2(); // this method inserts a row in another table
            bean3.method3(); // deletes a row
            // couple of other method calls
            utrx.commit();
        } catch (Exception e) {
            // catch exceptions and roll back
            utrx.rollback();
        }
    }