Partial Commit in ODI

Can ODI do a partial commit of a large batch of data, neglecting the erroneous records? If so, how can this be achieved? This would be very helpful, especially in a batch transaction where, instead of rolling back the entire batch due to erroneous records, at least the correct records could be committed to the target.

Hi,
Adding some more points,
As I suggested earlier, you can use a CKM to capture the error records. Using a CKM is also a permanent solution for the data flow.
Below are the steps for CKM processing:
1. In your target datastore, declare a constraint (right-click on Constraints and add a condition, say INSERT CONDITION).
2. Let's assume you need to capture records where a field is not numeric. In the condition window, select the Sunopsis Condition type and enter REGEXP_LIKE(<col_name>, '^[[:digit:]]*$') in the Where box. Records that fail this check will be pulled into the E$ table.
3. Use this datastore as the target of your interface, set FLOW_CONTROL to YES and RECYCLE_ERROR to YES, and in the Control tab of the interface select the CKM and the constraints.
Your error records will be moved to the E$ table, and on the next run, once you have corrected the error records in the E$ table, those records will be moved to the target table (error recycling).
Please explore the link below for more information on REGEXP:
http://www.oracle.com/technology/oramag/webcolumns/2003/techarticles/rischert_regexp_pt1.html
All the best.
Thanks,
Guru

Similar Messages

  • Regarding Partial Commit

    Can ODI do a partial commit? Suppose some rows error out in a large batch of data; can ODI commit the good rows while neglecting the erroneous ones? If this can be done, how is it achieved?
    This feature would be helpful especially in a batch run where the entire transaction gets rolled back only because of a few erroneous records.

    You can do it using the flow control facilities in your interface.
    You can then specify a commit or a rollback depending on the number of errors.
    The invalid data are loaded into an Error Table and can be recycled.

  • SAP FS-CD - VPVA Partial commit issue

    When we run VPVA with Start Current Run, dunning is escalated and can be viewed in the VYM10 transaction as dunning history. We can also view the current dunning level in the FPL9 transaction by choosing the menu path below:
    FPL9 -> SETTINGS -> ADDITIONAL FIELD -> SHOW -> OK.
    The issue is that after running VPVA with Start Current Run we can see the escalation only in VYM10; it is not reflected in the FPL9 additional items, where the dunning level still shows as 00.
    How can we avoid this partial commit issue? FS-CD experts, please advise.

    If FS-CD items contain multiple business areas, then there is a possibility that the business area in FI gets populated with a blank space.
    If this issue occurs for all postings to FI where the BSEG table (or, in the case of new GL, RBUSA in table FAGLFLEXT for totals and FAGLFLEXA for items) is not populated with the business area, there might be reconciliation issues depending on which GL (new or old) is used. If you are using new GL, check whether the document splitting functionality is activated (mySAP ERP, i.e. ECC 6.00 with pack SAPKH6001 and above) and maintain Business Area as a splitting rule. Additionally, refer to note 990612 and check FM FI_DOCUMENT_POST.

  • Interesting scenario on Partial Commit - Deletion of EO

    Hi all,
    Jdev Version: Studio Edition Version 11.1.1.7.0
    Business requirement: we need to partially commit the deletion of one row (say Row-A) in an EO and serialize another row (an addition, say Row-B) to be committed after approval.
    How we achieve this:
    Step 1 - Make the changes to Row-A and Row-B in my main AM (AM1).
    Step 2 - Create a parallel new root AM (AM2), make the changes that need to be partially committed on the VO/EO in this new root AM, and issue a commit on AM2.
    Step 3 - After the partial commit, I am back in AM1, where I would like to serialize only the change to Row-B. So I call remove() on Row-A and passivate.
    Step 4 - On my offline approval page, I invoke activation to show the new addition Row-B for approval, after which I can invoke commit.
    Issue we face: when we passivate in Step 3, the deletion of Row-A also gets passivated. As a result, this row shows up on my approval screen, where only Row-B should be displayed.
    Appreciate your inputs on this issue.
    Thanks,
    Srini

    Hi,
    Row-A will be deleted with the next commit the way you put it. My interpretation is that you want to remove it from the collection, in which case you would call removeRowFromCollection on the VO. Instead of what you are doing here, I would consider a soft delete, in which you set a delete flag on the row instead of deleting or committing it. You can then use a custom RowMatch on the VO to filter those entities out, so that you only see the rows for approval that you need to see.
    For partial commits, I would use a different strategy. Create a headless task flow (no view activity, just a method activity and a return activity) and configure it not to share the data control (isolated). You then pass in the information for the row to commit, query it in the method activity (through a VO), update the VO, and commit the change using the commit method from the DataControl frame (which you get through the BindingContext class).
    This is more declarative and reusable than creating an AM just for this purpose. However, keep in mind that the calling task flow still thinks Row-A is changed (as it does not participate in the commit). So what you do is call row.refresh() with the "with DB, forget changes" flag as an argument, so that Row-A is reset to the state that is in the database.
    Frank
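    A minimal sketch of the two calls described above, assuming an ADF BC view object with a hypothetical DeleteFlag attribute (the class, method, and attribute names are illustrative, not from the thread):

    import oracle.jbo.Row;
    import oracle.jbo.RowMatch;
    import oracle.jbo.server.ViewObjectImpl;

    public class PartialCommitHelper {

        // Soft delete: flag the row and filter flagged rows out in memory,
        // so they no longer show up on the approval screen.
        public void softDelete(ViewObjectImpl vo, Row row) {
            row.setAttribute("DeleteFlag", "Y");               // hypothetical flag attribute
            vo.setRowMatch(new RowMatch("DeleteFlag <> 'Y'")); // in-memory filter on the VO
        }

        // After the isolated task flow (or second root AM) has committed the change,
        // reset the row in the calling context to the state stored in the database.
        public void resetAfterExternalCommit(Row rowA) {
            rowA.refresh(Row.REFRESH_WITH_DB_FORGET_CHANGES);
        }
    }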

  • Force COMMIT in ODI

    Hi,
    I am using an ODI package that has one interface and one OdiInvokeWebService step.
    For the interface, commit is off.
    When the interface has populated the data, the BPEL web service is called. The BPEL WS is asynchronous.
    However, since the commit happens at the end of the session, BPEL cannot see the data.
    I need a way to force a COMMIT so that BPEL can see the data.
    Thanks,
    Rosh

    roshParab wrote:
    Should not matter. COMMIT is fired at the end of a successful session.
    Hi Rosh,
    First, you should understand where the above applies. This is generally for an interface where you have manually set commit to false, not for an interface where commit is already set to true. I agree with Guru: it will be committed after the loading. Just test it by setting commit to true and you will see whether it commits or not.
    Thanks

  • Keep the scroll position on partial commit in af:table

    Hi,
    We have an ADF 11.1.1.1.0 application that uses a lot of editable tables. In those tables, we have autoSubmit enabled for all fields, since some combo boxes and LOVs depend on values entered in the same row. The problem is that every time a partial submit occurs, the table reloads itself and in the process the position of the scrollbar is lost. This means the user has to scroll down to the record he was editing and continue. By setting displayRow="selected" on the <af:table>, we can at least achieve that the table scrolls to the record the user was on after a refresh. But if the record was not at the top when the user started editing, the record still "jumps" in the experience of the user. We would like a situation where the table keeps the exact position of the scrollbar while partial submits occur. Isn't it possible to refresh just the selected record, instead of refreshing all rows?
    Best regards,
    Bart Kummel

    You can achieve this programmatically; check the following threads:
    How can I programmatically select row to edit in ADF - 11g
    Select Table Row
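    For reference, a minimal sketch of programmatic row selection through the ADF binding layer, assuming the table is bound to an iterator binding (the iterator name is hypothetical) and uses displayRow="selected":

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.adf.model.binding.DCIteratorBinding;
    import oracle.jbo.Key;

    public class RowSelectionBean {

        // Make the row with the given key current; a table bound to this iterator
        // follows the current row and scrolls to it when displayRow="selected".
        public void selectRow(Key rowKey) {
            DCBindingContainer bindings =
                (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
            DCIteratorBinding iter = bindings.findIteratorBinding("EmployeesIterator"); // hypothetical name
            iter.setCurrentRowWithKey(rowKey.toStringFormat(true));
        }
    }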

  • EJB 3.0 (MDB) CMT does partial Tx commit in hibernate

    It seems like CMT (MDB) does a partial commit.
    The scenario is as follows; we are using EJB 3.0 in combination with Hibernate:
    1. A message is sent to the queue.
    2. The onMessage method of the MDB listener on that queue is invoked. See below for the MDB configuration.
    @TransactionManagement(TransactionManagementType.CONTAINER)
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    3. A handler, say ResponseHandler, is called from the MDB.
    4. We call two update operations from this handler:
    ResponseHandler#updateTable1() executes:
    |__ em.merge(table1Entity);
    |__ em.flush();
    ResponseHandler#updateTable2() executes:
    |__ em.merge(table2Entity);
    |__ em.flush();
    Problem:
    Only table1 gets updated. Table2 is not updated with the latest value, even though the MDB class is container managed.
    Hibernate query logs: in the logs I can see the update operation for both tables:
    Hibernate: update table1...
    Hibernate: update table2...
    PS: We cannot reproduce the problem when we put a debug breakpoint on the first line of the MDB's onMessage() method:
    class ResponseListner implements MessageListener {
        public void onMessage(final Message message) {
            Logger.info("ResponseListner.onMessage() : Entered"); // breakpoint line (breakpoint is here)
            // ...
        }
    }
    WebLogic transaction logs:
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <> <> <1333538349228> <BEA-000000> <BEA1-0012DE77F7FE1343F773: null: XA.prepare(rm=WLStore_application_cluster_domain2_applicationServerJDBCStore, xar=WLStore_application_cluster_domain2_applicationServerJDBCStore234212480>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <> <> <1333538349228> <BEA-000000> <startResourceUse, Number of active requests:1, last alive time:0 ms ago.>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349233> <BEA-000000> <BEA1-0012DE77F7FE1343F773: null: XA.prepare DONE:ok>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349233> <BEA-000000> <endResourceUse, Number of active requests:0>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAJDBC> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349233> <BEA-000000> <JDBC LLR pool='com.application.ds' xid='BEA1-0012DE77F7FE1343F773' tbl='WL_LLR_applicationSERVER': begin write XA record table=WL_LLR_applicationSERVER recLen=529>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAJDBC> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349234> <BEA-000000> <JDBC LLR pool='com.application.ds' xid='BEA1-0012DE77F7FE1343F773' tbl='WL_LLR_applicationSERVER': after write XA record>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAJDBC> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349234> <BEA-000000> <JDBC LLR pool='com.application.ds' xid='BEA1-0012DE77F7FE1343F773' tbl='WL_LLR_applicationSERVER': commit one-phase=false>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAJDBC> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349239> <BEA-000000> <JDBC LLR pool='com.application.ds' xid='BEA1-0012DE77F7FE1343F773' tbl='WL_LLR_applicationSERVER': commit complete>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <> <> <1333538349240> <BEA-000000> <startResourceUse, Number of active requests:1, last alive time:0 ms ago.>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349243> <BEA-000000> <BEA1-0012DE77F7FE1343F773: null: XA.commit DONE (rm=WLStore_application_cluster_domain2_applicationServerJDBCStore, xar=WLStore_application_cluster_domain2_applicationServerJDBCStore234212480>
    ####<Apr 4, 2012 4:49:09 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <api.user1> <BEA1-0012DE77F7FE1343F773> <> <1333538349243> <BEA-000000> <endResourceUse, Number of active requests:0>
    ####<Apr 4, 2012 4:49:10 PM IST> <Info> <Health> <inlinapplication001> <applicationServer> <weblogic.GCMonitor> <<anonymous>> <> <> <1333538350589> <BEA-310002> <39% of the total memory in the server is free>
    ####<Apr 4, 2012 4:49:26 PM IST> <Debug> <JTAXA> <inlinapplication001> <applicationServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0014DE77F7FE1343F773> <> <1333538366645> <BEA-000000> <ResourceDescriptor[WLStore_application_cluster_domain2__WLS_applicationServer]: getOrCreate gets rd: name = WLStore_application_cluster_domain2__WLS_applicationServer
    resourceType = 2
    registered = true
    scUrls = applicationServer+10.19.216.10:5003+application_cluster_domain2+t3+
    xar = WLStore_application_cluster_domain2__WLS_applicationServer1926833320
    healthy = true
    lastAliveTimeMillis = 1333538336966
    numActiveRequests = 0
    >

    You have to add the property toplink.ddl-generation.output-mode to your persistence.xml file, for example:
    <?xml version="1.0" encoding="windows-1252" ?>
    <persistence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
                 version="1.0" xmlns="http://java.sun.com/xml/ns/persistence">
      <persistence-unit name="model">
        <jta-data-source>jdbc/jdm-akoDS</jta-data-source>
        <properties>
          <property name="toplink.logging.level" value="INFO"/>
          <property name="toplink.target-database" value="Oracle"/>
          <property name="toplink.ddl-generation" value="drop-and-create-tables"/>
          <property name="toplink.ddl-generation.output-mode" value="database"/>
        </properties>
      </persistence-unit>
    </persistence>

  • Error while execution ODI procedure : java.lang.NullPointerException

    Hi,
    I'm trying to execute a simple ODI procedure and I'm getting the following exception:
    java.lang.NullPointerException
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlS.treatTaskTrt(SnpSessTaskSqlS.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandScenario.treatCommand(DwgCommandScenario.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.j(e.java)
         at com.sunopsis.dwg.cmd.g.F(g.java)
         at com.sunopsis.dwg.dbobj.SnpScen.a(SnpScen.java)
         at com.sunopsis.dwg.dbobj.SnpScen.localExecuteSync(SnpScen.java)
         at com.sunopsis.dwg.tools.StartScen.actionExecute(StartScen.java)
         at com.sunopsis.dwg.function.SnpsFunctionBaseRepositoryConnected.execute(SnpsFunctionBaseRepositoryConnected.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execIntegratedFunction(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlS.treatTaskTrt(SnpSessTaskSqlS.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandScenario.treatCommand(DwgCommandScenario.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.j(e.java)
         at com.sunopsis.dwg.cmd.g.z(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Thread.java:595)
    The source of the procedure:
    select
    P_ID,
    P_NAME,
    to_char(START_TIME_KEY,'DD-MON-YYYY HH24:MI:SS') as START_TIME_KEY,
    to_char(END_TIME_KEY,'DD-MON-YYYY HH24:MI:SS') as END_TIME_KEY,
    P_IS_ACTIVE
    from persons
    The target of the procedure:
    DECLARE
    v_START_TIME_KEY DATE;
    v_END_TIME_KEY DATE;
    v_NAME VARCHAR2(200);
    v_ID NUMBER;
    BEGIN
    v_NAME := '#P_NAME';
    commit;
    END;
    ODI version: 10.1.3.5.5
    Source and target technology: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
    Thanks!

    Hello,
    Is your aim to get the name from the table into the variable?
    If there is only one row in the table, you can use a refreshing variable to get the name;
    if there are more rows and you want to do something repeatedly using the name, then you need to use the method you have described.
    Declare the variable in the package, give it some default value, and then call the procedure.
    Regards
    Reshma

  • Changing row selection of a af:table component in a managed bean

    Hi,
    how can I programmatically change the selected row of an <af:table> component from within a managed bean class?
    I have a table which depends on the date settings of a <dvt:timeSelector> of a <dvt:lineGraph> component. When the time selector is moved to new dates, the table should be refreshed by executing its query again with the new dates. The problem is that when the query of the table's view object is executed again, the first row is automatically selected afterwards.
    I want the row I last selected to be selected again after moving the time selector.
    I searched already in the OTN Discussion Forum but didn't find a fitting solution.
    Thanks in advance!

    The problem is that executing the query moves the current row to the first one.
    What you can do is save the current row (its PK), execute the query, and then set the current row back to the saved PK. Set the table attribute displayRow="selected" and set the selected row of the table to the now-current row.
    One problem is that you have to be sure the last selected row is still in the record set of the new query result.
    Here are some pointers with code:
    Keep the scroll position on partial commit in <af:table>
    Jdev 11G ADF BC: rollback and keeping current row problem
    How can I programmatically select row to edit in ADF - 11g
    Timo
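    A minimal sketch of the save-and-restore approach described above, at the view object level (the class and method names are illustrative):

    import oracle.jbo.Key;
    import oracle.jbo.Row;
    import oracle.jbo.ViewObject;

    public class TableRefreshHelper {

        // Re-execute the query while keeping the previously current row current;
        // falls back to the default first row if the saved key is no longer in the result set.
        public void requeryKeepingCurrentRow(ViewObject vo) {
            Row current = vo.getCurrentRow();
            Key key = (current != null) ? current.getKey() : null;
            vo.executeQuery();
            if (key != null) {
                Row[] found = vo.findByKey(key, 1); // look the saved key up in the new result set
                if (found != null && found.length > 0) {
                    vo.setCurrentRow(found[0]);     // with displayRow="selected" the table scrolls here
                }
            }
        }
    }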

  • AniServer Campus Manager problem

    Hi, I recently ran a device patch update and ever since I have had major problems with Campus Manager.
    I figured it was ANIServer related, so I have done the following so far:
    1 - I reinitialised the ANI database.
    2 - I replaced the aniserv.properties with the .orig file.
    The problems still exist: even when I managed to get the ANIServer started, Campus Manager data collection would have problems and I would get intermittent error messages when trying to access the Campus Manager admin pages.
    3 - I decided to upgrade to LMS 3.2 SP1.
    It seemed to be OK with data collection, but when I came in to work today it hadn't run any scheduled acquisitions, and when I went into the admin pages I had the error messages again: unable to get properties from server.
    4 - I decided to try a complete restore of the database to a date before the initial device package update.
    But I still have problems with the ANIServer failing to run.
    Please Help!!
    Kind regards Dean
    Here is the Ani log
    2011/06/23 13:11:10 main ani ERROR ServiceModule: Failed to instantiate com.cisco.nm.ani.server.dfmpoller.NoServiceModule
    2011/06/23 13:11:10 main ani MESSAGE DBConnection: Created new Database connection [hashCode = 6164599]
    2011/06/23 13:11:12 main ani MESSAGE PartialSMFCommit: Partial Commit is enabled for DataCollection
    2011/06/23 13:11:16 main ani ERROR POStore: Exception caught: com.cisco.nm.ani.server.frontend.AniStaticConfigException: AniStaticConfigException: Failed (sql) to verify/construct schema for PO com.cisco.nm.ani.server.topo.PagpPortChannel because: com.sybase.jdbc2.jdbc.SybSQLException: SQL Anywhere Error -116: Table must be empty
    2011/06/23 13:11:16 main ani ERROR AniLoadConfiguration: AniStaticConfigException: Failed to construct or verify database

    It seems to be working fine now. I used the info in the following link to reinitialise the ani.db: https://supportforums.cisco.com/thread/2083513
    Using this alone did not fix it: /opt/CSCOpx/bin/perl /opt/CSCOpx/bin/dbRestoreOrig.pl dsn=ani dmprefix=ANI
    But doing this as well seemed to sort it:
    First, set the CiscoWorks Daemon Manager service to Manual, and reboot the server.  When the server reboots, delete NMSROOT\databases\ani\ani.log if it exists.  Then run:
    NMSROOT\objects\db\win32\dbsrv9 -f NMSROOT\databases\ani\ani.db
    After that, set the Daemon Manager service back to Automatic, and reboot again
    Regards Dean

  • Weblogic Explicit Transaction Management

    Hi,
    I need to manage multiple transactions that access two different non-Oracle (non-XA) databases, but I still want to implement two-phase commit.
    I read the JTA documentation and tried to use TransactionManager, but I am not able to create an XAResource from my WebLogic JNDI lookup. It just returns a class cast exception saying it is not able to convert RmiDataSource to XADataSource, and the Transaction class's enlistResource() method only accepts an XAResource. Can you suggest a solution for getting multiple database transactions to be committed or rolled back as needed? In short, how do I manage transactions explicitly in WebLogic 10?

    OK, now I'm catching up. Do you see that exception thrown in your calling
    code, i.e. does your code as posted catch that exception?
    If so, you most probably are not using the proper connection pool -- you
    should be using the JTS pool not just a WL pool -- and so the connection is
    not participating in the transaction.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
    "Sunil Naik" <[email protected]> wrote in message
    news:3c26a505$[email protected]..
    >
    "Cameron Purdy" <[email protected]> wrote:
    Why do you say it is not working? Do you get a compile error? An
    exception?
    A partial commit? Deadlock? Horse head in your bed? Out of memory?
    Peace,
    Hi Cameron,
    Actually, while testing the method, I am deliberately making the 3rd method call
    throw an exception and exit. In that case, what I expect is that the work done in the
    first two method calls should be rolled back, i.e. the rows should not be inserted in
    the database. This is not happening: the inserts made in the earlier two methods are
    being committed.
    Hope it is clear now.
    Thanx,
    sunil
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
    "Sunil Naik" <[email protected]> wrote in message
    news:3c230f17$[email protected]..
    Hi,
    I have written a stateless session bean. There is a method in this bean
    which calls methods of other BMP beans. All these methods have to be part of one
    transaction. Below I have shown in pseudocode how I am handling it.
    // This is the SessionBean method
    public void processDocument() {
        UserTransaction Utrx = SessionContext.getUserTransaction();
        try {
            Utrx.begin();
            Bean1.method1();   // this method inserts a row in the database
            Bean2.method2();   // this method inserts a row in another table
            Bean3.method3();   // deletes a row
            // a couple of other method calls
            Utrx.commit();
        } catch (Exception e) {
            // catch exceptions and roll back
            try { Utrx.rollback(); } catch (Exception ignore) { }
        }
    }
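    A minimal sketch of what the reply recommends: obtain the connection through a transaction-aware (JTS/TX) data source looked up from JNDI so that it enlists in the active JTA transaction. The JNDI name and SQL are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class TxDataSourceSample {
        public void insertWithinJta() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource ds = (DataSource) ctx.lookup("jdbc/MyTxDataSource"); // hypothetical TX-aware pool
            utx.begin();
            Connection con = null;
            try {
                con = ds.getConnection(); // enlists in the active JTA transaction
                PreparedStatement ps = con.prepareStatement("insert into t (id) values (?)");
                ps.setInt(1, 1);
                ps.executeUpdate();
                ps.close();
                utx.commit();   // all enlisted work commits together
            } catch (Exception e) {
                utx.rollback(); // any failure rolls back all enlisted work
                throw e;
            } finally {
                if (con != null) {
                    con.close();
                }
            }
        }
    }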

  • Xml file datamodel always the same file

    Hello,
    I have the following issue: I configured ODI in the topology to load an external XML file that will arrive every day. My load works fine, but I am now seeing that ODI does not know that the file I point to in the configuration is different from one launch to another; it does not realize that the data inside has changed. I am doing some tests, so every time I receive a new XML file I rename the old one and put the new one in its place. My production system will work like this, on a schedule, but I am worried that ODI does not see the difference between my old file and the new one. Only if I close it and start it again does it realize that the file is new and load the new data.
    Have I forgotten some configuration somewhere? If yes, where is it and how should I manage it?
    Thank you in Advance.
    Corrado

    Hello C.
    Actually it's an ODI Procedure.
    Try this:
    PACKAGE_XML (your XML process package)
    A) ODI PROCEDURE 01 (insert this step)
    B) XML INTERFACES
    C) ODI PROCEDURE 02 (insert this step)
    Configure ODI PROCEDURES like this:
    ODI PROCEDURE 01
    Technology: XML
    Command on Target: SYNCHRONIZE FROM FILE
    Schema: Your Source XML Schema.
    Commit: Commit.
    ODI PROCEDURE 02
    Technology: XML
    Command on Target: SYNCHRONIZE FROM DATABASE
    Schema: Your Target XML Schema.
    Commit: Commit.
    This is also explained in the ODI documentation.
    Let me know if that helps.
    []'s

  • Can anybody explain what is support for ADF Project and to solve the Issues

    Hi,
    I am new to ADF and I am currently assigned to an ADF support project.
    Can anybody explain what support for an ADF project involves and how to solve the issues once the ADF project is live?
    We are getting tickets for the issues.
    Thanks in advance.

    I agree with Timo.
    It depends on the size of the project, user base, technologies, etc. We use a lot of technologies in the Fusion Middleware stack, and we get tickets in many areas.
    In your case, it could be anything: user training issues (users may not know how to use some system features), browser issues like blank screens, bugs in code such as JBO errors (failed to validate, another user has changed the row, failed to lock the record, NullPointerException, IllegalArgumentException, etc.), business logic issues, pages that do not render properly, performance issues, partial commit issues, application server issues, or authentication issues. If you use web services, you might also get web-service-related problems.

  • Explicit Transaction Management not working

    Hi,
    I have written a stateless session bean. There is a method in this bean which
    calls methods of other BMP beans. All these methods have to be part of one transaction. Below
    I have shown in pseudocode how I am handling it.
    // This is the SessionBean method
    public void processDocument() {
        UserTransaction Utrx = SessionContext.getUserTransaction();
        try {
            Utrx.begin();
            Bean1.method1();   // this method inserts a row in the database
            Bean2.method2();   // this method inserts a row in another table
            Bean3.method3();   // deletes a row
            // a couple of other method calls
            Utrx.commit();
        } catch (Exception e) {
            // catch exceptions and roll back
            try { Utrx.rollback(); } catch (Exception ignore) { }
        }
    }

    OK, now I'm catching up. Do you see that exception thrown in your calling
    code, i.e. does your code as posted catch that exception?
    If so, you most probably are not using the proper connection pool -- you
    should be using the JTS pool not just a WL pool -- and so the connection is
    not participating in the transaction.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
    "Sunil Naik" <[email protected]> wrote in message
    news:3c26a505$[email protected]..
    >
    "Cameron Purdy" <[email protected]> wrote:
    Why do you say it is not working? Do you get a compile error? An
    exception?
    A partial commit? Deadlock? Horse head in your bed? Out of memory?
    Peace,
    Hi Cameron,
    Actually, while testing the method, I am deliberately making the 3rd method call
    throw an exception and exit. In that case, what I expect is that the work done in the
    first two method calls should be rolled back, i.e. the rows should not be inserted in
    the database. This is not happening: the inserts made in the earlier two methods are
    being committed.
    Hope it is clear now.
    Thanx,
    sunil
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
    "Sunil Naik" <[email protected]> wrote in message
    news:3c230f17$[email protected]..
    Hi,
    I have written a stateless session bean. There is a method in this bean
    which calls methods of other BMP beans. All these methods have to be part of one
    transaction. Below I have shown in pseudocode how I am handling it.
    // This is the SessionBean method
    public void processDocument() {
        UserTransaction Utrx = SessionContext.getUserTransaction();
        try {
            Utrx.begin();
            Bean1.method1();   // this method inserts a row in the database
            Bean2.method2();   // this method inserts a row in another table
            Bean3.method3();   // deletes a row
            // a couple of other method calls
            Utrx.commit();
        } catch (Exception e) {
            // catch exceptions and roll back
            try { Utrx.rollback(); } catch (Exception ignore) { }
        }
    }

  • Pushreplication Example failing

    Hi all,
    I have been struggling with the Push Replication Active-Active example for a while now, and I cannot get it to work.
    I am able to run both active cache servers, but as soon as I run the ActiveActiveUpdater, I get errors in both the client and the server.
    I am using Coherence 12.1.3 and the Coherence Incubator 12.3 on Windows 7 64 bit.
    Here are the logs - ActiveActiveUpdater:
    Updating the cache with running sum object key = [0] and value [130]
    Exception in thread "main" Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for DistributedCacheWithPublishingCacheStore service on Member(Id=2, Timestamp=2015-03-08 11:03:03.758, Address=10.162.62.78:8090, MachineId=23146, Location=site:Site1,machine:ANIWCZIN-IL,process:1016, Role=CoherenceServer) (Wrapped: Failed to store key="0") poll() is a blocking call and cannot be called on the Service thread) poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:289)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:50)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPartialCommit(PartitionedCache.CDB:7)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:82)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:3)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:376)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.responseMessage.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    Caused by: Portable(com.tangosol.util.AssertionException): poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.coherence.Component._assertFailed(Component.CDB:12)
    at com.tangosol.coherence.Component._assert(Component.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:24)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:26)
    at com.tangosol.coherence.config.scheme.AbstractCachingScheme.realizeCache(AbstractCachingScheme.java:63)
    at com.tangosol.net.ExtensibleConfigurableCacheFactory.ensureCache(ExtensibleConfigurableCacheFactory.java:242)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:205)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:182)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:108)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:58)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributor.establishEventChannelController(CoherenceEventDistributor.java:149)
    at com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributorTemplate.realize(EventDistributorTemplate.java:263)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:208)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:1)
    at com.oracle.coherence.common.resourcing.AbstractDeferredSingletonResourceProvider.getResource(AbstractDeferredSingletonResourceProvider.java:85)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.distribute(PublishingCacheStore.java:327)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.store(PublishingCacheStore.java:523)
    at com.tangosol.net.cache.ReadWriteBackingMap$BinaryEntryStoreWrapper.storeInternal(ReadWriteBackingMap.java:6221)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:5003)
    at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1438)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:758)
    at java.util.AbstractMap.putAll(AbstractMap.java:273)
    at com.tangosol.net.cache.ReadWriteBackingMap.putAll(ReadWriteBackingMap.java:801)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putPrimaryResource(PartitionedCache.CDB:63)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postInvoke(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvokeAll(PartitionedCache.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvoke(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:3)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:376)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.responseMessage.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    CacheServer:
    2015-03-08 11:03:40.054/40.096 Oracle Coherence GE 12.1.3.0.0 <Warning> (threadistributedCacheistributedCacheWithPublishingCacheStore, member=2): Application code running on "DistributedCacheWithPublishingCacheStore" service thread(s) should not call ensureCache as this may result in deadlock. The most common case is a CacheFactory call from a custom CacheStore implementation.
    2015-03-08 11:03:40.056/40.098 Oracle Coherence GE 12.1.3.0.0 <Error> (threadistributedCacheistributedCacheWithPublishingCacheStore, member=2): Assertion failed: poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:24)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:26)
    at com.tangosol.coherence.config.scheme.AbstractCachingScheme.realizeCache(AbstractCachingScheme.java:63)
    at com.tangosol.net.ExtensibleConfigurableCacheFactory.ensureCache(ExtensibleConfigurableCacheFactory.java:242)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:205)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:182)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:108)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:58)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributor.establishEventChannelController(CoherenceEventDistributor.java:149)
    at com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributorTemplate.realize(EventDistributorTemplate.java:263)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:208)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:1)
    at com.oracle.coherence.common.resourcing.AbstractDeferredSingletonResourceProvider.getResource(AbstractDeferredSingletonResourceProvider.java:85)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.distribute(PublishingCacheStore.java:327)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.store(PublishingCacheStore.java:523)
    at com.tangosol.net.cache.ReadWriteBackingMap$BinaryEntryStoreWrapper.storeInternal(ReadWriteBackingMap.java:6221)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:5003)
    at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1438)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:758)
    at java.util.AbstractMap.putAll(AbstractMap.java:273)
    at com.tangosol.net.cache.ReadWriteBackingMap.putAll(ReadWriteBackingMap.java:801)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putPrimaryResource(PartitionedCache.CDB:63)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postInvoke(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvokeAll(PartitionedCache.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvoke(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:3)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    2015-03-08 11:03:40.058/40.100 Oracle Coherence GE 12.1.3.0.0 <Warning> (threadistributedCacheistributedCacheWithPublishingCacheStore, member=2): Partial commit due to the backing map exception com.tangosol.internal.util.HeuristicCommitException
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postInvoke(PartitionedCache.CDB:42)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvokeAll(PartitionedCache.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvocationContext.postInvoke(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:3)
    at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
    at java.lang.Thread.run(Thread.java:745)
    Caused by: (Wrapped: Failed to store key="0") com.tangosol.util.AssertionException: poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:289)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.onStoreFailure(ReadWriteBackingMap.java:5344)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:5009)
    at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1438)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:758)
    at java.util.AbstractMap.putAll(AbstractMap.java:273)
    at com.tangosol.net.cache.ReadWriteBackingMap.putAll(ReadWriteBackingMap.java:801)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putPrimaryResource(PartitionedCache.CDB:63)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postInvoke(PartitionedCache.CDB:36)
    ... 14 more
    Caused by: com.tangosol.util.AssertionException: poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.coherence.Component._assertFailed(Component.CDB:12)
    at com.tangosol.coherence.Component._assert(Component.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:24)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:26)
    at com.tangosol.coherence.config.scheme.AbstractCachingScheme.realizeCache(AbstractCachingScheme.java:63)
    at com.tangosol.net.ExtensibleConfigurableCacheFactory.ensureCache(ExtensibleConfigurableCacheFactory.java:242)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:205)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:182)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:108)
    at com.oracle.coherence.common.builders.NamedCacheSerializerBuilder.realize(NamedCacheSerializerBuilder.java:58)
    at com.oracle.coherence.patterns.eventdistribution.distributors.coherence.CoherenceEventDistributor.establishEventChannelController(CoherenceEventDistributor.java:149)
    at com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributorTemplate.realize(EventDistributorTemplate.java:263)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:208)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore$1.ensureResource(PublishingCacheStore.java:1)
    at com.oracle.coherence.common.resourcing.AbstractDeferredSingletonResourceProvider.getResource(AbstractDeferredSingletonResourceProvider.java:85)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.distribute(PublishingCacheStore.java:327)
    at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.store(PublishingCacheStore.java:523)
    at com.tangosol.net.cache.ReadWriteBackingMap$BinaryEntryStoreWrapper.storeInternal(ReadWriteBackingMap.java:6221)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:5003)
    ... 20 more
    Many thanks in advance!!

    Hi Subba,
    Have you done any binding? If yes, can you send the details please?
    Regards
    AnujN
