Interesting scenario on Partial Commit - Deletion of EO

Hi all,
Jdev Version: Studio Edition Version 11.1.1.7.0
Business Requirement: We need to partially commit the deletion of one row (say Row-A) in an EO and serialize another row (an addition, say Row-B) for committing after approval.
How we achieve this:
Step 1 - Make the changes to Row-A and Row-B in my main AM (AM1).
Step 2 - Create a parallel new root AM (AM2), make the changes that need to be partially committed on the VO/EO in this new root AM, and issue a commit on AM2.
Step 3 - After the partial commit, I am back in AM1, where I would like to serialize only the change to Row-B. So I call remove() on Row-A and passivate.
Step 4 - On my offline approval page, I invoke activate to show the new addition Row-B for approval, after which I can invoke commit.
Issue we face: When we passivate in Step 3, the deletion of Row-A also gets passivated. As a result, this row shows up on my approval screen, where only Row-B should be displayed.
Appreciate your inputs on this issue.
Thanks,
Srini

Hi,
Row A will be deleted with the next commit the way you have it. My interpretation is that you want to remove it from the collection, in which case you would call removeRowFromCollection() on the VO. Instead of what you are doing here, I would consider a soft delete, in which you set a delete flag on a row instead of deleting or committing it. You can then use a custom RowMatcher on the VO to filter those entities out, so that you only see the rows for approval that you need to see.
For partial commits, I would use a different strategy. Create a headless task flow (no view activity, just a method activity and a return activity) and configure it not to share the Data Control (isolated). You then pass in the information for the row to commit, query it in the method activity (through a VO), update the VO, and commit the change using the commit method from the Data Control frame (which you get through the BindingContext class).
This is more declarative and reusable than creating an AM just for this purpose. However, keep in mind that the calling task flow still thinks Row A is changed (as it doesn't participate in the commit). So what you do is call row.refresh() with the DB-forget-changes flag as an argument, so Row A is reset to the state that is in the database.
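The soft-delete idea above can be modeled outside ADF as a simple filter over rows. The sketch below is a plain-Java illustration only (the class and field names are invented for the example; it is not the ADF RowMatcher API):

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual model of the soft-delete idea: instead of removing Row-A,
// mark it with a delete flag and filter flagged rows out of the approval view.
class SoftDeleteSketch {

    static class Row {
        final String name;
        final boolean deleteFlag;
        Row(String name, boolean deleteFlag) {
            this.name = name;
            this.deleteFlag = deleteFlag;
        }
    }

    // In spirit, what a row-matching filter such as "DeleteFlag <> 'Y'" does:
    // keep only rows that are not soft-deleted.
    static List<String> approvalView(List<Row> rows) {
        List<String> visible = new ArrayList<>();
        for (Row r : rows) {
            if (!r.deleteFlag) {
                visible.add(r.name);
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        List<Row> rows = new ArrayList<>();
        rows.add(new Row("Row-A", true));   // soft-deleted, pending real deletion
        rows.add(new Row("Row-B", false));  // the addition awaiting approval
        System.out.println(approvalView(rows)); // prints [Row-B]
    }
}
```

In the real VO you would apply the equivalent filter declaratively, so the approval screen never sees flagged rows.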
Frank

Similar Messages

  • Interesting scenario- user facing error while deleting a sales order

    Hello All,
    I have one interesting scenario and want feedback from your side as soon as possible
    Scenario-
    One of my clients is facing a problem while deleting a sales order.
    the sales order he is trying to delete is the old order created in 2007.
    When I look at the document flow, the user created the export order, then delivered it, invoiced it and sent it to accounting, and the accounting document was also cleared. He then cancelled the invoice, reversed the PGI and deleted the delivery. The status the system shows for the sales order is "Being processed". Please note that in this case, after reversing the PGI and deleting the delivery, the delivery document disappears from the document flow.
    document flow looks like this
    Order - 200004715                     Being processed
    invoice- 700005310                    completed
    accounting 700005311                 cleared
    Cancel Invoice 700005315            completed
    Accounting 700005316                 cleared
    Now, in 2008, the user is trying to delete the sales order but is unable to do so. The system gives him the message "unable to delete sales order because of subsequent document 70005310" (which is the invoice number).
    Can somebody please throw some light on this problem.
    Thanks in advance.

    Hi,
    As far as I know, this is standard system behaviour. The reason: you have done PGI (which created a material document as well as an accounting document) and invoiced and reversed (which again created accounting documents and reversal documents). All these documents reference the sales order.
    If you deleted the sales order, the sales order number would be removed from the VBAK/VBAP tables.
    Hence, in a relational database scenario (meaning SAP in this context), deletion of a sales order after creation of subsequent documents is not feasible.
    Hope it clarifies the issue.
    Regards
    Murali

  • How can I implement the equivilent of a temporary table with "on commit delete rows"?

    hi,
    I have triggers on several tables. During a transaction, I need to gather information from all of them, and once one of the triggers has all the information, it creates some data. I can't rely on the order of the triggers.
    In Oracle and DB2, I'm using temporary tables with "ON COMMIT DELETE ROWS" to gather the information - they fit the situation perfectly, since I don't want any information to be passed between different transactions.
    In SQL Server, there are local and global temporary tables. Local temp tables don't work for me since apparently they get deleted at the end of the trigger. Global temp tables keep the data between transactions.
    I could use global tables and add a field that identifies the transaction, joining on this field in each access to these tables, but I didn't find a way to get a unique identifier for the transaction. @@SPID is the session, and sys.dm_tran_current_transaction is not accessible by the user I'm supposed to work with.
    Also, with global tables, I can't just wipe the data when "the operation is done", since at the trigger level I cannot tell when the operation is done, the transaction is committed, and no other triggers are expected to fire.
    Any idea which construct I could use to achieve the above - passing information between different triggers in the same transaction, while keeping the data visible only to the current transaction?
    (I saw similar questions but didn't see an adequate answer; sorry if I'm posting something that was already asked.)
    (I saw similar questions but didn't see an adequate answer, sorry if posting something that was already asked).
    Thanks!

    This is the scenario: If changes (CRUD) happen to both TableA and TableB, then log some info to TableC. Logic looks something like this:
    Create Trigger TableA_C After Insert on TableA {
      If info from TableB is available in the temp tables
            Write combined info to TableC
      else
            Write info from TableA to the temp tables
    }
    Create Trigger TableB_C After Insert on TableB {
      If info from TableA is available in the temp tables
            Write combined info to TableC
      else
            Write info from TableB to the temp tables
    }
    So each trigger needs info from the other table, and once everything is available, info to TableC is written. Info is only from the current transaction.
    The order of the triggers is not defined. Also, there's no guarantee that both triggers will fire - changes can happen to only TableA / B, and in that case I don't want to write anything to TableC.
    The part that gets and sets info to temp table is implemented as temp tables with "on commit delete rows" in DB2 / Oracle.
    What do you think? As I've mentioned, I could use global temp tables with a field that would identify the transaction, but didn't find something like that in SQL Server. And, the lifespan of local temp tables is too short.
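The trigger hand-off described above is essentially a per-transaction rendezvous: whichever trigger fires second finds the other side's info already staged and emits the combined record. This plain-Java sketch (all names invented) models only that control flow, not SQL Server triggers or temp tables:

```java
import java.util.ArrayList;
import java.util.List;

// Models the "on commit delete rows" hand-off between two triggers:
// whichever trigger fires second finds the other table's info already
// staged and writes the combined record to TableC.
class TriggerRendezvous {
    private String stagedFromA;   // per-transaction scratch area
    private String stagedFromB;
    final List<String> tableC = new ArrayList<>();

    void onInsertA(String infoA) {
        if (stagedFromB != null) tableC.add(infoA + "+" + stagedFromB);
        else stagedFromA = infoA;
    }

    void onInsertB(String infoB) {
        if (stagedFromA != null) tableC.add(stagedFromA + "+" + infoB);
        else stagedFromB = infoB;
    }

    // The equivalent of ON COMMIT DELETE ROWS: staged data never
    // survives the transaction, so nothing leaks to the next one.
    void commit() {
        stagedFromA = null;
        stagedFromB = null;
    }
}
```

Note that if only one side fires, its staged info is silently discarded at commit, which matches the requirement that nothing be written to TableC in that case.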

  • Interesting scenario regarding billing

    Hello Experts,
    I have an interesting scenario regarding an invoice block.
    The invoices are posted but blocked on a particular date.
    Could anybody please help with what could be the reason for this block and how it can be avoided in the future?
    Kindly throw some light on the tolerance limits set for the invoices.
    Awaiting your valuable inputs as early as possible.
    Thanks in advance for help.
    Regards,
    Psm.

    Hi,
    Your query is not clear.
    Can you explain further when the invoices are posted and when they are blocked?
    Regards,
    VNR

  • Interesting scenario Billing

    Hello Experts,
    I have an interesting scenario regarding an invoice block.
    The invoices are posted but blocked on a particular date.
    Could anybody please help with what could be the reason for this block and how it can be avoided in the future?
    Kindly throw some light on the tolerance limits set for the invoices.
    Awaiting your valuable inputs as early as possible.
    Thanks in advance for help.
    Regards,
    Psm.

    Hi,
    I think there is a posting block set for you. Please remove that posting block and proceed further.
    Go to the VOFA T-code, select your billing type, then go to General data and uncheck the Posting block field.
    It's not regarding the tolerance limit.
    Please go through the following link for some notes on tolerance limits.
    [Tolerance limit|http://help.sap.com/saphelp_45b/helpdata/en/a8/b9957a452b11d189430000e829fbbd/frameset.htm]
    Regards,
    Krishna.

  • Cannot commit delete row from VO

    I have a VO with which I have no problem creating new rows, updating existing rows and committing. But I can't commit a deleted row to the database - no exception or error message, etc. All my other VOs are working fine.
    Does anybody have any idea why? Is there an option or something on the VO, or is it a BC4J bug?
    Thanks,
    -Ming
    Edited by: user715460 on Oct 16, 2008 3:40 PM

    You are in the wrong forum. Try your question in the JDeveloper and ADF forum.
    And please consider this too (copied from the thread "Concerned in delayed response"):
    a) Use a good subject line that briefly describes the issue. This will attract those familiar with that area to come and help.
    b) Tell us what database version you are using. Not just saying e.g. "10g" but more specifically "10.2.0.3"
    c) Describe the issue clearly stating what you have tried, and what you are trying to achieve.
    d) Don't use txt spk because this is a professional forum, not a chat room and not everyone can follow it.
    e) Don't USE CAPITAL LETTERS IN YOUR DESCRIPTION as this is considered shouting and aggressive.
    f) Provide sample data for us to use if necessary either with the CREATE TABLE and INSERT statements to create it or providing a WITH clause that we can use. This saves us from having to type in and format the sample data for ourselves and is more likely to attract us to help.
    g) Show the code that you have already tried (if you haven't tried any code yet then why not? have a go yourself and only ask for help when you get stuck).
    h) Show us any error messages you are getting, in full, and with information of the line numbers where the error is occurring
    i) Wherever you provide data or code, remember to use the [code] tag before and the [/code] tag after, so that it keeps its layout and is clear to read.
    j) Perhaps one of the most important things of all... never suggest that you need a solution "urgently" or that your issue is "urgent". This implies that your issue is somehow more important than the issues posted by other people. Everybody would like an answer to their issue promptly, but it just depends when people are online who can answer the question and nobody is being paid to answer it, so it is arrogant and rude to demand urgent attention to your own. If something is that urgent then you should raise it through oracle metalink as a priority issue or pay for someone with the necessary skills to come and do the work for you.
    Timo

  • SAP FS-CD - VPVA Partial commit issue

    When we run VPVA with "start current run", dunning is escalated and can be viewed in the VYM10 transaction under the dunning history. We can also view the current dunning level in the FPL9 transaction by choosing the menu below:
    FPL9 -> SETTING -> ADDITIONAL FIELD -> SHOW -> OK.
    The issue is, after running VPVA with "start current run", we are able to view the escalation only in VYM10; it is not reflected in the FPL9 additional items. The dunning level still shows as 00.
    How do we avoid this partial commit issue? FS-CD experts, please advise.

    If the FS-CD items contain multiple business areas, then there is a possibility that the business area in FI might get populated with a blank space.
    If this issue occurs with all postings to FI where the BSEG table (or, in the case of the new GL, RBUSA in table FAGLFLEXT for totals and FAGLFLEXA for items) is not populated with the business area, then there might be some reconciliation issues depending on the GL (new or old) used. If you are using the new GL, you might have to check whether the document splitting functionality is activated (mySAP ERP, which is ECC 6.00 with pack SAPKH6001 and above) and will have to maintain Business Area as a splitting rule. Additionally, refer to note 990612 and check FM FI_DOCUMENT_POST.

  • Partial Commit in ODI

    Can ODI do a partial commit of a large batch of data, neglecting the erroneous rows? If so, how do we achieve this? This would be very helpful, especially in a batch transaction where, instead of rolling back the entire batch due to erroneous records, at least the correct records can be committed to the target.

    Hi,
    Adding some more points.
    As I suggested earlier, you can use a CKM for capturing the error records. Using a CKM is also a permanent solution for the data flow.
    Below are the steps for CKM processing:
    1. In your target datastore, declare a constraint (right-click on Constraints and choose, say, INSERT CONDITION).
    2. Let's assume you need to capture records where a field is not numeric. In the condition window, select a Sunopsis condition and in the Where box enter REGEXP_LIKE(<col_name>, '^[[:digit:]]*$'). Records violating the condition will be moved to the E$ table.
    3. Add this datastore to your target, set FLOW_CONTROL to YES and RECYCLE_ERROR to YES in the Control tab of your interface, and select the CKM and constraints.
    Your error records will be moved to the E$ table, and on the second run, once you correct the error records in the E$ table, those records will move to the target table again (error recycling).
    Please explore the below link for more information on REGEXP.
    http://www.oracle.com/technology/oramag/webcolumns/2003/techarticles/rischert_regexp_pt1.html
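Outside ODI, the routing the CKM condition performs - digits-only rows to the target, everything else to the error table - can be sketched in plain Java; the class and list names here are invented, and the two lists merely simulate the target table and the E$ table:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Simulates CKM flow control: rows passing the digits-only constraint
// (REGEXP_LIKE(col, '^[[:digit:]]*$')) go to the target; the rest go to
// an E$-style error bucket for later correction and recycling.
class FlowControlSketch {
    static final Pattern DIGITS_ONLY = Pattern.compile("^[0-9]*$");

    final List<String> target = new ArrayList<>();
    final List<String> errorTable = new ArrayList<>(); // stands in for the E$ table

    void load(List<String> batch) {
        for (String row : batch) {
            if (DIGITS_ONLY.matcher(row).matches()) target.add(row);
            else errorTable.add(row);
        }
    }
}
```

Note the `*` quantifier means an empty string also satisfies the constraint, exactly as with the REGEXP_LIKE pattern above.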
    All the best.
    Thanks,
    Guru

  • Regarding Partial Commit

    Can ODI do a partial commit? Suppose some rows errored out in a large batch of data - can ODI do a partial commit for the good rows, neglecting the erroneous ones? If this can be done, how do we achieve it?
    This feature would be helpful especially in a batch run where the entire transaction gets rolled back only because of a few erroneous rows.

    You can do it using the flow control facilities in your interface.
    You can then specify a commit or a rollback depending on the number of errors.
    The invalid data is loaded into an error table and can be recycled.

  • Interesting scenario, is there a need to reset logs periodically?

    10.2.0.2 Ent Ed -- Rman 10.2.0.2 -- aix 5.3
    Is there a possible need to perform a reset logs periodically? This question initially came to me at the worst of times: during a recovery of a fairly important database.
    If/when you have to perform an spfile or controlfile restore without a recovery catalog, and your log sequence is above 255 and your last backup was more than a day ago, is it possible your latest autobackup won't be found? In our scenario we had a day-old controlfile snapshot that we used, but I'm not sure we would have been able to recover had it not been there. From reading the 10g Recovery Manager Reference manual:
    "FROM AUTOBACKUP
    [autoBackupOptionList]
    Restores a control file autobackup. You can only specify this option on the RESTORE CONTROLFILE and RESTORE SPFILE commands. When restoring either type of file in NOCATALOG mode, the FROM AUTOBACKUP clause is required.
    RMAN begins the search on the current day or on the day specified with the SET UNTIL. If no autobackup is found in the current or SET UNTIL day, RMAN checks the previous day starting with sequence 256 (or the sequence specified by MAXSEQ) until it reaches 0. The search continues up to MAXDAYS days (default of 7, maximum of 366) from the current or SET UNTIL day. If no autobackup is found within MAXDAYS days, then RMAN signals an error and the command stops. "
    The doc doesn't mention it, but if you try setting MAXSEQ above 255, you'll get an error; 255 is the upper bound for the MAXSEQ parameter. This is a snippet from my test of that:
    <code>
    oracle_sandbox1> rman target /
    Recovery Manager: Release 10.2.0.2.0 - Production on Fri Nov 21 09:16:36 2008
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    connected to target database: WEBDEV (DBID=1965364971)
    RMAN> show all;
    using target database control file instead of recovery catalog
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 5;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/app/oracle/product/10.2/dbs/snapcf_webdev.f'; # default
    RMAN> run {
    2> allocate channel t1 type 'SBT_TAPE';
    3> restore spfile to pfile '/app/oracle/product/10.2/dbs/webdev_pfile.tst' from autobackup maxseq=300;
    4> }
    allocated channel: t1
    channel t1: sid=256 devtype=SBT_TAPE
    channel t1: Veritas NetBackup for Oracle - Release 6.5 (2007111606)
    Starting restore at 21-NOV-08
    released channel: t1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 11/21/2008 09:23:46
    RMAN-06494: MAXSEQ = 300 is out of range (0-255)
    RMAN>
    </code>

    The "sequence" here is not the Log Sequence Number that is assigned when LGWR switches from one Redo Log to another, and the previous one is archived out as an Archivelog (if running in ARCHIVELOG mode).
    The "sequence" here is the sequence of the controlfile autobackup that was created. In AUTOBACKUP mode every change to the database structure (eg adding a new datafile) causes an Autobackup. Similarly, every Database Backup causes an Autobackup. Since multiple such autobackups can occur within a single day, Oracle assigns a SEQUENCE to differentiate beteween the backups created on the same day. It is this sequence that is limited to 255 .
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • PI 7.3 receiver AAE idoc scenario-No receiver comm channel found in ICO

    Hi,
    I am working on a PI 7.3 receiver AAE IDoc scenario. When I try to configure the Integrated Configuration (ICO), I am not able to see the receiver communication channel in the receiver agreement.
    What is the reason for this? I have configured the communication channel, but it is still not shown in the receiver agreement dropdown.
    Please help.
    Regards,
    Sriparna

    Hi Sriparna,
    In PI 7.3 there are two separate IDoc adapters: the "standard" IDoc adapter and one that is dedicated to AAE (ICO). Make sure that you have used the right one - most probably you have not, which is why you cannot see the channel in the dropdown list.
    Hope this helps,
    Grzegorz

  • Item Interest Calculation for partially cleared items

    Hi
    We need to do interest calculation on customer line items. The T-code we are using is FINT. We have set an interest indicator for item interest calculation, with interest calculated on items cleared with payments. The requirement is that interest should be calculated even on partially cleared items. Suppose a customer invoice is generated on 1.1.2009 for INR 100000 and becomes due for payment on 30.1.2009. On 10.2.2009, a partial payment of INR 30000 is received against this invoice. The system should calculate interest on INR 30000 for 11 days. Then on 20.2.2009, the remaining payment of INR 70000 is received. In that case, interest should be calculated on INR 70000 for 21 days @ 1.25% PM. With the current configuration, when we define that the system should calculate interest on open items cleared with payments, the system calculates interest on INR 100000 @ 1.25% for 21 days. Please suggest.
    Regards
    Sanil Bhandari

    Hi, you can check all the steps below with the specific fields; I believe this works, so please check it.
    1. Define Interest Calculation Types
    Here you can enter the interest rate type as "S" (balance interest calculation).
    2. Prepare Account Balance Interest Calculation
    Here you can enter the interest calculation frequency (monthly, quarterly, etc.), calendar type G, and select the "balance plus interest" checkbox.
    3. Define Reference Interest Rates
    Here you can enter the date and currency.
    4. Define Time-Dependent Terms
    Here you can enter the currency, effective-from date, sequential number and term (Debit interest: balance interest calc. or Credit interest: balance interest calc.), and the reference interest rate you defined in the previous step.
    5. Enter Interest Values
    Here you can enter the interest rate for that reference interest type.
    6. Prepare G/L Account Balance Interest Calculation
    Here you can enter your G/L accounts:
    0001            Interest received (int received a/c)
    0002            Interest paid      (int paid a/c)
    0011            Pt vl.min.int.earned(int received a/c)
    0012            Pst vl.min.int.paid(int paid a/c)
    0013            Pst vl.dt.int.earned(int received a/c)
    0014            Past val.dt.int.paid(int paid a/c)
    0015            Calc.per.int.earned(int received a/c)
    0016            Calc.period int.paid(int paid a/c)
    1000            G/L account (earned)(Loan giving a/c)
    2000            G/L account (paid) (Loan taking a/c)
    After that you can post a transaction and execute your transaction code. I hope this is helpful for you.
    Regards,
    Nauma.

  • ADF view : auto commit delete  operation

    I have created an ADF table with add and delete operations. When I perform the delete operation, it is not committed automatically.
    Can you please suggest a way to auto-commit the delete operation?
    Thanks,
    Kiran

    User,
    please always tell us your JDev version, as the solution might depend on it.
    There is no auto-commit in ADF. You can program it so that the data gets committed after an add or delete, but nothing is done automatically.
    If you use a bounded task flow, you can drag the operation from the data control onto the task flow and navigate to it after you have done the add or delete operation.
    Timo

  • Commit delete

    Hello,
    I'm trying something new here and am wondering if anyone has ideas. In the past, rollback segment size wasn't something that mattered; we could make them as big as we wanted. We can no longer do that, so when I go to delete 4 million records from a table (I can't use truncate because I'm using a WHERE clause) I get an "unable to extend rollback..." error. So I wrote this script that I thought would delete 10 records and then quit (I would change it later to something larger). Instead it gives the rollback error again. I've looked at other posts on this board, but they seem a little complicated. Any ideas why this won't work?
    set time on
    set echo on
    declare
      cnt number(7) := 0;
      tot number(7) := 0;
      cursor C1 is
        select a.row_id from activity a
        where a.row_id in (select b.row_id from activity2 b);
    begin
      for row_id in C1 loop
        Delete from activity a where a.row_id in (select b.row_id from activity2 b);
        tot := tot + 1;
        cnt := cnt + 1;
        if (cnt >= 10) then
          commit;
          close c1;
          cnt := 0;
        end if;
      end loop;
      commit;
      dbms_output.put_line('activity records deleted: '||tot);
    end;
    /

    Andrew:
    Which sketch?
    I have no real metrics for this, but I use this approach fairly regularly when refreshing a test environment from our production data, to get rid of unneeded history. For a several-million-row table, the threshold appears to be deleting 40% or more of the table. I suspect that as table size increases, the threshold will decrease somewhat. One of these days, I will do some metrics.
    Finding space is relatively easy, unless you are seriously constrained at the filesystem level. What I usually do if there is no space in an existing tablespace is create a new tablespace to hold the temporary data, then drop it when finished. This beats expanding rollback to accommodate the delete, since the space is not permanently allocated.
    Also, you can usually enable constraints with NOVALIDATE, since the data was good originally. In John's case, I suspect there are no foreign keys, since he is deleting such a large percentage of the table.
    TTFN
    John
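The batching the original script seems to be after - delete, count, and commit every N rows so undo stays small - can be sketched outside the database. This plain-Java model (all names invented) illustrates only the control flow; in PL/SQL, one common equivalent is a loop that deletes with a ROWNUM limit and commits until no rows remain:

```java
import java.util.ArrayList;
import java.util.List;

// Models batched deletion: remove matching rows a fixed-size batch at a time,
// with a "commit" after each batch so rollback/undo usage stays small.
class BatchedDelete {

    static int deleteInBatches(List<Integer> table, List<Integer> doomed, int batchSize) {
        int total = 0;
        int sinceCommit = 0;
        for (Integer rowId : doomed) {
            if (table.remove(rowId)) {   // "delete from activity where row_id = :rowId"
                total++;
                sinceCommit++;
            }
            if (sinceCommit >= batchSize) {
                // a COMMIT would go here in SQL, releasing undo for the batch
                sinceCommit = 0;
            }
        }
        // final COMMIT for any remaining uncommitted deletes
        return total;
    }
}
```

The key difference from the original script is that each iteration deletes exactly one row (not the whole matching set), and the cursor/loop is never closed mid-iteration.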

  • Mail-to-Mail scenario - Problem with comm. channel not reading message

    Good day,
    I am setting up a Mail-to-Mail scenario, where a received e-mail with an attachment is both processed into a file and a monitoring e-mail needs to be sent. The attachment processing is an existing process and is still working properly, but my e-mail isn't sent. I am using Java mapping to create the destination e-mail based on the source e-mail. Everything is set up in the Directory as well, and through the Test Configuration feature I have tested my new addition to the existing process and am seeing the result that I am expecting. I have sent a test message, and XI picks it up just fine and processes it, as verified by both SXMB_MONI and the resulting file from the attachment.
    However, the monitoring e-mail is not sent. Using the runtime workbench, there is an error in the receiver communication channel, being the following error from the Audit Log:
    MP: Exception caught with cause com.sap.aii.af.ra.ms.api.RecoverableException: com.sap.aii.messaging.util.XMLScanException: java.lang.NullPointerException; nested exception caused by: java.io.IOException: Parsing an empty source. Root element expected!
    Looking through the Message Content, I notice there is a SOAP document describing the source message and two payloads. The first payload being the source message in XML format, the second payload is entirely empty. I suspect this empty second payload is causing the NullPointerException, but I am at a loss as to why the payload is empty. The Test Configuration showed me that the message gets through just fine, so what is going wrong?
    Here is a link to the configuration of the communication channel, maybe something is wrong here: [Config of communication channel|http://roald.hoolwerf.net/images/cc-config.png]
    My question is: What can I do to find what is causing the communication channel to receive an empty payload, and how can I solve this?
    Searching for this issue or the given error on both Google and these forums hasn't turned up any related posts; if I missed them, I am sorry for posting and please refer me to the related topics.
    Thank you in advance for taking time to look at this question.
    Kind regards,
    Roald Hoolwerf
    Edited by: Roald Hoolwerf on Feb 1, 2010 3:57 PM

    Try this.
    Uncheck the "use authentication" configuration,
    and use the mail package (base64).
