Delete 3 million records!

I would like to ask whether anyone has a good strategy (e.g. fallback plan, implementation) for deleting 3 million records from 3 tables using PL/SQL.
How long would it take to do that?
Many thanks in advance!

Sorry, I'm on a surrealistic tip today.
What I'm getting at is this:
Why PL/SQL? SQL is normally the most effective way of zapping records. However, deleting 80% of a 3.5 million row table is quite slow. It may be quicker to insert the rows you want to keep into a separate table, truncate the original table and then insert them back (sketched below). Of course, TRUNCATE is DDL and so can't be rolled back - that affects your fallback strategy (i.e. take a backup!).
Why three tables? What is the relationship between these tables? A plain SQL DELETE would work a lot faster if the tables were linked by foreign keys with ON DELETE CASCADE instead of using sub-queries.
Why three million? The question you haven't answered: three million out of how many?
Cheers, APC
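
A minimal sketch of that keep-and-truncate approach, assuming a hypothetical table BIG_TAB and a KEEP_FLAG = 'Y' predicate that marks the rows to retain (neither name comes from the thread):

    -- Sketch only: BIG_TAB and the KEEP_FLAG predicate are placeholders.
    CREATE TABLE big_tab_keep AS
      SELECT * FROM big_tab WHERE keep_flag = 'Y';   -- copy the ~20% of rows to keep
    TRUNCATE TABLE big_tab;                           -- DDL: fast, but cannot be rolled back
    INSERT /*+ APPEND */ INTO big_tab
      SELECT * FROM big_tab_keep;                     -- direct-path insert back into the original
    COMMIT;
    DROP TABLE big_tab_keep;
    -- Re-gather optimizer statistics (and check indexes/constraints) afterwards.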

Similar Messages

  • Deleting records from a table with 12 million records

    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records matching this condition:
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise.
    Regards,
    Narayan

    DELETE from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;

  • Delete over 1 million records

    I want to delete over 1 million records from a user table. This table also has 10 related tables. I tried a cursor loop over the records, but it could not finish and I had to kill the process.
    I have copied all the user names to a temp table and I am planning to join it with each table and delete.
    Do you think this is the right approach for deleting that many records?

    Sometimes it is appropriate to use a WHERE clause in export to extract the desired rows and tables, then recreate the tables with appropriate storage parameters and import. Other times CTAS is appropriate. Other times a plain old delete plus extra undo. And there are other options, like ETL software.
    The details determine appropriateness, including whether this is a one-time thing, how long until that many records come back, time frames, scope and so forth. Row-by-row processing is seldom the right way, though it is often used in over-generalized schemes, and it may be right if there are complicated business rules determining deletion. At times I've used all of the above in single projects, for example splitting out subsidiaries from an enterprise db or creating test schemata.
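
    If the plain-delete route is chosen, a minimal sketch of the temp-table join the poster describes might look like this (USER_NAMES_TMP, APP_USERS and the child-table/column names are assumptions, not from the thread):

    -- Sketch only: USER_NAMES_TMP holds the user names to remove; all names are assumed.
    DELETE FROM child_table_1 c
     WHERE EXISTS (SELECT 1 FROM user_names_tmp t WHERE t.user_name = c.user_name);
    -- ...repeat for each of the ten related tables, children before the parent...
    DELETE FROM app_users u
     WHERE EXISTS (SELECT 1 FROM user_names_tmp t WHERE t.user_name = u.user_name);
    COMMIT;

    Deleting the child tables before the parent avoids foreign-key violations.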

  • Deleting 110 million records!

    I have a table with 120 million records, of which only 10 million are useful. I want to delete the remaining 110 million records with a WHERE condition. I spoke to my DBA and he said this task would take around two weeks or more, but I need it done quickly because it has been affecting our daily rollup process that generates alerts for a high-priority application.
    I want to delete based on this condition:
    delete from tabA where colA=0;
    Any kind of help is highly appreciated.
    Oracle version: 11g

    >
    3.) insert /*+ append */ into taba select * from taba_temp;
    >
    That's the 'old' way, which should be used ONLY if the OP does not have the partitioning option licensed.
    >
    1.) create table taba_temp as select * from taba where cola != 0;
    >
    That 'temp' table should be created in the desired tablespace as a RANGE-partitioned table with one partition: VALUES LESS THAN (MAXVALUE).
    Then step 3 can just do ALTER TABLE ... EXCHANGE PARTITION to swap the data in. That is a metadata-only operation and takes a fraction of a second.
    No need to query the data again.
    DROP TABLE EMP_COPY;
    -- EMP_COPY is a copy of EMP and acts as the MAIN table that we want to keep
    CREATE TABLE EMP_COPY AS SELECT * FROM EMP;
    DROP TABLE EMP_TEMP;
    -- create a partitioned temp table with the same structure as the actual table;
    -- we only want to keep EMP records for DEPTNO = 20 in this example
    CREATE TABLE EMP_TEMP
    PARTITION BY RANGE (empno)
    (PARTITION all_data VALUES LESS THAN (MAXVALUE))
    AS SELECT * FROM EMP_COPY WHERE DEPTNO = 20;
    -- truncate our 'real' table - very fast
    TRUNCATE TABLE EMP_COPY;
    -- swap in the DEPTNO=20 data from the temp table - very fast (metadata only)
    ALTER TABLE EMP_TEMP EXCHANGE PARTITION all_data WITH TABLE EMP_COPY;

  • Deleting 5 million records (slowness issue)

    Hi guys,
    we are trying to delete 5 million records with the following query; it is taking too long (more than 2 hours):
    delete from <table_name> where date<condition_DT;
    FYI
    * The table is partitioned
    * There is a primary key
    Please assist us with this.

    Nothing much you can do.
    About the only alternatives are:
    1) Create a new table that copies the records you want to keep, then drop the old table and rename the new one to the old name. If you are deleting most of the records, this is a good approach.
    2) Create a new table that copies the records you want to keep, then truncate the partitions of the old table and use partition exchange to put the data back.
    3) Delete the data in smaller batches of about 100K records each. You could do this by using a different date value in the WHERE clause: delete data < 2003, then data < 2004, and so on (see the sketch after this list).
    4) If you want to delete all of the data in a partition, you can just truncate that partition. That is the approach to use if you partition by date and are trying to remove older data.
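
    A minimal sketch of option 3, batching the delete so each transaction stays small (the table name, column name, cutoff date and 100K batch size are all assumptions, not from the thread):

    DECLARE
      l_cutoff DATE := DATE '2003-01-01';   -- assumed cutoff; substitute the real condition_DT
      l_rows   PLS_INTEGER;
    BEGIN
      LOOP
        DELETE FROM my_table                -- my_table / date_col are placeholder names
         WHERE date_col < l_cutoff
           AND ROWNUM <= 100000;            -- delete at most ~100K rows per pass
        l_rows := SQL%ROWCOUNT;
        COMMIT;                             -- keeps each transaction (and its undo) small
        EXIT WHEN l_rows = 0;               -- stop once nothing is left to delete
      END LOOP;
    END;
    /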

  • Deleting 3 million duplicate records from 4 million records

    Hi,
    I want to delete 3 million duplicate records from a 4 million row table in production. We don't have partitioning, and I can't recreate the table with CTAS, so I have to delete the data in batches because it is a shared environment with people accessing the data.
    Is there a faster way to delete the data than a bulk delete keyed on MAX(ROWID)?
    Please help me out.
    Regards,
    Venkat.
    Edited by: ramanamadhav on Aug 16, 2011 8:41 AM
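
    A minimal sketch of one common batched duplicate-delete pattern, keeping the lowest ROWID per duplicate key (DUP_TAB and the KEY_COL1/KEY_COL2 columns are assumptions, since the thread never names them):

    DECLARE
      l_rows PLS_INTEGER;
    BEGIN
      LOOP
        DELETE FROM dup_tab d                       -- dup_tab and its key columns are placeholders
         WHERE d.ROWID NOT IN (SELECT MIN(t.ROWID)
                                 FROM dup_tab t
                                GROUP BY t.key_col1, t.key_col2)
           AND ROWNUM <= 100000;                    -- small batches suit a shared environment
        l_rows := SQL%ROWCOUNT;
        COMMIT;                                     -- release locks between batches
        EXIT WHEN l_rows = 0;
      END LOOP;
    END;
    /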

    After deleting the data (with the suggestion given by Justin), make sure that fresh statistics are also gathered.
    After such a heavy delete there is always a mismatch between NUM_ROWS and the actual count.
    The best approach is to shrink the space and then gather table statistics; otherwise the optimizer will be working from incorrect statistics.
    For example:
    16:41:55 SQL> create table test_tab_sm
      2  as
      3  select level col1, rpad('*888',200,'###') col2
      4  from dual connect by level <= 100000;
    Table created.
    Elapsed: 00:00:00.49
    16:42:08 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.51
    16:42:17 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                        100000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:42:28 SQL> delete from TEST_TAB_SM where mod(col1,2) =1;
    50000 rows deleted.
    Elapsed: 00:00:01.09
    16:42:39 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                        100000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:42:47 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.26
    16:42:55 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:43:27 SQL> alter table TEST_TAB_SM move;
    Table altered.
    Elapsed: 00:00:00.46
    16:43:59 SQL>  select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.03
    16:44:06 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.24
    16:44:17 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       1471         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:44:24 SQL>
    We can see how the number of blocks changed: from 2942 to 1471, i.e. half, and in your case it will be about a quarter of what it was using.
    There are other options for reclaiming space besides MOVE, for example:
    alter table <table_name> shrink space;
    Try it out and you can see the difference yourself.
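
    One caveat worth adding (a general Oracle requirement, not stated in the post): SHRINK SPACE only works on a heap table after row movement has been enabled, so the sequence is:

    alter table <table_name> enable row movement;
    alter table <table_name> shrink space;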

  • I want to delete approx 100,000 million records and free the space

    Hi,
    I want to delete approx 100,000 million records and free the space. How do I do it?
    Can somebody suggest an optimized way of archiving the data?

    To archive, back up the database.
    To delete and free up the space, truncate the table/partitions and then shrink the datafile(s) associated with the tablespace.
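
    A minimal sketch of that truncate-and-shrink sequence, assuming a hypothetical table BIG_HIST whose tablespace has datafile /u01/oradata/DB/hist_data01.dbf (all names, paths and sizes are placeholders):

    -- Keep anything worth archiving first (e.g. export or back up the database).
    TRUNCATE TABLE big_hist DROP STORAGE;     -- releases the table's extents back to the tablespace
    ALTER DATABASE DATAFILE '/u01/oradata/DB/hist_data01.dbf' RESIZE 500M;
    -- The resize only succeeds down to the datafile's high-water mark.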

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the SELECT query inside the condition shown below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, FOR ALL ENTRIES fetches all the records from the database; with this condition in place, nothing is selected if the table is empty.
    Regards,
    Pravin

  • Maintaining a huge volume of data (around 60 million records)

    I've a requirement to load the data from an ODS to a cube via full load. This ODS will receive 50 million records over the next 6 months, which we have to maintain in BW.
    Can you please advise on the following?
    Can we accommodate 50 million records in the ODS?
    If so, can we run the load of 50 million records from the ODS to the cube? Each record also has to be looked up in another ODS to get the value for another InfoObject. Is the load going to be successful for the 50 million records? I'm not sure. Or will we get a timeout error?

    Harsha,
    The data load should go through... some things to do / check:
    Delete the indices on the cube before loading and then rebuild them after the load completes.
    Regarding the lookup - if you are looking up specific values in another DSO, build a suitable secondary index on that DSO (preferably a unique index).
    A DSO or cube can definitely hold 50 million records - we have had cases with 50 million records per month, the DSO holding data for 6 to 10 months, and the same with the cube. Only the reporting on the cube might be slow at a very detailed level.
    Also please state your version - 3.x or 7.0.
    And if you are on Oracle, plan for providing / backing up archive logs, since loading generates a lot of archive logs.
    Edited by: Arun Varadarajan on Apr 21, 2009 2:30 AM

  • Creating table to hold 10 million records

    What should the TABLESPACE, PCTUSED, PCTFREE, INITRANS, MAXTRANS and STORAGE details be for creating a table to hold 10 million records?

    TABLESPACE - A tablespace big enough to hold 10 million rows. You may decide to have a separate tablespace for a big table, you may not.
    PCTUSED - Are these records likely to be deleted?
    PCTFREE - Are these records likely to be updated?
    INITRANS, MAXTRANS - How many concurrent users are likely to be working with these records?
    STORAGE - Do you want to override the default storage values of the tablespace you are using?
    In short, these questions can only be answered by somebody who understands your application, i.e. you. The required values of these parameters have little to do with the fact that the table has 10 million rows; you would need to answer the same questions for a table that held only ten thousand rows. (A skeleton CREATE TABLE showing where each clause goes is sketched below.)
    Cheers, APC
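
    For reference, a minimal skeleton showing where each of those clauses sits in a CREATE TABLE statement (the table name, tablespace and every value are placeholders, not recommendations):

    CREATE TABLE big_table (
      id      NUMBER NOT NULL,
      payload VARCHAR2(200)
    )
    TABLESPACE big_data_ts              -- a tablespace sized for ~10 million rows
    PCTFREE  10                         -- room left per block for future updates
    PCTUSED  40                         -- when a block re-enters the free list (ignored under ASSM)
    INITRANS 2                          -- initial transaction slots per block
    MAXTRANS 255                        -- accepted for compatibility; ignored in recent releases
    STORAGE (INITIAL 100M NEXT 100M);   -- only needed to override the tablespace defaults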

  • How to delete a record in a table

    Hi Guru's,
    I have a table which contains a number of records.
    I want to delete one record, either via the Table Maintenance Generator or from the table itself.
    How do I do that? And when deleting the record, can we create a new transport request (TR) for it?
    Can anybody tell me?
    Thanks in Advance,
    venkat

    Hi,
    It is answered; in my case there is no Table Maintenance Generator for the table.
    Here is how I did it:
    I simply went into debug mode and set
    code = 'DELE'.
    When I came back out of debug mode to the table display, a Delete button appeared on the application toolbar.
    I selected the record and clicked the Delete button, and it was deleted.
    But it did not ask for any new transport request.
    Regards,
    Venkat

  • How to delete a record in a file present on the application server?

    Hi,
      I have two questions.
      i) How can I delete a record, based on some condition in the record, from a file that is present on the application server?
      ii) How can I lock out other users while one user is accessing the file on the application server?
    Thanks in advance.
    Suvan

    Hi,
    If you need to delete records frequently, that approach to deleting a record from a file involves unnecessary copying of records from one file to another, plus deleting and renaming files.
    Instead, add a field DEL_FLAG to your record structure.
    If you want to delete a record from the file, just mark its DEL_FLAG as 'X'.
    While processing you can then use a loop like:
    loop at it_XXX where del_flag <> 'X'.
    endloop.
    This logically deletes the record.
    Only when the application finishes do you need to perform the copying / deleting / renaming activity once.
    Hope this helps.
    Cheers,
    Nitin

  • Error while updating or deleting a record

    Hi to all,
    I have created a table GROUP_MASTER with 2 fields, GROUP_ID and GROUP_NAME.
    The user specifies a group name in a text box and presses Save. Before committing, a sequence number is generated
    inside a form procedure and inserted.
    The insert is working fine.
    But when I select a record (i.e. a group name), change its contents and commit, it does not commit.
    When I checked the display error it showed ORA-01400: cannot insert NULL into the table.
    I am not inserting a value here; I am only updating the group name.
    Even when I delete a record it shows the same error.
    Can anyone help to solve this?
    Thanks in advance.

    I have a WHEN-BUTTON-PRESSED trigger which shows the LOV containing GROUP_ID and GROUP_NAME:
    a := show_lov('lov27');
    if a then
      go_block('group_master');
      execute_query;
    end if;
    It correctly fetches the selected data, but it prompts a message box asking "Do you want to save the record?".
    If I ignore it, modify the group name and save, it still says it is unable to insert a NULL value.
    I just want a simple way to solve this:
    I want to fetch that particular record and modify or delete it.
    GROUP_ID and GROUP_NAME cannot be null.
    I will be using buttons to add, delete, update and save records, and an LOV to fetch records for modification or deletion.
    How do I avoid the message box that appears when I call the LOV with execute_query?
    I tried KEY-LISTVAL but didn't find a solution; I will try again.
    Thanks for your replies.

  • Stale data error while deleting a record

    Hi
    My design of this development is as follow...
    1. A Search page in which users give some search criteria and the results are displayed in a results region on the same page. Each results record has two buttons, 'Update' and 'Delete', so that users can update or delete the record. For updates I created one more page where users can edit and save the data. I have two AMs, one for the Search page and one for the Update page.
    Please find more details below.
    intfEO  based on PO_REQUISITIONS_INTERFACE_ALL
    errorEO  based on PO_INTERFACE_ERRORS
    updateEO based on PO_REQUISITIONS_INTERFACE_ALL
    VOs
    intfVO based on intfEO and errorEO
    updateVO based on updateEO
    AM
    intfAM based on intfVO
    updateAM based on updateVO
    Pages
    searchPG based on intfAM
    updatePG based on updateAM
    Suppose I have one record in the interface table with a corresponding error record in the error table. When users give the search criteria and hit Enter, they find one record; i.e. there is one record in the interface table with one error record in the error table, and both tables share the same transaction id (primary key). The user first updates the record, and that works. When he then tries to delete the record, I get the error below. Please note that I do not get this error if I delete the record directly without doing an update first. Whenever users click the update icon the update page opens, and when they click the Apply button the data is updated in the database and the page forwards back to the search page.
    Unable to perform transaction on the record.
    Cause: The record contains stale data. The record has been modified by another user.
    Action: Cancel the transaction and re-query the record to get the new data.
    My UpdatePage Controller
    /*===========================================================================+
    |   Copyright (c) 2001, 2005 Oracle Corporation, Redwood Shores, CA, USA    |
    |                         All rights reserved.                              |
    +===========================================================================+
    |  HISTORY                                                                  |
    +===========================================================================*/
    package powl.oracle.apps.xxpowl.po.requisition.webui;
    import com.sun.java.util.collections.HashMap;
    import com.sun.rowset.internal.Row;
    import java.io.Serializable;
    import oracle.apps.fnd.common.MessageToken;
    import oracle.apps.fnd.common.VersionInfo;
    import oracle.apps.fnd.framework.OAApplicationModule;
    import oracle.apps.fnd.framework.OAException;
    import oracle.apps.fnd.framework.OAViewObject;
    import oracle.apps.fnd.framework.webui.OAControllerImpl;
    import oracle.apps.fnd.framework.webui.OADialogPage;
    import oracle.apps.fnd.framework.webui.OAPageContext;
    import oracle.apps.fnd.framework.webui.OAWebBeanConstants;
    import oracle.apps.fnd.framework.webui.beans.OAWebBean;
    import oracle.apps.fnd.framework.webui.beans.form.OASubmitButtonBean;
    import oracle.apps.fnd.framework.webui.beans.message.OAMessageDateFieldBean;
    import oracle.apps.fnd.framework.webui.beans.message.OAMessageTextInputBean;
    //import oracle.apps.fnd.oam.diagnostics.report.Row;
    import oracle.apps.icx.por.common.webui.ClientUtil;
    import powl.oracle.apps.xxpowl.po.requisition.server.xxpowlPOReqIntfUpdateAMImpl;
    import powl.oracle.apps.xxpowl.po.requisition.server.xxpowlPOReqIntfUpdateEOVOImpl;
    * Controller for ...
    public class xxpowlPOReqIntfAllUpdatePageCO extends OAControllerImpl
      public static final String RCS_ID="$Header$";
      public static final boolean RCS_ID_RECORDED =
            VersionInfo.recordClassVersion(RCS_ID, "%packagename%");
       * Layout and page setup logic for a region.
       * @param pageContext the current OA page context
       * @param webBean the web bean corresponding to the region
      public void processRequest(OAPageContext pageContext, OAWebBean webBean)
        super.processRequest(pageContext, webBean);
         xxpowlPOReqIntfUpdateAMImpl am = (xxpowlPOReqIntfUpdateAMImpl)pageContext.getApplicationModule(webBean);
          xxpowlPOReqIntfUpdateEOVOImpl UpdateVO =(xxpowlPOReqIntfUpdateEOVOImpl)am.findViewObject("xxpowlPOReqIntfUpdateEOVOImpl");
          String newvalue = (String)pageContext.getSessionValue("testValue");     
          System.out.println("Transaction ID from processRequest UpdateCO from testValue field:"+newvalue);
          String transactionid = pageContext.getParameter("HashmapTransacitonid");
          System.out.println("Transaction ID from processRequest Hash Map in UpdateCO :"+transactionid);
          String errorcolumn = pageContext.getParameter("HashmapErrorcolumn");
          System.out.println("Error Column Name from processRequest Hash Map in UpdateCO :"+errorcolumn);
          String errormsg = pageContext.getParameter("HashmapErrormessage");
          System.out.println("Error Message from processRequest Hash Map in UpdateCO :"+errormsg);
          String readyonly = pageContext.getParameter("HashmapReadonly");
          System.out.println("Read Only value from processRequest Hash Map in UpdateCO :"+readyonly);
          if (transactionid !=null & !"".equals(transactionid)) { 
         /* Passing below four parameters to the Update Page */
          Serializable amParams[] = new Serializable[]{transactionid,readyonly,errorcolumn,errormsg} ;
          pageContext.getRootApplicationModule().invokeMethod("executexxpowlPOReqIntfUpdateEOVO", amParams);
       * Procedure to handle form submissions for form elements in
       * a region.
       * @param pageContext the current OA page context
       * @param webBean the web bean corresponding to the region
      public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        super.processFormRequest(pageContext, webBean);      
         OAApplicationModule am = pageContext.getApplicationModule(webBean);
          if (pageContext.getParameter("ApplyButton") != null)
            System.out.println("Inside ApplyButton method in UpdatePageCO");
            OAViewObject vo = (OAViewObject)am.findViewObject("xxpowlPOReqIntfUpdateEOVO");
            String transactionid = pageContext.getParameter("HashmapTransacitonid");
            System.out.println("Transaction ID from processFormRequest Hash Map in UpdateCO-ApplyButton method :"+transactionid);
            am.invokeMethod("apply");
              pageContext.forwardImmediately("OA.jsp?page=/powl/oracle/apps/xxpowl/po/requisition/webui/xxpowlPOReqIntfAllPG",
                                                     null,
                                                     OAWebBeanConstants.KEEP_MENU_CONTEXT,
                                                     null,
                                                     null,
                                                     true,
                                                     OAWebBeanConstants.ADD_BREAD_CRUMB_NO);
           else if (pageContext.getParameter("CancelButton") != null)
            am.invokeMethod("rollback");
            pageContext.forwardImmediately("OA.jsp?page=/powl/oracle/apps/xxpowl/po/requisition/webui/xxpowlPOReqIntfAllPG",
                                                   null,
                                                   OAWebBeanConstants.KEEP_MENU_CONTEXT,
                                                   null,
                                                   null,
                                                   true,
                                                   OAWebBeanConstants.ADD_BREAD_CRUMB_NO);
    UpdatePageAMImpl.java
    package powl.oracle.apps.xxpowl.po.requisition.server;
    import oracle.apps.fnd.common.MessageToken;
    import oracle.apps.fnd.framework.OAException;
    import oracle.apps.fnd.framework.OARow;
    import oracle.apps.fnd.framework.server.OAApplicationModuleImpl;
    import oracle.apps.fnd.framework.OAViewObject;
    import oracle.apps.inv.appsphor.order.server.XxapOrderHeaderVOImpl;
    import oracle.jbo.Transaction;
    // ---    File generated by Oracle ADF Business Components Design Time.
    // ---    Custom code may be added to this class.
    // ---    Warning: Do not modify method signatures of generated methods.
    public class xxpowlPOReqIntfUpdateAMImpl extends OAApplicationModuleImpl {
        /**This is the default constructor (do not remove)
        public xxpowlPOReqIntfUpdateAMImpl() {
        /**Container's getter for xxpowlPOReqIntfUpdateEOVO
        public xxpowlPOReqIntfUpdateEOVOImpl getxxpowlPOReqIntfUpdateEOVO() {
            return (xxpowlPOReqIntfUpdateEOVOImpl)findViewObject("xxpowlPOReqIntfUpdateEOVO");
        /**Sample main for debugging Business Components code using the tester.
        public static void main(String[] args) {
            launchTester("powl.oracle.apps.xxpowl.po.requisition.server", /* package name */
          "xxpowlPOReqIntfUpdateAMLocal" /* Configuration Name */);
    /*  // Added by 
        public void execute_update_query(String TransactionID) {
        xxpowlPOReqIntfUpdateEOVOImpl vo = getxxpowlPOReqIntfUpdateEOVO();
        vo.initQuery(TransactionID);
    // Added by  , this will not call bec changed the logic and so now the update button enabled on search results page
    // and this method will not called
    public void pageInEditMode (String transactionID, String readOnlyFlag, String ErrorColumn,
                                                                           String ErrorMessage)
        System.out.println("Transaction Id from pageInEditMode in UpdatePGAMImpl.java: "+transactionID);
        System.out.println("xxReadOnly from pageInEditMode in UpdatePGAMImpl.java: "+readOnlyFlag);
        // Get the VO
        xxpowlPOReqIntfUpdateEOVOImpl updateVO = getxxpowlPOReqIntfUpdateEOVO();
        //Remove the where clause that was added in the previous run
        updateVO.setWhereClause(null);
        //Remove the bind parameters that were added in the previous run.
        updateVO.setWhereClauseParams(null);
        //Add where clause
        // updateVO.addWhereClause(" TRANSACTION_ID = :1 ");
        //Bind transactionid to the where clause.
         // updateVO.setWhereClauseParam(1, transactionID); // this will not work bec it will start with zero from from 1
          updateVO.setWhereClauseParam(0, transactionID);
        //Execute the query.
        updateVO.executeQuery();
        xxpowlPOReqIntfUpdateEOVORowImpl currentRow = (xxpowlPOReqIntfUpdateEOVORowImpl)updateVO.next();
        // Assiging the transient varaibles
        currentRow.setxxErrorMessage(ErrorMessage);
        currentRow.setxxErrorColumn(ErrorColumn);
        if ("N".equals(readOnlyFlag))
              /* Make the attribute to 'False so that all fields will be displayed in Edit Mode because we used this
               xxReadOnly as SPEL  */
               currentRow.setxxReadOnly(Boolean.FALSE);
       public void executexxpowlPOReqIntfUpdateEOVO(String transactionID, String xxReadyOnly, String ErrorColumn,
                                                                                              String ErrorMessage)
           System.out.println("Transaction Id from executexxpowlPOReqIntfUpdateEOVO in UpdatePGAMImpl.java: "+transactionID);
           System.out.println("xxReadOnly from executexxpowlPOReqIntfUpdateEOVO in UpdatePGAMImpl.java: "+xxReadyOnly);
           System.out.println("Error Message from executexxpowlPOReqIntfUpdateEOVO in UpdatePGAMImpl.java: "+ErrorColumn);
           System.out.println("Error Column from executexxpowlPOReqIntfUpdateEOVO in UpdatePGAMImpl.java: "+ErrorMessage);
         // Get the VO
         xxpowlPOReqIntfUpdateEOVOImpl updateVO = getxxpowlPOReqIntfUpdateEOVO();
         //xxpowlPOReqIntfUpdateEOVORowImpl updaterowVO = xxpowlPOReqIntfUpdateEOVO();
       //not working
       //    OARow row = (OARow)updateVO.getCurrentRow();
       //    row.setAttribute("xxReadOnly", Boolean.TRUE);
    // updateVO.putTransientValue('XXXXX',x);
         //Remove the where clause that was added in the previous run
         updateVO.setWhereClause(null);
         //Remove the bind parameters that were added in the previous run.
         updateVO.setWhereClauseParams(null);
         //Add where clause
         // updateVO.addWhereClause(" TRANSACTION_ID = :1 ");
         //Bind transactionid to the where clause.
          // updateVO.setWhereClauseParam(1, transactionID); // this will not work bec it will start with zero from from 1
           updateVO.setWhereClauseParam(0, transactionID);
        // updateVO.setWhereClauseParam(1, ErorrColumn); 
         //Execute the query.
         updateVO.executeQuery();
         /* We want the page should be read only initially so after executing the VO with above command
            and if you use next() it will go to the first record among the
            fetched records. If you want to iterate for all the records then use iterator */
            /* Using Iterator
             while(updateVO.hasNext()) {  // this will check after execute Query above command if it has any rows
              xxpowlPOReqIntfUpdateEOVORowImpl currentRow = (xxpowlPOReqIntfUpdateEOVORowImpl)updateVO.next();
              /* above line next() will take the control of the first record */
          /*     currentRow.setxxErrorMessage(ErrorMessage);
                 currentRow.setxxErrorColumn(ErrorColumn);
               if ("Y".equals(xxReadyOnly))
                      currentRow.setxxReadOnly(Boolean.TRUE);                 
             } // this while loop will loop till end of all the fetched records
         xxpowlPOReqIntfUpdateEOVORowImpl currentRow = (xxpowlPOReqIntfUpdateEOVORowImpl)updateVO.next();
      // Assiging the transient varaibles
      currentRow.setxxErrorMessage(ErrorMessage);
      currentRow.setxxErrorColumn(ErrorColumn);
        /* Make the attribute to 'TRUE' so that all fields will be displayed as READ ONLY because we used this
           xxReadOnly as SPEL
           if ("Y".equals(xxReadyOnly))
                  currentRow.setxxReadOnly(Boolean.TRUE);            
      //Added by  and this methiod will get called from UpdatePG Process Form Request controller  
         public void rollback()
           Transaction txn = getTransaction();
           if (txn.isDirty())
             txn.rollback();
        public void apply()
            //OAViewObject vo1 = (OAViewObject)getxxpowlPOReqIntfUpdateEOVO();
            //Number chargeAccountID = vo1.get
          getTransaction().commit();      
          OAViewObject vo = (OAViewObject)getxxpowlPOReqIntfUpdateEOVO();
          if (!vo.isPreparedForExecution())
           vo.executeQuery();
    SearchPG AM
    package powl.oracle.apps.xxpowl.po.requisition.server;
    import oracle.apps.fnd.framework.OAViewObject;
    import oracle.apps.fnd.framework.server.OAApplicationModuleImpl;
    import oracle.jbo.RowSetIterator;
    import oracle.jbo.Transaction;
    import oracle.jbo.domain.Number;
    import powl.oracle.apps.xxpowl.po.requisition.lov.server.xxpowlErrosLovVOImpl;
    import powl.oracle.apps.xxpowl.po.requisition.lov.server.xxpowlInterfaceSouceCodeLovVOImpl;
    import powl.oracle.apps.xxpowl.po.requisition.lov.server.xxpowlItemSegment1LovVOImpl;
    import powl.oracle.apps.xxpowl.po.requisition.lov.server.xxpowlOrgLovVOImpl;
    import powl.oracle.apps.xxpowl.po.requisition.lov.server.xxpowlReferenceNumberLovVOImpl;
    import powl.oracle.apps.xxpowl.po.requisition.lov.server.xxpowlRequestIdLovVOImpl;
    import powl.oracle.apps.xxpowl.po.requisition.lov.server.xxpowlRequisitionTypeLovVOImpl;
    // ---    File generated by Oracle ADF Business Components Design Time.
    // ---    Custom code may be added to this class.
    // ---    Warning: Do not modify method signatures of generated methods.
    public class xxpowlPOReqIntfAllAMImpl extends OAApplicationModuleImpl {
        /**This is the default constructor (do not remove)
        public xxpowlPOReqIntfAllAMImpl() {
        /**Container's getter for xxpowlPOReqIntfAllVO
        public xxpowlPOReqIntfAllVOImpl getxxpowlPOReqIntfAllVO() {
            return (xxpowlPOReqIntfAllVOImpl)findViewObject("xxpowlPOReqIntfAllVO");
        /**Sample main for debugging Business Components code using the tester.
        public static void main(String[] args) {
            launchTester("powl.oracle.apps.xxpowl.po.requisition.server", /* package name */
          "xxpowlPOReqIntfAllAMLocal" /* Configuration Name */);
        /**Container's getter for xxpowlRequestIdLovVO
        public xxpowlRequestIdLovVOImpl getxxpowlRequestIdLovVO() {
            return (xxpowlRequestIdLovVOImpl)findViewObject("xxpowlRequestIdLovVO");
        /**Container's getter for xxpowlErrosLovVO
        public xxpowlErrosLovVOImpl getxxpowlErrosLovVO() {
            return (xxpowlErrosLovVOImpl)findViewObject("xxpowlErrosLovVO");
        /**Container's getter for xxpowlOrgLovVO
        public xxpowlOrgLovVOImpl getxxpowlOrgLovVO() {
            return (xxpowlOrgLovVOImpl)findViewObject("xxpowlOrgLovVO");
      //Start Adding by Lokesh
      //This method wil get invoked from the search results page Process Request
       public void rollbackItem()
         Transaction txn = getTransaction();
         if (txn.isDirty())
           txn.rollback();
      //This method will invoked from Controller page when user click Yes on delete confirmtion page from Search Results Page
       public void deleteItem(String trasnsactionID)
         Number rowToDelete = new Number(Integer.parseInt(trasnsactionID));      
         OAViewObject vo = (OAViewObject)getxxpowlPOReqIntfAllVO();
         xxpowlPOReqIntfAllVORowImpl row = null;
         int fetchedRowCount = vo.getFetchedRowCount();
       //  System.out.print(fetchedRowCount);
         System.out.println("No of row fetched on delete method :"+fetchedRowCount);
         RowSetIterator deleteIter = vo.createRowSetIterator("deleteIter");
           System.out.println("1 :");
         if (fetchedRowCount > 0)
             System.out.println("2 :");
           deleteIter.setRangeStart(0);
             System.out.println("3 :");
           deleteIter.setRangeSize(fetchedRowCount);
             System.out.println("4 :");
           for (int i = 0; i < fetchedRowCount; i++)
               System.out.println("5 :");
             row = (xxpowlPOReqIntfAllVORowImpl)deleteIter.getRowAtRangeIndex(i);
               System.out.println("6 :");
             Number PK = row.getTransactionId();
               System.out.println("7 :");
             if (PK.compareTo(rowToDelete) == 0)
                 System.out.println("8 :");
               row.remove();
                 System.out.println("9 :");
               getTransaction().commit();
                 System.out.println("10 :");
               break;
                 //System.out.println("11 :");
           System.out.println("11 :");
         deleteIter.closeRowSetIterator();
           System.out.println("12 :");
        /**Container's getter for xxpowlInterfaceSouceCodeLovVO
        public xxpowlInterfaceSouceCodeLovVOImpl getxxpowlInterfaceSouceCodeLovVO() {
            return (xxpowlInterfaceSouceCodeLovVOImpl)findViewObject("xxpowlInterfaceSouceCodeLovVO");
        /**Container's getter for xxpowlRequisitionTypeLovVO
        public xxpowlRequisitionTypeLovVOImpl getxxpowlRequisitionTypeLovVO() {
            return (xxpowlRequisitionTypeLovVOImpl)findViewObject("xxpowlRequisitionTypeLovVO");
        /**Container's getter for xxpowlReferenceNumberLovVO
        public xxpowlReferenceNumberLovVOImpl getxxpowlReferenceNumberLovVO() {
            return (xxpowlReferenceNumberLovVOImpl)findViewObject("xxpowlReferenceNumberLovVO");
        /**Container's getter for xxpowlItemSegment1LovVO
        public xxpowlItemSegment1LovVOImpl getxxpowlItemSegment1LovVO() {
            return (xxpowlItemSegment1LovVOImpl)findViewObject("xxpowlItemSegment1LovVO");
    Search Page Controller
    /*===========================================================================+
    |   Copyright (c) 2001, 2005 Oracle Corporation, Redwood Shores, CA, USA    |
    |                         All rights reserved.                              |
    +===========================================================================+
    |  HISTORY                                                                  |
    +===========================================================================*/
    package powl.oracle.apps.xxpowl.po.requisition.webui;
    import com.sun.java.util.collections.HashMap;
    //import com.sun.java.util.collections.Hashtable;
    import java.util.Hashtable;
    //import java.util.HashMap;
    import java.io.Serializable;
    import javax.servlet.jsp.PageContext;
    import oracle.apps.fnd.common.MessageToken;
    import oracle.apps.fnd.common.VersionInfo;
    import oracle.apps.fnd.framework.OAApplicationModule;
    import oracle.apps.fnd.framework.OAException;
    import oracle.apps.fnd.framework.webui.OAControllerImpl;
    import oracle.apps.fnd.framework.webui.OADialogPage;
    import oracle.apps.fnd.framework.webui.OAPageContext;
    import oracle.apps.fnd.framework.webui.OAWebBeanConstants;
    import oracle.apps.fnd.framework.webui.TransactionUnitHelper;
    import oracle.apps.fnd.framework.webui.beans.OAWebBean;
    * Controller for ...
    public class xxpowlPOReqIntfAllSearchPageCO extends OAControllerImpl
      public static final String RCS_ID="$Header$";
      public static final boolean RCS_ID_RECORDED =
            VersionInfo.recordClassVersion(RCS_ID, "%packagename%");
       * Layout and page setup logic for a region.
       * @param pageContext the current OA page context
       * @param webBean the web bean corresponding to the region
      public void processRequest(OAPageContext pageContext, OAWebBean webBean)
        super.processRequest(pageContext, webBean);
        //Added by Lokesh
          OAApplicationModule am = pageContext.getApplicationModule(webBean);
         if (TransactionUnitHelper.isTransactionUnitInProgress(pageContext, "updateRecord", false)) {
               am.invokeMethod("rollbackItem");
               TransactionUnitHelper.endTransactionUnit(pageContext, "updateRecord");
       * Procedure to handle form submissions for form elements in
       * a region.
       * @param pageContext the current OA page context
       * @param webBean the web bean corresponding to the region
      public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        super.processFormRequest(pageContext, webBean);
             String userClicked = pageContext.getParameter("event");
             System.out.println("Event from processFormRequest in SearchPageCO :"+userClicked);
           System.out.println("Parametere Names are :- \t" + pageContext.getParameter("UpdateImage"));
           System.out.println("Parametere Names are :- \t" + pageContext.getParameter("event"));
         if (pageContext.getParameter("event") != null &&
             pageContext.getParameter("event").equalsIgnoreCase("Update")) { 
             String ReqTransactionId=(String)pageContext.getParameter("transacitonidParam");
             String errorcolumn=(String)pageContext.getParameter("errorcolumnParam");
             String errormsg=(String)pageContext.getParameter("errormessageParam");
             String readyonly="Y"; //(String)pageContext.getParameter("readonlyParam");
             System.out.println("Requisition Transaction Id  : "+ReqTransactionId);
             System.out.println("Error Column  : "+errorcolumn);
             System.out.println("Error Message  : "+errormsg);
             System.out.println("Read Only  : "+readyonly);
             HashMap params = new HashMap(4);
             params.put("HashmapTransacitonid",ReqTransactionId);
             params.put("HashmapErrorcolumn",errorcolumn);
             params.put("HashmapErrormessage",errormsg);
             params.put("HashmapReadonly",readyonly);
             pageContext.putSessionValue("testValue",ReqTransactionId);
             //System.out.println("Transaction Id passing through HashMap :- \t" + ReqTransactionId);
             pageContext.setForwardURL("OA.jsp?page=/powl/oracle/apps/xxpowl/po/requisition/webui/xxpowlPOReqIntfAllUpdatePG",
                                        null,
                                        OAWebBeanConstants.KEEP_MENU_CONTEXT,
                                        null,
                                        params,
                                        true,
                                        OAWebBeanConstants.ADD_BREAD_CRUMB_YES,
                                        OAWebBeanConstants.IGNORE_MESSAGES) ;   
    else if (pageContext.getParameter("event") != null &&
             pageContext.getParameter("event").equalsIgnoreCase("Delete")) {   
        System.out.println("Inside Delete method in SearchCO");
        String deleteTransactionID=(String)pageContext.getParameter("deleteTransactionIDParam");
        System.out.println("Transaction Id in Delete Method :- \t" + deleteTransactionID);
      //  deleteTransactionID ="";  //Makeing Null because dont want to show the transaction id bec users dont know about it
        //MessageToken[] tokens = { new MessageToken("MESSAGE_NAME", deleteTransactionID) };
        MessageToken[] tokens = { new MessageToken("MESSAGE_NAME", "") };
        OAException mainMessage = new OAException("FND", "FND_MESSAGE_DELETE_WARNING", tokens);
        OADialogPage dialogPage = new OADialogPage(OAException.WARNING,mainMessage, null, "", "");
        String yes = pageContext.getMessage("AK", "FWK_TBX_T_YES", null);
        String no = pageContext.getMessage("AK", "FWK_TBX_T_NO", null);
        dialogPage.setOkButtonItemName("DeleteYesButton");
        dialogPage.setOkButtonToPost(true);
        dialogPage.setNoButtonToPost(true);
        dialogPage.setPostToCallingPage(true);
        dialogPage.setOkButtonLabel(yes);
        dialogPage.setNoButtonLabel(no);
        Hashtable formParams = new Hashtable(1);
        formParams.put("transactionIdDeleted", deleteTransactionID);
        dialogPage.setFormP

    Hi friend,
    In the Search page I didn't do any update, and the Search page is not the problem; it is working fine. Only the create page is the problem: that is the page throwing the stale data error exception.
    Please give me more suggestions.
    Thanks in advance,

  • A better way to delete IT2001 records in batch

    Hi Gurus,
    A little problem here: due to a data source error, I need to delete IT2001 records in batch.
    Currently I am using the standard report RPUREOPN to do this, but RPUREOPN does not update the deductions in the IT2006 quotas, so I have to delete and re-upload the corresponding IT2006 quota data, which is inconvenient.
    Is there a better way to do this job, i.e. delete IT2001 in batch and also update the deductions in IT2006?
    Thanks in advance!
    Br,Kee

    After running the report you mentioned above to delete the absences/attendances, you can try running report RPTUPD00 to revaluate the attendances and absences.
    Regards,
    Divya

  • How to delete the records in CO1P?

    Hi all,
    When I close the PO, I find that the PO has CO1P records even though it has already been settled. How can I delete the records in CO1P for this PO?

    Dear,
    CO1P:
    Normally the Goods Movement errors are updated in Table AFFW - Goods Movements with Errors from Confirmations and in transaction COGI.
    If the table is locked during background processing (using update control for the confirmation), then the errors are recorded in CO1P instead.
    COFC:
    Used for Reprocessing of errors in confirmation related to actual cost calculations.
    Please run CORUPROC Program.
    Please check this link also for further help
    http://help.sap.com/saphelp_46c/helpdata/en/e3/7cd32396f611d1b5a60000e8359890/frameset.htm
    Regards,
    R.Brahmankar
