Deleting 110 million records

I have a table with 120 million records, of which only 10 million are useful. I want to delete the remaining 110 million records with a WHERE condition. I spoke to my DBA and he said this task will take around two weeks or more, but I need to get it done quickly because it has been affecting our daily rollup process, which generates alerts for high-priority applications.
I want to delete based on this condition:
delete from tabA where colA=0;
Any kind of help is highly appreciated.
Oracle version: 11g

>
3.) insert /*+ append */ into taba select * from taba_temp;
>
That's the 'old' way that should be used ONLY if OP does not have the partitioning option licensed.
>
1.) create table taba_temp as select * from taba where cola != 0;
>
That 'temp' table should be created in the desired tablespace as a RANGE-partitioned table with one partition: VALUES LESS THAN (MAXVALUE).
Then step 3 can just do an ALTER TABLE ... EXCHANGE PARTITION to swap the data in. That is a metadata-only operation and takes a fraction of a second.
No need to query the data again.
DROP TABLE EMP_COPY;
CREATE TABLE EMP_COPY AS SELECT * FROM EMP;  -- this is a copy of emp and acts as the MAIN table that we want to keep
DROP TABLE EMP_TEMP;
-- create a partitioned temp table with the same structure as the actual table
-- we only want to keep emp records for deptno = 20 for this example
CREATE TABLE EMP_TEMP
PARTITION BY RANGE (EMPNO)
(PARTITION ALL_DATA VALUES LESS THAN (MAXVALUE))
AS SELECT * FROM EMP_COPY WHERE DEPTNO = 20;
-- truncate our 'real' table - very fast
TRUNCATE TABLE EMP_COPY;
-- swap in the 'deptno=20' data from the temp table - very fast
ALTER TABLE EMP_TEMP EXCHANGE PARTITION ALL_DATA WITH TABLE EMP_COPY;

Similar Messages

  • Deleting 5 million records(slowness issue)

Hi guys,
we are trying to delete 5 million records with the following query; it is taking more time (more than 2 hours):
delete from <table_name> where date<condition_DT;
FYI:
* Table is a partitioned table
* Primary key is there
Please assist us on this.

    >
we are trying to delete 5 million records with the following query; it is taking more time (more than 2 hours):
delete from <table_name> where date<condition_DT;
FYI:
* Table is a partitioned table
* Primary key is there
Please assist us on this.
    >
    Nothing much you can do.
    About the only alternatives are
1) Create a new table that copies the records you want to keep, then drop the old table and rename the new one to the old name. If you are deleting most of the records this is a good approach.
2) Create a new table that copies the records you want to keep, then truncate the partitions of the old table and use partition exchange to put the data back.
3) Delete the data in smaller batches of 100K records or so each. You could do this by using a different date value in the WHERE clause: delete data < 2003, then delete data < 2004, and so on.
4) If you want to delete all data in a partition you can just truncate the partition. That is the approach to use if you partition by date and are trying to remove older data.
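Option 1 can be sketched in plain SQL like this (the table name, column name and cutoff date are illustrative placeholders, not taken from the thread):

```sql
-- Sketch of option 1: copy the keepers, drop the original, rename.
CREATE TABLE big_table_keep NOLOGGING AS
  SELECT * FROM big_table
  WHERE  sale_date >= DATE '2004-01-01';  -- the rows you want to KEEP

-- Recreate indexes, constraints, grants and triggers on big_table_keep here.

DROP TABLE big_table;
ALTER TABLE big_table_keep RENAME TO big_table;
```

Remember that indexes, grants, triggers and foreign keys do not follow a CTAS copy; they have to be recreated before the rename.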

  • Delete 3 million records!

I would like to ask if any of you have a good strategy (e.g. fallback plan, implementation) to delete 3 million records from 3 tables using PL/SQL.
How long would it take to do that?
    Many thanks in advance!

Sorry, I'm on a surrealistic tip today.
What I'm getting at is this:
Why PL/SQL? SQL is normally the most effective way of zapping records. However, deleting 80% of a 3.5 million row table is quite slow. It may be quicker to insert the rows you want to keep into a separate table, truncate the original table and then insert back. Of course, TRUNCATE is DDL and so can't be rolled back - that affects your regression strategy (i.e. take a backup!).
Why three tables? What is the relationship between these tables? DELETE would work a lot faster if the tables were linked by foreign keys with CASCADE DELETE instead of using sub-queries.
Why three million? The question you haven't answered: three million out of how many?
    Cheers, APC

  • Delete  over million records

I want to delete over 1 million records from a user table. This table also has 10 related tables. I tried a cursor to loop through the records, but it couldn't finish and I had to kill the process.
I have copied all the user names to a temp table and I am planning to join with each table and delete.
Do you think this approach is the right one for deleting this many records?

    Sometimes it is appropriate to use a where clause in export to extract the desired rows and tables, then recreate tables with appropriate storage parameters and import. Other times CTAS is appropriate. Other times plain old delete plus special undo. And there are other options like ETL software.
    Details determine appropriateness, including if this is a one-time thing, how long until that many records come back, time frames, scope and so forth. Row-by-row processing is seldom the right way, though that often is used in over-generalized schemes, and may be right if there are complicated business rules determining deletion. At times I've used all of the above in single projects like splitting out subsidiaries from an enterprise db or creating test schemata.

  • Delete 50 Million records from a table with 60 Million records

    Hi,
I'm using Oracle 9.2.0.7 on Win2k3 32-bit.
I need to delete 50 million rows from a table that contains 60 million records. This DB was just passed on to me. I tried to use a DELETE statement but it takes too long. After reading the articles and forums, the best way to delete that many records from a table is to create a temp table, transfer the needed data to the temp table, drop the big table, then rename the temp table to the big table name. But the key here is creating an exact replica of the big table. I have the create table, index and constraint scripts in the export file from my production DB, but I noticed in the forums that I don't have the create grant script. Is there a view I could use to get this? Can dbms_metadata get this?
When I need to create an exact replica of my big table, I only need:
create table, indexes, constraints, and grants scripts, right? Did I miss anything?
I just want to make sure that I haven't left anything out. Kindly help.
    Thanks and Best Regards

    Can dbms.metadata get this?
    Yes, dbms_metadata can get the grants.
    YAS@10GR2 > select dbms_metadata.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST') from dual;
    DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST')
      GRANT SELECT ON "YAS"."TEST" TO "SYS"
    When I need to create an exact replica of my big table, I only need:
    create table, indexes, constraints, and grants script right? Did I miss anything?
    There are triggers, foreign keys referencing this table (which will not permit you to drop the table if you do not take care of them), snapshot logs on the table, snapshots based on the table, etc...
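Using the same YAS.TEST example, dbms_metadata can also pull most of that dependent DDL; the object-type names below are as documented for DBMS_METADATA, and the table name is just the example above. Note that GET_DEPENDENT_DDL raises an error when no dependent object of the given type exists.

```sql
-- Grants, indexes, triggers and referencing foreign keys for table TEST
SELECT dbms_metadata.get_dependent_ddl('OBJECT_GRANT',   'TEST') FROM dual;
SELECT dbms_metadata.get_dependent_ddl('INDEX',          'TEST') FROM dual;
SELECT dbms_metadata.get_dependent_ddl('TRIGGER',        'TEST') FROM dual;
SELECT dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', 'TEST') FROM dual;
```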

  • Deleting records from a table with 12 million records

    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
Name                         Null?    Type
---------------------------- -------- ------------
CLM_PMT_CHCK_NUM             NOT NULL NUMBER(9)
CLM_PMT_CHCK_ACCT            NOT NULL VARCHAR2(5)
CLM_PMT_PAYEE_POSTAL_EXT_CD           VARCHAR2(4)
CLM_PMT_CHCK_AMT                      NUMBER(9,2)
CLM_PMT_CHCK_DT                       DATE
CLM_PMT_PAYEE_NAME                    VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_1             VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_2             VARCHAR2(30)
CLM_PMT_PAYEE_CITY                    VARCHAR2(19)
CLM_PMT_PAYEE_STATE_CD                CHAR(2)
CLM_PMT_PAYEE_POSTAL_CD               VARCHAR2(5)
CLM_PMT_SUM_CHCK_IND                  CHAR(1)
CLM_PMT_PAYEE_TYPE_CD                 CHAR(1)
CLM_PMT_CHCK_STTS_CD                  CHAR(2)
SYSTEM_INSERT_DT                      DATE
SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
This table has 12 million records.
    Please advise
    Regards,
    Narayan

    user7202581 wrote:
    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
Name                         Null?    Type
---------------------------- -------- ------------
CLM_PMT_CHCK_NUM             NOT NULL NUMBER(9)
CLM_PMT_CHCK_ACCT            NOT NULL VARCHAR2(5)
CLM_PMT_PAYEE_POSTAL_EXT_CD           VARCHAR2(4)
CLM_PMT_CHCK_AMT                      NUMBER(9,2)
CLM_PMT_CHCK_DT                       DATE
CLM_PMT_PAYEE_NAME                    VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_1             VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_2             VARCHAR2(30)
CLM_PMT_PAYEE_CITY                    VARCHAR2(19)
CLM_PMT_PAYEE_STATE_CD                CHAR(2)
CLM_PMT_PAYEE_POSTAL_CD               VARCHAR2(5)
CLM_PMT_SUM_CHCK_IND                  CHAR(1)
CLM_PMT_PAYEE_TYPE_CD                 CHAR(1)
CLM_PMT_CHCK_STTS_CD                  CHAR(2)
SYSTEM_INSERT_DT                      DATE
SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
This table has 12 million records.
    Please advise
    Regards,
Narayan
DELETE FROM CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
WHERE CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;

Deleting 3 million duplicated records from 4 million records

    Hi,
I want to delete 3 million duplicate records from 4 million records in production. We don't have partitions, and I can't recreate the table using the CTAS option, so I have to delete the data in batches because it is a shared environment with people accessing the data.
Is there any faster way to delete the data instead of bulk deletes batched by max ROWID?
Please help me out.
    please help me out.
    Regards,
    Venkat.
    Edited by: ramanamadhav on Aug 16, 2011 8:41 AM

After deleting the data (with the suggestion given by Justin), make sure that fresh statistics are also gathered.
After such a heavy delete there is always a mismatch between num_rows and the actual count.
The best way is to shrink space and then gather table statistics; otherwise the optimizer will get confused and will not work from correct statistics.
    For e.g.
16:41:55 SQL> create table test_tab_sm
  2  as
  3  select level col1, rpad('*888',200,'###') col2
  4  from dual connect by level <= 100000;
    Table created.
    Elapsed: 00:00:00.49
    16:42:08 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.51
    16:42:17 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                        100000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:42:28 SQL> delete from TEST_TAB_SM where mod(col1,2) =1;
    50000 rows deleted.
    Elapsed: 00:00:01.09
    16:42:39 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                        100000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:42:47 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.26
    16:42:55 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:43:27 SQL> alter table TEST_TAB_SM move;
    Table altered.
    Elapsed: 00:00:00.46
    16:43:59 SQL>  select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.03
    16:44:06 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.24
    16:44:17 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       1471         205          0
    1 row selected.
    Elapsed: 00:00:00.01
16:44:24 SQL>
We can see how the number of blocks changed: from 2942 to 1471, which is half, and in your case it will be about a quarter of what it was using.
There are other options for shrinking space besides MOVE, e.g. (note that SHRINK SPACE requires row movement to be enabled):
alter table <table_name> enable row movement;
alter table <table_name> shrink space;
Try it out and you can see the difference yourself.

  • I want to delete approx 100,000 million records and free the space

    Hi,
I want to delete approximately 100,000 million records and free the space. How do I do it?
Can somebody suggest an optimized way of archiving the data?

    user8731258 wrote:
    Hi,
    i want to delete approx 100,000 million records and free the space.
    I also want to free the space.How do i do it?
Can somebody suggest an optimized way of archiving data.
To archive, back up the database.
To delete and free up the space, truncate the table/partitions and then shrink the datafile(s) associated with the tablespace.
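As a sketch of that advice (the object names, partition name, file path and size below are placeholders):

```sql
-- Free the space held by the table itself:
TRUNCATE TABLE big_table DROP STORAGE;
-- or, if the table is partitioned, per partition:
ALTER TABLE big_table TRUNCATE PARTITION p_2003 DROP STORAGE;

-- Then, to hand the space back at the file level, shrink the datafile
-- (the new size must still cover the highest allocated block in the file):
ALTER DATABASE DATAFILE '/u01/oradata/db/users01.dbf' RESIZE 500M;
```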

How to delete the duplicate records in a table without a primary key

I have a table that contains around 1 million records and there is no primary key or auto-number column. I need to delete the duplicate records from this table. What is a simple, effective way to do this?

    Please see this link:
    Remove duplicate records ...
    sqldevelop.wordpress.com
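For reference, the usual technique on a table with no primary key keeps one ROWID per duplicate group; here t, col1 and col2 are placeholders for the real table and the columns that define a "duplicate":

```sql
-- Keep the row with the lowest ROWID in each duplicate group
DELETE FROM t
WHERE  ROWID NOT IN ( SELECT MIN(ROWID)
                      FROM   t
                      GROUP  BY col1, col2 );
```

For very large tables the same idea is often run in batches, as discussed in the threads above.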

  • Procedure to delete mutiple table records with 1000 rows at a time

    Hello Champs,
I have a requirement to create procedures to achieve the deletes from 5 big tables: 1 master table and 4 child tables. Each table has 28 million records. Now I have to schedule a procedure to run daily to delete approximately 500k records from each table. But the condition is: separate procedures for each table, and delete only 1000 rows at a time. The plan is below. Since I am new to writing complicated procedures, I don't have much idea how to code them; can someone help me design the procedure queries? Many thanks.
1. Fetch 1000 rows from the master table with a WHERE clause condition.
2. Match them with each child table and delete the records.
3. Finally, delete those 1000 rows from the master table.
4. Overall commit for all 5 x 1000 records.
5. Repeat steps 1 to 4 till there are no rows to fetch from the master table.
Below is the detailed procedure plan:
    ----- Main procedure to fetch 1000 rows at a time and providing them as input to 5 different procedures to delete records, repeating same steps till no rows to fetch i.e. (i=0)-----
    Create procedure fetch_record_from_master_table:
    loop
    i = Select column1 from mastertable where <somecondition> and rowcount <= 1000;
    call procedure 1 and i rows as input;
    call procedure 2 and i rows as input;
    call procedure 3 and i rows as input;
    call procedure 4 and i rows as input;
    call procedure 5 and i rows as input;
    commit;
    exit when i=0
    end loop;
----- Separate procedures to delete records from each child table ------
    Create procedure 1 with input:
    delete from child_table1 where column1=input rows of master table;
    Create procedure 2 with input:
    delete from child_table2 where column1=input rows of master table;
    Create procedure 3 with input:
    delete from child_table3 where column1=input rows of master table;
    Create procedure 4 with input:
    delete from child_table4 where column1=input rows of master table;
    --- procedure to delete records from master table atlast----
    Create procedure 5 with input:
    delete from Master_table where column1=input rows of master table;
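A minimal PL/SQL sketch of the plan above, assuming column1 is the join key (the WHERE condition, table names and batch size are placeholders, and the per-child procedures are folded into one driver here for brevity):

```sql
CREATE OR REPLACE PROCEDURE purge_in_batches AS
  TYPE t_ids IS TABLE OF master_table.column1%TYPE;
  l_ids t_ids;
BEGIN
  LOOP
    -- step 1: fetch up to 1000 keys from the master table
    SELECT column1 BULK COLLECT INTO l_ids
    FROM   master_table
    WHERE  status = 'PURGE'          -- placeholder for <somecondition>
    AND    ROWNUM <= 1000;

    EXIT WHEN l_ids.COUNT = 0;       -- step 5: stop when nothing is left

    -- step 2: delete matching rows from each child table
    FORALL i IN 1 .. l_ids.COUNT
      DELETE FROM child_table1 WHERE column1 = l_ids(i);
    FORALL i IN 1 .. l_ids.COUNT
      DELETE FROM child_table2 WHERE column1 = l_ids(i);
    FORALL i IN 1 .. l_ids.COUNT
      DELETE FROM child_table3 WHERE column1 = l_ids(i);
    FORALL i IN 1 .. l_ids.COUNT
      DELETE FROM child_table4 WHERE column1 = l_ids(i);

    -- step 3: finally delete the batch from the master table
    FORALL i IN 1 .. l_ids.COUNT
      DELETE FROM master_table WHERE column1 = l_ids(i);

    COMMIT;                          -- step 4: one commit per 5 x 1000 rows
  END LOOP;
END;
/
```

Splitting the five DELETEs into separate procedures that take the key collection as a parameter, as the plan requires, is a mechanical refactoring of this loop.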

Oops, but this will work, won't it?
begin
  execute immediate 'truncate table child_table1';
  execute immediate 'truncate table child_table2';
  execute immediate 'truncate table child_table3';
  execute immediate 'truncate table child_table4';
  -- disable the foreign keys that REFERENCE master_table before truncating it
  for r_fk in ( select table_name, constraint_name
                from user_constraints
                where constraint_type = 'R'
                and   r_constraint_name in ( select constraint_name
                                             from user_constraints
                                             where table_name = 'MASTER_TABLE' ) )
  loop
    execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' DISABLE';
  end loop;
  execute immediate 'truncate table master_table';
  for r_fk in ( select table_name, constraint_name
                from user_constraints
                where constraint_type = 'R'
                and   r_constraint_name in ( select constraint_name
                                             from user_constraints
                                             where table_name = 'MASTER_TABLE' ) )
  loop
    execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' ENABLE';
  end loop;
end;
/
Or
truncate table child_table1;
truncate table child_table2;
truncate table child_table3;
truncate table child_table4;
begin
  for r_fk in ( select table_name, constraint_name
                from user_constraints
                where constraint_type = 'R'
                and   r_constraint_name in ( select constraint_name
                                             from user_constraints
                                             where table_name = 'MASTER_TABLE' ) )
  loop
    execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' DISABLE';
  end loop;
end;
/
truncate table master_table;
begin
  for r_fk in ( select table_name, constraint_name
                from user_constraints
                where constraint_type = 'R'
                and   r_constraint_name in ( select constraint_name
                                             from user_constraints
                                             where table_name = 'MASTER_TABLE' ) )
  loop
    execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' ENABLE';
  end loop;
end;
/

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
Please put the SELECT query inside the condition shown below:
IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
This will solve your performance issue. When the internal table it_source_tmp had no records, the SELECT was fetching all the records from the database. With this condition in place, it will not select any records if the table contains none.
    Regards,
    Pravin

Maintaining huge volume of data (Around 60 Million records)

I've a requirement to load data from an ODS to a Cube via full load. This ODS will receive 50 million records over the next 6 months, which we have to maintain in BW.
Can you please advise on the following?
Can we accommodate 50 million records in the ODS?
If so, can we put the load for 50 million records from the ODS to the Cube? Each record has to be checked against another ODS to get the value for another InfoObject. Hence, will the load succeed for 50 million records? I'm not sure. Or will we get a timeout error?

    Harsha,
    The data load should go through ... some things to do / check...
    Delete the indices on cube before loading and then rebuild the same later after the load completes.
    regarding the lookup - if you are looking up specific values in another DSO - build a suitable secondary index on the DSO for the same ( preferably unique index )
A DSO or cube can definitely hold 50 million records - we have had cases with 50 million records for 1 month, with the DSO holding data for 6 to 10 months, and the same with the cube. Only the reporting on the cube might be slow at a very detailed level.
Also please state your version - 3.x or 7.0...
Also, if you are on Oracle, plan for providing / backing up archive logs, since loading generates a lot of archive logs...
    Edited by: Arun Varadarajan on Apr 21, 2009 2:30 AM

  • Creating table to hold 10 million records

What should the TABLESPACE, PCTUSED, PCTFREE, INITRANS, MAXTRANS and STORAGE details be for creating a table to hold 10 million records?

TABLESPACE: A tablespace big enough to hold 10 million rows. You may decide to have a separate tablespace for a big table, you may not.
PCTUSED: Are these records likely to be deleted?
PCTFREE: Are these records likely to be updated?
INITRANS, MAXTRANS: How many concurrent users are likely to be working with these records?
STORAGE: Do you want to override the default storage values of the tablespace you are using?
In short, these questions can only be answered by somebody who understands your application, i.e. you. The required values of these parameters have little to do with the fact that the table has 10 million rows. You would need to answer the same questions for a table that held only ten thousand rows.
    Cheers, APC
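Purely as an illustration of where those parameters go in the DDL (every name and value below is a placeholder, not a recommendation):

```sql
CREATE TABLE big_tab (
  id  NUMBER,
  val VARCHAR2(100)
)
TABLESPACE big_data     -- a separate tablespace, if you chose one
PCTFREE   10            -- room left in each block for future updates
PCTUSED   40            -- when a block rejoins the free list after deletes
INITRANS  4             -- expected concurrent transactions per block
STORAGE  (INITIAL 100M NEXT 100M);
```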

How to delete the double records connected to one or more tables in SQL 2008?

    Hi
Can anyone please help me with an SQL query? I have a table called People with columns: personno, lastname, firstname and so on. The personno column has duplicate records, so I prefixed all the duplicate values with "double". I tried deleting these duplicate records but they are linked to one or more other tables. I have to find all the tables blocking the deletion of a "double" person, and then create SELECT statements which generate UPDATE statements to replace the current id of the duplicate person with a substitute id. (The personno is used as an id in the database.)
    Thanks

You should not prepend "double" to the personno; once you do, you will no longer be able to join or relate it to the other tables. Keep the id as it is and use another field (STATUS) to mark the row as a duplicate. You will also need another field (PRIMARYID) against those duplicate rows, i.e. the main or primary personno.
SELECT * FROM OtherTable a INNER JOIN
(SELECT personno, status, primaryid FROM PEOPLE WHERE status = 'Duplicate') b
ON a.personno = b.personno;
UPDATE a SET personno = b.primaryid
FROM OtherTable a INNER JOIN
(SELECT personno, status, primaryid FROM PEOPLE WHERE status = 'Duplicate') b
ON a.personno = b.personno;
    NOTE: Please take backup before applying the query. This is not tested.
    Regards, RSingh

  • How to Delete the condition record in CRM

    HI,
    Can you please help me how to delete the condition record from condition table in CRM.
Please explain the usage of FM CRMXIF_CONDITION_SEL_DELETE, with examples.
I have also read the documentation of the function module. How do I use this FM for a custom-defined condition table?
    (this is the code given in Documentation)
    DATA-OBJECT_REPRESENTATION         = 'E'
    DATA-SEL_OPT-CT_APPLICATION              = 'CRM'
    DATA-SEL_OPT-OBJECT_TASK                    = 'D'
    DATA-SEL_OPT-RANGE-FIELDNAME        = 'PRODUCT_ID'
    DATA-SEL_OPT-RANGE-R_SIGN                  = 'I'    (Including)
    DATA-SEL_OPT-RANGE-R_OPTION           = 'EQ'
    DATA-SEL_OPT-RANGE-R_VALUE_LOW  = 'PROD_1'
    Thanks
    Shankar

    Hi Shankar,
I am using the same CRMXIF_CONDITION_SEL_DELETE function module to delete a condition record present in CRM.
But it is giving me the below error in the return table of the FM after I run the program. Can you please correct me if I am doing anything wrong?
    Error in  lt_return: SMW3     CND_MAST_SEL_DEL_EXT_VALIDATE     CND_M_SD
    code:
ls_range-fieldname = 'PRODUCT_ID'.
ls_range-R_SIGN = 'I'.
ls_range-R_OPTION = 'EQ'.
ls_range-R_VALUE_LOW = '123456'.
APPEND ls_range TO lt_range.
MOVE lt_range TO ls_data-SEL_OPT-range.
ls_data-SEL_OPT-object_task = 'D'.
ls_data-SEL_OPT-ct_application = 'CRM'.
ls_data-object_representation = 'E'.
CALL FUNCTION 'CRMXIF_CONDITION_SEL_DELETE'
  EXPORTING
    DATA   = ls_data
  IMPORTING
    RETURN = lt_return.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  IMPORTING
    return = lt_ret.
    Edited by: Saravanaprasad Nadar on Jul 7, 2010 1:27 AM
