Transferring 100+ million records

Hi guys,
I have a source table (TAB1) with 100+ million records that needs to be copied to another database. To reduce the volume, I created a join with another table (TAB2), which brought it down to 10+ million records, but Data Services is taking a long time (almost 40 minutes) just to show the first numbers (1000 rows) in the monitor.
See the attached picture of the dataflow I created.
Could you please let me know the best practice to implement the same in less time?
DS 4.2 on Windows Server 2008
The Data Services server has 32 GB RAM (of which this job is using only 6 GB)
The SQL Server machine has 64 GB RAM (of which this job is using only 22 GB)
Regards

Does that mean you don't have access to your target db either? Because that's where you create your linked server.
Anyway, try putting your join and filter conditions directly in the Query transform, under the FROM and WHERE tabs, i.e. without using a global variable. The generated SQL will look like:
     SELECT  tab1.* FROM TAB1
     INNER JOIN TAB2
     ON tab1.c1 = tab2.c2
     WHERE tab2.date >= <fromdate>
If you then set the "Transfer type" of the Data_Transfer transform to "Table" and make sure that the temp table is created in your source db (I hope you'll have the authorisation to do so), you will get:
     INSERT INTO temp
     SELECT  tab1.* FROM TAB1
     INNER JOIN TAB2
     ON tab1.c1 = tab2.c2
     WHERE tab2.date >= <fromdate>
This bypasses DS processing completely and most probably will be the best you can achieve if you cannot touch your db environment itself.
On another note, the native MS SQL Server 2008 client will outperform any ODBC driver.
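If the linked-server route is open to you, here is a hedged T-SQL sketch of that alternative (server name, database name, target table, and the date cutoff are illustrative, not from the thread):
     -- Register the source server once (SQLNCLI10 is the SQL Server 2008 native client)
     EXEC sp_addlinkedserver @server = N'SRCSRV', @srvproduct = N'',
          @provider = N'SQLNCLI10', @datasrc = N'source-host';
     -- With the link in place, the whole copy runs inside SQL Server:
     DECLARE @fromdate date = '2014-01-01';   -- illustrative cutoff
     INSERT INTO dbo.TARGET_TAB
     SELECT t1.*
     FROM SRCSRV.SourceDb.dbo.TAB1 t1
     INNER JOIN SRCSRV.SourceDb.dbo.TAB2 t2 ON t1.c1 = t2.c2
     WHERE t2.[date] >= @fromdate;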

Similar Messages

  • How to Load 100 Million Rows in Partioned Table

    Dear all,
    I am working on a VLDB application.
    I have a table with 5 columns, e.g. A, B, C, D, DATE_TIME.
    I created a range (daily) partitioned table on column DATE_TIME,
    as well as a number of indexes, e.g.:
    an index on A
    a composite index on DATE_TIME, B, C
    Requirement
    I need to load approx 100 million records into this table every day (it will load via SQL*Loader or from a temp table: INSERT INTO ORIG SELECT * FROM TEMP).
    Question
    The table is indexed, so I am not able to use the SQL*Loader feature DIRECT=TRUE.
    So let me know the best available way to load the data into this table.
    Note: please remember I can't drop and recreate the indexes daily due to the huge data quantity.

    Actually a simpler issue than what you seem to think it is.
    Q. What is the most expensive and slow operation on a database server?
    A. I/O. The more I/O, the more latency there is, the longer the wait times are, the bigger the response times are, etc.
    So how do you deal with VLTs? By minimizing I/O. For example, using direct loads/inserts (see the APPEND hint) means less I/O, as we are only using empty data blocks. Doing one pass through the data (e.g. applying transformations as part of the INSERT and not afterwards via UPDATEs) means less I/O. Applying proper filter criteria. Etc.
    Okay, what do you do when you cannot minimize I/O any more? In that case, you need to look at processing that I/O volume in parallel. Instead of serially reading and writing 100 million rows, you (for example) use 10 processes that each read and write 10 million rows. I/O bandwidth is there to be used. It is extremely unlikely that a single process can fully utilise the available I/O bandwidth. So use more processes, each processing a chunk of data, to use more of that available I/O bandwidth.
    Lastly, think DDL before DML when dealing with VLTs. For example, a CTAS to create a new data set and then doing a partition exchange to make that new data set part of the destination table is a lot faster than deleting that partition's data directly and then running an INSERT to refresh that partition's data.
    That in a nutshell is about it - think I/O and think of ways to use it as effectively as possible. With VLTs and VLDBs one cannot afford to waste I/O.
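    A minimal Oracle sketch of the direct-path and partition-exchange ideas above (table names, partition name, and parallel degree are illustrative, not from the thread):
        ALTER SESSION ENABLE PARALLEL DML;
        -- One pass, direct path, in parallel: transformations happen in the SELECT
        INSERT /*+ APPEND PARALLEL(orig, 8) */ INTO orig
        SELECT /*+ PARALLEL(temp, 8) */ * FROM temp;
        COMMIT;
        -- DDL before DML: build the day's data set, then swap it into the table
        CREATE TABLE orig_stage NOLOGGING PARALLEL 8 AS SELECT * FROM temp;
        ALTER TABLE orig
          EXCHANGE PARTITION p_20140101 WITH TABLE orig_stage
          INCLUDING INDEXES WITHOUT VALIDATION;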

  • SQL Query to fetch records from tables which have 75+ million records

    Hi,
    I have the explain plan for a SQL statement. I need your suggestions to improve it.
    PLAN_TABLE_OUTPUT
    | Id  | Operation                            | Name                         | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT                     |                              |   340 |   175K| 19075 |
    |   1 |  TEMP TABLE TRANSFORMATION           |                              |       |       |       |
    |   2 |   LOAD AS SELECT                     |                              |       |       |       |
    |   3 |    SORT GROUP BY                     |                              |    32M|  1183M|   799K|
    |   4 |     TABLE ACCESS FULL                | CLM_DETAIL_PRESTG            |   135M|  4911M|   464K|
    |   5 |   LOAD AS SELECT                     |                              |       |       |       |
    |   6 |    TABLE ACCESS FULL                 | CLM_HEADER_PRESTG            |     1 |   274 |   246K|
    |   7 |   LOAD AS SELECT                     |                              |       |       |       |
    |   8 |    SORT UNIQUE                       |                              |   744K|    85M|  8100 |
    |   9 |     TABLE ACCESS FULL                | DAILY_PROV_PRESTG            |   744K|    85M|  1007 |
    |  10 |   UNION-ALL                          |                              |       |       |       |
    |  11 |    SORT UNIQUE                       |                              |   177 | 97350 |  9539 |
    |  12 |     HASH JOIN                        |                              |   177 | 97350 |  9538 |
    |  13 |      HASH JOIN OUTER                 |                              |     3 |  1518 |  9533 |
    |  14 |       HASH JOIN                      |                              |     1 |   391 |  8966 |
    |  15 |        TABLE ACCESS BY INDEX ROWID   | CLM_DETAIL_PRESTG            |     1 |    27 |     3 |
    |  16 |         NESTED LOOPS                 |                              |     1 |   361 |    10 |
    |  17 |          NESTED LOOPS OUTER          |                              |     1 |   334 |     7 |
    |  18 |           NESTED LOOPS OUTER         |                              |     1 |   291 |     4 |
    |  19 |            VIEW                      |                              |     1 |   259 |     2 |
    |  20 |             TABLE ACCESS FULL        | SYS_TEMP_0FD9D66C9_DA2D01AD  |     1 |   269 |     2 |
    |  21 |            INDEX RANGE SCAN          | CLM_PAYMNT_CLMEXT_PRESTG_IDX |     1 |    32 |     2 |
    |  22 |           TABLE ACCESS BY INDEX ROWID| CLM_PAYMNT_CHKEXT_PRESTG     |     1 |    43 |     3 |
    |  23 |            INDEX RANGE SCAN          | CLM_PAYMNT_CHKEXT_PRESTG_IDX |     1 |       |     2 |
    |  24 |          INDEX RANGE SCAN            | CLM_DETAIL_PRESTG_IDX        |     6 |       |     2 |
    |  25 |        VIEW                          |                              |    32M|   934M|  8235 |
    |  26 |         TABLE ACCESS FULL            | SYS_TEMP_0FD9D66C8_DA2D01AD  |    32M|   934M|  8235 |
    |  27 |       VIEW                           |                              |   744K|    81M|   550 |
    |  28 |        TABLE ACCESS FULL             | SYS_TEMP_0FD9D66CA_DA2D01AD  |   744K|    81M|   550 |
    |  29 |      TABLE ACCESS FULL               | CCP_MBRSHP_XREF              |  5288 |   227K|     5 |
    |  30 |    SORT UNIQUE                       |                              |   163 | 82804 |  9536 |
    |  31 |     HASH JOIN                        |                              |   163 | 82804 |  9535 |
    |  32 |      HASH JOIN OUTER                 |                              |     3 |  1437 |  9530 |
    |  33 |       HASH JOIN                      |                              |     1 |   364 |  8963 |
    |  34 |        NESTED LOOPS OUTER            |                              |     1 |   334 |     7 |
    |  35 |         NESTED LOOPS OUTER           |                              |     1 |   291 |     4 |
    |  36 |          VIEW                        |                              |     1 |   259 |     2 |
    |  37 |           TABLE ACCESS FULL          | SYS_TEMP_0FD9D66C9_DA2D01AD  |     1 |   269 |     2 |
    |  38 |          INDEX RANGE SCAN            | CLM_PAYMNT_CLMEXT_PRESTG_IDX |     1 |    32 |     2 |
    |  39 |         TABLE ACCESS BY INDEX ROWID  | CLM_PAYMNT_CHKEXT_PRESTG     |     1 |    43 |     3 |
    |  40 |          INDEX RANGE SCAN            | CLM_PAYMNT_CHKEXT_PRESTG_IDX |     1 |       |     2 |
    |  41 |        VIEW                          |                              |    32M|   934M|  8235 |
    |  42 |         TABLE ACCESS FULL            | SYS_TEMP_0FD9D66C8_DA2D01AD  |    32M|   934M|  8235 |
    |  43 |       VIEW                           |                              |   744K|    81M|   550 |
    |  44 |        TABLE ACCESS FULL             | SYS_TEMP_0FD9D66CA_DA2D01AD  |   744K|    81M|   550 |
    |  45 |      TABLE ACCESS FULL               | CCP_MBRSHP_XREF              |  5288 |   149K|     5 |
    The CLM_DETAIL_PRESTG table has 100 million records and the CLM_HEADER_PRESTG table has 75 million records.
    Any suggestions on how to fetch huge volumes of records from tables of this size will help.
    Regards,
    Narayan

    WITH CLAIM_DTL
         AS (SELECT ICN_NUM,
                    MIN (FIRST_SRVC_DT) AS FIRST_SRVC_DT,
                    MAX (LAST_SRVC_DT) AS LAST_SRVC_DT,
                    MIN (PLC_OF_SRVC_CD) AS PLC_OF_SRVC_CD
             FROM CCP_STG.CLM_DETAIL_PRESTG CD
             WHERE ACT_CD <> 'D'
             GROUP BY ICN_NUM),
    CLAIM_HDR
         AS (SELECT
                    ICN_NUM,
    SBCR_ID,
    MBR_ID,
    MBR_FIRST_NAME,
    MBR_MI,
    MBR_LAST_NAME,
    MBR_BIRTH_DATE,
    GENDER_TYPE_CD,
    SBCR_RLTNSHP_TYPE_CD,
    SBCR_FIRST_NAME,
    SBCR_MI,
    SBCR_LAST_NAME,
    SBCR_ADDR_LINE_1,
    SBCR_ADDR_LINE2,
    SBCR_ADDR_CITY,
    SBCR_ADDR_STATE,
    SBCR_ZIP_CD,
    PRVDR_NUM,
    CLM_PRCSSD_DT,
    CLM_TYPE_CLASS_CD,
    AUTHO_NUM,
    TOT_BILLED_AMT,
    HCFA_DRG_TYPE_CD,
    FCLTY_ADMIT_DT,
    ADMIT_TYPE,
    DSCHRG_STATUS_CD,
    FILE_BILLING_NPI,
    CLAIM_LOCATION_CD,
    CLM_RELATED_ICN_1,
    SBCR_ID||0
    || MBR_ID
    || GENDER_TYPE_CD
    || SBCR_RLTNSHP_TYPE_CD
    || MBR_BIRTH_DATE
    AS MBR_ENROLL_ID,
    SUBSCR_INSGRP_NM ,
    CAC,
    PRVDR_PTNT_ACC_ID,
    BILL_TYPE,
      PAYEE_ASSGN_CODE,
    CREAT_RUN_CYC_EXEC_SK,
    PRESTG_INSRT_DT
    FROM CCP_STG.CLM_HEADER_PRESTG P WHERE ACT_CD <>'D' AND SUBSTR(CLM_PRCSS_TYPE_CD,4,1) NOT IN  ('1','2','3','4','5','6')  ),
    PROV AS ( SELECT DISTINCT
    PROV_ID,
    PROV_FST_NM,
    PROV_MD_NM,
    PROV_LST_NM,
    PROV_BILL_ADR1,
    PROV_BILL_CITY,
    PROV_BILL_STATE,
    PROV_BILL_ZIP,
    CASE WHEN PROV_SEC_ID_QL='E' THEN PROV_SEC_ID
    ELSE NULL
    END AS PROV_SEC_ID,
    PROV_ADR1,
    PROV_CITY,
    PROV_STATE,
    PROV_ZIP
    FROM CCP_STG.DAILY_PROV_PRESTG),
    MBR_XREF AS (SELECT SUBSTR(MBR_ENROLL_ID,1,17)||DECODE ((SUBSTR(MBR_ENROLL_ID,18,1)),'E','1','S','2','D','3')||SUBSTR(MBR_ENROLL_ID,19) AS MBR_ENROLLL_ID,
      NEW_MBR_FLG
    FROM CCP_STG.CCP_MBRSHP_XREF)
    SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
    CLAIM_HDR.SBCR_ID AS SBCR_ID,
    CLAIM_HDR.MBR_ID AS MBR_ID,
    CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
    CLAIM_HDR.MBR_MI AS MBR_MI,
    CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
    CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
    CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
    CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
    CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
    CLAIM_HDR.SBCR_MI AS SBCR_MI,
    CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
    CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
    CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
    CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
    CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
    CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
    CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
    CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
    CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
    CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
    CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
    CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
    CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
    CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
    CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
    CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
    CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
    CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
    CLAIM_HDR.SUBSCR_INSGRP_NM,
    CLAIM_HDR.CAC,
    CLAIM_HDR.PRVDR_PTNT_ACC_ID,
    CLAIM_HDR.BILL_TYPE,
    CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
    CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
    CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
    PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
    PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
    PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
    PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
    PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
    PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
    PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
    PROV.PROV_SEC_ID AS BILL_PROV_EIN,
    PROV.PROV_ID AS SERV_FAC_ID,
    PROV.PROV_ADR1 AS SERV_FAC_ADDR1,
    PROV.PROV_CITY AS SERV_FAC_CITY,
    PROV.PROV_STATE AS SERV_FAC_STATE,
    PROV.PROV_ZIP AS SERV_FAC_ZIP,
    CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
    CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
    CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
    CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
      CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
    CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK
    FROM CLAIM_DTL,
         (SELECT * FROM CCP_STG.CLM_DETAIL_PRESTG WHERE ACT_CD <> 'D') CLM_DETAIL_PRESTG,
         CLAIM_HDR,
         MBR_XREF,
         PROV,
         CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,
         CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
    WHERE
    CLAIM_HDR.ICN_NUM = CLM_DETAIL_PRESTG.ICN_NUM
    AND       CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
    AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
    AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
    AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
    AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
    AND CLM_DETAIL_PRESTG.FIRST_SRVC_DT >= 20110101
    AND MBR_XREF.NEW_MBR_FLG = 'Y'
    AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
    AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0
    UNION ALL
    SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
    CLAIM_HDR.SBCR_ID AS SBCR_ID,
    CLAIM_HDR.MBR_ID AS MBR_ID,
    CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
    CLAIM_HDR.MBR_MI AS MBR_MI,
    CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
    CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
    CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
    CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
    CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
    CLAIM_HDR.SBCR_MI AS SBCR_MI,
    CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
    CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
    CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
    CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
    CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
    CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
    CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
    CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
    CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
    CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
    CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
    CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
    CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
    CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
    CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
    CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
    CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
    CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
    CLAIM_HDR.SUBSCR_INSGRP_NM,
    CLAIM_HDR.CAC,
    CLAIM_HDR.PRVDR_PTNT_ACC_ID,
    CLAIM_HDR.BILL_TYPE,
    CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
    CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
    CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
    PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
    PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
    PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
    PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
    PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
    PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
    PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
    PROV.PROV_SEC_ID AS BILL_PROV_EIN,
    PROV.PROV_ID AS SERV_FAC_ID,
    PROV.PROV_ADR1 AS SERV_FAC_ADDR1,
    PROV.PROV_CITY AS SERV_FAC_CITY,
    PROV.PROV_STATE AS SERV_FAC_STATE,
    PROV.PROV_ZIP AS SERV_FAC_ZIP,
    CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
    CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
    CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
    CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
    CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
    CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK  
    FROM CLAIM_DTL,
         CLAIM_HDR,
         MBR_XREF,
         PROV,
         CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,
         CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
    WHERE CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
    AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
    AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
    AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
    AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
    -- AND TRUNC(CLAIM_HDR.PRESTG_INSRT_DT) = TRUNC(SYSDATE)
    AND CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK = 123638.000000000000000
    AND MBR_XREF.NEW_MBR_FLG = 'N'
    AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
    AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0;

  • I want to delete approx 100,000 million records and free the space

    Hi,
    I want to delete approx 100,000 million records and free the space. How do I do it?
    Can somebody suggest an optimized way of archiving the data?

    user8731258 wrote:
    i want to delete approx 100,000 million records and free the space. Can somebody suggest an optimized way of archiving data?
    To archive, back up the database.
    To delete and free up the space, truncate the table/partitions and then shrink the datafile(s) associated with the tablespace.
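    A minimal Oracle sketch of that advice (table name, datafile path, and target size are hypothetical):
        -- TRUNCATE releases the rows; DROP STORAGE also releases the table's extents
        TRUNCATE TABLE big_table DROP STORAGE;
        -- Then return space to the OS by shrinking the datafile(s) of the tablespace
        ALTER DATABASE DATAFILE '/u01/oradata/orcl/big_ts01.dbf' RESIZE 10G;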

  • Updating table with more than 50 million records

    Hi Team,
    Updating three columns in a table which has 60 million records is taking a lot of time. How can I improve the performance of the query? The table has to be updated based on the values from different database tables.
    Thanks,
    Eshwar.

    Split the update into small batches.
    When you are using multiple tables, access them with JOINs.
    Have the corresponding indexes on the joining columns.
    Disable triggers if any (if possible and it doesn't affect your logic - this would be the least favourable step).
    These should also help:
    http://blogs.msdn.com/b/repltalk/archive/2011/10/10/lessons-learned-updating-100-millions-rows.aspx
    http://www.sqlservergeeks.com/blogs/AhmadOsama/personal/450/sql-server-optimizing-update-queries-for-large-data-volumes
    http://stackoverflow.com/questions/7344984/update-large-number-of-rows-sql-server-2005
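    A minimal T-SQL sketch of the batching advice above (table and column names are hypothetical):
        -- Update in small batches so each transaction and its log stay manageable
        DECLARE @rows INT = 1;
        WHILE @rows > 0
        BEGIN
            UPDATE TOP (50000) t
               SET t.col1 = s.col1,
                   t.col2 = s.col2,
                   t.col3 = s.col3
              FROM dbo.BigTable t
              JOIN dbo.SourceTable s ON s.id = t.id  -- index the join columns
             WHERE t.col1 <> s.col1 OR t.col2 <> s.col2 OR t.col3 <> s.col3;
            SET @rows = @@ROWCOUNT;
        END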
    Thanks,
    Jay

  • Spooling to Text File - 30 million records - Getting Out of Memory Error

    Hi All,
    I have an extremely large oracle table that I need to spool to a .txt file. The table has approximately 30 million records. I'm using Toad For Oracle version 10.5 and I'm on Oracle 10g. I've tried running the following spool command a few times and it keeps crashing...I'm getting a "Out of Memory" error in my Toad window when I execute it as a script. Here's the code:
    Spool on
    set heading off
    SET PAGESIZE 0
    SET TRIMSPOOL ON
    SET LINESIZE 100
    set feedback off
    set echo off
    set termout off
    Spool "C:\spooledtext.txt"
    select
    column1
    from test_table
    order by
    column2,
    column1
    Spool off;
    Any ideas as to how I can get this query to spool to a text file without crashing or running out of memory?
    Thanks

    Use SQL*Plus.
    Or select smaller chunks and concatenate the files afterwards.
    Sybrand Bakker
    Senior Oracle DBA
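    A sketch of the same extract as a SQL*Plus script (file names are illustrative); unlike Toad, SQL*Plus streams rows straight to disk instead of buffering the whole result set in client memory:
        -- extract.sql, run as: sqlplus -s user/password@db @extract.sql
        SET HEADING OFF PAGESIZE 0 TRIMSPOOL ON LINESIZE 100
        SET FEEDBACK OFF ECHO OFF TERMOUT OFF
        SPOOL C:\spooledtext.txt
        SELECT column1 FROM test_table ORDER BY column2, column1;
        SPOOL OFF
        EXIT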

  • Tune for updating 1 million record

    I want to update a main table that has 1 million records with details from a staging table that has 1 million distinct records. I have used BULK COLLECT and FORALL, but it still takes 1 min 10 sec for just 1000 records.
    This is the statement used inside the procedure. Please let me know what more can be done to optimize it:
    OPEN C_ORDERINFO;
    LOOP
      FETCH C_ORDERINFO BULK COLLECT INTO C_A123, C_B123, C_D123 LIMIT 100;
      EXIT WHEN C_A123.COUNT = 0;
      FORALL J IN C_A123.FIRST .. C_A123.LAST
        UPDATE tableA
           SET B123 = C_B123(J),
               UPDATEDATETIME = systimestamp
         WHERE D123 = C_D123(J)
           AND B123 = C_A123(J);
    END LOOP;
    CLOSE C_ORDERINFO;

    varchar2(200000000).
    SQL VARCHAR2 is limited to 4000; PL/SQL VARCHAR2 is limited to 32767.
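    As an aside on the original loop (this alternative is not suggested in the thread): a single set-based UPDATE usually beats row-by-row processing; here is a hedged Oracle sketch, assuming the staged values live in a table called STAGING with columns A123, B123 and D123:
        UPDATE tableA a
           SET (B123, UPDATEDATETIME) =
               (SELECT s.B123, systimestamp
                  FROM staging s
                 WHERE s.D123 = a.D123
                   AND s.A123 = a.B123)
         WHERE EXISTS (SELECT 1
                         FROM staging s
                        WHERE s.D123 = a.D123
                          AND s.A123 = a.B123);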

  • Help with querying a 200 million record table

    Hi ,
    I need to query a 200 million record table which is partitioned by monthly activity.
    But my problem is I need to see how many activities occurred on one account in a time frame.
    If there are 200 partitions, I need to go into all the partitions, get the activities of the account in each partition, and at the end give the number of activities.
    Fortunately, only one activity is expected for an account in a partition, and it may be present or absent.
    If this table had 100 records, I would use this:
    select account_no, count(*)
    from Acct_actvy
    group by account_no;

    Must stress that it is critical that you not write code (SQL or PL/SQL) that uses hardcoded partition names to find data.
    That approach is very risky, prone to runtime errors, difficult to maintain, and does not scale. It is not worth it.
    From the developer's side, there should be total ignorance to the fact that a table is partitioned. A developer must treat a partition table no different than any other table.
    To give you an idea.. this a copy-and-paste from a SQL*Plus session doing what you want to do. Against a partitioned table at least 3x bigger than yours. It covers about a 12 month period. There's a partition per day - and empty daily partitions for the next 2 years. The SQL aggregation is monthly. I selected a random network address to illustrate.
    SQL> select count(*) from x25_calls;
      COUNT(*)
    619491919
    Elapsed: 00:00:19.68
    SQL>
    SQL>  select TRUNC(callendtime,'MM') AS MONTH, sourcenetworkaddress, count(*) from x25_calls where sourcenetworkaddress = '3103165962'
      2  group by TRUNC(callendtime,'MM'), sourcenetworkaddress;
    MONTH               SOURCENETWORKADDRESS   COUNT(*)
    2005/09/01 00:00:00 3103165962                 3599
    2005/10/01 00:00:00 3103165962                 1184
    2005/12/01 00:00:00 3103165962                    4
    2005/06/01 00:00:00 3103165962                    1
    2005/04/01 00:00:00 3103165962                  560
    2005/08/01 00:00:00 3103165962                  101
    2005/03/01 00:00:00 3103165962                 3330
    7 rows selected.
    Elapsed: 00:00:19.72
    As you can see - not a single reference to any partitioning. Excellent performance, despite running on an old K-class HP server.
    The reason for the performance is simple. A correctly designed and implemented partitioning scheme that caters for most of the queries against the table. Correctly designed and implemented indexes - especially local bitmap indexes. Without any hacks like partition names and the like...

  • Distinct on 10 million records

    Hello everyone.
    I have a table called BL_DOCUMENTS which has 10 million records in it.
    It has 'DocumentId' column which is the PrimaryKey.
    I created an index on three columns:
    CREATE INDEX IX_BD_CREA_DOCU_ISDE_ISCUR ON BL_DOCUMENTS (CREATORID ASC, ISDELETED ASC, ISCURRENT ASC)
    I run a simple query to get the 'CreatorId' column's values with 'distinct' (there are currently 25,000 different ids):
    select O.*, rownum rnum from(
    SELECT distinct D.CreatorId ObjectId FROM BL_DOCUMENTS D
    WHERE D.IsCurrent = 1 AND D.IsDeleted = 0
    ORDER BY ObjectId DESC
    ) O where rownum <= 1000
    This query runs for 7-10 seconds.
    If I remove the distinct the query runs for 0.007 secs.
    How can I improve this?

    DevMTs wrote:
    I tried that but it's not improved.
    Still, changing the index column order should give some improvement, since INDEX SKIP SCAN DESCENDING is a bit more costly an operation than INDEX RANGE SCAN DESCENDING:
    SQL> create table BL_DOCUMENTS(CreatorId number,IsCurrent number,IsDeleted number);
    Table created.
    SQL> CREATE INDEX IX_BD_CREA_DOCU_ISDE_ISCUR ON BL_DOCUMENTS (CREATORID ASC, ISDELETED ASC, ISCURRENT ASC);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(ownname => user, tabname => 'BL_DOCUMENTS',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> explain plan for
      2  select O.*, rownum rnum from(
      3  SELECT distinct D.CreatorId ObjectId FROM BL_DOCUMENTS D
      4  WHERE D.IsCurrent = 1 AND D.IsDeleted = 0
      5  ORDER BY ObjectId DESC
      6  ) O where rownum <= 1000
      7  /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 4024680831
    | Id  | Operation                     | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                            |     1 |    13 |     1 (100)| 00:00:01 |
    |*  1 |  COUNT STOPKEY                |                            |       |       |            |          |
    |   2 |   VIEW                        |                            |     1 |    13 |     1 (100)| 00:00:01 |
    |   3 |    SORT UNIQUE NOSORT         |                            |     1 |    39 |     1 (100)| 00:00:01 |
    |*  4 |     INDEX SKIP SCAN DESCENDING| IX_BD_CREA_DOCU_ISDE_ISCUR |     1 |    39 |     0   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=1000)
       4 - access("D"."ISDELETED"=0 AND "D"."ISCURRENT"=1)
           filter("D"."ISCURRENT"=1 AND "D"."ISDELETED"=0)
    18 rows selected.
    SQL> drop index IX_BD_CREA_DOCU_ISDE_ISCUR;
    Index dropped.
    SQL> CREATE INDEX IX_BD_CREA_DOCU_ISDE_ISCUR ON BL_DOCUMENTS (ISCURRENT,ISDELETED,CREATORID);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(ownname => user, tabname => 'BL_DOCUMENTS',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> explain plan for
      2  select O.*, rownum rnum from(
      3  SELECT distinct D.CreatorId ObjectId FROM BL_DOCUMENTS D
      4  WHERE D.IsCurrent = 1 AND D.IsDeleted = 0
      5  ORDER BY ObjectId DESC
      6  ) O where rownum <= 1000
      7  /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3922126944
    | Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                            |     1 |    13 |     1 (100)| 00:00:01 |
    |*  1 |  COUNT STOPKEY                 |                            |       |       |            |          |
    |   2 |   VIEW                         |                            |     1 |    13 |     1 (100)| 00:00:01 |
    |   3 |    SORT UNIQUE NOSORT          |                            |     1 |    39 |     1 (100)| 00:00:01 |
    |*  4 |     INDEX RANGE SCAN DESCENDING| IX_BD_CREA_DOCU_ISDE_ISCUR |     1 |    39 |     0   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=1000)
       4 - access("D"."ISCURRENT"=1 AND "D"."ISDELETED"=0)
    17 rows selected.
    SQL>
    In any case, post the execution plan.
    SY.

  • How can I update a particular column in a 7 million record table, where it has many conditions to go.

    I am designing a table which I am loading with data from different tables using joins. It has a Status column, for which I have about 16 different statuses from different tables; each case has a condition, and when it is satisfied that particular status should show in the Status column, so I need to write the query as 16 different cases.
    Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data gets to the table quickly? The data mostly comes from big tables of about 7 million records, and if we write the logic as a CASE it will scan the table for each case, about 16 times. How can I do this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which take records from the 7 million row tables with a filter on records of year 2013. This is taking more than an hour to run. I am posting the part of the code which is running slow, mainly the Status column part.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN)
    and z.Hosp_Ind = 'P')
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physicians****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please advise on this.
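    A hedged T-SQL sketch of one common restructuring (not spelled out in the thread): left-join each lookup once and let a single CASE branch on the joined columns, so the 7 million row table is scanned once instead of once per status. Names follow the query above, except dbo.Source, which stands in for whatever table z aliases; the join list is abbreviated to three of the sixteen statuses.
        SELECT z.SYSTEMNAME,
               z.TAX_ID,
               CASE
                   WHEN z.SRC_PAR_CD IN ('E','O','S','W') THEN 'Nonpar Waiver'
                   WHEN pr.Bucket IS NOT NULL             THEN pr.Bucket
                   WHEN m.Mailing IS NOT NULL
                        AND z.Hosp_Ind = 'P'              THEN m.Mailing
                   ELSE 'Market Review/Preparing to Mail'
               END AS [STATUS Column]
        FROM dbo.Source z
        LEFT JOIN dbo.SQS_NonPar_PR_LS_TINs pr
               ON pr.TAX_ID = z.TAX_ID AND pr.Bucket <> 'Nonpar'
        LEFT JOIN dbo.SQS_Mailed_TINs_010614 m
               ON m.PROV_TIN = z.TAX_ID
              AND NOT EXISTS (SELECT 1 FROM dbo.sqs_objector_TINs t
                              WHERE t.prov_tin = m.PROV_TIN);
        -- Pre-aggregate or DISTINCT the lookup tables first if their TINs repeat,
        -- otherwise the joins will multiply rows.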

  • Deleting records from a table with 12 million records

    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise
    Regards,
    Narayan

    user7202581 wrote:
    We need to delete some records on this table. [...] Please advise.
    DELETE from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;

  • Which is the Best way to upload BP for 3+ million records??

    Hello Gurus,
    We have 3+ million records of data to be uploaded into CRM, coming from Informatica. Which is the best way to upload the data into CRM that takes the least time and is easiest? Please help me.
    Thanks,
    Naresh.

    Do it with the BAPI BAPI_BUPA_FS_CREATE_FROM_DATA2.

  • Best way to Insert Millions records in SQL Azure on daily basis?

    I am maintaining millions of records in SQL Server 2008 R2 and now I intend to migrate these to SQL Azure.
    In the existing system with SQL Server 2008 R2, a few SSIS packages and stored procedures first truncate the existing records and then perform the insert operation on the table, which holds approx 26 million records, in 30 mins on a daily basis (as the system demands).
    When I migrate these to SQL Azure, I am unable to perform these operations as quickly as I did in SQL Server 2008. Sometimes I get a request timeout error.
    While searching for a faster way, many suggest batch processing or BCP. But batch processing is NOT suitable in my case because it takes too much time to insert those records. I require some faster and more efficient way on SQL Azure.
    Hoping for some good suggestions.
    Thanks in advance :)
    Ashish Narnoli

    +1 to Frank's advice.
    Also, please upgrade your Azure SQL Database server to V12, as you will receive higher performance on the premium tiers. As you scale up your database for your bulk insert, remember that SQL Database charges by the hour. To minimize costs, scale back down when the inserts have completed.
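    A minimal T-SQL sketch of that scale-up/scale-down pattern (database name and service objectives are illustrative):
        -- Scale up just before the daily bulk insert...
        ALTER DATABASE MyDb MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');
        -- ...and scale back down once the load completes, since billing is hourly
        ALTER DATABASE MyDb MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');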

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records, and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using sorted/hashed tables, but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the short dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the SELECT query inside the condition below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, FOR ALL ENTRIES fetches all the records from the database. With this condition in place, no records will be selected when the table is empty.
    Regards,
    Pravin

  • ABAP Proxy for 10 million records

    Hi,
    I am running an extract program for my inventory, which has about 10 million records.
    I am sending the data through an ABAP proxy and the job is cancelled due to a memory problem.
    I am breaking up the records while sending them through the ABAP proxy:
    I am calling the proxy about 2000 times, splitting up the number of records.
    Do you think the ABAP proxy would be able to handle 10 million records?
    Any advice would be highly appreciated.
    Thanks and Best Regards,
    M-

    Hi,
    I am facing the same problem. My temporary solution is to break up the selected data into portions of 30,000 records and send those portions by ABAP proxy to PI.
    I think the problem lies in the ABAP-to-XML conversion (CALL TRANSFORMATION) within the proxy.
    Although breaking up the data seems to work for me now, it gives me another issue: I have to combine the data back again in PI.
    So now I am thinking of saving all the records as a dataset file on the application server and using the file adapter instead.
    Regards,
    Arjan Aalbers
