SCD Type 2 load performance with 60 million records

Hey guys!
I'm wondering what the load performance would be for a Type 2 SCD mapping based on the framework presented in the transformation guide (pages A1-A20). The dimension has the following characteristics:
60 million records
50 columns (17 of which are tracked for changes)
Has anyone come across a similar case?
Mark or Igor - is there any benchmark available for SCD Type 2 on large dimensions?
Any help would be greatly appreciated.
Thanks,
Rene

Rene,
It's really very difficult to guesstimate the loading time for such a configuration. Too many parameters are missing, especially hardware. We are in the process of setting up some real benchmarks later this year - maybe you can give us some interesting scenarios.
On the other hand, 50-60 million records is not that many these days... so I personally would consider anything more than several hours (on half-decent hardware) as too long.
Regards,
Igor
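For what it's worth, the core of a Type 2 load is usually just two steps: expire the current rows whose tracked attributes changed, then insert new versions for changed and brand-new keys. A minimal plain-SQL sketch of that general pattern - hypothetical DIM_CUSTOMER / STG_CUSTOMER names, only two of the tracked columns shown, and not the specific framework from the guide:

    -- Step 1: expire current rows whose tracked attributes changed
    -- (hypothetical names; NULL-safe comparisons omitted for brevity).
    UPDATE dim_customer d
       SET d.effective_to = TRUNC(SYSDATE) - 1,
           d.current_flag = 'N'
     WHERE d.current_flag = 'Y'
       AND EXISTS (SELECT 1
                     FROM stg_customer s
                    WHERE s.customer_id = d.customer_id
                      AND (s.segment <> d.segment OR s.region <> d.region));  -- repeat for all 17 tracked columns

    -- Step 2: insert a new current version for the keys just expired and for brand-new keys.
    INSERT INTO dim_customer
           (customer_wid, customer_id, segment, region, effective_from, effective_to, current_flag)
    SELECT dim_customer_seq.NEXTVAL, s.customer_id, s.segment, s.region,
           TRUNC(SYSDATE), DATE '9999-12-31', 'Y'
      FROM stg_customer s
     WHERE NOT EXISTS (SELECT 1
                         FROM dim_customer d
                        WHERE d.customer_id  = s.customer_id
                          AND d.current_flag = 'Y');

At 60 million rows, keeping the load set-based like this (rather than row-by-row), indexing the natural key plus the current-row flag, and having enough PGA/temp for the hash joins tend to matter more than anything else - which is also why hardware makes the estimate so uncertain.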

Similar Messages

  • How can I create a detail cube with millions of records

    Hello everyone,
    I now need to create a cube for detail data. The problem is that the detail data is very large - several million records.
    How can I design such a cube in Essbase? Or can such a cube be created in Essbase at all?
    I need your suggestions. Thank you very much!
    Ming

    Hello Sandeep,
    thank you for your reply.
    Our situation is that we have BIEE + Essbase.
    The users want to see the detail data, and the data is very large.
    The users want to pull all of the data into Excel (Hyperion).
    So there are many problems with query performance.
    How can I design this so that the performance is better?
    Ming

  • Load form with first record of UDO

    Hi all
    I created a form for my UDO. When I open the form, I want to display the first record of my UDO.
    How can I resolve this?
    Thanks

    Hi
    Normally a UDO form loads in Find mode; otherwise you can force it by querying the first row yourself and then automatically clicking the Find button:
    Dim strQuery As String = "SELECT TOP 1 * FROM [@TABLE]"
    Dim RecSet As SAPbobsCOM.Recordset
    RecSet = SBO_Company.GetBusinessObject(SAPbobsCOM.BoObjectTypes.BoRecordset)
    RecSet.DoQuery(strQuery)
    If RecSet.RecordCount > 0 Then
        Me.m_SBO_Form.Freeze(True)
        Dim oItem As SAPbouiCOM.Item
        'Item "1" is assumed here to be the form's default Find/OK button
        oItem = Me.m_SBO_Form.Items.Item("1")
        oItem.Click(SAPbouiCOM.BoCellClickType.ct_Regular)
        Me.m_SBO_Form.Freeze(False)
    End If
    Hope it helps

  • Maintaining a huge volume of data (around 60 million records)

    I've a requirement to load data from an ODS to a Cube via full load. This ODS will receive 50 million records over the next 6 months, which we have to maintain in BW.
    Can you please put the advise on the following things?
         Can we accommodate 50 million records in the ODS?
    If so, can we run the load of 50 million records from the ODS to the Cube? Each record also has to be looked up in another ODS to get the value for another InfoObject. So is the load going to be successful for the 50 million records? I'm not sure. Or will we get a timeout error?

    Harsha,
    The data load should go through... some things to do / check:
    Delete the indexes on the cube before loading, then rebuild them after the load completes.
    Regarding the lookup - if you are looking up specific values in another DSO, build a suitable secondary index on that DSO (preferably a unique index).
    A DSO or cube can definitely hold 50 million records - we have had cases with 50 million records per month, with the DSO holding data for 6 to 10 months, and the same for the cube. Only reporting on the cube might be slow at a very detailed level.
    Also please state your version - 3.x or 7.0...
    Also, if you are on Oracle, plan for providing / backing up archive logs, since loading generates a lot of archive logs...
    Edited by: Arun Varadarajan on Apr 21, 2009 2:30 AM
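    For reference, outside of the BW process-chain steps, the drop/rebuild-index advice above corresponds to a pattern like this in plain Oracle (hypothetical index name, shown only to illustrate the idea; in BW this is normally done via the cube's delete-index / create-index process steps, not by hand):

        ALTER INDEX fact_cube_bix1 UNUSABLE;
        ALTER SESSION SET skip_unusable_indexes = TRUE;
        -- ... run the data load into the fact table here ...
        ALTER INDEX fact_cube_bix1 REBUILD NOLOGGING;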

  • 40 million records in a repository. Possible?

    Hi,
    Our client wants to load approx. 40 million records into a material repository. SAP has tested the material repository with just 1 million records. Is it even possible to load that many records? Has anyone done anything like this before and would like to share their experience?
    Regards,

    Hello mdm3north
    Are you sure that the 40 million are really clean records and don't contain duplicates?
    I can't imagine how anybody would work with a repository of that size.
    However, from my past experience:
    Usually the whole material area is split into logical groups, and each person works with just one group of materials.
    One solution may be to split the materials by logical group and create a separate repository for each group.
    Regards
    Kanstantsin Chernichenka

  • Planning function in IP or with BW modelling - case with 15 million records

    Hi,
    we need to implement a simple planning function (qty * price) which has to be executed for 15 million records at a time (the quantities of 15 million records multiplied by an average price calculated at a higher level). I'd still like to implement this with a simple FOX formula, but I fear the performance, given the number of records. Does anyone have experience with this number of records? Would you suggest doing this within IP or using BW modelling? The maximum lead time accepted for this planning function is 24 hours...
    The planning function is expected to be executed in batch or background mode, but should be triggered from an IP input query and not via RSPC, for example...
    please advise.
    D

    Hi Dries,
    using BI IP you should definitely do a partition via planning sequence in a process chain, cf.
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/45/946677f8fb0cf2e10000000a114a6b/frameset.htm
    Planning functions load the requested data into main memory; with 15 million records you will have a problem. In addition, it is not a good idea to occupy only one work process with the whole job (a planning function uses only one work process). So partition the problem to be able to use parallelization.
    Process chains can be triggered via an API, cf. function group RSPC_API. So you can easily start a process chain via a planning function.
    Regards,
    Gregor

  • Tune Query with Millions of Records

    Hi everyone,
    I've got an Oracle 11g tuning task set before me and I'm pretty novice when it comes to tuning.
    The query itself is only about 10-15 lines of SQL; however, it hits four tables, one with 100 million records and one with 8 million. The other two are comparatively small (6,000 and 300 records). The problem I am having is that the query actually needs to aggregate 3 million records.
    I found an article about using the star_transformation_enabled = true parameter; I then created bitmap indexes on all the fact table foreign keys, and the dimensions have a standard primary key defined on the surrogate key. This strategy works, but it still takes a long time for the query to crunch the 3 million records (about 30 minutes).
    I know there's also the option of building materialized views and using query rewrite to take advantage of the MVs, but my problem with that is that we're using OBIEE and we can't control how many different variations of these queries we see. So we would have to make a ton of MVs.
    What are the best ways to tackle high volume queries like this from a system wide perspective?
    Are there any benchmarks for what I should be seeing in terms of a 3 million record query? Is expecting under a minute even reasonable?
    Any help would be appreciated!
    Thanks!
    -Joe

    Here is the trace information:
    SQL> set autotrace traceonly arraysize 1000
    SQL> SELECT SUM(T91573.ACTIVITY_GLOBAL1_AMT) AS c2,
           SUM(CASE
                 WHEN T91573.DB_CR_IND = 'CREDIT'
                 THEN T91573.ACTIVITY_GLOBAL1_AMT
               END)                            AS c3,
           T91397.GL_ACCOUNT_NAME              AS c4,
           T91397.GROUP_ACCOUNT_NUM            AS c5,
           SUM(T91573.BALANCE_GLOBAL1_AMT)     AS c6,
           T156337.ROW_WID                     AS c7
      FROM W_MCAL_DAY_D T156337     /* Dim_W_MCAL_DAY_D_Fiscal_Day */,
           W_INT_ORG_D T111515      /* Dim_W_INT_ORG_D_Company */,
           W_GL_ACCOUNT_D T91397    /* Dim_W_GL_ACCOUNT_D */,
           W_GL_BALANCE_F T91573    /* Fact_W_GL_BALANCE_F */
     WHERE ( T91397.ROW_WID            = T91573.GL_ACCOUNT_WID
       AND   T91573.COMPANY_ORG_WID    = T111515.ROW_WID
       AND   T91573.BALANCE_DT_WID     = T156337.ROW_WID
       AND   T111515.COMPANY_FLG       = 'Y'
       AND   T111515.ORG_NUM           = '02000'
       AND   T156337.MCAL_PER_NAME_QTR = '2010 Q 1' )
     GROUP BY T91397.GL_ACCOUNT_NAME,
              T91397.GROUP_ACCOUNT_NUM,
              T156337.ROW_WID;
    522 rows selected.
    Execution Plan
    Plan hash value: 2761996426
    | Id  | Operation                              | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |                            |  7882 |   700K|  7330   (1)| 00:01:28 |
    |   1 |  HASH GROUP BY                         |                            |  7882 |   700K|  7330   (1)| 00:01:28 |
    |*  2 |   HASH JOIN                            |                            |  7882 |   700K|  7329   (1)| 00:01:28 |
    |   3 |    VIEW                                | VW_GBC_13                  |  7837 |   390K|  6534   (1)| 00:01:19 |
    |   4 |     TEMP TABLE TRANSFORMATION          |                            |       |       |            |          |
    |   5 |      LOAD AS SELECT                    | SYS_TEMP_0FD9D7416_F97A325 |       |       |            |          |
    |*  6 |       VIEW                             | index$_join$_114           |   572 | 10296 |   191   (9)| 00:00:03 |
    |*  7 |        HASH JOIN                       |                            |       |       |            |          |
    |   8 |         BITMAP CONVERSION TO ROWIDS    |                            |   572 | 10296 |     1   (0)| 00:00:01 |
    |*  9 |          BITMAP INDEX SINGLE VALUE     | W_MCAL_DAY_D_F46           |       |       |            |          |
    |  10 |         INDEX FAST FULL SCAN           | W_MCAL_DAY_D_P1            |   572 | 10296 |   217   (1)| 00:00:03 |
    |  11 |      HASH GROUP BY                     |                            |  7837 |   290K|  6343   (1)| 00:01:17 |
    |* 12 |       HASH JOIN                        |                            | 26186 |   971K|  6337   (1)| 00:01:17 |
    |  13 |        TABLE ACCESS FULL               | SYS_TEMP_0FD9D7416_F97A325 |   572 |  5148 |     2   (0)| 00:00:01 |
    |  14 |        TABLE ACCESS BY INDEX ROWID     | W_GL_BALANCE_F             | 26186 |   741K|  6334   (1)| 00:01:17 |
    |  15 |         BITMAP CONVERSION TO ROWIDS    |                            |       |       |            |          |
    |  16 |          BITMAP AND                    |                            |       |       |            |          |
    |  17 |           BITMAP MERGE                 |                            |       |       |            |          |
    |  18 |            BITMAP KEY ITERATION        |                            |       |       |            |          |
    |* 19 |             TABLE ACCESS BY INDEX ROWID| W_INT_ORG_D                |     2 |    32 |     3   (0)| 00:00:01 |
    |* 20 |              INDEX RANGE SCAN          | W_INT_ORG_ORG_NUM          |     2 |       |     1   (0)| 00:00:01 |
    |* 21 |             BITMAP INDEX RANGE SCAN    | W_GL_BALANCE_F_F4          |       |       |            |          |
    |  22 |           BITMAP MERGE                 |                            |       |       |            |          |
    |  23 |            BITMAP KEY ITERATION        |                            |       |       |            |          |
    |  24 |             TABLE ACCESS FULL          | SYS_TEMP_0FD9D7416_F97A325 |   572 |  5148 |     2   (0)| 00:00:01 |
    |* 25 |             BITMAP INDEX RANGE SCAN    | W_GL_BALANCE_F_F1          |       |       |            |          |
    |  26 |    VIEW                                | index$_join$_003           |   199K|  7775K|   794   (5)| 00:00:10 |
    |* 27 |     HASH JOIN                          |                            |       |       |            |          |
    |* 28 |      HASH JOIN                         |                            |       |       |            |          |
    |  29 |       BITMAP CONVERSION TO ROWIDS      |                            |   199K|  7775K|    26   (0)| 00:00:01 |
    |  30 |        BITMAP INDEX FULL SCAN          | W_GL_ACCOUNT_D_M1          |       |       |            |          |
    |  31 |       BITMAP CONVERSION TO ROWIDS      |                            |   199K|  7775K|   118   (0)| 00:00:02 |
    |  32 |        BITMAP INDEX FULL SCAN          | W_GL_ACCOUNT_D_M10         |       |       |            |          |
    |  33 |      INDEX FAST FULL SCAN              | W_GL_ACCOUNT_D_M18         |   199K|  7775K|   733   (1)| 00:00:09 |
    Predicate Information (identified by operation id):
       2 - access("T91397"."ROW_WID"="ITEM_1")
       6 - filter("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
       7 - access(ROWID=ROWID)
       9 - access("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
      12 - access("T91573"."BALANCE_DT_WID"="C0")
      19 - filter("T111515"."COMPANY_FLG"='Y')
      20 - access("T111515"."ORG_NUM"='02000')
      21 - access("T91573"."COMPANY_ORG_WID"="T111515"."ROW_WID")
      25 - access("T91573"."BALANCE_DT_WID"="C0")
      27 - access(ROWID=ROWID)
      28 - access(ROWID=ROWID)
    Note
       - star transformation used for this statement
    Statistics
           1067  recursive calls
              9  db block gets
         417513  consistent gets
         296603  physical reads
           6708  redo size
          25220  bytes sent via SQL*Net to client
            520  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
             522  rows processed
    And here are the cursor details:
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  6s625d3821nq3, child number 0
    SELECT /*+ gather_plan_statistics */ SUM(T91573.ACTIVITY_GLOBAL1_AMT)
    AS c2,   SUM(   CASE     WHEN T91573.DB_CR_IND = 'CREDIT'     THEN
    T91573.ACTIVITY_GLOBAL1_AMT   END )                           AS c3,
    T91397.GL_ACCOUNT_NAME          AS c4,   T91397.GROUP_ACCOUNT_NUM
    AS c5,   SUM(T91573.BALANCE_GLOBAL1_AMT) AS c6,   T156337.ROW_WID
               AS c7 FROM W_MCAL_DAY_D T156337   /*
    Dim_W_MCAL_DAY_D_Fiscal_Day */   ,   W_INT_ORG_D T111515   /*
    Dim_W_INT_ORG_D_Company */   ,   W_GL_ACCOUNT_D T91397   /*
    Dim_W_GL_ACCOUNT_D */   ,   W_GL_BALANCE_F T91573   /*
    PLAN_TABLE_OUTPUT
    Fact_W_GL_BALANCE_F */ WHERE ( T91397.ROW_WID        =
    T91573.GL_ACCOUNT_WID AND T91573.COMPANY_ORG_WID    = T111515.ROW_WID
    AND T91573.BALANCE_DT_WID     = T156337.ROW_WID AND T111515.COMPANY_FLG
          = 'Y' AND T111515.ORG_NUM           = '02000' AND
    T156337.MCAL_PER_NAME_QTR = '2010 Q 1' ) GROUP BY
    T91397.GL_ACCOUNT_NAME,   T91397.GROUP_ACCOUNT_NUM,   T156337.ROW_WID
    Plan hash value: 3262111942
    PLAN_TABLE_OUTPUT
    | Id  | Operation                              | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem| Used-Mem |
    |   0 | SELECT STATEMENT                       |                            |   1 |        |    522 |00:51:34.16 |     424K|    111K|      2 |       |       |          |
    |   1 |  HASH GROUP BY                         |                            |   1 |   7882 |    522 |00:51:34.16 |     424K|    111K|      2 |   748K|   748K| 1416K (0)|
    |*  2 |   HASH JOIN                            |                            |   1 |   7882 |   5127 |00:51:34.00 |     424K|    111K|      2 |  1035K|  1035K| 1561K (0)|
    |   3 |    VIEW                                | VW_GBC_13                  |   1 |   7837 |   5127 |00:51:32.65 |     423K|    111K|      2 |       |       |          |
    |   4 |     TEMP TABLE TRANSFORMATION          |                            |   1 |        |   5127 |00:51:32.64 |     423K|    111K|      2 |       |       |          |
    |   5 |      LOAD AS SELECT                    |                            |   1 |        |      0 |00:00:00.09 |     188 |      0 |      2 |   269K|   269K|  269K (0)|
    |*  6 |       VIEW                             | index$_join$_114           |   1 |    572 |    724 |00:00:00.01 |     183 |      0 |      0 |       |       |          |
    |*  7 |        HASH JOIN                       |                            |   1 |        |    724 |00:00:00.01 |     183 |      0 |      0 |  1011K|  1011K| 1573K (0)|
    |   8 |         BITMAP CONVERSION TO ROWIDS    |                            |   1 |    572 |    724 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
    |*  9 |          BITMAP INDEX SINGLE VALUE     | W_MCAL_DAY_D_F46           |   1 |        |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
    |  10 |         INDEX FAST FULL SCAN           | W_MCAL_DAY_D_P1            |   1 |    572 |  64822 |00:00:00.06 |     180 |      0 |      0 |       |       |          |
    |  11 |      HASH GROUP BY                     |                            |   1 |   7837 |   5127 |00:51:32.54 |     423K|    111K|      0 |  1168K|  1038K| 2598K (0)|
    |* 12 |       HASH JOIN                        |                            |   1 |  26186 |   3267K|03:18:27.02 |     423K|    111K|      0 |  1236K|  1236K| 1248K (0)|
    |  13 |        TABLE ACCESS FULL               | SYS_TEMP_0FD9D73B3_F97A325 |   1 |    572 |    724 |00:00:00.02 |       7 |      2 |      0 |       |       |          |
    |  14 |        TABLE ACCESS BY INDEX ROWID     | W_GL_BALANCE_F             |   1 |  26186 |   3267K|03:18:12.81 |     423K|    111K|      0 |       |       |          |
    |  15 |         BITMAP CONVERSION TO ROWIDS    |                            |   1 |        |   3267K|00:00:06.29 |   16142 |   1421 |      0 |       |       |          |
    |  16 |          BITMAP AND                    |                            |   1 |        |     74 |00:00:03.06 |   16142 |   1421 |      0 |       |       |          |
    |  17 |           BITMAP MERGE                 |                            |   1 |        |     83 |00:00:00.08 |     393 |      0 |      0 |  1024K|   512K| 2754K (0)|
    |  18 |            BITMAP KEY ITERATION        |                            |   1 |        |    764 |00:00:00.01 |     393 |      0 |      0 |       |       |          |
    |* 19 |             TABLE ACCESS BY INDEX ROWID| W_INT_ORG_D                |   1 |      2 |      2 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
    |* 20 |              INDEX RANGE SCAN          | W_INT_ORG_ORG_NUM          |   1 |      2 |      2 |00:00:00.01 |       1 |      0 |      0 |       |       |          |
    |* 21 |             BITMAP INDEX RANGE SCAN    | W_GL_BALANCE_F_F4          |   2 |        |    764 |00:00:00.01 |     390 |      0 |      0 |       |       |          |
    |  22 |           BITMAP MERGE                 |                            |   1 |        |    210 |00:00:03.12 |   15749 |   1421 |      0 |    57M|  7389K|   17M (3)|
    |  23 |            BITMAP KEY ITERATION        |                            |   4 |        |  16405 |00:00:15.36 |   15749 |   1421 |      0 |       |       |          |
    |  24 |             TABLE ACCESS FULL          | SYS_TEMP_0FD9D73B3_F97A325 |   4 |    572 |   2896 |00:00:00.05 |      16 |      6 |      0 |       |       |          |
    |* 25 |             BITMAP INDEX RANGE SCAN    | W_GL_BALANCE_F_F1          |2896 |        |  16405 |00:00:24.99 |   15733 |   1415 |      0 |       |       |          |
    |  26 |    VIEW                                | index$_join$_003           |   1 |    199K|    199K|00:00:02.50 |     737 |      1 |      0 |       |       |          |
    |* 27 |     HASH JOIN                          |                            |   1 |        |    199K|00:00:02.18 |     737 |      1 |      0 |    14M|  2306K|   17M (0)|
    |* 28 |      HASH JOIN                         |                            |   1 |        |    199K|00:00:01.94 |     144 |      1 |      0 |    10M|  2639K|   13M (0)|
    |  29 |       BITMAP CONVERSION TO ROWIDS      |                            |   1 |    199K|    199K|00:00:00.19 |      26 |      0 |      0 |       |       |          |
    |  30 |        BITMAP INDEX FULL SCAN          | W_GL_ACCOUNT_D_M1          |   1 |        |     93 |00:00:00.01 |      26 |      0 |      0 |       |       |          |
    |  31 |       BITMAP CONVERSION TO ROWIDS      |                            |   1 |    199K|    199K|00:00:01.05 |     118 |      1 |      0 |       |       |          |
    |  32 |        BITMAP INDEX FULL SCAN          | W_GL_ACCOUNT_D_M10         |   1 |        |   5791 |00:00:00.01 |     118 |      1 |      0 |       |       |          |
    |  33 |      INDEX FAST FULL SCAN              | W_GL_ACCOUNT_D_M18         |   1 |    199K|    199K|00:00:00.19 |     593 |      0 |      0 |       |       |          |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       2 - access("T91397"."ROW_WID"="ITEM_1")
       6 - filter("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
       7 - access(ROWID=ROWID)
       9 - access("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
      12 - access("T91573"."BALANCE_DT_WID"="C0")
      19 - filter("T111515"."COMPANY_FLG"='Y')
      20 - access("T111515"."ORG_NUM"='02000')
      21 - access("T91573"."COMPANY_ORG_WID"="T111515"."ROW_WID")
      25 - access("T91573"."BALANCE_DT_WID"="C0")
      27 - access(ROWID=ROWID)
      28 - access(ROWID=ROWID)
    PLAN_TABLE_OUTPUT
    Note
       - star transformation used for this statement
    78 rows selected.
    Can anyone suggest a way to improve the performance? Or even hint at a good place for me to start looking?
    Please let me know if there is any additional information I can give.
    -Joe
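    One way to avoid building "a ton of MVs" for every OBIEE variation is a single aggregate materialized view at the grain the queries group by, with query rewrite enabled. A rough sketch reusing the table and column names from the posted query (the MV name and the chosen grain are assumptions):

        CREATE MATERIALIZED VIEW mv_gl_balance_day   -- hypothetical name
          BUILD IMMEDIATE
          REFRESH FORCE ON DEMAND
          ENABLE QUERY REWRITE
        AS
        SELECT f.gl_account_wid,
               f.company_org_wid,
               f.balance_dt_wid,
               SUM(f.activity_global1_amt) AS activity_amt,
               SUM(CASE WHEN f.db_cr_ind = 'CREDIT'
                        THEN f.activity_global1_amt END) AS credit_activity_amt,
               SUM(f.balance_global1_amt)  AS balance_amt
          FROM w_gl_balance_f f
         GROUP BY f.gl_account_wid, f.company_org_wid, f.balance_dt_wid;

    With query_rewrite_enabled set, queries that aggregate W_GL_BALANCE_F at or above this grain (including the joins back to the dimension tables) can be rewritten against the MV instead of hitting the 100-million-row fact table - provided the optimizer judges the rewrite valid for the particular query OBIEE generates.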

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the SELECT query inside the condition shown below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, the FOR ALL ENTRIES select was fetching all the records from the database. With this condition in place, no records are selected if the table is empty.
    Regards,
    Pravin

  • Problem with Fetching Millions of Records from Table COEP into an Internal Table

    Hi Everyone ! Hope things are going well.
           Table : COEP has 6 million records.
    I am trying to get records based on certain criteria; there are at least 5 conditions in the WHERE clause.
    I've noticed it takes about 15 minutes to populate the internal table. How can I improve the performance to less than a minute for a fetch of 500 records from a database set of 6 million?
    Regards,
    Owais...

    The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got a substantial performance improvement by adding "LEDNR EQ '00'" to the WHERE clause.
    Here is my select:
              SELECT kokrs
                     belnr
                     buzei
                     ebeln
                     ebelp
                     wkgbtr
                     refbn
                     bukrs
                     gjahr
                FROM covp CLIENT SPECIFIED
                INTO TABLE i_coep
                 FOR ALL ENTRIES IN i_objnr
               WHERE mandt EQ sy-mandt
                 AND lednr EQ '00'
                 AND objnr = i_objnr-objnr
                 AND kokrs = c_conarea.

  • Performance issues in million records table

    I have a scenario with some 20 tables, each with a million or more records [historical].
    On average I add 1,500 - 2,500 records a day, i.e. about a million records every year.
    I am looking for archival solutions for these master tables.
    Operations on the archival tables would be limited to reads.
    Expected benefits:
    The user base would be around 2,500 users in total, but I expect 300 - 500 parallel users at most.
    Very limited usage of historical data compared to operations on current data.
    Performance of operations on current data is more important than on historical data.
    Environment - Oracle 9i - should be migrating to Oracle 10g soon.
    Some solutions I could think of...
    [ 1 ] Put every archived record into an archival table and fetch it from there,
    i.e. clearly distinguish searches as current or archival prior to searching.
    The drawback, I feel, is that the archival tables again keep growing by approx. a million records a year.
    [ 2 ] Put records into separate archival tables, one per year.
    For instance, every year I replicate the set of tables and that year's data goes into its own table.
    But how do I do a fetch?
    Note - I do have a unique way of identifying each record in my master table - the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
    The major concern is that I currently get very good response times thanks to indexing and other common measures, but I would not want this to degrade in a year or more; rather, I expect to improve on the current response times and to sustain them over time.
    Also, I don't want to change every query in my app - unless there is no way out...

    Hi,
    Read the following documentation link about Partitioning in Oracle.
    Best Regards,
    Alex
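    To make the partitioning suggestion concrete: since the primary key already starts with YYYYMM, range partitioning by a date column (or by the key prefix) keeps each year in its own segments without changing application SQL. A minimal sketch with hypothetical table and column names:

        CREATE TABLE master_hist (                -- hypothetical table
          record_id   VARCHAR2(16) NOT NULL,      -- e.g. 2008070000562330
          created_dt  DATE         NOT NULL,
          payload     VARCHAR2(4000),
          CONSTRAINT master_hist_pk PRIMARY KEY (record_id)
        )
        PARTITION BY RANGE (created_dt) (
          PARTITION p2007 VALUES LESS THAN (DATE '2008-01-01'),
          PARTITION p2008 VALUES LESS THAN (DATE '2009-01-01'),
          PARTITION pmax  VALUES LESS THAN (MAXVALUE)
        );

    Queries that filter on the partitioning column are pruned to the relevant year automatically, and older partitions can be moved to cheaper storage or read-only tablespaces - but note that partitioning requires Enterprise Edition plus the Partitioning option.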

  • Increase performance query more than 10 millions records significantly

    The story is :
    Every day there are more than 10 million records whose data arrives in text files (.csv (comma-separated values) extension, or other formats).
    An example text file name is transaction.csv
    Phone_Number
    6281381789999
    658889999888
    618887897
    etc .. more than 10 million rows
    From transaction.csv the data is then split into 3 RAM (in-memory) tables:
    1st. table nation (nation_id, nation_desc)
    2nd. table operator(operator_id, operator_desc)
    3rd. table area(area_id, area_desc)
    These 3 RAM tables are then queried to produce the physical EXT_TRANSACTION table (on hard disk).
    The physical external Oracle table EXT_TRANSACTION has the following result columns:
    Phone_Number Nation_Desc Operator_Desc Area_Desc
    ======================================
    6281381789999 INA SMP SBY
    So : Textfiles (transaction.csv) --> RAM tables --> Oracle tables (EXT_TRANSACTION)
    The first 2 digits are the nation_id, the next 4 digits the operator_id, and the next 2 digits the area_id.
    I have heard that, to increase performance significantly, there is a technique of creating tables in memory (RAM) rather than on hard disk.
    Any advice would be very much appreciated.
    Thanks.

    Oracle uses sophisticated algorithms for various memory caches, including buffering data in memory. It is described in Oracle® Database Concepts.
    You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
    However, this means less of the buffer cache is available to cache other frequently used data. So this approach could make access to one table a bit faster at the expense of making access to other tables slower.
    This is a balancing act - how much can one "interfere" with the cache before degrading overall performance? Oracle also recommends that this type of "forced" caching be used for small lookup tables. It is not a good idea to use it on large tables.
    As for your problem - why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource that is in high demand. It is a very finite resource. It needs to be carefully spent to get the best and optimal performance.
    The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster. Oracle is already caching the hot data blocks as best possible.
    You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory will be treating the symptom - not the actual problem that tons of data are being processed.
    So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user-defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be used to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process can perhaps be designed better - instead of multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
    10 million rows are nothing in terms of what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable - as is today's hardware and software. But that needs a sound software engineering approach. And that approach says we first need to fully understand the problem before we can solve it, treating the cause and not the symptom.
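    For reference, the CACHE table clause mentioned above is just a table attribute; a minimal example, using the small NATION lookup table from the question:

        -- Marks the table's blocks to be kept at the "hot" end of the buffer cache on
        -- full scans; it does not pin the table in memory.
        ALTER TABLE nation CACHE;
        -- ...and to revert:
        ALTER TABLE nation NOCACHE;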

  • Performance across millions of records

    Hi,
    I have millions of records in the database. I need to retrieve these records from multiple master data tables, perform validations, and post the error messages in some format. Please let me know how I can complete the process within 15 minutes and without it ending in a short dump. I really expect the performance to be excellent.

    Hi,
    I would go for a different concept - in other words: forget it. Let's say you have 2 million records ("millions" wasn't very specific, but it could be much more). 15 minutes (the usual time-out already comes after 10 minutes!) is 900 seconds. Divide this by 2 million -> 0.45 milliseconds per entry.
    In that time you want to select the entry and perform a check. I doubt this will be possible - you might manage the select in that time, perhaps a loop, maybe one READ TABLE - but all of it together (avoiding RAM problems, having only index accesses, gathering error messages...) has little chance.
    I guess you will rather spend a lot of time and not succeed - or you have fewer entries to test than you said in the first place.
    Of course I cannot estimate the exact runtime - even if you had given the exact requirement - but just run some tests with very small numbers and see for yourself whether you can come close to the time per entry you need.
    Regards,
    Christian

  • Table has 80 million records - Performance impact if we stop archiving

    HI All,
    I have a table (Oracle 11g) which has around 80 million records. Until now we have done weekly archiving to keep its size down, but one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records with just a bit of performance tuning.
    I was just wondering whether that is true, and what kind of effect there would be on querying and insertion if the table holds 80 million records and grows every day.
    Any comments welcomed.

    What is true is that the Oracle database can manage tables with billions of rows, but when talking about data size you should give the table size instead of the number of rows, because you won't have the same table size if the average row size is 50 bytes or if it is 5K.
    About performance impact, it depends on the queries that access this table: the more data queries need to process and/or to return as result set, the more this can have an impact on performance for these queries.
    You don't give enough input to give a good answer. Ideally you should give DDL statements to create this table and its indexes and SQL queries that are using these tables.
    In some cases using table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition and additional licensing).
    Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .

  • Loading 3 million records into the database via external tables

    I am loading 3+ million records into the database using external tables. It is a very slow process. How can I make it faster?

    Hi,
    1. Break the file down into several files, say 10 files (300,000 records each)
    2. Disable all indexes on the target table if possible
    3. Disable foreign keys if possible; besides, you can check these later using an exceptions table
    4. Make sure FREELISTS and INITRANS are 10 for the target table, if it resides in a manual segment space management tablespace
    5. Create 10 processes, each reading from its own file, run the 10 processes concurrently, and use error logging with an unlimited reject limit so the insert runs to completion (a sketch follows below)
    Hope this helps.
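    Point 5 refers to DML error logging. A minimal sketch with hypothetical names (TARGET_TBL, EXT_SOURCE_TBL); each of the 10 processes would run an insert like this against its own file / external table:

        -- Create the error log table once (hypothetical target table name):
        BEGIN
          DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_TBL');
        END;
        /
        -- Bad rows go to ERR$_TARGET_TBL instead of aborting the whole load.
        -- (A direct-path /*+ APPEND */ insert would be faster but serializes concurrent
        -- sessions on the same table, so conventional inserts suit the 10-process idea.)
        INSERT INTO target_tbl
        SELECT *
          FROM ext_source_tbl
           LOG ERRORS INTO err$_target_tbl ('file_01') REJECT LIMIT UNLIMITED;
        COMMIT;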

  • How to load unicode data files with fixed records lengths?

    Hi!
    To load Unicode data files with fixed record lengths (in terms of characters, not bytes!) using SQL*Loader manually, I found two ways:
    Alternative 1: one record per row
    SQL*Loader control file example (without POSITION, since POSITION always refers to bytes!)
    LOAD DATA
    CHARACTERSET UTF8
    LENGTH SEMANTICS CHAR
    INFILE unicode.dat
    INTO TABLE STG_UNICODE
    TRUNCATE
    (
    A CHAR(2) ,
    B CHAR(6) ,
    C CHAR(2) ,
    D CHAR(1) ,
    E CHAR(4)
    )
    Datafile:
    001111112234444
    01NormalDExZWEI
    02ÄÜÖßêÊûÛxöööö
    03ÄÜÖßêÊûÛxöööö
    04üüüüüüÖÄxµôÔµ
    Alternative 2: variable-length records
    LOAD DATA
    CHARACTERSET UTF8
    LENGTH SEMANTICS CHAR
    INFILE unicode_var.dat "VAR 4"
    INTO TABLE STG_UNICODE
    TRUNCATE
    (
    A CHAR(2) ,
    B CHAR(6) ,
    C CHAR(2) ,
    D CHAR(1) ,
    E CHAR(4)
    )
    Datafile:
    001501NormalDExZWEI002702ÄÜÖßêÊûÛxöööö002604üuüüüüÖÄxµôÔµ
    Problems
    Implementing these two alternatives in OWB, I encounter the following problems:
    * How to specify LENGTH SEMANTICS CHAR?
    * How to suppress the POSITION definition?
    * How to define a flat file with variable length and how to specify the number of bytes containing the length definition?
    Or is there another way that can be implemented using OWB?
    Any help is appreciated!
    Thanks,
    Carsten.

    Hi Carsten
    If you need to support the LENGTH SEMANTICS CHAR clause in an external table then one option is to use the unbound external table and capture the access parameters manually. To create an unbound external table you can skip the selection of a base file in the external table wizard. Then when the external table is edited you will get an Access Parameters tab where you can define the parameters. In 11gR2 the File to Oracle external table can also add this clause via an option.
    Cheers
    David
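    For reference, hand-captured access parameters along the lines David describes might look roughly like this - a sketch only, with hypothetical directory and table names, and the field list may need adjusting to whatever DDL OWB generates:

        CREATE TABLE stg_unicode_ext (
          a VARCHAR2(2 CHAR),
          b VARCHAR2(6 CHAR),
          c VARCHAR2(2 CHAR),
          d VARCHAR2(1 CHAR),
          e VARCHAR2(4 CHAR)
        )
        ORGANIZATION EXTERNAL (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY data_dir              -- hypothetical directory object
          ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            CHARACTERSET UTF8
            STRING SIZES ARE IN CHARACTERS
            FIELDS (
              a CHAR(2),
              b CHAR(6),
              c CHAR(2),
              d CHAR(1),
              e CHAR(4)
            )
          )
          LOCATION ('unicode.dat')
        );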
