Querying one million records

Hi,
We have a query which selects data from nearly a million records (one year). It is filtered by calday, but this query does not seem to complete.
Initially I got a TIME_OUT error (after 10 minutes). I then increased the rdisp/wp_max_runtime parameter to 20 minutes, but it still times out.
When I queried 11 months of data (approximately 900,000 records), I got results in 5 minutes.
I also tried running the query from RSRT, but I still cannot query one year of data (one million records). Can you please help me out?
Thanks
anu.

Hi,
Do you have any other dumps in ST22 apart from the time-out?
What is your OS?
Krzys

Similar Messages

  • Update Query Is Performing a Full Table Scan of 1 Million Records

    Hello everybody, I have one update query:
    UPDATE tablea
    SET    task_status = 12
    WHERE  tablea.link_id > 0
    AND    tablea.task_status <> 0
    AND    tablea.event_class = 'eventexception'
    AND    EXISTS (SELECT 1
                   FROM   tablea ltask
                   WHERE  ltask.task_id = tablea.link_id
                   AND    ltask.task_status = 0)
    When I do an explain plan, it shows the following result:
    Execution Plan
    0 UPDATE STATEMENT Optimizer=CHOOSE
    1 0 UPDATE OF 'tablea'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'tablea'
    4 2 TABLE ACCESS (BY INDEX ROWID) OF 'tablea'
    5 4 INDEX (UNIQUE SCAN) OF 'PK_tablea' (UNIQUE)
    Now, tablea may have more than 10 MILLION records. This would take a huge amount of time even if it only has to update 2 records. Please suggest some optimal solutions.
    Regards
    Mahesh

    I see your point, but my reasoning is this: I have an index on every column used in the WHERE clause, so I see no reason for Oracle to do a full table scan.
    UPDATE tablea
    SET    task_status = 12
    WHERE  tablea.link_id > 0
    AND    tablea.task_status <> 0
    AND    tablea.event_class = 'eventexception'
    AND    EXISTS (SELECT 1
                   FROM   tablea ltask
                   WHERE  ltask.task_id = tablea.link_id
                   AND    ltask.task_status = 0)
    I am explicitly stating WHERE task_status <> 0 AND event_class = 'eventexception' AND tablea.link_id > 0,
    so the ideal plan for the optimizer should be:
    Step 1) Select all the rowids matching these conditions.
    Step 2) For each rowid selected above, get all the rows where task_status = 0 and task_id equals that row's link_id.
    Step 3) While looping over each rowid, if a matching row is found via ltask in step 2, update that record.
    I want this kind of plan. Does anyone know how to make Oracle produce it?
    It seems to me a FULL TABLE SCAN is harmful, or at least no better than an index scan.
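    One hedged option, in line with the indexing argument above, is to give the optimizer composite indexes that match the predicates instead of single-column indexes. This is only a sketch; the index names and column order are assumptions that must be checked against the real data's selectivity:

```sql
-- Sketch only: composite indexes matching the UPDATE's predicates.
-- The leading equality column (event_class) is index-friendly; the <>
-- and > predicates can then be filtered within the same index.
CREATE INDEX tablea_evt_stat_link_ix
  ON tablea (event_class, task_status, link_id);

-- Supports the correlated EXISTS probe (task_id = link_id, task_status = 0).
CREATE INDEX tablea_task_stat_ix
  ON tablea (task_id, task_status);
```

    Even with these in place, Oracle may legitimately choose the full scan if it estimates many matching rows; compare the cost of both plans with EXPLAIN PLAN before and after.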

  • Increase the performance of a query on more than 10 million records significantly

    The story is :
    Every day, there are more than 10 million records arriving as text files (.csv (comma-separated value) extension, or similar).
    An example text-file name is transaction.csv:
    Phone_Number
    6281381789999
    658889999888
    618887897
    etc. (more than 10 million rows)
    From transaction.csv, the data is then split into 3 RAM (memory) tables:
    1st. table nation (nation_id, nation_desc)
    2nd. table operator(operator_id, operator_desc)
    3rd. table area(area_id, area_desc)
    These 3 RAM tables are then queried to produce the physical table EXT_TRANSACTION (on hard disk).
    The physical external Oracle table is named EXT_TRANSACTION, with the following result columns:
    Phone_Number Nation_Desc Operator_Desc Area_Desc
    ======================================
    6281381789999 INA SMP SBY
    So: text files (transaction.csv) --> RAM tables --> Oracle table (EXT_TRANSACTION)
    The first 2 digits are the nation_id, the next 4 digits are the operator_id, and the next 2 digits are the area_id.
    I have heard that, to increase performance significantly, there is a technique to create tables in memory (RAM) rather than on hard disk.
    Any advice would be much appreciated.
    Thanks.
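    The digit split described above (first 2 = nation_id, next 4 = operator_id, next 2 = area_id) can also be done in a single set-based pass, with no intermediate RAM tables at all. A sketch, assuming the lookup keys are stored as strings and that an external table (here called ext_csv_transaction, an assumed name) is defined over transaction.csv:

```sql
-- Sketch only: single-pass load using SUBSTR to decode the phone number.
-- ext_csv_transaction is an assumed external table over transaction.csv.
INSERT /*+ APPEND */ INTO ext_transaction
       (phone_number, nation_desc, operator_desc, area_desc)
SELECT t.phone_number,
       n.nation_desc,
       o.operator_desc,
       a.area_desc
FROM   ext_csv_transaction t
       JOIN nation   n ON n.nation_id   = SUBSTR(t.phone_number, 1, 2)
       JOIN operator o ON o.operator_id = SUBSTR(t.phone_number, 3, 4)
       JOIN area     a ON a.area_id     = SUBSTR(t.phone_number, 7, 2);
```

    A direct-path insert like this lets Oracle do the decoding and joining in one scan of the source data, which is usually far cheaper than staging rows through intermediate structures.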

    Oracle uses sophisticated algorithms for various memory caches, including buffering data in memory. It is described in Oracle® Database Concepts.
    You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
    However, this means there is now less buffer cache available for other frequently used data. So this approach could make access to one table a bit faster at the expense of making access to other tables slower.
    This is a balancing act: how much can one "interfere" with the cache before affecting and downgrading performance? Oracle also recommends that this type of "forced" caching be used only for small lookup tables. It is not a good idea to use it on large tables.
    As for your problem: why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource in high demand, and a very finite one. It needs to be carefully spent to get the best and optimal performance.
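    For reference, the CACHE clause mentioned above is a one-line DDL change per table. Applied to the small lookup tables from the question (not to large data sets), it would look roughly like this:

```sql
-- Sketch only: ask Oracle to favour keeping these small lookup tables'
-- blocks in the buffer cache. The table names come from the question.
ALTER TABLE nation   CACHE;
ALTER TABLE operator CACHE;
ALTER TABLE area     CACHE;

-- Reverse the decision if it hurts overall cache efficiency:
ALTER TABLE nation NOCACHE;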
    The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster. Oracle is already caching the hot data blocks as best possible.
    You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory will be treating the symptom - not the actual problem that tons of data are being processed.
    So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user-defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be used to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process can perhaps be designed better - instead of multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
    10 million rows are nothing in terms of what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable - as is today's hardware and software. But that requires a sound software engineering approach. And that approach says we first need to fully understand the problem before we can solve it, treating the cause and not the symptom.

  • Help with querying a 200 million record table

    Hi ,
    I need to query a 200 million record table which is partitioned by monthly activity.
    But my problem is that I need to see how many activities occurred on one account in a given time frame.
    If there are 200 partitions, I need to go into all of them, get the activities of the account in each partition, and at the end report the total number of activities.
    Fortunately, only one activity is expected for an account in each partition, and it may be present or absent.
    If this table had 100 records, I would use this:
    select account_no, count(*)
    from Acct_actvy
    group by account_no;

    I must stress that it is critical that you not write code (SQL or PL/SQL) that uses hardcoded partition names to find data.
    That approach is very risky, prone to runtime errors, difficult to maintain, and does not scale. It is not worth it.
    From the developer's side, there should be total ignorance of the fact that a table is partitioned. A developer must treat a partitioned table no differently than any other table.
    To give you an idea, this is a copy-and-paste from a SQL*Plus session doing what you want to do, against a partitioned table at least 3x bigger than yours. It covers about a 12-month period. There is a partition per day - and empty daily partitions for the next 2 years. The SQL aggregation is monthly. I selected a random network address to illustrate.
    SQL> select count(*) from x25_calls;
      COUNT(*)
    619491919
    Elapsed: 00:00:19.68
    SQL>
    SQL>  select TRUNC(callendtime,'MM') AS MONTH, sourcenetworkaddress, count(*) from x25_calls where sourcenetworkaddress = '3103165962'
      2  group by TRUNC(callendtime,'MM'), sourcenetworkaddress;
    MONTH               SOURCENETWORKADDRESS   COUNT(*)
    2005/09/01 00:00:00 3103165962                 3599
    2005/10/01 00:00:00 3103165962                 1184
    2005/12/01 00:00:00 3103165962                    4
    2005/06/01 00:00:00 3103165962                    1
    2005/04/01 00:00:00 3103165962                  560
    2005/08/01 00:00:00 3103165962                  101
    2005/03/01 00:00:00 3103165962                 3330
    7 rows selected.
    Elapsed: 00:00:19.72
    As you can see - not a single reference to any partitioning. Excellent performance, despite running on an old K-class HP server.
    The reason for the performance is simple. A correctly designed and implemented partitioning scheme that caters for most of the queries against the table. Correctly designed and implemented indexes - especially local bitmap indexes. Without any hacks like partition names and the like...
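    The same principle applies to the original question: write the aggregate against the table as a whole and let a predicate on the partition key do the pruning. A sketch, assuming the table is range-partitioned on a date column (the column name activity_date below is an assumption):

```sql
-- Sketch only: per-account activity count over a time frame, with no
-- partition names anywhere. The range predicate on the (assumed)
-- partition key lets the optimizer prune to the relevant partitions.
SELECT account_no,
       COUNT(*) AS activity_count
FROM   acct_actvy
WHERE  activity_date >= DATE '2005-01-01'
AND    activity_date <  DATE '2006-01-01'
GROUP  BY account_no;
```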

  • Tune Query with Millions of Records

    Hi everyone,
    I've got an Oracle 11g tuning task set before me and I'm pretty novice when it comes to tuning.
    The query itself is only about 10-15 lines of SQL; however, it hits four tables, one with 100 million records and one with 8 million. The other two are comparatively small (6,000 and 300 records). The problem I am having is that the query actually needs to aggregate 3 million records.
    I found an article about using the star_transformation_enabled = true parameter; then on the fact table I set all the foreign keys to bitmap indexes, and the dimensions have a standard primary key defined on the surrogate key. This strategy works, but it still takes a long time for the query to crunch the 3 million records (about 30 minutes).
    I know there is also the option of building materialized views and using query rewrite to take advantage of them, but my problem with that is that we are using OBIEE and cannot control how many different variations of these queries we see. So we would have to build a ton of MVs.
    What are the best ways to tackle high volume queries like this from a system wide perspective?
    Are there any benchmarks for what I should be seeing in terms of a 3 million record query? Is expecting under a minute even reasonable?
    Any help would be appreciated!
    Thanks!
    -Joe

    Here is the trace information:
    SQL> set autotrace traceonly arraysize 1000
    SQL> SELECT SUM(T91573.ACTIVITY_GLOBAL1_AMT) AS c2,
           SUM(
           CASE
             WHEN T91573.DB_CR_IND = 'CREDIT'
             THEN T91573.ACTIVITY_GLOBAL1_AMT
           END )                           AS c3,
           T91397.GL_ACCOUNT_NAME          AS c4,
           T91397.GROUP_ACCOUNT_NUM        AS c5,
           SUM(T91573.BALANCE_GLOBAL1_AMT) AS c6,
           T156337.ROW_WID                 AS c7
         FROM W_MCAL_DAY_D T156337    /* Dim_W_MCAL_DAY_D_Fiscal_Day */,
              W_INT_ORG_D T111515     /* Dim_W_INT_ORG_D_Company */,
              W_GL_ACCOUNT_D T91397   /* Dim_W_GL_ACCOUNT_D */,
              W_GL_BALANCE_F T91573   /* Fact_W_GL_BALANCE_F */
         WHERE ( T91397.ROW_WID        = T91573.GL_ACCOUNT_WID
         AND T91573.COMPANY_ORG_WID    = T111515.ROW_WID
         AND T91573.BALANCE_DT_WID     = T156337.ROW_WID
         AND T111515.COMPANY_FLG       = 'Y'
         AND T111515.ORG_NUM           = '02000'
         AND T156337.MCAL_PER_NAME_QTR = '2010 Q 1' )
         GROUP BY T91397.GL_ACCOUNT_NAME,
           T91397.GROUP_ACCOUNT_NUM,
           T156337.ROW_WID;
    522 rows selected.
    Execution Plan
    Plan hash value: 2761996426
    | Id  | Operation                              | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |                            |  7882 |   700K|  7330   (1)| 00:01:28 |
    |   1 |  HASH GROUP BY                         |                            |  7882 |   700K|  7330   (1)| 00:01:28 |
    |*  2 |   HASH JOIN                            |                            |  7882 |   700K|  7329   (1)| 00:01:28 |
    |   3 |    VIEW                                | VW_GBC_13                  |  7837 |   390K|  6534   (1)| 00:01:19 |
    |   4 |     TEMP TABLE TRANSFORMATION          |                            |       |       |            |          |
    |   5 |      LOAD AS SELECT                    | SYS_TEMP_0FD9D7416_F97A325 |       |       |            |          |
    |*  6 |       VIEW                             | index$_join$_114           |   572 | 10296 |   191   (9)| 00:00:03 |
    |*  7 |        HASH JOIN                       |                            |       |       |            |          |
    |   8 |         BITMAP CONVERSION TO ROWIDS    |                            |   572 | 10296 |     1   (0)| 00:00:01 |
    |*  9 |          BITMAP INDEX SINGLE VALUE     | W_MCAL_DAY_D_F46           |       |       |            |          |
    |  10 |         INDEX FAST FULL SCAN           | W_MCAL_DAY_D_P1            |   572 | 10296 |   217   (1)| 00:00:03 |
    |  11 |      HASH GROUP BY                     |                            |  7837 |   290K|  6343   (1)| 00:01:17 |
    |* 12 |       HASH JOIN                        |                            | 26186 |   971K|  6337   (1)| 00:01:17 |
    |  13 |        TABLE ACCESS FULL               | SYS_TEMP_0FD9D7416_F97A325 |   572 |  5148 |     2   (0)| 00:00:01 |
    |  14 |        TABLE ACCESS BY INDEX ROWID     | W_GL_BALANCE_F             | 26186 |   741K|  6334   (1)| 00:01:17 |
    |  15 |         BITMAP CONVERSION TO ROWIDS    |                            |       |       |            |          |
    |  16 |          BITMAP AND                    |                            |       |       |            |          |
    |  17 |           BITMAP MERGE                 |                            |       |       |            |          |
    |  18 |            BITMAP KEY ITERATION        |                            |       |       |            |          |
    |* 19 |             TABLE ACCESS BY INDEX ROWID| W_INT_ORG_D                |     2 |    32 |     3   (0)| 00:00:01 |
    |* 20 |              INDEX RANGE SCAN          | W_INT_ORG_ORG_NUM          |     2 |       |     1   (0)| 00:00:01 |
    |* 21 |             BITMAP INDEX RANGE SCAN    | W_GL_BALANCE_F_F4          |       |       |            |          |
    |  22 |           BITMAP MERGE                 |                            |       |       |            |          |
    |  23 |            BITMAP KEY ITERATION        |                            |       |       |            |          |
    |  24 |             TABLE ACCESS FULL          | SYS_TEMP_0FD9D7416_F97A325 |   572 |  5148 |     2   (0)| 00:00:01 |
    |* 25 |             BITMAP INDEX RANGE SCAN    | W_GL_BALANCE_F_F1          |       |       |            |          |
    |  26 |    VIEW                                | index$_join$_003           |   199K|  7775K|   794   (5)| 00:00:10 |
    |* 27 |     HASH JOIN                          |                            |       |       |            |          |
    |* 28 |      HASH JOIN                         |                            |       |       |            |          |
    |  29 |       BITMAP CONVERSION TO ROWIDS      |                            |   199K|  7775K|    26   (0)| 00:00:01 |
    |  30 |        BITMAP INDEX FULL SCAN          | W_GL_ACCOUNT_D_M1          |       |       |            |          |
    |  31 |       BITMAP CONVERSION TO ROWIDS      |                            |   199K|  7775K|   118   (0)| 00:00:02 |
    |  32 |        BITMAP INDEX FULL SCAN          | W_GL_ACCOUNT_D_M10         |       |       |            |          |
    |  33 |      INDEX FAST FULL SCAN              | W_GL_ACCOUNT_D_M18         |   199K|  7775K|   733   (1)| 00:00:09 |
    Predicate Information (identified by operation id):
       2 - access("T91397"."ROW_WID"="ITEM_1")
       6 - filter("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
       7 - access(ROWID=ROWID)
       9 - access("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
      12 - access("T91573"."BALANCE_DT_WID"="C0")
      19 - filter("T111515"."COMPANY_FLG"='Y')
      20 - access("T111515"."ORG_NUM"='02000')
      21 - access("T91573"."COMPANY_ORG_WID"="T111515"."ROW_WID")
      25 - access("T91573"."BALANCE_DT_WID"="C0")
      27 - access(ROWID=ROWID)
      28 - access(ROWID=ROWID)
    Note
       - star transformation used for this statement
    Statistics
           1067  recursive calls
              9  db block gets
         417513  consistent gets
         296603  physical reads
           6708  redo size
          25220  bytes sent via SQL*Net to client
            520  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
             522  rows processed
    And here are the cursor details:
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  6s625d3821nq3, child number 0
    SELECT /*+ gather_plan_statistics */ SUM(T91573.ACTIVITY_GLOBAL1_AMT)
    AS c2,   SUM(   CASE     WHEN T91573.DB_CR_IND = 'CREDIT'     THEN
    T91573.ACTIVITY_GLOBAL1_AMT   END )                           AS c3,
    T91397.GL_ACCOUNT_NAME          AS c4,   T91397.GROUP_ACCOUNT_NUM
    AS c5,   SUM(T91573.BALANCE_GLOBAL1_AMT) AS c6,   T156337.ROW_WID
               AS c7 FROM W_MCAL_DAY_D T156337   /*
    Dim_W_MCAL_DAY_D_Fiscal_Day */   ,   W_INT_ORG_D T111515   /*
    Dim_W_INT_ORG_D_Company */   ,   W_GL_ACCOUNT_D T91397   /*
    Dim_W_GL_ACCOUNT_D */   ,   W_GL_BALANCE_F T91573   /*
    PLAN_TABLE_OUTPUT
    Fact_W_GL_BALANCE_F */ WHERE ( T91397.ROW_WID        =
    T91573.GL_ACCOUNT_WID AND T91573.COMPANY_ORG_WID    = T111515.ROW_WID
    AND T91573.BALANCE_DT_WID     = T156337.ROW_WID AND T111515.COMPANY_FLG
          = 'Y' AND T111515.ORG_NUM           = '02000' AND
    T156337.MCAL_PER_NAME_QTR = '2010 Q 1' ) GROUP BY
    T91397.GL_ACCOUNT_NAME,   T91397.GROUP_ACCOUNT_NUM,   T156337.ROW_WID
    Plan hash value: 3262111942
    PLAN_TABLE_OUTPUT
    | Id  | Operation                              | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem| Used-Mem |
    |   0 | SELECT STATEMENT                       |                            |   1 |        |    522 |00:51:34.16 |     424K|    111K|      2 |       |       |          |
    |   1 |  HASH GROUP BY                         |                            |   1 |   7882 |    522 |00:51:34.16 |     424K|    111K|      2 |   748K|   748K| 1416K (0)|
    |*  2 |   HASH JOIN                            |                            |   1 |   7882 |   5127 |00:51:34.00 |     424K|    111K|      2 |  1035K|  1035K| 1561K (0)|
    |   3 |    VIEW                                | VW_GBC_13                  |   1 |   7837 |   5127 |00:51:32.65 |     423K|    111K|      2 |       |       |          |
    |   4 |     TEMP TABLE TRANSFORMATION          |                            |   1 |        |   5127 |00:51:32.64 |     423K|    111K|      2 |       |       |          |
    |   5 |      LOAD AS SELECT                    |                            |   1 |        |      0 |00:00:00.09 |     188 |      0 |      2 |   269K|   269K|  269K (0)|
    |*  6 |       VIEW                             | index$_join$_114           |   1 |    572 |    724 |00:00:00.01 |     183 |      0 |      0 |       |       |          |
    |*  7 |        HASH JOIN                       |                            |   1 |        |    724 |00:00:00.01 |     183 |      0 |      0 |  1011K|  1011K| 1573K (0)|
    |   8 |         BITMAP CONVERSION TO ROWIDS    |                            |   1 |    572 |    724 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
    |*  9 |          BITMAP INDEX SINGLE VALUE     | W_MCAL_DAY_D_F46           |   1 |        |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
    |  10 |         INDEX FAST FULL SCAN           | W_MCAL_DAY_D_P1            |   1 |    572 |  64822 |00:00:00.06 |     180 |      0 |      0 |       |       |          |
    |  11 |      HASH GROUP BY                     |                            |   1 |   7837 |   5127 |00:51:32.54 |     423K|    111K|      0 |  1168K|  1038K| 2598K (0)|
    |* 12 |       HASH JOIN                        |                            |   1 |  26186 |   3267K|03:18:27.02 |     423K|    111K|      0 |  1236K|  1236K| 1248K (0)|
    |  13 |        TABLE ACCESS FULL               | SYS_TEMP_0FD9D73B3_F97A325 |   1 |    572 |    724 |00:00:00.02 |       7 |      2 |      0 |       |       |          |
    |  14 |        TABLE ACCESS BY INDEX ROWID     | W_GL_BALANCE_F             |   1 |  26186 |   3267K|03:18:12.81 |     423K|    111K|      0 |       |       |          |
    |  15 |         BITMAP CONVERSION TO ROWIDS    |                            |   1 |        |   3267K|00:00:06.29 |   16142 |   1421 |      0 |       |       |          |
    |  16 |          BITMAP AND                    |                            |   1 |        |     74 |00:00:03.06 |   16142 |   1421 |      0 |       |       |          |
    |  17 |           BITMAP MERGE                 |                            |   1 |        |     83 |00:00:00.08 |     393 |      0 |      0 |  1024K|   512K| 2754K (0)|
    |  18 |            BITMAP KEY ITERATION        |                            |   1 |        |    764 |00:00:00.01 |     393 |      0 |      0 |       |       |          |
    |* 19 |             TABLE ACCESS BY INDEX ROWID| W_INT_ORG_D                |   1 |      2 |      2 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
    |* 20 |              INDEX RANGE SCAN          | W_INT_ORG_ORG_NUM          |   1 |      2 |      2 |00:00:00.01 |       1 |      0 |      0 |       |       |          |
    |* 21 |             BITMAP INDEX RANGE SCAN    | W_GL_BALANCE_F_F4          |   2 |        |    764 |00:00:00.01 |     390 |      0 |      0 |       |       |          |
    |  22 |           BITMAP MERGE                 |                            |   1 |        |    210 |00:00:03.12 |   15749 |   1421 |      0 |    57M|  7389K|   17M (3)|
    |  23 |            BITMAP KEY ITERATION        |                            |   4 |        |  16405 |00:00:15.36 |   15749 |   1421 |      0 |       |       |          |
    |  24 |             TABLE ACCESS FULL          | SYS_TEMP_0FD9D73B3_F97A325 |   4 |    572 |   2896 |00:00:00.05 |      16 |      6 |      0 |       |       |          |
    |* 25 |             BITMAP INDEX RANGE SCAN    | W_GL_BALANCE_F_F1          |2896 |        |  16405 |00:00:24.99 |   15733 |   1415 |      0 |       |       |          |
    |  26 |    VIEW                                | index$_join$_003           |   1 |    199K|    199K|00:00:02.50 |     737 |      1 |      0 |       |       |          |
    |* 27 |     HASH JOIN                          |                            |   1 |        |    199K|00:00:02.18 |     737 |      1 |      0 |    14M|  2306K|   17M (0)|
    |* 28 |      HASH JOIN                         |                            |   1 |        |    199K|00:00:01.94 |     144 |      1 |      0 |    10M|  2639K|   13M (0)|
    |  29 |       BITMAP CONVERSION TO ROWIDS      |                            |   1 |    199K|    199K|00:00:00.19 |      26 |      0 |      0 |       |       |          |
    |  30 |        BITMAP INDEX FULL SCAN          | W_GL_ACCOUNT_D_M1          |   1 |        |     93 |00:00:00.01 |      26 |      0 |      0 |       |       |          |
    |  31 |       BITMAP CONVERSION TO ROWIDS      |                            |   1 |    199K|    199K|00:00:01.05 |     118 |      1 |      0 |       |       |          |
    |  32 |        BITMAP INDEX FULL SCAN          | W_GL_ACCOUNT_D_M10         |   1 |        |   5791 |00:00:00.01 |     118 |      1 |      0 |       |       |          |
    |  33 |      INDEX FAST FULL SCAN              | W_GL_ACCOUNT_D_M18         |   1 |    199K|    199K|00:00:00.19 |     593 |      0 |      0 |       |       |          |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       2 - access("T91397"."ROW_WID"="ITEM_1")
       6 - filter("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
       7 - access(ROWID=ROWID)
       9 - access("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
      12 - access("T91573"."BALANCE_DT_WID"="C0")
      19 - filter("T111515"."COMPANY_FLG"='Y')
      20 - access("T111515"."ORG_NUM"='02000')
      21 - access("T91573"."COMPANY_ORG_WID"="T111515"."ROW_WID")
      25 - access("T91573"."BALANCE_DT_WID"="C0")
      27 - access(ROWID=ROWID)
      28 - access(ROWID=ROWID)
    PLAN_TABLE_OUTPUT
    Note
       - star transformation used for this statement
    78 rows selected.
    Can anyone suggest a way to improve the performance? Or even hint at a good place for me to start looking?
    Please let me know if there is any additional information I can give.
    -Joe
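    One concrete clue in the row-source statistics above is step 12: the optimizer estimates 26,186 rows (E-Rows) but 3,267K actually arrive (A-Rows), a misestimate of roughly 125x that can easily push it into an unsuitable access path on the fact table. A hedged first step is to refresh the fact table's optimizer statistics (including its bitmap indexes) and re-check the plan; the schema name below is an assumption:

```sql
-- Sketch only: regather statistics on the fact table so the optimizer's
-- cardinality estimates improve. MY_DW is an assumed schema name.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MY_DW',
    tabname          => 'W_GL_BALANCE_F',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);   -- also refresh the index statistics
END;
/
```

    If the estimates stay badly wrong after regathering, the next things to look at are correlated filter columns (extended statistics) and whether row-by-row table access via the bitmap path is the right shape for a 3-million-row aggregation at all.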

  • Tuning a SQL query with 30 million records

    Hi Friends,
    I have a query which takes around 25 to 30 minutes to retrieve 9 million records.
    Oracle version=11.2.0.2
    OS=Solaris 10 64bit
    query details
    CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
    AS 
    SELECT  A."ORDERID", A."USERORDERID", A."ORDERSIDE", A."ORDERTYPE",
              A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
              A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
              A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
              A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
              A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
              A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
              A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
              A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
              A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
              A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
              A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
              A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
              A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
              A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
              a.lastExecutionRole, a.MDEntryID, a.PegOffset,
              a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
              A.ISPEX, A.CLEARINGHANDLING, B.qsid
        FROM  tibex_Order A,
              tibex_Participant b
        WHERE a.participantID = b.participantID
      AND (A.MessageSequence, A.OrderID) IN (
            SELECT  MAX(C.MessageSequence), C.OrderID
              FROM  tibex_Order C
              WHERE LastInstRejectCode = 'OK'
              GROUP BY C.OrderID)
      AND a.OrderStatus IN (
            SELECT OrderStatus
              FROM  tibex_orderStatusEnum
              WHERE ShortDesc IN (
                      'ORD_OPEN', 'ORD_EXPIRE', 'ORD_CANCEL', 'ORD_FILLED', 'ORD_CREATE', 'ORD_PENDAMD', 'ORD_PENDCAN'))
      UNION ALL
      SELECT  A.ORDERID, A.USERORDERID, A.ORDERSIDE, A.ORDERTYPE,
              A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
              A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
              A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
              A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
              A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
              A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
              A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
              A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
              A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
              A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
              A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
              A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
              A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
              A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
              a.lastExecutionRole, A.MDEntryID, a.PegOffset,
              a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
              A.ISPEX, A.CLEARINGHANDLING, B.qsid
        FROM  tibex_Order A,
              tibex_Participant b
        WHERE a.participantID = b.participantID
      AND orderstatus IN (
            SELECT  orderstatus
              FROM  tibex_orderStatusEnum
              WHERE ShortDesc IN ('ORD_REJECT'))
      AND 1 IN (
              SELECT COUNT(*)
                FROM tibex_order c
                WHERE c.orderid = a.orderid
                  AND c.instrumentID = a.instrumentID)
    /
    I tried modifying the query; it ran quicker (6 minutes), but it did not return the same results. Can somebody check where I am going wrong?
    CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
    AS   
    WITH REJ AS (
    SELECT ROWID RID
    FROM   TIBEX_ORDER
    WHERE  ORDERSTATUS = (SELECT ORDERSTATUS
                          FROM   TIBEX_ORDERSTATUSENUM
                          WHERE  SHORTDESC = 'ORD_REJECT')
    ),
    REJ1 AS (
    SELECT ROWID RID
    FROM   TIBEX_ORDER
    WHERE  ORDERSTATUS NOT IN (SELECT ORDERSTATUS
                               FROM   TIBEX_ORDERSTATUSENUM
                               WHERE  SHORTDESC = 'ORD_NOTFND'
                               OR     SHORTDESC = 'ORD_REJECT')
    )
    SELECT O.*,
           P.QSID
    FROM   TIBEX_ORDER O,
           TIBEX_PARTICIPANT P
    WHERE  O.PARTICIPANTID = P.PARTICIPANTID
    AND    O.ROWID IN (
                       SELECT RID
                       FROM   (
                               SELECT   ROWID RID,
                                        ORDERSTATUS,
                                        RANK () OVER (PARTITION BY ORDERID ORDER BY MESSAGESEQUENCE ASC) R
                               FROM     TIBEX_ORDER
                              )
                       WHERE  R = 1
                       AND    RID IN (SELECT RID FROM REJ)
                      )
    UNION ALL
    SELECT O.*,
           P.QSID
    FROM   TIBEX_ORDER O,
           TIBEX_PARTICIPANT P
    WHERE  O.PARTICIPANTID = P.PARTICIPANTID
    AND    O.ROWID IN (
                       SELECT RID
                       FROM   (
                               SELECT   ROWID RID,
                                        ORDERSTATUS,
                                        RANK () OVER (PARTITION BY ORDERID ORDER BY MESSAGESEQUENCE DESC) R
                               FROM     TIBEX_ORDER
                              )
                       WHERE  R = 1
                       AND    RID IN (SELECT RID FROM REJ1)
                      );
    Regards
    NM
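    The first branch of the original view finds the row with the highest MessageSequence per OrderID by re-aggregating the same 30-million-row table in an IN subquery. One common single-pass alternative (a sketch only - it must be verified to return exactly the same rows as the original view before being adopted) ranks the rows inside one scan:

```sql
-- Sketch only: "latest MessageSequence per OrderID" computed in a single
-- pass over tibex_Order, instead of a second aggregate scan of it.
SELECT o.*, p.qsid
FROM  (SELECT a.*,
              ROW_NUMBER() OVER (PARTITION BY a.OrderID
                                 ORDER BY a.MessageSequence DESC) AS rn
       FROM   tibex_Order a
       WHERE  a.LastInstRejectCode = 'OK') o
JOIN   tibex_Participant p
       ON p.participantID = o.participantID
WHERE  o.rn = 1
AND    o.OrderStatus IN (2, 4, 5, 6, 1, 9, 10);  -- literal status codes
                                                 -- resolved from tibex_orderStatusEnum
```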

    Hi Satish,
    CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
    (ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
    BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID, PRICETYPE,
    PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL, DISCLOSEDQTY,
    REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE, ACCOUNTNO,
    CLEARINGAGENCY, LASTINSTRESULT, LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE,
    TIMESTAMP, QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE,
    LASTEXECQTY, LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY, STOPPRICE,
    LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO, LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
    BOOKTIMESTAMP, PARTICIPANTIDMM, MARKETSTATE, PARTNEREXID, LASTEXECSETTLEMENTCYCLE,
    LASTEXECPOSTTRADEVENUETYPE, PRICELEVELPOSITION, PREVREFERENCEID, EXPIRYTIMESTAMP, MATCHTYPE,
    LASTEXECUTIONROLE, MDENTRYID, PEGOFFSET, HALTREASON, COMPARISONPRICE,
    ENTEREDPRICETYPE, ISPEX, CLEARINGHANDLING, QSID)
    AS
    SELECT  A."ORDERID", A."USERORDERID", A."ORDERSIDE", A."ORDERTYPE",
              A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
              A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
              A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
              A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
              A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
              A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
              A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
              A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
              A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
              A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
              A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
              A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
              A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
              A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
              a.lastExecutionRole, a.MDEntryID, a.PegOffset,
              a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
              A.ISPEX, A.CLEARINGHANDLING, B.qsid
        FROM  tibex_Order A,
              tibex_Participant b
        WHERE a.participantID = b.participantID
          AND (A.MessageSequence, A.OrderID) IN ( SELECT MAX (C.MessageSequence), C.OrderID
               FROM tibex_Order C
                 WHERE c.LastInstRejectCode = 'OK'
                 and a.OrderID=c.OrderID
                   GROUP BY C.OrderID)
          AND a.OrderStatus IN (2,4,5,6,1,9,10)
      UNION ALL
      SELECT  A.ORDERID, A.USERORDERID, A.ORDERSIDE, A.ORDERTYPE,
              A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
              A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
              A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
              A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
              A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
              A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
              A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
              A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
              A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
              A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
              A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
              A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
              A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
              A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
              a.lastExecutionRole, A.MDEntryID, a.PegOffset,
              a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
              A.ISPEX, A.CLEARINGHANDLING, B.qsid
        FROM  tibex_Order A,
              tibex_Participant b
        WHERE a.participantID = b.participantID
          AND orderstatus=3
          AND 1 IN (
                  SELECT count(*)
                    FROM tibex_order c
                    WHERE c.orderid=a.orderid
                       AND c.instrumentID=a.instrumentID
                   );
    select * from TIBEX_ORDERSBYQSIDVIEW where participantid='NITE';
    Current SQL using Temp Segment and Look for Column TEMPSEG_SIZE_MB
           SID TIME                OPERATION                 ESIZE        MEM    MAX MEM       PASS TEMPSEG_SIZE_MB
           183 11/10/2011:13:38:44 HASH-JOIN                    43         43       1556          1            1024
           183 11/10/2011:13:38:44 GROUP BY (HASH)            2043       2072       2072          0            4541
    Edited by: NM on 11-Oct-2011 04:38

  • Moving 80 million records from the Conversion database to the System Test database (just one transaction table) is taking too long

    Hello Friends,
    Background: I am working as a conversion manager. We move the data from Oracle to SQL Server using SSMA, apply the conversion logic, and then move the data through System Test, UAT, and Production.
    Scenario:
    Moving 80 million records from the Conversion database to the System Test database (just one transaction table) is taking too long. Both databases are on the same server.
    My questions are:
    What is the best option?
    SSIS is very slow, taking 17 hours (it sometimes gets stuck and blocks other processing).
    My own script (a stored procedure) takes only 1 hour 40 minutes. Is there a better process to speed this up, and why does SSIS take so long?
    When we move the data using SSIS, does it commit after a particular row count, or does SQL Server commit all the records together after writing to the transaction log?
    Thanks
    Karthikeyan Jothi
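    A hedged T-SQL sketch of the commit-per-batch pattern the poster is asking about. The table and key names (dbo.Conversion_Txn, dbo.SystemTest_Txn, txn_id) are invented for illustration; in SSIS itself, the OLE DB destination's fast-load "Maximum insert commit size" setting controls how often it commits.

    ```sql
    -- Sketch only: copy rows in slices, committing after each slice so the
    -- transaction log stays small. Names and batch size are illustrative.
    DECLARE @batch INT = 500000, @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        BEGIN TRANSACTION;

        -- Copy the next slice of rows not yet present in the target.
        INSERT INTO dbo.SystemTest_Txn WITH (TABLOCK)
        SELECT TOP (@batch) s.*
        FROM dbo.Conversion_Txn AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.SystemTest_Txn AS t
                          WHERE t.txn_id = s.txn_id)
        ORDER BY s.txn_id;

        SET @rows = @@ROWCOUNT;   -- 0 when the source is exhausted
        COMMIT TRANSACTION;       -- one small commit per batch
    END;
    ```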

    http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
    Processing hundreds of millions of records can be done in less than an hour.
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • SQL Query to fetch records from tables which have 75+ million records

    Hi,
    I have the explain plan for a sql stmt.Require your suggestions to improve this.
    PLAN_TABLE_OUTPUT
    | Id  | Operation                            | Name                         | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT                     |                              |   340 |   175K| 19075 |
    |   1 |  TEMP TABLE TRANSFORMATION           |                              |       |       |       |
    |   2 |   LOAD AS SELECT                     |                              |       |       |       |
    |   3 |    SORT GROUP BY                     |                              |    32M|  1183M|   799K|
    |   4 |     TABLE ACCESS FULL                | CLM_DETAIL_PRESTG            |   135M|  4911M|   464K|
    |   5 |   LOAD AS SELECT                     |                              |       |       |       |
    |   6 |    TABLE ACCESS FULL                 | CLM_HEADER_PRESTG            |     1 |   274 |   246K|
    |   7 |   LOAD AS SELECT                     |                              |       |       |       |
    |   8 |    SORT UNIQUE                       |                              |   744K|    85M|  8100 |
    |   9 |     TABLE ACCESS FULL                | DAILY_PROV_PRESTG            |   744K|    85M|  1007 |
    |  10 |   UNION-ALL                          |                              |       |       |       |
    |  11 |    SORT UNIQUE                       |                              |   177 | 97350 |  9539 |
    |  12 |     HASH JOIN                        |                              |   177 | 97350 |  9538 |
    |  13 |      HASH JOIN OUTER                 |                              |     3 |  1518 |  9533 |
    |  14 |       HASH JOIN                      |                              |     1 |   391 |  8966 |
    |  15 |        TABLE ACCESS BY INDEX ROWID   | CLM_DETAIL_PRESTG            |     1 |    27 |     3 |
    |  16 |         NESTED LOOPS                 |                              |     1 |   361 |    10 |
    |  17 |          NESTED LOOPS OUTER          |                              |     1 |   334 |     7 |
    |  18 |           NESTED LOOPS OUTER         |                              |     1 |   291 |     4 |
    |  19 |            VIEW                      |                              |     1 |   259 |     2 |
    |  20 |             TABLE ACCESS FULL        | SYS_TEMP_0FD9D66C9_DA2D01AD  |     1 |   269 |     2 |
    |  21 |            INDEX RANGE SCAN          | CLM_PAYMNT_CLMEXT_PRESTG_IDX |     1 |    32 |     2 |
    |  22 |           TABLE ACCESS BY INDEX ROWID| CLM_PAYMNT_CHKEXT_PRESTG     |     1 |    43 |     3 |
    |  23 |            INDEX RANGE SCAN          | CLM_PAYMNT_CHKEXT_PRESTG_IDX |     1 |       |     2 |
    |  24 |          INDEX RANGE SCAN            | CLM_DETAIL_PRESTG_IDX        |     6 |       |     2 |
    |  25 |        VIEW                          |                              |    32M|   934M|  8235 |
    |  26 |         TABLE ACCESS FULL            | SYS_TEMP_0FD9D66C8_DA2D01AD  |    32M|   934M|  8235 |
    |  27 |       VIEW                           |                              |   744K|    81M|   550 |
    |  28 |        TABLE ACCESS FULL             | SYS_TEMP_0FD9D66CA_DA2D01AD  |   744K|    81M|   550 |
    |  29 |      TABLE ACCESS FULL               | CCP_MBRSHP_XREF              |  5288 |   227K|     5 |
    |  30 |    SORT UNIQUE                       |                              |   163 | 82804 |  9536 |
    |  31 |     HASH JOIN                        |                              |   163 | 82804 |  9535 |
    |  32 |      HASH JOIN OUTER                 |                              |     3 |  1437 |  9530 |
    |  33 |       HASH JOIN                      |                              |     1 |   364 |  8963 |
    |  34 |        NESTED LOOPS OUTER            |                              |     1 |   334 |     7 |
    |  35 |         NESTED LOOPS OUTER           |                              |     1 |   291 |     4 |
    |  36 |          VIEW                        |                              |     1 |   259 |     2 |
    |  37 |           TABLE ACCESS FULL          | SYS_TEMP_0FD9D66C9_DA2D01AD  |     1 |   269 |     2 |
    |  38 |          INDEX RANGE SCAN            | CLM_PAYMNT_CLMEXT_PRESTG_IDX |     1 |    32 |     2 |
    |  39 |         TABLE ACCESS BY INDEX ROWID  | CLM_PAYMNT_CHKEXT_PRESTG     |     1 |    43 |     3 |
    |  40 |          INDEX RANGE SCAN            | CLM_PAYMNT_CHKEXT_PRESTG_IDX |     1 |       |     2 |
    |  41 |        VIEW                          |                              |    32M|   934M|  8235 |
    |  42 |         TABLE ACCESS FULL            | SYS_TEMP_0FD9D66C8_DA2D01AD  |    32M|   934M|  8235 |
    |  43 |       VIEW                           |                              |   744K|    81M|   550 |
    |  44 |        TABLE ACCESS FULL             | SYS_TEMP_0FD9D66CA_DA2D01AD  |   744K|    81M|   550 |
    |  45 |      TABLE ACCESS FULL               | CCP_MBRSHP_XREF              |  5288 |   149K|     5 |
    The CLM_DETAIL_PRESTG table has 100 million records and the CLM_HEADER_PRESTG table has 75 million records.
    Any suggestions on how to fetch records from tables of this size would help.
    Regards,
    Narayan

    WITH CLAIM_DTL
         AS (  SELECT
                      ICN_NUM,
    MIN (FIRST_SRVC_DT) AS FIRST_SRVC_DT,
    MAX (LAST_SRVC_DT) AS LAST_SRVC_DT,
    MIN (PLC_OF_SRVC_CD) AS PLC_OF_SRVC_CD
    FROM CCP_STG.CLM_DETAIL_PRESTG  CD WHERE ACT_CD <>'D'
    GROUP BY ICN_NUM),
    CLAIM_HDR
         AS (SELECT
                    ICN_NUM,
    SBCR_ID,
    MBR_ID,
    MBR_FIRST_NAME,
    MBR_MI,
    MBR_LAST_NAME,
    MBR_BIRTH_DATE,
    GENDER_TYPE_CD,
    SBCR_RLTNSHP_TYPE_CD,
    SBCR_FIRST_NAME,
    SBCR_MI,
    SBCR_LAST_NAME,
    SBCR_ADDR_LINE_1,
    SBCR_ADDR_LINE2,
    SBCR_ADDR_CITY,
    SBCR_ADDR_STATE,
    SBCR_ZIP_CD,
    PRVDR_NUM,
    CLM_PRCSSD_DT,
    CLM_TYPE_CLASS_CD,
    AUTHO_NUM,
    TOT_BILLED_AMT,
    HCFA_DRG_TYPE_CD,
    FCLTY_ADMIT_DT,
    ADMIT_TYPE,
    DSCHRG_STATUS_CD,
    FILE_BILLING_NPI,
    CLAIM_LOCATION_CD,
    CLM_RELATED_ICN_1,
    SBCR_ID||0
    || MBR_ID
    || GENDER_TYPE_CD
    || SBCR_RLTNSHP_TYPE_CD
    || MBR_BIRTH_DATE
    AS MBR_ENROLL_ID,
    SUBSCR_INSGRP_NM ,
    CAC,
    PRVDR_PTNT_ACC_ID,
    BILL_TYPE,
      PAYEE_ASSGN_CODE,
    CREAT_RUN_CYC_EXEC_SK,
    PRESTG_INSRT_DT
    FROM CCP_STG.CLM_HEADER_PRESTG P WHERE ACT_CD <>'D' AND SUBSTR(CLM_PRCSS_TYPE_CD,4,1) NOT IN  ('1','2','3','4','5','6')  ),
    PROV AS ( SELECT DISTINCT
    PROV_ID,
    PROV_FST_NM,
    PROV_MD_NM,
    PROV_LST_NM,
    PROV_BILL_ADR1,
    PROV_BILL_CITY,
    PROV_BILL_STATE,
    PROV_BILL_ZIP,
    CASE WHEN PROV_SEC_ID_QL='E' THEN PROV_SEC_ID
    ELSE NULL
    END AS PROV_SEC_ID,
    PROV_ADR1,
    PROV_CITY,
    PROV_STATE,
    PROV_ZIP
    FROM CCP_STG.DAILY_PROV_PRESTG),
    MBR_XREF AS (SELECT SUBSTR(MBR_ENROLL_ID,1,17)||DECODE ((SUBSTR(MBR_ENROLL_ID,18,1)),'E','1','S','2','D','3')||SUBSTR(MBR_ENROLL_ID,19) AS MBR_ENROLLL_ID,
      NEW_MBR_FLG
    FROM CCP_STG.CCP_MBRSHP_XREF)
    SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
    CLAIM_HDR.SBCR_ID AS SBCR_ID,
    CLAIM_HDR.MBR_ID AS MBR_ID,
    CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
    CLAIM_HDR.MBR_MI AS MBR_MI,
    CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
    CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
    CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
    CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
    CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
    CLAIM_HDR.SBCR_MI AS SBCR_MI,
    CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
    CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
    CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
    CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
    CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
    CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
    CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
    CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
    CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
    CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
    CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
    CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
    CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
    CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
    CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
    CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
    CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
    CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
    CLAIM_HDR.SUBSCR_INSGRP_NM,
    CLAIM_HDR.CAC,
    CLAIM_HDR.PRVDR_PTNT_ACC_ID,
    CLAIM_HDR.BILL_TYPE,
    CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
    CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
    CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
    PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
    PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
    PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
    PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
    PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
    PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
    PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
    PROV.PROV_SEC_ID AS BILL_PROV_EIN,
    PROV.PROV_ID AS SERV_FAC_ID    ,
    PROV.PROV_ADR1 AS SERV_FAC_ADDR1          ,
    PROV.PROV_CITY AS SERV_FAC_CITY ,
    PROV.PROV_STATE AS SERV_FAC_STATE          ,
    PROV.PROV_ZIP AS     SERV_FAC_ZIP  ,
    CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
    CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
    CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
    CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
      CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
    CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK
      FROM CLAIM_DTL,(select * FROM CCP_STG.CLM_DETAIL_PRESTG WHERE ACT_CD <>'D') CLM_DETAIL_PRESTG, CLAIM_HDR,CCP_STG.MBR_XREF,PROV,CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
    WHERE    
    CLAIM_HDR.ICN_NUM = CLM_DETAIL_PRESTG.ICN_NUM
    AND       CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
    AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
    AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
    AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
    AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
    AND CLM_DETAIL_PRESTG.FIRST_SRVC_DT >= 20110101
    AND MBR_XREF.NEW_MBR_FLG = 'Y'
    AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
    AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0
    UNION ALL
    SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
    CLAIM_HDR.SBCR_ID AS SBCR_ID,
    CLAIM_HDR.MBR_ID AS MBR_ID,
    CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
    CLAIM_HDR.MBR_MI AS MBR_MI,
    CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
    CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
    CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
    CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
    CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
    CLAIM_HDR.SBCR_MI AS SBCR_MI,
    CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
    CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
    CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
    CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
    CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
    CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
    CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
    CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
    CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
    CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
    CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
    CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
    CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
    CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
    CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
    CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
    CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
    CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
    CLAIM_HDR.SUBSCR_INSGRP_NM,
    CLAIM_HDR.CAC,
    CLAIM_HDR.PRVDR_PTNT_ACC_ID,
    CLAIM_HDR.BILL_TYPE,
    CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
    CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
    CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
    PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
    PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
    PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
    PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
    PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
    PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
    PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
    PROV.PROV_SEC_ID AS BILL_PROV_EIN,
    PROV.PROV_ID AS SERV_FAC_ID    ,
    PROV.PROV_ADR1 AS SERV_FAC_ADDR1          ,
    PROV.PROV_CITY AS SERV_FAC_CITY ,
    PROV.PROV_STATE AS SERV_FAC_STATE          ,
    PROV.PROV_ZIP AS     SERV_FAC_ZIP  ,
    CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
    CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
    CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
    CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
    CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
    CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK  
      FROM CLAIM_DTL, CLAIM_HDR,MBR_XREF,PROV,CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
    WHERE CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
    AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
    AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
    AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
    AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
    -- AND TRUNC(CLAIM_HDR.PRESTG_INSRT_DT) = TRUNC(SYSDATE)
    AND CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK = 123638.000000000000000
    AND MBR_XREF.NEW_MBR_FLG = 'N'
    AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
    AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0;

  • Planfunction in IP or with BW modelling - case with 15 million records

    Hi,
    we need to implement a simple planning function (qty * price) which has to be executed for 15 million records at a time (the qty of 15 million records multiplied by an average price calculated at a higher level). I'd still like to implement this with a simple FOX formula but fear the performance, given the number of records. Does anyone have experience with this volume? Would you suggest doing this within IP or using BW modelling? The maximum accepted lead time for this planning function is 24 hours.
    The planning function is expected to run in batch or background mode, but should be triggered from an IP input query rather than, for example, RSPC...
    please advise.
    D

    Hi Dries,
    using BI IP you should definitely do a partition via planning sequence in a process chain, cf.
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/45/946677f8fb0cf2e10000000a114a6b/frameset.htm
    Planning functions load the requested data into main memory; with 15 million records you will have a problem. In addition, it is not a good idea to employ only one work process for the whole workload (a planning function uses only one work process). So partition the problem to be able to use parallelization.
    Process chains can be triggered via an API, cf. function group RSPC_API. So you can easily start a process chain via a planning function.
    Regards,
    Gregor

  • Table with 200 million records

    Dear all,
    I have to create a table which will hold 200 million records, and I have to produce monthly reports from this data.
    The performance concerns me greatly; does anyone have any suggestions?
    Thanks in advance.

    Hi,
    I have a situation like yours.
    Each month you need to create a new partition; for the next year, you add further partitions.
    For example, you have a table
    SQL> CREATE TABLE sales99_cpart(
           sale_id NUMBER NOT NULL,
           sale_date DATE,
           prod_id NUMBER,
           qty NUMBER)
         PARTITION BY RANGE(sale_date)
         SUBPARTITION BY HASH(prod_id) SUBPARTITIONS 4
         STORE IN (data01,data02,data03,data04)
         (PARTITION cp1 VALUES LESS THAN('01-APR-1999'),
          PARTITION cp2 VALUES LESS THAN('01-JUL-1999'),
          PARTITION cp3 VALUES LESS THAN('01-OCT-1999'),
          PARTITION cp4 VALUES LESS THAN('01-JAN-2000'))
         /
    For the next year, add new partitions and subpartitions.
    Subpartitions behave like tables, and you can run parallel queries against them, which is very good for performance.
    You can partition the table by range on a date column and subpartition by hash on the call-center id.
    Next year, if you want to age out history, you can drop a single partition.
    The cost: Oracle Partitioning is an option of Oracle Enterprise Edition; it is not included by default.
    Nicolas.

  • Deleting 110 million records

    I have a table with 120 million records, of which only 10 million are useful. I want to delete the remaining 110 million records with a WHERE condition. I spoke to my DBA and he said this will take around two weeks or more, but I need it done quickly because it has been affecting our daily rollup process that generates alerts for a high-priority application.
    I want to delete based on this condition:
    delete from tabA where colA = 0;
    Any kind of help is highly appreciated.
    Oracle Version:11g

    >
    3.) insert /*+ append */ into taba select * from taba_temp;
    >
    That's the 'old' way that should be used ONLY if OP does not have the partitioning option licensed.
    >
    1.) create table taba_temp as select * from taba where cola != 0;
    >
    That 'temp' table should be created in the desired tablespace as a RANGE partitioned table with one partition: VALUES LESS THAN (MAXVALUE)
    Then step 3 can just do 'ALTER TABLE EXCHANGE PARTITION' to swap the data in. That is metadata only operation and takes a fraction of a second.
    No need to query the data again.
    DROP TABLE EMP_COPY;
    -- this is a copy of EMP and acts as the MAIN table that we want to keep
    CREATE TABLE EMP_COPY AS SELECT * FROM EMP;
    DROP TABLE EMP_TEMP;
    -- create a partitioned temp table with the same structure as the actual table
    -- we only want to keep emp records for deptno = 20 for this example
    CREATE TABLE EMP_TEMP
    PARTITION BY RANGE (empno)
    (PARTITION ALL_DATA VALUES LESS THAN (MAXVALUE))
    AS SELECT * FROM EMP_COPY WHERE DEPTNO = 20;
    -- truncate our 'real' table - very fast
    TRUNCATE TABLE EMP_COPY;
    -- swap in the 'deptno=20' data from the temp table - very fast
    ALTER TABLE EMP_TEMP EXCHANGE PARTITION ALL_DATA WITH TABLE EMP_COPY;

  • Deleting 5 million records (slowness issue)

    Hi guys ,
    we are trying to delete 5 million records with the following query; it is taking a long time (more than 2 hours).
    delete from <table_name> where date < condition_DT;
    FYI:
    * The table is partitioned
    * It has a primary key
    Please assist us with this.

    >
    we are trying to delete 5 million records with the following query; it is taking a long time (more than 2 hours).
    delete from <table_name> where date < condition_DT;
    FYI:
    * The table is partitioned
    * It has a primary key
    Please assist us with this.
    >
    Nothing much you can do.
    About the only alternatives are
    1) Create a new table that copies the records you want to keep, then drop the old table and rename the new one to the old name. If you are deleting most of the records, this is a good approach.
    2) Create a new table that copies the records you want to keep, then truncate the partitions of the old table and use partition exchange to put the data back.
    3) Delete the data in smaller batches of about 100K records each. You could do this by using a different date value in the WHERE clause: delete data < 2003, then delete data < 2004, and so on.
    4) If you want to delete all data in a partition, you can just truncate the partition. That is the approach to use if you partition by date and are trying to remove older data.
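    Option 3 above could be sketched in PL/SQL like this. The table and column names are placeholders standing in for the `<table_name>`/date condition from the question, and the 100K batch size is arbitrary:

    ```sql
    -- Sketch of option 3: delete in batches, committing between them so each
    -- transaction's undo stays small. Names and cutoff date are placeholders.
    BEGIN
        LOOP
            DELETE FROM your_table
            WHERE  your_date_col < DATE '2003-01-01'   -- example cutoff
            AND    ROWNUM <= 100000;                   -- limit the batch size

            EXIT WHEN SQL%ROWCOUNT = 0;   -- nothing left to delete
            COMMIT;
        END LOOP;
        COMMIT;
    END;
    /
    ```

    Note that batched deletes do more total work than a single delete; the trade-off is smaller undo usage and shorter individual transactions.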

  • Table has 80 million records - Performance impact if we stop archiving

    HI All,
    I have a table (Oracle 11g) with around 80 million records. Until now we did weekly archiving to maintain its size, but one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records with just a little performance tuning.
    Is that true? And what effect would there be on querying and insertion if the table holds 80 million rows and grows every day?
    Any comments welcomed.

    What is true is that the Oracle database can manage tables with billions of rows, but when talking about data size you should give the table size rather than the number of rows, because the table size is very different depending on whether the average row size is 50 bytes or 5K.
    As for performance impact, it depends on the queries that access this table: the more data the queries need to process and/or return as a result set, the greater the potential impact on their performance.
    You don't give enough input for a good answer. Ideally you should post the DDL statements that create this table and its indexes, and the SQL queries that use them.
    In some cases table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition and additional licensing).
    Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .
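    Since the poster is on 11g, here is a hedged illustration of what date-based partitioning for such a table might look like. All names are invented, and the Partitioning option must be separately licensed:

    ```sql
    -- Illustrative only: monthly interval partitioning on an 11g table.
    -- Oracle creates a new partition automatically as each month's data arrives.
    CREATE TABLE big_txn (
        txn_id    NUMBER        NOT NULL,
        txn_date  DATE          NOT NULL,
        payload   VARCHAR2(200)
    )
    PARTITION BY RANGE (txn_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (
        PARTITION p_initial VALUES LESS THAN (DATE '2012-01-01')
    );

    -- Old months can then be aged out as fast, metadata-only operations:
    -- ALTER TABLE big_txn DROP PARTITION FOR (DATE '2011-06-15');
    ```

    This only helps if the queries actually prune on the partition key, which is why the advice above to post the DDL and the queries matters.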

  • How can I update a particular column in a 7-million-record table, where many conditions apply?

    I am designing a table and loading its data from different tables via joins. The Status column can take about 16 different statuses derived from different tables; for each case there is a condition that determines which status appears in the column, so I need to write the query with 16 different CASE branches.
    My question is: what is the best way to write these cases so that all the conditions are satisfied and the data loads quickly? The source data comes mostly from big tables with about 7 million records, and with CASE logic the table is scanned once per case, about 16 times. How can I make this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which pull records from the 7-million-row table filtered to year 2013. It takes more than an hour to run. I am posting the part of the code which runs slowly, mainly
    the Status column logic.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN
    and z.Hosp_Ind = 'P'))
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physciains****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Can you please suggest how to fix or simplify this query?
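One possible simplification for a CASE like the one above: instead of re-running a separate correlated "TAX_ID IN (SELECT ...)" subquery against the tracking table for every bucket, join the tracking table once and let the CASE test plain columns. The sketch below uses Python's sqlite3 with an invented, heavily simplified schema (providers, tracking, objector) in place of the real SQS_Provider_Tracking and #HIHO_Records tables:

```python
import sqlite3

# Tiny invented stand-ins for the real tables; only the columns the
# CASE actually needs are modeled here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE providers (tax_id INTEGER, hosp_ind TEXT);
    CREATE TABLE tracking  (tin INTEGER, objector TEXT);
    INSERT INTO providers VALUES (1, 'H'), (2, 'P'), (3, 'H');
    INSERT INTO tracking  VALUES (1, 'Top Objector'), (2, 'Rejector');
""")

# One LEFT JOIN replaces the repeated correlated subqueries: each
# provider row is matched to its tracking entry once, and the CASE
# branches only inspect ordinary columns.
rows = cur.execute("""
    SELECT p.tax_id,
           CASE
               WHEN t.objector = 'Top Objector' AND p.hosp_ind = 'H'
                   THEN 'Top Objecting Systems'
               WHEN t.objector = 'Rejector' AND p.hosp_ind = 'P'
                   THEN 'Rejecting Physicians'
               ELSE 'Market Review/Preparing to Mail'
           END AS status
    FROM providers p
    LEFT JOIN tracking t ON t.tin = p.tax_id
    ORDER BY p.tax_id
""").fetchall()
print(rows)
```

The same idea carries over to SQL Server: one LEFT JOIN to SQS_Provider_Tracking (and one to the #HIHO_Records temp table), evaluated once per row, rather than a fresh subquery per CASE branch.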

  • Insert/select one million rows at a time from source to target table

    Hi,
    Oracle 10.2.0.4.0
    I am trying to insert around 10 million rows into table target from source as follows:
    INSERT /*+ APPEND NOLOGGING */ INTO target
    SELECT *
    FROM source f
    WHERE
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    There is a unique index on the target table on (col1, col2).
    I was having issues with undo, and now I am getting the following error with temp space:
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    I believe it would be easier if I did the insert one million rows at a time and committed after each batch.
    I appreciate any advice on this please.
    Thanks,
    Ashok

    902986 wrote:
    NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    I don't know if it has any bearing on the case, but is that WHERE clause on purpose or a typo? Should it be:
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.COL1 and f.col2 = m.col2);
    Anyway - how much of your data already exists in target compared to source?
    Do you have 10 million in source and very few in target, so most of source will be inserted into target?
    Or do you have 9 million already in target, so most of source will be filtered away and only few records inserted?
    And what is the explain plan for your statement?
    INSERT /*+ APPEND NOLOGGING */ INTO target
    SELECT *
    FROM source f
    WHERE
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    As your error has to do with TEMP, your statement might possibly try to do a lot of work in temp to materialize the resultset or parts of it to maybe use in a hash join before inserting.
    So perhaps you can work towards an explain plan that allows the database to do the inserts "along the way" rather than calculate the whole thing in temp first.
    That probably will go much slower (for example using nested loops for each row to check the exists), but that's a tradeoff - if you can't have sufficient TEMP then you may have to optimize for less usage of that resource at the expense of another resource ;-)
    Alternatively ask your DBA to allocate more room in TEMP tablespace. Or have the DBA check if there are other sessions using a lot of TEMP in which case maybe you just have to make sure your session is the only one using lots of TEMP at the time you execute.
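The batch-and-commit approach the original poster describes can be sketched as a loop: insert only rows not yet present in the target, cap each pass at a batch size, commit, and repeat until nothing is inserted. The sketch below uses Python's sqlite3 with tiny invented data (batch size 3 instead of one million), since the principle is the same:

```python
import sqlite3

# Invented miniature source/target tables mirroring the question's shape.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE source (col1 INTEGER, col2 INTEGER)")
cur.execute("CREATE TABLE target (col1 INTEGER, col2 INTEGER, UNIQUE (col1, col2))")
cur.executemany("INSERT INTO source VALUES (?, ?)", [(i, -i) for i in range(10)])
conn.commit()

BATCH = 3  # would be ~1,000,000 in the real Oracle job
total = 0
while True:
    # Copy only rows not yet in target, capped at BATCH per pass, so
    # each transaction's undo/temp footprint stays bounded.
    cur.execute("""
        INSERT INTO target
        SELECT f.col1, f.col2 FROM source f
        WHERE NOT EXISTS (SELECT 1 FROM target m
                          WHERE f.col1 = m.col1 AND f.col2 = m.col2)
        LIMIT ?
    """, (BATCH,))
    if cur.rowcount == 0:  # nothing left to copy
        break
    conn.commit()          # commit after every batch
    total += cur.rowcount
print(total)
```

In Oracle the same shape would typically be a PL/SQL loop, for example adding `AND ROWNUM <= 1000000` to the insert-select and committing per iteration. Note that after a direct-path (`/*+ APPEND */`) insert the session must commit before it can query the table again (ORA-12838), which a per-batch commit satisfies.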
