Performance tuning on a table where the WHERE clause contains only PK columns

Hi folks,
Here is the problem.
I have a table, named X, with primary key columns A and B.
I am running a packaged procedure that contains a SELECT statement to fetch a record from table X with a WHERE condition on columns A and B. The problem is that the package is taking far too long to execute.
Eventually, I ran the TKPROF utility on the trace file of that package execution.
The trace file shows that a huge amount of time is spent on the SELECT statement against table X, as I said above.
I am baffled as to why it takes that much time, even though the WHERE clause references only the primary key columns, and in the same order as they appear in the primary key definition.
I would be very grateful for any help with this one.
Cheers... PCZ

Also can you use preformatting as requested above, so
that we can read it?
[pre]Your preformatted
text[/pre]
Hi William,
Please find the requested output:
SELECT *
FROM
STTMS_CUST_ACCOUNT WHERE BRANCH_CODE=:B2 AND CUST_AC_NO=:B1
call     count       cpu    elapsed       disk      query    current        rows
Parse      180      0.00       0.04          0          0          0           0
Execute  56538      0.96       4.02          0          0         26           0
Fetch    56512   5581.98   19547.83         12 1134505869          6       56512
total   113230   5582.94   19551.89         12 1134505869         32       56512
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 42  (FLEXMIG)   (recursive depth: 1)
Rows     Row Source Operation
    325  TABLE ACCESS BY INDEX ROWID STTM_CUST_ACCOUNT (cr=6427406 pr=1 pw=0 time=327599336 us)
10064925   INDEX RANGE SCAN IND_DORMANCY (cr=46475 pr=0 pw=0 time=15740976 us)(object id 512984)
Rows     Execution Plan
      0  SELECT STATEMENT   MODE: ALL_ROWS
    325   TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
              'STTM_CUST_ACCOUNT' (TABLE)
10064925    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'IND_DORMANCY' (INDEX)
********************************************************************************
This is what I can do. I apologize if it still does not come out in proper format. This is the first time I am trying it. :)
Cheers.......PCZ
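
For what it's worth, the row source above shows an INDEX RANGE SCAN on IND_DORMANCY touching roughly 10 million index entries to return 325 rows, so the optimizer does not appear to be using the primary key index at all. A first diagnostic step could be to confirm which index enforces the primary key of STTM_CUST_ACCOUNT and, purely as a test, hint the statement towards it. This is only a sketch; the index name PK_STTM_CUST_ACCOUNT is an assumption and should be replaced with whatever the dictionary query returns:

-- Which index backs the primary key of STTM_CUST_ACCOUNT?
SELECT c.constraint_name, c.index_name, cc.column_name, cc.position
  FROM user_constraints  c
  JOIN user_cons_columns cc ON cc.constraint_name = c.constraint_name
 WHERE c.table_name      = 'STTM_CUST_ACCOUNT'
   AND c.constraint_type = 'P'
 ORDER BY cc.position;

-- As a test only: force the primary key index (assumed name) and compare the plan
SELECT /*+ INDEX(a PK_STTM_CUST_ACCOUNT) */ *
  FROM STTM_CUST_ACCOUNT a
 WHERE a.BRANCH_CODE = :B2
   AND a.CUST_AC_NO  = :B1;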

Similar Messages

  • Performance with dates in the where clause

    Performance with dates in the where clause
    CREATE TABLE TEST_DATA
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    2) From the execution plan, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for execution plans 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259
    259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND
    "FDATE"<=TRUNC(SYSDATE@!)+.999988425925925925925925925925925925925
    9)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 3:
    SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_dat
    e('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10
    23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND
    "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    create index t_indx on test_data(fdata);
    Did you mean fdate (ending in e)?
    Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC (fdate), then it might be used in Query 1, because the left operand of = is TRUNC (fdate).
    2) From the execution plan, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    That depends on what you mean by "better".
    If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE     fdate >= TRUNC (SYSDATE)
    AND     fdate <  TRUNC (SYSDATE) + 1
    (or replace SYSDATE with a TO_DATE expression, depending on the requirements).
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for execution plans 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Sorry, I can't.
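    To illustrate the point about Query 1 above, here is a minimal sketch (assuming the same TEST_DATA table; the index name is made up): a function-based index on TRUNC(fdate) can let the first query use an index range scan as well.
    -- Function-based index matching the expression used in Query 1
    CREATE INDEX t_trunc_indx ON test_data (TRUNC(fdate));
    -- Query 1 can then be resolved via the new index instead of a full table scan
    SELECT COUNT(*) FROM test_data WHERE TRUNC(fdate) = TRUNC(SYSDATE);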

  • Updating one table with mult. table where clause

    I'm having problems with my update statement. I want to update one table using a WHERE clause that references multiple tables. I'm not sure how to accomplish this. Here is what I have so far.
    update lawson.apvenmast a
    set vendor_status = 'I'
    where ((select * from apinvoice i
    where i.due_date <= TO_DATE('20011231', 'YYYYMMDD') and
    i.vendor = a.vendor)
    ((apvenmast.ven_class = 'INS') or
    (apvenmast.ven_class = 'REF')));
    Am I on the right track?
    thanks in advance for any help.
    Lisa Mears

    A lot is missing.
    where ((select * from apinvoice i
    A where clause should be like
    where <something> IN (select <something> from ...)
    ((apvenmast.ven_class = 'INS') or
    (apvenmast.ven_class = 'REF')));
    Where does this belong? There is no AND or OR with these two lines.
    Check your table aliases too.
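    For example, a corrected statement along the lines suggested above might look like this (a sketch only; whether the invoice test and the ven_class test should combine with AND or OR is an assumption about the intended business rule):
    UPDATE lawson.apvenmast a
       SET a.vendor_status = 'I'
     WHERE EXISTS (SELECT 1
                     FROM apinvoice i
                    WHERE i.vendor   = a.vendor
                      AND i.due_date <= TO_DATE('20011231', 'YYYYMMDD'))
       AND a.ven_class IN ('INS', 'REF');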

  • PL/SQL Evaluation problem of where clause in case of  NUMBER column type

    I found the following problem in Oracle® Database 2 Day Developer's Guide 11g Release 1 (11.1) B28843-04:
    The sole parameter of function eval_frequency is employee_id IN employees.employee_id%TYPE.
    An ORA-01422 exception occurs when the execution reaches the following select command
    SELECT e.hire_date
    INTO hire_date
    FROM employees e
    WHERE employee_id= e.employee_id;
    A possible cause of the error is that the type of employee_id is NUMBER while the employees.employee_id is NUMBER(6,0). The result of the selection is the same as if there were no WHERE clause at all.
    Everything worked fine when I declared a temporary variable of NUMBER(6,0) for storing the actual parameter of the function and used this variable in the where clause, but I consider this "solution" to be no solution.
    It is pointless to use a %TYPE parameter of a function for flexibility if I must degrade this flexibility with a fixed declaration of a temporary variable of the same type as the column in question.
    What is wrong?
    The Developer's Guide I used, the Oracle SQL Developer I used, or the PL/SQL version?

    Hi,
    Welcome to the forum!
    user8949829 wrote:
    A possible cause of the error is that the type of employee_id is NUMBER while the employees.employee_id is NUMBER(6,0). The result of the selection is the same as if there were no WHERE clause at all.
    I don't think so. The variable employee_id is defined as having the exact same type as the eponymous column. Even if it didn't, I believe Oracle will always implicitly convert between datatypes when possible, rounding if necessary. That may cause errors, but it isn't causing this error.
    No, the error has nothing to do with the data type. It has to do with the ambiguity of employee_id: is it a column, or is it a variable?
    The default is that it means the column name, so
    WHERE   employee_id = e.employee_id
    is equivalent to saying
    WHERE   e.employee_id = e.employee_id
    which isn't quite the same thing as not having a WHERE clause; rows with NULL employee_id would still be excluded, if there were any.
    I think it's best not to use variable names that are the same as column names. You could call the variable v_employee_id, or, since it's an IN-argument, in_employee_id.
    If you must use a variable that can be mistaken for a column, then qualify it with the name of the procedure, like this:
    WHERE   eval_frequency.employee_id = e.employee_id
    Everything worked fine when I declared a temporary variable of NUMBER(6,0) for storing the actual parameter of the function
    That makes sense. You probably gave that variable a name that couldn't be mistaken for a column in the table.
    Edited by: Frank Kulash on Jan 12, 2011 8:27 PM
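    As a minimal sketch of the fix suggested above (the body and return type are invented for illustration; the only point is the qualified reference in the WHERE clause):
    CREATE OR REPLACE FUNCTION eval_frequency (employee_id IN employees.employee_id%TYPE)
       RETURN DATE
    IS
       hire_date   employees.hire_date%TYPE;
    BEGIN
       SELECT e.hire_date
         INTO hire_date
         FROM employees e
        WHERE e.employee_id = eval_frequency.employee_id;  -- parameter qualified with the function name
       RETURN hire_date;
    END eval_frequency;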

  • Using multiple values in a where clause, for values only known at runtime

    Dear all
    I am creating a PL/SQL program which returns multiple rows of data, but only where they match a set of id values that a user has previously chosen. The id values are stored in an associative array and are chosen by the user in the preceding procedure at run time.
    I know all the table and column names in advance. The only things I don't know are the exact number of ids selected from the id column and what their values will be. This will only be known at runtime. When the procedure is run by the user it prints multiple rows of data to a web browser.
    I have been reading the following posting, which I understand to a large extent: Query for multiple value search. But I cannot seem to figure out how I would apply it to my work, as I am dealing with multiple rows and a cursor.
    The code as I have currently written it is wrong, because I get a "not found" error message in my web browser. I think the var_user_chosen_map_list_ids in the FOR cursor loop could be the problem. I am using the variable var_user_chosen_map_list_ids to store all the id values from my associative array as a string, which I modified from the code that vidyadhars posted in the other thread.
    Should I be creating an OPEN FOR ref cursor, and if so, where would I put my associative array into it? At the moment I take the values, turn them into a string, and the IN part of the WHERE clause holds the string, allowing the WHERE clause to read all the values from it. I would expect the WHERE clause to read everything in the string as one complete string of VARCHAR2 data, but this would not be the case if at least this part of the code were correct. The code is as follows:
    --Global variable section contains:
    var_user_chosen_map_list_ids VARCHAR2(32767);
    PROCEDURE PROCMAPSEARCH (par_user_chosen_map_list_ids PKG_ARR_MAPS.ARR_MAP_LIST)
    IS
    CURSOR cur_map_search (par_user_chosen_map_list_ids IN NUMBER)
    IS
    SELECT MI.map_date
           MT.map_title,
    FROM map_info MI,
         map_title MT,
    WHERE MI.map_title_id = MT.map_title_id
    AND MI.map_publication_id IN
    (var_user_chosen_map_list_ids);
    var_map_list_to_compare VARCHAR2(32767) := '';
    var_exe_imm_map VARCHAR2(32767);
    BEGIN
    FOR rec_user_chosen_map_list_ids IN 1 .. par_user_chosen_map_list_ids.count
    LOOP
       var_user_chosen_map_list_ids := var_user_chosen_map_list_ids ||
       '''' ||
       par_user_chosen_map_list_ids(rec_user_chosen_map_list_ids) ||
    END LOOP;
    var_user_chosen_map_list_ids := substr(var_user_chosen_map_list_ids,
                                            1,
                                            length(var_user_chosen_map_list_ids)-1);
    var_exe_imm_map := 'FOR rec_search_entered_details IN cur_map_search
    LOOP
    htp.print('Map date: ' || cur_map_search.map_date || ' Map title: ' || cur_map_search.map_title)
    END LOOP;';
    EXECUTE IMMEDIATE var_exe_imm_map;
    END PROCMAPSEARCH;
    I would be grateful of any comments or advice.
    Kind regards
    Tim

    I would like to thank everyone for their kind help.
    I have now successfully converted my code for use with dynamic SQL. Part of my problem was getting the concept a little confused, especially as I could get everything to work in a static cursor, including variables, as long as they did not contain multiple values. I have learnt that dynamic SQL runs the complete select statement at runtime. However, even with this I was getting concepts confused. For example, I was including variables and the terminator ; inside my select string, whereas these should be outside it. For example, the following is wrong:
         TABLE (sys.dbms_debug_vc2coll(par_user_chosen_map_list_ids))....
    AND MI.map_publication_id = column_value;';
    Whereas the following is correct:
         TABLE (sys.dbms_debug_vc2coll('||par_user_chosen_map_list_ids||'))....
    AND MI.map_publication_id = column_value';
    PL/SQL is inserting the values and then running the select statement, as opposed to running the select statement with the variables and then accessing the values stored in those variables. Once I resolved that, it worked. My revised code is as follows:
    --Global variable section contains:
    var_user_chosen_map_list_ids VARCHAR2(32767);
    var_details VARCHAR(32767);
    PROCEDURE PROCMAPSEARCH (par_user_chosen_map_list_ids PKG_ARR_MAPS.ARR_MAP_LIST)
    IS
    BEGIN
    FOR rec_user_chosen_map_list_ids IN 1 .. par_user_chosen_map_list_ids.count
    LOOP
       var_user_chosen_map_list_ids := var_user_chosen_map_list_ids ||
       '''' ||
       par_user_chosen_map_list_ids(rec_user_chosen_map_list_ids) ||
       ''',';
    END LOOP;
    var_user_chosen_map_list_ids := substr(var_user_chosen_map_list_ids,
                                            1,
                                            length(var_user_chosen_map_list_ids)-1);
    var_details := FUNCMAPDETAILS (var_user_chosen_map_list_ids);
    htp.print(var_details);
    END PROCMAPSEARCH;
    FUNCTION FUNCMAPDETAILS (par_user_chosen_map_list_ids IN VARCHAR2)
    RETURN VARCHAR2
    AS
    TYPE cur_type_map_search IS REF CURSOR;
    cur_map_search cur_type_map_search;
    var_map_date NUMBER(4);
    var_map_title VARCHAR2(32767);
    BEGIN
    OPEN cur_map_search FOR
    'SELECT MI.map_date,
           MT.map_title
    FROM map_info MI,
         map_title MT,
         TABLE (sys.dbms_debug_vc2coll(' || par_user_chosen_map_list_ids || '))
    WHERE MI.map_title_id = MT.map_title_id
    AND MI.map_publication_id = column_value';
    LOOP
    FETCH cur_map_search INTO
    var_map_date,
    var_map_title;
    EXIT WHEN cur_map_search%NOTFOUND;
    var_details := var_details || 'Map date: '||
                        var_map_date ||
                        'Map title: ' ||
                        var_map_title;
    END LOOP;
    RETURN var_details;
    END FUNCMAPDETAILS;
    Kind regards
    Tim
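    As a side note, a variation that avoids concatenating the ID values into the SQL text is to bind a SQL collection type directly (a sketch only, assuming the IDs can be handed over as a sys.dbms_debug_vc2coll; on older releases a CAST inside TABLE() may be needed):
    DECLARE
       TYPE cur_type IS REF CURSOR;
       cur_map_search  cur_type;
       var_ids         sys.dbms_debug_vc2coll := sys.dbms_debug_vc2coll('1', '2', '3');
       var_map_date    map_info.map_date%TYPE;
       var_map_title   map_title.map_title%TYPE;
    BEGIN
       OPEN cur_map_search FOR
          'SELECT MI.map_date, MT.map_title
             FROM map_info MI, map_title MT, TABLE(:ids) ids
            WHERE MI.map_title_id       = MT.map_title_id
              AND MI.map_publication_id = ids.column_value'
          USING var_ids;   -- the collection is bound, not concatenated
       LOOP
          FETCH cur_map_search INTO var_map_date, var_map_title;
          EXIT WHEN cur_map_search%NOTFOUND;
          htp.print('Map date: ' || var_map_date || ' Map title: ' || var_map_title);
       END LOOP;
       CLOSE cur_map_search;
    END;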

  • Performance tuning for BSAD table

    Hi All,
    How can I improve performance for this select query? I have only 'Clearing Date' on my selection screen, so I can use only 1 table key out of 10 keys. Please also suggest whether I can use the BSEG table or not.
          SELECT augdt        "Clearing Date
                 vbeln        "Billing Document
          FROM bsad
          INTO TABLE it_bseg
          WHERE augdt EQ p_date      "Parameter: Clearing Date
          AND   bschl IN ('01' , '02' , '11' , '12')
          AND   vbeln NE space.

    Hi,
    you can try the following; it will not give 100% performance, but it should perform better than your current select statement.
    Anyway, you have the clearing date; from that you need to pick the fiscal year and move it into one field, say w_gjahr.
    w_gjahr = p_date+0(4).
    SELECT augdt "Clearing Date
    vbeln "Billing Document
    FROM bsad
    INTO TABLE it_bseg
    WHERE augdt EQ p_date "Parameter: Clearing Date
         AND gjahr EQ w_gjahr.
    if sy-subrc = 0.
    sort it_bseg by bukrs.
    endif.
    DELETE it_bseg WHERE vbeln EQ space
                      OR ( bschl NE '01'
                     AND   bschl NE '02'
                     AND   bschl NE '11'
                     AND   bschl NE '12' ).
    Reward with points if helpful.
    Regards,
    Vijay

  • Performance tuning on Siebel table

    Hi Team,
    I am using the below query to get data from the s_contact table.
    SELECT   PERSON_UID as PARTY_UID,
      PERSON_UID as PERSON_UID,
      RIV_FLG,
      INTEGRATION_ID
    FROM SIEBEL.S_CONTACT
    While checking the explain plan, the total cost it gives is nearly 40,000. If I put /*+ FIRST_ROWS(100) */, the total cost it gives is 700.
    When I use this query along with another query which has a cost of 1,000, the full query gives me a cost of ~45,000. I do not know why I am facing this issue. The table S_CONTACT has 1.5 M records, whereas the other query fetches only 23,000 records.

    Do you actually want to retrieve all the rows from the S_CONTACT table? If so, the /*+ FIRST_ROWS(100) */ hint will be counterproductive-- you don't want Oracle to optimize the retrieval of the first 100 rows, you want it to optimize the retrieval of the last row. If not, your query should alert the optimizer to the fact that you only care about the first 100 rows by doing something like
    SELECT <<list of columns>>
      FROM (SELECT <<list of columns>>, row_number() over (order by <<something>>) rnk
               FROM siebel.s_contact)
     WHERE rnk <= 100
    Ignore the cost.
    Justin

  • Slow split table export (R3load and WHERE clause)

    For our split table exports, we used custom coded WHERE clauses. (Basically adding additional columns to the R3ta default column to take advantage of existing indexes).
    The results have been good so far. Full tablescans have been eliminated and export times have gone down, in some cases, tables export times have improved by 50%.
    However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, after using the new WHERE clause, it looked like performance gains were dramatic, with export times for the first 5 packages dropping from 25-30 hours down to 1 1/2 hours.
    However, after 2 hours, the remaining CE1OC01 split packages have shown no improvement. This is very odd, and we are trying to determine why part of the table exports very fast while other parts run very slowly.
    Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated excessive sorting.
    I checked the access path of all the CE1OC01 packages, through EXPLAIN, and they all access the same index to return results. The execution time in EXPLAIN returns similar times for each of the packages:
    CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
    AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
    CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
    AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
          0 SELECT STATEMENT ( Estimated Costs =  8.448E+06 [timerons] )
      |
      ---      1 RETURN
          |
          ---      2 FETCH CE1OC01
              |
              ------   3 IXSCAN CE1OC01~4 #key columns:  2
    query execution time [millisec]            |       333
    uow elapsed time [microsec]                |   429,907
    total user CPU time [microsec]             |         0
    total system cpu time [microsec]           |         0
    Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
    I am wondering if there is anything else to check on the DB2 access path side of things or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior. There is some discrepancy between other tables' run times (for example, 2-4 hours), but those are not as dramatic as this particular table.
    Another idea to test is to try and export only 5 parts of the table at a time, perhaps there is a throughput or logical limitation when all 20 of the exports are running at the same time. Or create a single column index on BELNR (default R3ta column) and see if that shows any improvement.
    Anyone have any ideas on why some of the table moves fast but the rest of it moves slow?
    We also notice that the "fast" parts of the table are at the very end of the table. We are wondering if perhaps the index is less fragmented in that range, a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a Cluster Ratio of 54%, so, perhaps for purposes of the export, it may make sense to REORG the table and cluster it around this particular index. By contrast, the primary key index has a Cluster Ratio of 86%.
    Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
    package     time      start date        end date          size MB  MB/min
    CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
    CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
    CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
    CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
    CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
    CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
    CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
    CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
    CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
    CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
    CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
    CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
    CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
    CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
    CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
    CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
    CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
    CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
    CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
    CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

    Hi all, thanks for the quick and extremely helpful answers.
    Beck,
    Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding the SORTHEAP, we initially thought that might be our problem, because we were running out of SORTHEAP on the source database server. Looks like for this run, and the previous run, SORTHEAP has remained available and has not overrun. That's what was so confusing, because this looked like a buffer overrun.
    Ralph,
    The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan. Being always trained to eliminate full tablescans, it seems counterintuitive at first, but, given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
    Looks like you were right, in this case, the index adds too much additional overhead, and especially since our Cluster Ratio was terrible (in the 50% range), so the index was definitely working against us, by bouncing all over the place to pull the data out.
    We're going to look at some of our other long running tables and see if this technique improves runtimes on them as well.
    Thanks so much, that helped us out tremendously. We will verify the data from source to target matches up 1 for 1 by running a consistency check.
    Look at the throughput difference between the previous run and the current run:
    package     time       start date        end date          size MB  MB/min
    CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
    CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
    CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
    CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
    CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
    CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
    CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
    CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
    CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
    CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
    CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
    CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
    CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
    CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
    CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
    CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
    CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
    CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
    CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
    PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
    BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
    CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
                558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
    package     time      start date        end date          size MB   MB/min
    CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
    CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
    CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
    CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
    CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
    CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
    CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
    CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
    CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
    CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
    PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
                90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91

  • SQL query optimization by changing the WHERE clause

    Hi all,
    I need to know about the SQL query performance improvement by changing the WHERE clause. Consider a query :
    select * from student where country ='India' and age = 20
    Say, country = 'India' filters 100000 records and age = 20 filters 2000 records if given one by one. Now can anyone tell if the performance of the query can be changed by changing the query like this :
    select * from student where age = 20 and country ='India'
    as the first where condition will give 2000 results and the next filter will then apply to only those 2000 rows, while in the former query the first condition would give 100000 rows and the second filter would therefore apply to 100000 rows?
    Kindly explain.
    Thanks in advance.
    Abhideep

    In general the order of the where conditions should not be important. However, there are a few exceptions where it might sometimes play a role. Sometimes this is called the order of filter conditions. Among other things, it depends on whether the RBO or CBO is used, the Oracle version, indexes on the columns, statistics on the table and the columns, CPU statistics in place, etc.
    If you want to make this query fast and you know that the age column has much better selectivity, then you can simply put an index on the age column. An index on the country column is probably not useful at all, since there are too few distinct values in this column. If you are already on 11g, I would suggest using a composite index on both columns with the more selective column in first position.
    Edited by: Sven W. on Nov 17, 2008 2:23 PM
    Edited by: Sven W. on Nov 17, 2008 2:24 PM
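    A minimal sketch of the suggested composite index (the index name is made up for illustration):
    -- More selective column (age) in the leading position
    CREATE INDEX student_age_country_ix ON student (age, country);
    -- Either ordering of the predicates can then use the same index:
    SELECT * FROM student WHERE age = 20 AND country = 'India';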

  • Fragmentation vs where clause obiee 10g rpd

    Fragmentation vs where clause:
    In OBIEE we can do fragmentation of a table.
    For example, if I have data from A to Z, I am fragmenting the data
    from A to M in one LTS and M to Z in another.
    We can also do this with a where clause on an LTS,
    restricting the data it returns. What is the difference?
    Please help me. Are there any performance issues?
    Thank you.

    In an enterprise world, data is often partitioned into multiple physical sources for a single logical table. This keeps the data set organised and makes query results faster to fetch. In such cases, the BI server should know where to go for what type of data and under what conditions. Hence fragmentation is defined, to handle efficient access by navigating to the appropriate physical source for each request.
    e.g. there may be an aggregate table and a detailed table, each accessed for the appropriate type of request.
    This helps in performance improvement.
    Adding a where clause will still refer to the same table source, hence no benefit of this feature is exhibited in that scenario.

  • Where clause for polling database Adapter

    I am creating a database adapter that polls a table for documents that are more than a month old, based on a last-updated field. For my where clause in the select statement I would like to have something like:
    where last_updated_date < add_months(sysdate, -1)
    but the where clause wizard will only allow me to use field names or literals. Any suggestions?

    Hi Stephen,
    Have you explored the option of calling a Stored Procedure or function which will do the SQL query as specified by you?
    Regards,
    lakshmi
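    One possible shape for that suggestion (a sketch only; the table name documents and the view name are made up, and the real polled table is whatever the adapter is configured against):
    -- Wrap the date arithmetic in a view; the polling adapter can then be
    -- pointed at the view instead of the base table.
    CREATE OR REPLACE VIEW stale_documents_v AS
    SELECT d.*
      FROM documents d
     WHERE d.last_updated_date < ADD_MONTHS(SYSDATE, -1);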

  • Dynamic Where clause - from third party system

    I have an internal table A with fields that I am getting from a legacy system:
    field1
    field2
    field3
    field4
    fieldn
    and I am getting a condition table (WHERE clause conditions) from the legacy system through an RFC call,
    and in the RFC I need to delete records from internal table A using this where clause.
    The where clause condition table (a single-field table, 72 characters per line) from the legacy system looks like the following:
             ( FIELD1                          BETWEEN '01' AND '05' ) AND
         FIELD1                         NE '06'
         AND ( FIELD4                     EQ 'BU' )
         AND ( FIELD5                      BETWEEN '001' AND '009' )
    How can I delete entries from internal table A using the where clause condition table mentioned above?

    Hi a®s,
    yes, that's what I thought of: generate a subroutine pool.
    I did not elaborate on the fact that dynamic where clauses are allowed only for database selections. That means that FUNCTION 'FREE_SELECTIONS_WHERE_2_RANGE' can't be used either.
    But if you generate a subroutine, you can generate TYPE and DATA declarations as well. This gives you the ability to have a static where condition for the DELETE itab WHERE statement.
    If the type varies at runtime, the generation may vary too.
    I think you are able to do the coding for this.
    On the other hand, Thomas' 'slick' proposal of using a temporary database table and a subsequent SELECT ... WHERE (<dynamic where clause>) may be easier to implement.
    BTW: Are there release notes on ABAP 720?
    Regards,
    Clemens

  • Using a WHERE clause in APEX Tree

    Hi All -
    I have an hierarchical SQL query that I'm displaying as an APEX tree.
    My sample app is here:
    https://apex.oracle.com/pls/apex/f?p=32581:29
    login: guest
    pw: app_1000
    workspace: leppard
    I am trying to add a WHERE clause so that only nodes with the lowest level children are displayed, i.e. something like 'WHERE connect_by_isleaf = 0 OR level = 5'
    The tree query with the where clause works fine in the SQL commands window, but when I add the WHERE clause to my apex tree the page no longer displays anything. Is this an issue with APEX or is there some other way to filter my results?
    Thanks in advance for any suggestions,
    john

    Connect by occurs first; the where clause is then applied to those results, effectively cutting away in the hierarchical structure. Since APEX has to build a hierarchical structure from the query, it relies on the LEVEL pseudocolumn, which is butchered by applying the where clause. It's a miracle you aren't even receiving errors, because I'd almost expect an incorrect JSON array to have been built. With no top level to start from and only level-5 or leaf nodes, there is no structure to present: APEX cannot put out a correct representation, and neither can jsTree. In your query you will see your "root nodes", but it is not representable.
    I'm not sure why you want to present this in a tree? Both connect_by_isleaf and level = 5 will simply give you a list of nodes without any hierarchical structure.
    The better thing to do is to use a subquery to first limit your dataset and then use that to base the tree query on, that way you don't break any of those vital columns.
    Eg if you only want leaf nodes and their immediate parent you could go for something like this (quick mockup on some testdata):
    with dataset as (
    select node_id, parent_id
       from treedata
      where connect_by_isleaf = 0
    connect by prior node_id = parent_id
      start with parent_id = 0
    ),
    dataset2 as (
    select node_id, parent_id
      from dataset
    union all
    select node_id, null parent_id
      from treedata
    where node_id in (select parent_id from dataset)
    )
    select level, node_id
       from dataset2
    connect by prior node_id = parent_id
      start with parent_id is null

  • Where Clause messing up results of Query using BC4J

    Hello all.
    I am developing a hand-coded JSP application that uses BC4Js and is deployed to a 9iAS on a SPARC box.
    I'm not sure what has been happening, but the results that are printed out to the screen are vastly different from the SQL results. I've gone as far as printing the Query that is executed onto the results page and have run it on SQL*Plus -- the query works on Plus, but displays no rows on the screen.
    I have a pretty complex where clause, I compare around four columns, and while developing I noticed that the sequencing of the columns greatly affected the output.
    Is this the same case?
    Thanks for the help!

    Additional Information:
    One of the WHERE parameters is a date range. When I made the range narrow, I was able to get the right number of rows for the result set (34 in all), but when I widened it to the day right after that, I got no results.
    I'm stumped.

  • Expert mode query in View objects and appended where clause

    My company is Oracle Member Partner and we are developing enterprise web applications using Oracle database and BC4J.
    I have the following problem...
    When I enable EXPERT MODE option in View Object I have trouble appending to query statement in my client code.
    I need expert mode because I must use "SELECT DISTINCT" instead of "SELECT" in my query.
    It looks something like this:
    viewObject.setWhereClause("CLA_ID = " + claId);
    viewObject.executeQuery();
    SQL query from View Object becomes sub-query and fails to execute:
    select * from (original view object query) where (... appended where clause)
    Order by part of the query causes sql errors because original query is now sub-query.
    Is there any way around this?

    I tried creating an expert mode SQL query:
    SELECT DISTINCT EMPNO, ENAME FROM EMP.
    Then at runtime I do:
    vo.setWhereClause("ENAME LIKE '%'||?||'%');
    vo.setWhereClauseParam(0,'A');.
    and this works fine. The trick is that since expert-mode view objects get wrapped as inline views (to allow runtime appending of WHERE clause, actually), you need to select any column in the select statement to which you want to later refer in a dynamically-appended where clause.
    If you want to prevent the inline-view wrapping, you can write the following code in your view object's ViewObjectImpl subclass to force the VO to NOT be treated as an expert-mode SQL VO.
      // Goes in your view object impl subclass
      public void create() {
         // Force this VO to NOT be treated as an expert-mode SQL, so that
         // its query does not get wrapped as an inline view.
         getViewDef().setFullSql(false);
      }
    I used this trick above to create an expert mode query like:
    SELECT DISTINCT deptno FROM emp
    and then at runtime I add a dynamic where clause that refers to a column
    in EMP that is not in the select list like this:
        ViewObject vo = am.findViewObject("View1");
        vo.setWhereClause("ename like '%A%'");
        vo.executeQuery();
        System.out.println(vo.first().getAttribute(0));
    and this causes the query to come out as:
    SELECT DISTINCT deptno FROM emp WHERE ename like '%A%'.
    instead of:
    SELECT * FROM (SELECT DISTINCT deptno FROM emp) QRSLT WHERE ename like '%A%'
    which would cause an error due to the fact that ename is not in the select list of the original (wrapped, inline) query.
