CBO statistics

Hi,
In which tables are the CBO statistics stored in Oracle?
OS: Solaris 10
Version: 10.2.0.4

In addition (and more importantly):
user_tab_col_statistics
user_part_col_statistics (for partitions)
user_subpart_col_statistics (for subpartitions)
Regards,
Greg Rahn
http://structureddata.org
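A quick sketch of reading these views from SQL*Plus (the table name 'EMP' is just an example; substitute your own):

```sql
-- Table-level statistics for the current schema
SELECT table_name, num_rows, blocks, last_analyzed
  FROM user_tab_statistics
 WHERE table_name = 'EMP';

-- Column-level statistics from the views listed above
SELECT column_name, num_distinct, density, num_nulls, histogram
  FROM user_tab_col_statistics
 WHERE table_name = 'EMP';
```

The DBA_/ALL_ variants of the same views exist for looking across schemas.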

Similar Messages

  • Create new CBO statistics for the tables

    Dear All,
I am facing bad performance on the server. In SM50 I see that the read and delete processes on table D010LINC take a long time. How do I create new CBO statistics for the tables D010TAB and D010INC? Please suggest.
    Regards,
    Kumar

    Hi,
I am facing a problem when saving/activating any program, so SAP has told me to create new CBO statistics for the tables D010TAB and D010INC.
Now, as you suggested, in transaction DB20:
Table D010LINC
the error comes: Table D010LINC does not exist in the ABAP Dictionary
    Table D010TAB
         Statistics are current (|Changes| < 50 %)
    New Method           C
    New Sample Size
    Old Method           C                       Date                 10.03.2010
    Old Sample Size                              Time                 07:39:37
    Old Number                51,104,357         Deviation Old -> New       0  %
    New Number                51,168,679         Deviation New -> Old       0  %
    Inserted Rows                160,770         Percentage Too Old         0  %
    Changed Rows                       0         Percentage Too Old         0  %
    Deleted Rows                  96,448         Percentage Too New         0  %
    Use                  O
    Active Flag          P
    Analysis Method      C
    Sample Size
    Please suggest
    Regards,
    Kumar
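    In an SAP system the supported route for refreshing statistics is DB20/BRCONNECT, but underneath they call the same Oracle API. A fallback sketch using DBMS_STATS directly, assuming the SAP schema owner is SAPR3 (adjust to your installation):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SAPR3',      -- assumed schema owner; adjust
        tabname          => 'D010TAB',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);        -- also refresh the table's indexes
    END;
    /
    ```

    Treat this as a sketch only; on a live SAP system, prefer the BRCONNECT statistics run so that DBSTATC settings are respected.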

  • Change settings for CBO statistics

    Hi ,
Sorry to put this info here but I didn't find the right topic for Oracle issues.
We have a lot of processes here that use 5 indexes, and each time we run those processes we have to delete the existing indexes first and re-create them. As 4 of the indexes belong to an InfoCube, this re-creation takes parameters from CBO statistics and from statistics generated via the RSA1 transaction. The last index takes its info only from CBO statistics.
What I want to do is generate all indexes using only CBO statistics. I've already looked on OSS, and the only info I could use is to put those new parameters in table DDSTORAGE and use transaction SE14 to generate the indexes again. But my problem is that I have to do this twice a day, and the info that I have in DDSTORAGE is deleted after the first creation.
    My question is:
1. Is it normal that DDSTORAGE does this?
2. How can I change the system to always create those indexes with the parameters I want to use? (I only want to change the INITIAL, NEXT and MAXEXTENTS parameters, whether using SE14 or RSA1)
    Many thanks to all !!
    Daniela Godoi


  • Bad CBO statistics

    Hi All ,
I have the below questions regarding statistics gathering. Could you please try to answer them?
If the queries' performance is acceptable, should the underlying table's stats not be gathered (since re-gathering may go either way, i.e. improve or degrade query performance; LOCK_TABLE_STATS exists as proof of the same)? Do you agree?
How do I confirm that queries are performing slowly because of bad CBO statistics?
Could you please elaborate on the "Test with the RULE hint" from [Burleson's post|http://www.dba-oracle.com/t_sql_tuning_tricks.htm]?
    Thanks in advance,
    Uday

    The last thing I would recommend you read about any Oracle topic is something from dba-oracle.com. To better understand this point google the following:
"Kyte" and "Burleson"
My generic advice, because in Oracle there are very few absolutes, is that before you make decisions with respect to stats and stats collection, you determine how Oracle is using the stats. Not collecting stats works well right up until the point in time when the table changes enough that the plan it generates becomes a problem rather than a solution. Collecting stats always works, provided you collect them properly and don't hit a bug.
The only people whose advice I would recommend you take on this question, Exadata or not, are Jonathan Lewis, Christian Antognini, Tanel Poder, and a few other members of the Oak Table.
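    Since the question mentions LOCK_TABLE_STATS: when current plans are acceptable and the data is stable, statistics can be frozen so that later gather runs skip the table. A minimal sketch (schema and table names are illustrative):

    ```sql
    -- Freeze statistics on a table whose plans are currently acceptable
    EXEC DBMS_STATS.LOCK_TABLE_STATS('SCOTT', 'EMP');

    -- A subsequent gather skips the table unless force => TRUE is passed
    -- EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP', force => TRUE);

    -- Unlock when statistics should be maintained again
    EXEC DBMS_STATS.UNLOCK_TABLE_STATS('SCOTT', 'EMP');
    ```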

  • AFKO without CBO statistics

    Hi,
in an ECC 6.0 system with Oracle 10.2.0.4 on a Solaris 10 SPARC box, the AFKO table is without statistics. This system was just installed as a homogeneous system copy; during SAPinst, statistics collection was performed and all other tables got statistics.
We have already tried to calculate statistics with the RSANAORA report with collect, and also tried delete and then collect, without success. RSANAORA ends in a few seconds with collect.
If we use BRTOOLS we have the same results... but if we use:
ANALYZE TABLE AFKO COMPUTE STATISTICS;
AFKO has statistics.
    Have you got any idea?
    Regards.

    Hi,
even though the issue is already "solved", it is possible that the DBSTATC table in your system contains "wrong" information.
In Oracle 10g ALL tables are supposed to have statistics; the Oracle rule-based optimizer is not supported anymore. For that reason, the control table DBSTATC has to be initialized.
I assume that you have an entry in this table that causes BRCONNECT not to calculate statistics on this (and maybe other) tables.
Please review the table; there should not be any entry with the "active" field set to "N" or "R". If there are tables with such a status, someone should know the reason (or the upgrade to 10g was not done following the SAP upgrade guide). You can initialize it as mentioned in the 10g upgrade guide with the script updDBSTATC10.sql from note 819830.
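    To spot other tables affected in the same way before touching DBSTATC, a simple check from the database side (the schema owner 'SAPSR3' is an assumption; adjust to your system):

    ```sql
    -- Tables of the SAP schema that have never been analyzed
    SELECT owner, table_name, num_rows, last_analyzed
      FROM dba_tables
     WHERE owner = 'SAPSR3'
       AND last_analyzed IS NULL;
    ```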

  • Best practices for gathering statistics in 10g

    I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has auto statistics gathering, but that doesn't seem to be very effective as I see some table stats are way out of date.
    I have recommended that we have at least a weekly job that generates stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep it up to date? Are index stats included in that using CASCADE?
    Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.

    Hi,
> Is this the right approach to generate object stats for a schema and keep it up to date?
The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned ANALYZE TABLE and DBMS_UTILITY methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we know, the CBO uses object statistics to choose the best execution plan for all SQL statements.
I spoke with Andrew Holdsworth of the Oracle Corp SQL Tuning group, and he says that Oracle recommends taking a single, deep sample and keeping it, only re-analyzing when there is a change that would make a difference in execution plans (not the default 20% re-analyze threshold).
    I have my detailed notes here:
    http://www.dba-oracle.com/art_otn_cbo.htm
    As to system stats, oh yes!
By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
No Workload (NW) stats:
CPUSPEEDNW - CPU speed
IOSEEKTIM - The I/O seek time in milliseconds
IOTFRSPEED - The I/O transfer speed in bytes per millisecond
    I have my notes here:
    http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
    Hope this helps. . . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
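    The weekly schema job and the system statistics run described above can be sketched as follows (the sampling options are assumptions to adapt, not recommendations):

    ```sql
    -- Schema-level gather; cascade => TRUE also refreshes index statistics
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => USER,
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);
    END;
    /

    -- Noworkload system statistics (CPUSPEEDNW, IOSEEKTIM, IOTFRSPEED)
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');

    -- Inspect what was collected
    SELECT sname, pname, pval1 FROM sys.aux_stats$;
    ```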

  • RBO X CBO

    Hi everybody.
I would like to understand what we are missing here.
We have several installations running 10g, and its default value for OPTIMIZER_MODE is ALL_ROWS. Well, if there are no statistics for the application tables, the RDBMS by default uses the rule-based optimizer. OK.
After the statistics are generated (automatic Oracle job), the RDBMS switches to the cost-based optimizer. I can understand that.
    The problem is: why do several queries run much slower when using CBO? When we analyze the execution plan, we see the wrong indexes being used.
    The solution I have for now is set OPTIMIZER_MODE=RULE. Then everything runs smoothly again.
    Why does this happen? Shouldn't CBO, after the statistics are generated, find out the best execution plan possible? I really can't use CBO on our sites, because performance is so much worse...
    Thanks in advance.
    Carlos Inglez

    Hi Carlos,
> The solution I have for now is set OPTIMIZER_MODE=RULE. Then everything runs smoothly again.
It's almost always an issue with CBO parms or CBO statistics.
    There are several issues in 10g CBO, and here are my notes:
    http://www.dba-oracle.com/t_slow_performance_after_upgrade.htm
    Oracle has improved the cost-based Oracle optimizer in 9.0.5 and again in 10g, so you need to take a close look at your environmental parameter settings (init.ora parms) and your optimizer statistics.
    - Check optimizer parameters - Ensure that you are using the proper optimizer_mode (default is all_rows) and check optimal settings for optimizer_index_cost_adj (lower from the default of 100) and optimizer_index_caching (set to a higher value than the default).
- Re-set optimizer costing - Consider unsetting your CPU-based optimizer costing (the 10g default, a change from 9i). CPU costing is best if you see CPU in your top-5 timed events in your STATSPACK/AWR report, and the 10g default of _optimizer_cost_model=cpu will try to minimize CPU by invoking more full scans, especially in tablespaces with large blocksizes. To return to your 9i CBO I/O-based costing, set the hidden parameter "_optimizer_cost_model"=io
    - Verify deprecated parameters - you need to set optimizer_features_enable = 10.2.0.2 and optimizer_mode = FIRST_ROWS_n (or ALL_ROWS for a warehouse, but remove the 9i CHOOSE default).
    - Verify quality of CBO statistics - Oracle 10g does automatic statistics collection and your original customized dbms_stats job (with your customized parameters) will be overlaid. You may also see a statistics deficiency (i.e. not enough histograms) causing performance issues. Re-analyze object statistics using dbms_stats and make sure that you collect system statistics.
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author
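    The parameter checks above can be done from SQL*Plus; a sketch, where the values shown are illustrative test values for one session, not recommendations:

    ```sql
    -- Current optimizer-related settings
    SHOW PARAMETER optimizer_mode
    SHOW PARAMETER optimizer_index_cost_adj
    SHOW PARAMETER optimizer_index_caching

    -- Session-level experiment before changing anything system-wide
    ALTER SESSION SET optimizer_index_cost_adj = 20;   -- assumed test value
    ALTER SESSION SET optimizer_index_caching  = 50;   -- assumed test value
    ```

    Testing at session level first makes it easy to compare plans before and after without affecting other users.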

  • Warnings: Pool or cluster table selected to check/collect statistics

    Dear all,
I am getting an error in the DB13 backup.
We are using SAP ECC 5.0 and
Oracle 9i on Windows 2003.
On the production server I am suddenly facing a problem: in DB13 the UpdateStats job ended with return code 0001 (success with warnings).
    BR0819I Number of pool and cluster tables found in DDNTT for owner SAPPRD: 169
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXB
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXC
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLSP
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLTP
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KAPOL
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KOCLU
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.M_IFLM
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBCLU
    BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBFCL
    And in db02      
    Missing in R/3 DDIC  11   index
    MARA_MEINS
    MARA_ZEINR
    MCHA_VFDAT
    VBRP_ARKTX
    VBRP_CHARG
    VBRP_FKIMG
    VBRP_KZWI1
    VBRP_MATKL
    VBRP_MATNR
    VBRP_SPART
    VBRP_WERKS
Please guide me through the steps to build the indexes and to resolve the pool/cluster table problem.
    Thanks,
    Kumar

    > BR0819I Number of pool and cluster tables found in DDNTT for owner SAPPRD: 169
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXB
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXC
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLSP
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLTP
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KAPOL
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KOCLU
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.M_IFLM
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBCLU
    > BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBFCL
Up to Oracle 9i, the rule-based optimizer was still used for pool/cluster tables for reasons of plan stability (e.g. always take the index).
To ensure that this is the case, these tables/indexes must not have CBO statistics.
Therefore these tables are usually excluded from getting CBO statistics via a DBSTATC entry. You can modify this setting in transaction DB21.
> And in db02
> Missing in R/3 DDIC  11   index
    >  MARA_MEINS
    >  MARA_ZEINR
    >  MCHA_VFDAT
    >  VBRP_ARKTX
    >  VBRP_CHARG
    >  VBRP_FKIMG
    >  VBRP_KZWI1
    >  VBRP_MATKL
    >  VBRP_MATNR
    >  VBRP_SPART
    >  VBRP_WERKS
Well, these indexes have been set up directly in the database and not (as they are supposed to be) via SE11. As the indexes have a naming scheme that is not supported by the ABAP Dictionary, the easiest way to get rid of the warnings is to check which columns are covered by the indexes, drop the indexes at the DB level, and recreate them via SE11.
    Best regards,
    Lars
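    Before dropping the database-only indexes, their column lists can be captured so the SE11 versions cover the same columns; a sketch (the owner and the shortened index list are assumptions):

    ```sql
    -- Columns covered by the indexes unknown to the ABAP Dictionary
    SELECT index_name, column_position, column_name
      FROM dba_ind_columns
     WHERE index_owner = 'SAPPRD'
       AND index_name IN ('MARA_MEINS', 'MARA_ZEINR', 'MCHA_VFDAT')
     ORDER BY index_name, column_position;

    -- Then drop at DB level and recreate via SE11, e.g.:
    -- DROP INDEX SAPPRD.MARA_MEINS;
    ```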

  • Oracle SQL SELECT query takes longer than expected

    Hi,
I am facing a problem with an SQL SELECT statement: the query takes a long time to return results from the database.
    The query is as follows.
select /*+ rule */ f1.id, f1.fdn, p1.attr_name, p1.attr_value
  from fdnmappingtable f1, parametertable p1
 where p1.id = f1.id
   and f1.object_type = 'ne_sub_type.780'
   and f1.id in (select id from fdnmappingtable
                  where fdn like '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')
 order by f1.id asc
    This query is taking more than 4 seconds to get the results in a system where the DB is running for more than 1 month.
    The same query is taking very few milliseconds (50-100ms) in a system where the DB is freshly installed and the data in the tables are same in both the systems.
Kindly advise what is going wrong.
    Regards,
    Purushotham

    SQL> @/alcatel/omc1/data/query.sql
    2 ;
    9 rows selected.
    Execution Plan
    Plan hash value: 3745571015
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | SORT ORDER BY | |
    | 2 | NESTED LOOPS | |
    | 3 | NESTED LOOPS | |
    | 4 | TABLE ACCESS FULL | PARAMETERTABLE |
    |* 5 | TABLE ACCESS BY INDEX ROWID| FDNMAPPINGTABLE |
    |* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
    |* 7 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
    |* 8 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
    Predicate Information (identified by operation id):
    5 - filter("F1"."OBJECT_TYPE"='ne_sub_type.780')
    6 - access("P1"."ID"="F1"."ID")
    7 - filter("FDN" LIKE '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#
    8 - access("F1"."ID"="ID")
    Note
    - rule based optimizer used (consider using cbo)
    Statistics
    0 recursive calls
    0 db block gets
    0 consistent gets
    0 physical reads
    0 redo size
    0 bytes sent via SQL*Net to client
    0 bytes received via SQL*Net from client
    0 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    9 rows processed
    SQL>
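    The plan's note line explains the behaviour: the /*+ rule */ hint forces the rule-based optimizer, so a month of data growth never influences the plan, while the fresh system happens to have a layout the fixed plan suits. A sketch of letting the CBO take over instead (the gather calls are illustrative):

    ```sql
    -- Refresh statistics on both tables so the CBO has current input
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FDNMAPPINGTABLE', cascade => TRUE);
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'PARAMETERTABLE',  cascade => TRUE);

    -- Re-run without the hint and compare the plan
    SELECT f1.id, f1.fdn, p1.attr_name, p1.attr_value
      FROM fdnmappingtable f1, parametertable p1
     WHERE p1.id = f1.id
       AND f1.object_type = 'ne_sub_type.780'
       AND f1.id IN (SELECT id FROM fdnmappingtable
                      WHERE fdn LIKE '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')
     ORDER BY f1.id;
    ```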

  • Can I write this query in another way (preferably in an optimized manner)?

    My database version._
    [oracle@localhost ~]$ uname -a
    Linux localhost.localdomain 2.6.18-194.17.1.0.1.el5 #1 SMP Wed Sep 29 15:40:03 EDT 2010 i686 i686 i386 GNU/Linux
    [oracle@localhost ~]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on Fri Aug 12 04:44:21 2011
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> SELECT * FROM v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    SQL>
    Introduction to data and logic._
I have one table called inv_leg_dummy. The main columns to consider are arrival_airport and departure_airport. Say a flight starts from Kolkata (KOL) -> goes to Delhi (DEL) -> goes to Hong Kong (HKG) -> goes to Taipei (TPE). So in total KOL -> DEL -> HKG -> TPE
    Data will be like:
    Arrival Airport         Departure Airport
    HKG                       TPE
    KOL                       DEL
DEL                       HKG
Please note that the order is not as expected; that means the flight starting from Kolkata cannot be determined straight away from the arrangement or any kind of flag.
The main logic is: I first take Arrival Airport HKG and see if any Departure Airport exists as HKG, then I take the next, KOL, and see if any Departure Airport exists as KOL. You can notice KOL is only present as an arrival airport, so this is the first leg of the flight journey. By the same logic, I can determine the next leg, that is DEL (because the flight goes from KOL to DEL)...
    I need output like :
    ARRIVAL_AIRPORT     DEPARTURE_AIRPORT     SEQ
    HKG                  TPE              1
    DEL                  HKG              2
    KOL                  DEL              3
                      KOL              4
So, the starting point KOL has the highest sequence (arrival is null), then KOL to DEL, DEL to HKG and finally HKG to TPE (sequence 1). The sequence may look like reverse order.
    Create Table and Insert Scripts._
CREATE TABLE inv_leg_dummy (
  carrier              VARCHAR2(3) not null,
  flt_num              VARCHAR2(4) not null,
  flt_num_suffix       VARCHAR2(1) default ' ' not null,
  flt_date             DATE not null,
  arrival_airport      VARCHAR2(5),
  departure_airport    VARCHAR2(5) not null
);
alter table inv_leg_dummy
  add constraint XPKINV_LEG primary key (carrier,flt_num,flt_num_suffix,flt_date,departure_airport);
    TRUNCATE table inv_leg_dummy; 
    INSERT INTO inv_leg_dummy VALUES ('KA',1,' ',to_date('05/23/2011','mm/dd/rrrr'),'HKG','TPE');
    INSERT INTO inv_leg_dummy VALUES ('KA',1,' ',to_date('05/23/2011','mm/dd/rrrr'),'KOL','DEL');
    INSERT INTO inv_leg_dummy VALUES ('KA',1,' ',to_date('05/23/2011','mm/dd/rrrr'),'DEL','HKG');
    INSERT INTO inv_leg_dummy VALUES ('CX',1,' ',to_date('05/22/2011','mm/dd/rrrr'),'HKG','BNE');
    INSERT INTO inv_leg_dummy VALUES ('CX',1,' ',to_date('05/22/2011','mm/dd/rrrr'),'BNE','CNS');
Now, it's time to show you what I have done!_
    SQL> ed
    Wrote file afiedt.buf
      1  SELECT Carrier,
      2         Flt_Num,
      3         Flt_Date,
      4         Flt_num_Suffix,
      5         arrival_airport,
      6         departure_airport,
      7         RANK() OVER(partition by Carrier, Flt_Num, Flt_Date, Flt_num_Suffix ORDER BY Carrier, Flt_Num, Flt_Date, Flt_num_Suffix, SEQ ASC NULLS LAST) SEQ,
      8         /* Fetching Maximum leg Seq No excluding Dummy Leg*/
      9         max(seq) over(partition by carrier, flt_num, flt_date, flt_num_suffix order by carrier, flt_num, flt_date, flt_num_suffix) max_seq
    10    FROM (SELECT k.Carrier,
    11                 k.Flt_Num,
    12                 k.Flt_Date,
    13                 k.Flt_num_Suffix,
    14                 k.departure_airport,
    15                 k.arrival_airport,
    16                 level seq
    17            FROM (SELECT
    18                   l.Carrier,
    19                   l.Flt_Num,
    20                   l.Flt_Date,
    21                   l.Flt_num_Suffix,
    22                   l.departure_airport,
    23                   l.arrival_airport
    24                    FROM inv_leg_dummy l) k
    25           START WITH k.departure_airport = case when
    26           (select count(*)
    27                         FROM inv_leg_dummy ifl
    28                        WHERE ifl.arrival_airport = k.departure_airport
    29                          AND ifl.flt_num = k.flt_num
    30                          AND ifl.carrier = k.carrier
    31                          AND ifl.flt_num_suffix = k.Flt_num_Suffix) = 0 then k.departure_airport end
    32          CONNECT BY prior k.arrival_airport = k.departure_airport
    33                 AND prior k.carrier = k.carrier
    34                 AND prior k.flt_num = k.flt_num
    35                 AND prior TRUNC(k.flt_date) =
    36                                                TRUNC(k.flt_date)
    37          UNION ALL
    38          /* Fetching Dummy Last Leg Information for Leg_Seq No*/
    39          SELECT ofl.Carrier,
    40                 ofl.Flt_Num,
    41                 ofl.Flt_Date,
    42                 ofl.Flt_num_Suffix,
    43                 ofl.arrival_airport as departure_airport,
    44                 NULL arrival_airport,
    45                 NULL seq
    46            FROM inv_leg_dummy ofl
    47           where NOT EXISTS (SELECT 1
    48                    FROM inv_leg_dummy ifl
    49                   WHERE ofl.arrival_airport = ifl.departure_airport
    50                     AND ifl.flt_num = ofl.flt_num
    51                     AND ifl.carrier = ofl.carrier
    52                     AND ifl.flt_num_suffix =ofl.Flt_num_Suffix))
    53*  ORDER BY 1, 2, 3, 4,7
    SQL> /
    CAR FLT_ FLT_DATE  F ARRIV DEPAR        SEQ    MAX_SEQ
    CX  1    22-MAY-11   BNE   CNS            1          2
    CX  1    22-MAY-11   HKG   BNE            2          2
    CX  1    22-MAY-11         HKG            3          2
    KA  1    23-MAY-11   HKG   TPE            1          3
    KA  1    23-MAY-11   DEL   HKG            2          3
    KA  1    23-MAY-11   KOL   DEL            3          3
    KA  1    23-MAY-11         KOL            4          3
    7 rows selected.
SQL>
The code is giving the right output, but I feel I have done it the hard way. Is there any easier or more optimized approach to solve the problem?

    Hello
I thought I'd run all 3 methods twice with autotrace to get an overview of the execution plans and basic performance metrics. The results are interesting.
    OPs method
    SQL> set autot on
    SQL> SELECT Carrier,
      2           Flt_Num,
      3           Flt_Date,
      4           Flt_num_Suffix,
      5           arrival_airport,
      6           departure_airport,
      7           RANK() OVER(partition by Carrier, Flt_Num, Flt_Date, Flt_num_Suffix ORDER BY Carrier, Flt_Num,
    53   ORDER BY 1, 2, 3, 4,7
    54  /
    CAR FLT_ FLT_DATE  F ARRIV DEPAR        SEQ    MAX_SEQ
    CX  1    22-MAY-11   BNE   CNS            1          2
    CX  1    22-MAY-11   HKG   BNE            2          2
    CX  1    22-MAY-11         HKG            3          2
    KA  1    23-MAY-11   HKG   TPE            1          3
    KA  1    23-MAY-11   DEL   HKG            2          3
    KA  1    23-MAY-11   KOL   DEL            3          3
    KA  1    23-MAY-11         KOL            4          3
    7 rows selected.
    Execution Plan
    Plan hash value: 3680289985
    | Id  | Operation                         | Name          |
    |   0 | SELECT STATEMENT                  |               |
    |   1 |  WINDOW SORT                      |               |
    |   2 |   VIEW                            |               |
    |   3 |    UNION-ALL                      |               |
    |*  4 |     CONNECT BY WITH FILTERING     |               |
    |*  5 |      FILTER                       |               |
    |*  6 |       TABLE ACCESS FULL           | INV_LEG_DUMMY |
    |   7 |       SORT AGGREGATE              |               |
    |*  8 |        TABLE ACCESS BY INDEX ROWID| INV_LEG_DUMMY |
    |*  9 |         INDEX RANGE SCAN          | XPKINV_LEG    |
    |  10 |      NESTED LOOPS                 |               |
    |  11 |       CONNECT BY PUMP             |               |
    |  12 |       TABLE ACCESS BY INDEX ROWID | INV_LEG_DUMMY |
    |* 13 |        INDEX RANGE SCAN           | XPKINV_LEG    |
    |* 14 |     FILTER                        |               |
    |  15 |      TABLE ACCESS FULL            | INV_LEG_DUMMY |
    |* 16 |      INDEX RANGE SCAN             | XPKINV_LEG    |
    Predicate Information (identified by operation id):
       4 - access("L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPORT" AND
                  "L"."CARRIER"=PRIOR "L"."CARRIER" AND "L"."FLT_NUM"=PRIOR "L"."FLT
    _NUM"
                  AND INTERNAL_FUNCTION(PRIOR TRUNC(INTERNAL_FUNCTION("L"."FLT_DATE"
    )))=TR
                  UNC(INTERNAL_FUNCTION("L"."FLT_DATE")))
       5 - filter("L"."DEPARTURE_AIRPORT"=CASE  WHEN ( (SELECT COUNT(*)
                  FROM "INV_LEG_DUMMY" "IFL" WHERE "IFL"."FLT_NUM_SUFFIX"=:B1 AND
                  "IFL"."FLT_NUM"=:B2 AND "IFL"."CARRIER"=:B3 AND
                  "IFL"."ARRIVAL_AIRPORT"=:B4)=0) THEN "L"."DEPARTURE_AIRPORT" END )
       6 - access("L"."CARRIER"=PRIOR "L"."CARRIER")
       8 - filter("IFL"."ARRIVAL_AIRPORT"=:B1)
       9 - access("IFL"."CARRIER"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
                  "IFL"."FLT_NUM_SUFFIX"=:B3)
      13 - access("L"."CARRIER"=PRIOR "L"."CARRIER" AND "L"."FLT_NUM"=PRIOR
                  "L"."FLT_NUM" AND "L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPO
    RT")
           filter("L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPORT" AND
                  INTERNAL_FUNCTION(PRIOR TRUNC(INTERNAL_FUNCTION("L"."FLT_DATE")))=
    TRUNC(
                  INTERNAL_FUNCTION("L"."FLT_DATE")))
      14 - filter( NOT EXISTS (SELECT 0 FROM "INV_LEG_DUMMY" "IFL" WHERE
                  "IFL"."FLT_NUM_SUFFIX"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
                  "IFL"."CARRIER"=:B3 AND "IFL"."DEPARTURE_AIRPORT"=:B4))
      16 - access("IFL"."CARRIER"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
                  "IFL"."FLT_NUM_SUFFIX"=:B3 AND "IFL"."DEPARTURE_AIRPORT"=:B4)
           filter("IFL"."DEPARTURE_AIRPORT"=:B1)
    Note
       - rule based optimizer used (consider using cbo)
    Statistics
              1  recursive calls
              0  db block gets
             33  consistent gets
              0  physical reads
              0  redo size
            877  bytes sent via SQL*Net to client
            886  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
              7  rows processed
    SQL> /
    CAR FLT_ FLT_DATE  F ARRIV DEPAR        SEQ    MAX_SEQ
    CX  1    22-MAY-11   BNE   CNS            1          2
    CX  1    22-MAY-11   HKG   BNE            2          2
    CX  1    22-MAY-11         HKG            3          2
    KA  1    23-MAY-11   HKG   TPE            1          3
    KA  1    23-MAY-11   DEL   HKG            2          3
    KA  1    23-MAY-11   KOL   DEL            3          3
    KA  1    23-MAY-11         KOL            4          3
    7 rows selected.
    Execution Plan
    Plan hash value: 3680289985
    | Id  | Operation                         | Name          |
    |   0 | SELECT STATEMENT                  |               |
    |   1 |  WINDOW SORT                      |               |
    |   2 |   VIEW                            |               |
    |   3 |    UNION-ALL                      |               |
    |*  4 |     CONNECT BY WITH FILTERING     |               |
    |*  5 |      FILTER                       |               |
    |*  6 |       TABLE ACCESS FULL           | INV_LEG_DUMMY |
    |   7 |       SORT AGGREGATE              |               |
    |*  8 |        TABLE ACCESS BY INDEX ROWID| INV_LEG_DUMMY |
    |*  9 |         INDEX RANGE SCAN          | XPKINV_LEG    |
    |  10 |      NESTED LOOPS                 |               |
    |  11 |       CONNECT BY PUMP             |               |
    |  12 |       TABLE ACCESS BY INDEX ROWID | INV_LEG_DUMMY |
    |* 13 |        INDEX RANGE SCAN           | XPKINV_LEG    |
    |* 14 |     FILTER                        |               |
    |  15 |      TABLE ACCESS FULL            | INV_LEG_DUMMY |
    |* 16 |      INDEX RANGE SCAN             | XPKINV_LEG    |
    Predicate Information (identified by operation id):
       4 - access("L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPORT" AND
                  "L"."CARRIER"=PRIOR "L"."CARRIER" AND "L"."FLT_NUM"=PRIOR "L"."FLT
    _NUM"
                  AND INTERNAL_FUNCTION(PRIOR TRUNC(INTERNAL_FUNCTION("L"."FLT_DATE"
    )))=TR
                  UNC(INTERNAL_FUNCTION("L"."FLT_DATE")))
       5 - filter("L"."DEPARTURE_AIRPORT"=CASE  WHEN ( (SELECT COUNT(*)
                  FROM "INV_LEG_DUMMY" "IFL" WHERE "IFL"."FLT_NUM_SUFFIX"=:B1 AND
                  "IFL"."FLT_NUM"=:B2 AND "IFL"."CARRIER"=:B3 AND
                  "IFL"."ARRIVAL_AIRPORT"=:B4)=0) THEN "L"."DEPARTURE_AIRPORT" END )
       6 - access("L"."CARRIER"=PRIOR "L"."CARRIER")
       8 - filter("IFL"."ARRIVAL_AIRPORT"=:B1)
       9 - access("IFL"."CARRIER"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
                  "IFL"."FLT_NUM_SUFFIX"=:B3)
      13 - access("L"."CARRIER"=PRIOR "L"."CARRIER" AND "L"."FLT_NUM"=PRIOR
                  "L"."FLT_NUM" AND "L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPO
    RT")
           filter("L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPORT" AND
                  INTERNAL_FUNCTION(PRIOR TRUNC(INTERNAL_FUNCTION("L"."FLT_DATE")))=
    TRUNC(
                  INTERNAL_FUNCTION("L"."FLT_DATE")))
      14 - filter( NOT EXISTS (SELECT 0 FROM "INV_LEG_DUMMY" "IFL" WHERE
                  "IFL"."FLT_NUM_SUFFIX"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
                  "IFL"."CARRIER"=:B3 AND "IFL"."DEPARTURE_AIRPORT"=:B4))
      16 - access("IFL"."CARRIER"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
                  "IFL"."FLT_NUM_SUFFIX"=:B3 AND "IFL"."DEPARTURE_AIRPORT"=:B4)
           filter("IFL"."DEPARTURE_AIRPORT"=:B1)
    Note
       - rule based optimizer used (consider using cbo)
    Statistics
              0  recursive calls
              0  db block gets
             33  consistent gets
              0  physical reads
              0  redo size
            877  bytes sent via SQL*Net to client
            886  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
          7  rows processed
My method
    SQL> SELECT
      2      carrier,
      3      flt_num,
      4      flt_num_suffix,
      5      flt_date,
      6      arrival_airport,
      7      departure_airport,
      8      COUNT(*) OVER(PARTITION BY carrier,
      9                                  flt_num
    10                    ) - LEVEL + 1  seq,
    11      COUNT(*) OVER(PARTITION BY carrier,
    12                                  flt_num
    13                    )  - 1        max_seq
    57  /
    CAR FLT_ F FLT_DATE  ARRIV DEPAR        SEQ    MAX_SEQ
    CX  1      22-MAY-11 BNE   CNS            1          2
    CX  1      22-MAY-11 HKG   BNE            2          2
    CX  1      22-MAY-11       HKG            3          2
    KA  1      23-MAY-11 HKG   TPE            1          3
    KA  1      23-MAY-11 DEL   HKG            2          3
    KA  1      23-MAY-11 KOL   DEL            3          3
    KA  1      23-MAY-11       KOL            4          3
    7 rows selected.
    Execution Plan
    Plan hash value: 921778235
    | Id  | Operation                                 | Name          |
    |   0 | SELECT STATEMENT                          |               |
    |   1 |  SORT ORDER BY                            |               |
    |   2 |   WINDOW SORT                             |               |
    |*  3 |    CONNECT BY NO FILTERING WITH START-WITH|               |
    |   4 |     COUNT                                 |               |
    |   5 |      VIEW                                 |               |
    |   6 |       UNION-ALL                           |               |
    |   7 |        TABLE ACCESS FULL                  | INV_LEG_DUMMY |
    |*  8 |        FILTER                             |               |
    |   9 |         TABLE ACCESS FULL                 | INV_LEG_DUMMY |
    |* 10 |         INDEX RANGE SCAN                  | XPKINV_LEG    |
    Predicate Information (identified by operation id):
       3 - access("ARRIVAL_AIRPORT"=PRIOR "DEPARTURE_AIRPORT" AND
                  "CARRIER"=PRIOR "CARRIER" AND "FLT_NUM"=PRIOR "FLT_NUM" AND
                  TRUNC(INTERNAL_FUNCTION("FLT_DATE"))=INTERNAL_FUNCTION(PRIOR
                  TRUNC(INTERNAL_FUNCTION("FLT_DATE"))))
           filter("ARRIVAL_AIRPORT" IS NULL)
       8 - filter( NOT EXISTS (SELECT 0 FROM "INV_LEG_DUMMY" "DL" WHERE
                  "DL"."FLT_NUM"=:B1 AND "DL"."CARRIER"=:B2 AND
                  "DL"."DEPARTURE_AIRPORT"=:B3 AND "DL"."FLT_DATE"=:B4))
      10 - access("DL"."CARRIER"=:B1 AND "DL"."FLT_NUM"=:B2 AND
                  "DL"."FLT_DATE"=:B3 AND "DL"."DEPARTURE_AIRPORT"=:B4)
           filter("DL"."DEPARTURE_AIRPORT"=:B1 AND "DL"."FLT_DATE"=:B2)
    Note
       - rule based optimizer used (consider using cbo)
    Statistics
              1  recursive calls
              0  db block gets
             19  consistent gets
              0  physical reads
              0  redo size
            877  bytes sent via SQL*Net to client
            338  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
              7  rows processed
    SQL> /
    CAR FLT_ F FLT_DATE  ARRIV DEPAR        SEQ    MAX_SEQ
    CX  1      22-MAY-11 BNE   CNS            1          2
    CX  1      22-MAY-11 HKG   BNE            2          2
    CX  1      22-MAY-11       HKG            3          2
    KA  1      23-MAY-11 HKG   TPE            1          3
    KA  1      23-MAY-11 DEL   HKG            2          3
    KA  1      23-MAY-11 KOL   DEL            3          3
    KA  1      23-MAY-11       KOL            4          3
    7 rows selected.
    Execution Plan
    Plan hash value: 921778235
    | Id  | Operation                                 | Name          |
    |   0 | SELECT STATEMENT                          |               |
    |   1 |  SORT ORDER BY                            |               |
    |   2 |   WINDOW SORT                             |               |
    |*  3 |    CONNECT BY NO FILTERING WITH START-WITH|               |
    |   4 |     COUNT                                 |               |
    |   5 |      VIEW                                 |               |
    |   6 |       UNION-ALL                           |               |
    |   7 |        TABLE ACCESS FULL                  | INV_LEG_DUMMY |
    |*  8 |        FILTER                             |               |
    |   9 |         TABLE ACCESS FULL                 | INV_LEG_DUMMY |
    |* 10 |         INDEX RANGE SCAN                  | XPKINV_LEG    |
    Predicate Information (identified by operation id):
       3 - access("ARRIVAL_AIRPORT"=PRIOR "DEPARTURE_AIRPORT" AND
                  "CARRIER"=PRIOR "CARRIER" AND "FLT_NUM"=PRIOR "FLT_NUM" AND
                  TRUNC(INTERNAL_FUNCTION("FLT_DATE"))=INTERNAL_FUNCTION(PRIOR
                  TRUNC(INTERNAL_FUNCTION("FLT_DATE"))))
           filter("ARRIVAL_AIRPORT" IS NULL)
       8 - filter( NOT EXISTS (SELECT 0 FROM "INV_LEG_DUMMY" "DL" WHERE
                  "DL"."FLT_NUM"=:B1 AND "DL"."CARRIER"=:B2 AND
                  "DL"."DEPARTURE_AIRPORT"=:B3 AND "DL"."FLT_DATE"=:B4))
      10 - access("DL"."CARRIER"=:B1 AND "DL"."FLT_NUM"=:B2 AND
                  "DL"."FLT_DATE"=:B3 AND "DL"."DEPARTURE_AIRPORT"=:B4)
           filter("DL"."DEPARTURE_AIRPORT"=:B1 AND "DL"."FLT_DATE"=:B2)
    Note
       - rule based optimizer used (consider using cbo)
    Statistics
              0  recursive calls
              0  db block gets
             19  consistent gets
              0  physical reads
              0  redo size
            877  bytes sent via SQL*Net to client
            338  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
          7  rows processed

Salim Chelabi's method
    SQL> WITH t AS
      2       (SELECT     k.*, LEVEL lvl
      3              FROM inv_leg_dummy k
      4        CONNECT BY PRIOR k.arrival_airport = k.departure_airport
      5               AND PRIOR k.flt_date = k.flt_date
      6               AND PRIOR k.carrier = k.carrier
      7               AND PRIOR k.flt_num = k.flt_num)
      8  SELECT   carrier, flt_num, flt_num_suffix, flt_date, arrival_airport,
      9           departure_airport, MAX (lvl) seq,
    10           MAX (MAX (lvl)) OVER (PARTITION BY carrier, flt_num, flt_num_suffix)
    11                                                                        max_seq
    12      FROM t
    13  GROUP BY carrier,
    14           flt_num,
    15           flt_num_suffix,
    16           flt_date,
    17           arrival_airport,
    18           departure_airport
    19  UNION ALL
    20  SELECT   carrier, flt_num, flt_num_suffix, flt_date, NULL,
    21           MAX (arrival_airport), MAX (lvl) + 1 seq, MAX (lvl) max_seq
    22      FROM t
    23  GROUP BY carrier, flt_num, flt_num_suffix, flt_date
    24  ORDER BY 1, 2, 3, seq, arrival_airport NULLS LAST;
    CAR FLT_ F FLT_DATE            ARRIV DEPAR        SEQ    MAX_SEQ
    CX  1      22/05/2011 00:00:00 BNE   CNS            1          2
    CX  1      22/05/2011 00:00:00 HKG   BNE            2          2
    CX  1      22/05/2011 00:00:00       HKG            3          2
    KA  1      23/05/2011 00:00:00 HKG   TPE            1          3
    KA  1      23/05/2011 00:00:00 DEL   HKG            2          3
    KA  1      23/05/2011 00:00:00 KOL   DEL            3          3
    KA  1      23/05/2011 00:00:00       KOL            4          3
    7 rows selected.
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 2360206974
    | Id  | Operation                      | Name                        |
    |   0 | SELECT STATEMENT               |                             |
    |   1 |  TEMP TABLE TRANSFORMATION     |                             |
    |   2 |   LOAD AS SELECT               |                             |
    |*  3 |    CONNECT BY WITHOUT FILTERING|                             |
    |   4 |     TABLE ACCESS FULL          | INV_LEG_DUMMY               |
    |   5 |   SORT ORDER BY                |                             |
    |   6 |    UNION-ALL                   |                             |
    |   7 |     WINDOW BUFFER              |                             |
    |   8 |      SORT GROUP BY             |                             |
    |   9 |       VIEW                     |                             |
    |  10 |        TABLE ACCESS FULL       | SYS_TEMP_0FD9FE280_59EF9B75 |
    |  11 |     SORT GROUP BY              |                             |
    |  12 |      VIEW                      |                             |
    |  13 |       TABLE ACCESS FULL        | SYS_TEMP_0FD9FE280_59EF9B75 |
    Predicate Information (identified by operation id):
       3 - access("K"."DEPARTURE_AIRPORT"=PRIOR "K"."ARRIVAL_AIRPORT" AND
                  "K"."FLT_DATE"=PRIOR "K"."FLT_DATE" AND "K"."CARRIER"=PRIOR
                  "K"."CARRIER" AND "K"."FLT_NUM"=PRIOR "K"."FLT_NUM")
    Note
       - rule based optimizer used (consider using cbo)
    Statistics
             57  recursive calls
             10  db block gets
             25  consistent gets
              1  physical reads
           1556  redo size
            877  bytes sent via SQL*Net to client
            338  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              5  sorts (memory)
              0  sorts (disk)
              7  rows processed
    SQL> /
    CAR FLT_ F FLT_DATE            ARRIV DEPAR        SEQ    MAX_SEQ
    CX  1      22/05/2011 00:00:00 BNE   CNS            1          2
    CX  1      22/05/2011 00:00:00 HKG   BNE            2          2
    CX  1      22/05/2011 00:00:00       HKG            3          2
    KA  1      23/05/2011 00:00:00 HKG   TPE            1          3
    KA  1      23/05/2011 00:00:00 DEL   HKG            2          3
    KA  1      23/05/2011 00:00:00 KOL   DEL            3          3
    KA  1      23/05/2011 00:00:00       KOL            4          3
    7 rows selected.
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 4065363664
    | Id  | Operation                      | Name                        |
    |   0 | SELECT STATEMENT               |                             |
    |   1 |  TEMP TABLE TRANSFORMATION     |                             |
    |   2 |   LOAD AS SELECT               |                             |
    |*  3 |    CONNECT BY WITHOUT FILTERING|                             |
    |   4 |     TABLE ACCESS FULL          | INV_LEG_DUMMY               |
    |   5 |   SORT ORDER BY                |                             |
    |   6 |    UNION-ALL                   |                             |
    |   7 |     WINDOW BUFFER              |                             |
    |   8 |      SORT GROUP BY             |                             |
    |   9 |       VIEW                     |                             |
    |  10 |        TABLE ACCESS FULL       | SYS_TEMP_0FD9FE281_59EF9B75 |
    |  11 |     SORT GROUP BY              |                             |
    |  12 |      VIEW                      |                             |
    |  13 |       TABLE ACCESS FULL        | SYS_TEMP_0FD9FE281_59EF9B75 |
    Predicate Information (identified by operation id):
       3 - access("K"."DEPARTURE_AIRPORT"=PRIOR "K"."ARRIVAL_AIRPORT" AND
                  "K"."FLT_DATE"=PRIOR "K"."FLT_DATE" AND "K"."CARRIER"=PRIOR
                  "K"."CARRIER" AND "K"."FLT_NUM"=PRIOR "K"."FLT_NUM")
    Note
       - rule based optimizer used (consider using cbo)
    Statistics
              2  recursive calls
              8  db block gets
             15  consistent gets
              1  physical reads
            604  redo size
            877  bytes sent via SQL*Net to client
            338  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              5  sorts (memory)
              0  sorts (disk)
              7  rows processed
SQL>
Personally I think Salim's method is very succinct. I had expected more of a difference in performance metrics between it and my attempt, but it appears there's not much between the two - although Salim's method generates redo as a result of the temp table created by the subquery factoring. I'd be interested to see the results of a full trace of each.
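For what it's worth, the full trace comparison could be done with extended SQL trace and tkprof. The sketch below assumes a 10g session with access to the server trace directory; the tracefile identifier and output file name are made up for illustration:

```sql
-- Sketch: enable event 10046 level 8 (waits) for this session,
-- run each candidate query, then format the trace with tkprof.
ALTER SESSION SET tracefile_identifier = 'connect_by_test';
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run each candidate query here ...

ALTER SESSION SET events '10046 trace name context off';
-- Then, on the server (udump directory on 10g):
--   tkprof <trace_file>.trc connect_by_test.out sort=prsela,exeela,fchela
```

Comparing the tkprof summaries side by side would show where the elapsed time and consistent gets actually differ between the two approaches.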
    Either way though, there are two alternatives which seem a fair bit more optimal than the original SQL so it's quids in I guess! :-)
    David
    Edited by: Bravid on Aug 12, 2011 3:24 PM
    Edited by: Bravid on Aug 12, 2011 3:27 PM
    Updated the comparison with Salims additional column

  • High library cache load lock waits in AWR

    Hi All,
Today I faced a significant performance problem related to the shared pool. I made some observations and thought it would be a nice idea to share them with the Oracle experts. Please feel free to add your observations/recommendations and correct me where I am wrong.
    Here are the excerpts from AWR report created for the problem timing. Database server is on 10.2.0.3 and running with 2*16 configuration. DB cache size is 4,000M and shared pool size is of 3008M.
    Snap Id Snap Time Sessions Cursors/Session
    Begin Snap: 9994 29-Jun-09 10:00:07 672 66.3
    End Snap: 10001 29-Jun-09 17:00:49 651 64.4
    Elapsed:   420.70 (mins)    
DB Time:   4,045.34 (mins)
-- Very poor response time, visible from the difference between DB time and elapsed time.
    Load Profile
    Per Second Per Transaction
    Redo size: 248,954.70 23,511.82
    Logical reads: 116,107.04 10,965.40
    Block changes: 1,357.13 128.17
    Physical reads: 125.49 11.85
    Physical writes: 51.49 4.86
    User calls: 224.69 21.22
    Parses: 235.22 22.21
    Hard parses: 4.83 0.46
    Sorts: 102.94 9.72
    Logons: 1.12 0.11
    Executes: 821.11 77.55
Transactions: 10.59
-- User calls and parse count are almost the same, meaning most of the calls are parse calls. Most of the parses are soft. 22 parses per transaction is a very high figure.
-- Not much disk I/O activity. Most of the reads are being satisfied from memory.
    Instance Efficiency
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 99.92 In-memory Sort %: 100.00
    Library Hit %: 98.92 Soft Parse %: 97.95
    Execute to Parse %: 71.35 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 16.82 % Non-Parse CPU: 91.41
-- The low execute-to-parse ratio shows the CPU is significantly busy parsing. Soft Parse % shows that most parses are soft, so we should concentrate on soft-parsing activity.
-- Parse CPU to Parse Elapsed % is quite low, meaning there is some bottleneck related to parsing. It could be a side effect of the heavy parsing load, e.g. CPU cycles not being available.
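To find out which sessions are driving that parse load, one option (a sketch against the standard dynamic performance views) is to compare per-session parse and execute counts directly:

```sql
-- Sketch: per-session parse and execute counts from v$sesstat.
-- Sessions whose parse count is close to their execute count
-- are re-parsing on (almost) every call.
SELECT s.sid, s.username, sn.name, st.value
  FROM v$sesstat  st
  JOIN v$statname sn ON sn.statistic# = st.statistic#
  JOIN v$session  s  ON s.sid = st.sid
 WHERE sn.name IN ('parse count (total)', 'parse count (hard)', 'execute count')
 ORDER BY st.value DESC;
```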
    Shared Pool Statistics
    Begin End
    Memory Usage %: 81.01 81.92
    % SQL with executions>1: 88.51 86.93
% Memory for SQL w/exec>1: 86.16 86.76
-- Shared pool memory usage seems OK (in the 80% range).
    -- 88% of the SQLs are repeating ones. It's a good sign.
    Top 5 Timed Events
    Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
    library cache load lock 24,243 64,286 2,652 26.5 Concurrency
    db file sequential read 1,580,769 42,267 27 17.4 User I/O
    CPU time   33,039   13.6  
    latch: library cache 53,013 29,194 551 12.0 Concurrency
db file scattered read 151,669 13,550 89 5.6 User I/O
Problem-1: Contention on the library cache. This may be due to an under-sized shared pool, incorrect parameters, or poor application design. But since we have already observed that most parses are soft parses and shared pool usage is around 80%, the problem seems related to cursor caching: open_cursors/session_cached_cursors are the red flags.
Problem-2: User I/O. This may be due to poor SQL, the I/O sub-system, or poor physical design (wrong indexes being used, given the db file sequential reads).
    Wait Class
    Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
    Concurrency 170,577 44.58 109,020 639 0.64
    User I/O 2,001,978 0.00 59,662 30 7.49
    System I/O 564,771 0.00 8,069 14 2.11
    Application 145,106 1.25 6,352 44 0.54
    Commit 176,671 0.37 4,528 26 0.66
    Other 27,557 6.31 2,532 92 0.10
    Network 6,862,704 0.00 696 0 25.68
    Configuration 3,858 3.71 141 37 0.01
    Wait Events
    Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
    library cache load lock 24,243 83.95 64,286 2652 0.09
    db file sequential read 1,580,769 0.00 42,267 27 5.91
    latch: library cache 53,013 0.00 29,194 551 0.20
    db file scattered read 151,669 0.00 13,550 89 0.57
    latch: shared pool 25,403 0.00 12,969 511 0.10
    log file sync 176,671 0.37 4,528 26 0.66
enq: TM - contention 1,455 90.93 3,975 2732 0.01
Instance Activity Stats
    opened cursors cumulative 5,290,760 209.60 19.80
    parse count (failures) 6,181 0.24 0.02
    parse count (hard) 121,841 4.83 0.46
    parse count (total) 5,937,336 235.22 22.21
    parse time cpu 283,787 11.24 1.06
parse time elapsed 1,687,096 66.84 6.31
Latch Activity
    library cache 85,042,375 0.15 0.43 29194 304,831 7.16
    library cache load lock 257,089 0.00 1.20 0 69,065 0.00
    library cache lock 41,467,300 0.02 0.07 6 2,714 0.07
    library cache lock allocation 730,422 0.00 0.44 0 0  
    library cache pin 28,453,986 0.01 0.16 8 167 0.00
library cache pin allocation 509,000 0.00 0.38 0 0
Init.ora parameters
    cursor_sharing= EXACT
    open_cursors= 3000
    session_cached_cursors= 0
-- The open_cursors value is too high. I have checked that the maximum usage by a single session is 12%.
-- session_cached_cursors is 0, which causes soft parsing. 500-600 is a good number to start with.
-- cursor_sharing=EXACT may cause hard parses, but here hard parsing is comparatively small, so we can ignore it.
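If we go with the 500-600 starting point suggested above, the change itself is simple. A sketch (the value 500 is just the suggested starting point, not a tuned figure):

```sql
-- Sketch: enable session cursor caching. The parameter is
-- session-modifiable; at the instance level on 10g it typically
-- needs SCOPE=SPFILE plus an instance restart to take effect
-- for all sessions.
ALTER SESSION SET session_cached_cursors = 500;
ALTER SYSTEM  SET session_cached_cursors = 500 SCOPE=SPFILE;
```

After the change, the "session cursor cache hits" statistic in v$sysstat should start climbing relative to "parse count (total)".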
    From v$librarycache
    NAMESPACE             GETS    GETHITS GETHITRATIO       PINS PINHITRATIO    RELOADS INVALIDATIONS
SQL AREA            162827      25127  .154317159  748901435  .999153087     107941         81886
-- High invalidation count due to DDL-like activities.
    -- high reloads due to small library cache.
    -- hit ratio too small.
    -- Need to pin frequently executed objects into library cache.
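Pinning can be done with DBMS_SHARED_POOL.KEEP. A sketch, in which the owner and package names are purely hypothetical placeholders:

```sql
-- Sketch: pin a frequently executed package in the shared pool.
-- DBMS_SHARED_POOL may need to be installed first via
-- ?/rdbms/admin/dbmspool.sql. 'P' marks a package/procedure/function.
-- APPOWNER.HOT_PACKAGE is a hypothetical name for illustration.
EXEC DBMS_SHARED_POOL.KEEP('APPOWNER.HOT_PACKAGE', 'P');

-- Verify which objects are currently kept:
SELECT owner, name, type, kept
  FROM v$db_object_cache
 WHERE kept = 'YES';
```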
P.S. The same question was asked on Oracle_L, but due to formatting reasons I am pasting the contents here as well.
    Regards,
    Neeraj Bhatia
Edited by: Neeraj.Bhatia2 on Jul 13, 2009 6:51 AM

    Thanks Charles. I really appreciate your efforts to diagnose the issue.
I agree with you that the performance issue is caused by soft parsing, which can be addressed by caching cursors (session_cached_cursors). It may also be due to an oversized shared pool, which causes delays in searching for child cursors.
My second thought is that there is a large number of reloads, which can be due to an under-sized shared pool: if invalidation activities (CBO statistics collection, DDL, etc.) are not occurring, then cursors are being flushed frequently.
CPU utilization is continuously high (above 90%). Pasting additional information from the same AWR report.
    Namespace                Get Requests       Pct Miss        Pin Requests         Pct Miss      Reloads        Invalidations
    BODY                       225,345               0.76            4,965,541            0.15           5,533           0
    CLUSTER                   1,278                  1.41            2,542                  1.73           26                0
    INDEX                       5,982                  9.31            13,922                7.35           258               0
SQL AREA                  141,465              54.10           27,831,235         1.21           69,863          19,085
Latch Miss Sources
    Latch Name             Where                                         NoWait Misses                 Sleeps             Waiter Sleeps
library cache lock       kgllkdl: child: no lock handle             0                                   8,250                   5,792
Time Model Statistics
    Statistic Name                                                                           Time (s)                               % of DB Time
    sql execute elapsed time                                                           206,979.31                                      85.27
    PL/SQL execution elapsed time                                                    94,651.78                                      39.00
    DB CPU                                                                                     33,039.29                                      13.61
    parse time elapsed                                                                      22,635.47                                       9.33
    inbound PL/SQL rpc elapsed time                                                  14,763.48                                       6.08
    hard parse elapsed time                                                               14,136.77                                       5.82
    connection management call elapsed time                                        1,625.07                                       0.67
    PL/SQL compilation elapsed time                                                        760.76                                       0.31
    repeated bind elapsed time                                                               664.81                                       0.27
    hard parse (sharing criteria) elapsed time                                             500.11                                       0.21
    Java execution elapsed time                                                              252.95                                       0.10
    failed parse elapsed time                                                                   167.23                                       0.07
    hard parse (bind mismatch) elapsed time                                             124.11                                       0.05
    sequence load elapsed time                                                                23.34                                        0.01
    DB time                                                                                   242,720.12  
    background elapsed time                                                             11,645.52  
    background cpu time                                                                      247.25
    According to this, DB CPU is at 65% utilization ((DB CPU + background CPU) / total available CPU seconds), while at the same time the DB host was 95% utilized (confirmed from DBA_HIST_SYSMETRIC_SUMMARY).
    Operating System Statistics
    Statistic                                         Total
    BUSY_TIME                             3,586,030
    IDLE_TIME                              1,545,064
    IOWAIT_TIME                              22,237
    NICE_TIME                                           0
    SYS_TIME                                  197,661
    USER_TIME                              3,319,452
    LOAD                                                 11
    RSRC_MGR_CPU_WAIT_TIME                  0
    PHYSICAL_MEMORY_BYTES          867,180
    NUM_CPUS                                           2
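    As a sanity check, the OS statistics above give BUSY_TIME / (BUSY_TIME + IDLE_TIME) = 3,586,030 / 5,131,094, i.e. roughly 70% busy over the whole snapshot window, so the 95% figure from DBA_HIST_SYSMETRIC_SUMMARY presumably reflects shorter peaks. The same ratio can be computed live from V$OSSTAT (a sketch; the times are in centiseconds on this platform):

    ```sql
    -- Host CPU utilization since instance startup, from V$OSSTAT.
    SELECT ROUND(100 * busy.value / (busy.value + idle.value), 1) AS host_cpu_pct
    FROM  (SELECT value FROM v$osstat WHERE stat_name = 'BUSY_TIME') busy,
          (SELECT value FROM v$osstat WHERE stat_name = 'IDLE_TIME') idle;
    ```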

  • How to replace sapdba with brtools after upgrade database to 10.2

    All,
        I have upgraded our database to Oracle 10.2, and my BRTOOLS version is now 7.0,
    but I can't run analyze table and dbcheck in DB13;
    they seem to still use SQLDBA when running analyze table and dbcheck.
    Please refer to the information below.
    Detail log:                    0810080300.aly
    *****                 SAPDBA - SAP Database Administration for ORACLE     *****
    SAPDBA V6.10    Analyze tables
    SAPDBA release: 6.10
    Patch level   : 1
    Patch date    : 2001-05-25
    ORACLE_SID    : PRD
    ORACLE_HOME   : /oracle/PRD/102_64
    ORACLE VERSION: 10.2.0.2.0
    Database state: 'open'
    SAPPRD       : 46C
    SAPDBA DB USER:  (-u option)
    OS login user : prdadm
    OS eff.  user : prdadm
    SYSDBA   priv.: not checked
    SYSOPER  priv.: not checked
    Command line  : sapdba -u / -analyze DBSTATCO
    HOST NAME     : sapprd1
    OS SYSTEM     : HP-UX
    OS RELEASE    : B.11.31
    OS VERSION    : U
    MACHINE       : ia64
    Log file      : '/oracle/PRD/sapcheck/0810080300.aly'
    Log start date: '2008-10-08'
    Log start time: '03.00.09'
    ----- Start of deferred log ---
    SAPDBA: Can't find the executable for SQLDBA/SVRMGR. Please, install one of
            them or enter one of them in the SAPDBA profile (parameter
            sqldba_path).
            (2008-10-08 03.00.06)
    SAPDBA: Error - running temporary sql script
            '/oracle/PRD/sapreorg/dbacmd.sql' with contents:
    CONNECT /******** AS SYSDBA
    SAPDBA: Couldn't check SYSDBA privilege.
    SAPDBA: Can't find the executable for SQLDBA/SVRMGR. Please, install one of
            them or enter one of them in the SAPDBA profile (parameter
            sqldba_path).
            (2008-10-08 03.00.06)
    SAPDBA: Error - running temporary sql script
            '/oracle/PRD/sapreorg/dbacmd.sql' with contents:
    CONNECT /******** AS SYSOPER
    SAPDBA: Couldn't check SYSOPER privilege.
    ----- End of deferred log ---
    Analyze parameters:
    Object: All tables in table DBSTATC ( for DB optimization run )
    Method: E ( Default )
    Option: P10 ( Default )
    Time frame: 100 hours
    Refresh   : All objects
    Option: DBSTATCO ( for the DB optimizer: Tables with Flag DBSTATC-TOBDO = 'X' )
    ** Refresh Statistics according control table DBSTATC **
    Total Number of Tables in DBSTATC to be analyzed:                           170
    Number of Tables with forced statistics update (ACTIV = 'U'):                 0
    SAPDBA: SELECT USER# FROM SYS.USER$ WHERE NAME='SAPPRD'
    ORA-00942: table or view does not exist
               (2008-10-08 03.00.09)
    SAPDBA: Error - getting size of segment 'SAPPRD.D010INC'
    SAPDBA: Error - during table analysis - table name: ->D010INC
    SAPDBA: No tables analyzed ( No entries in DBSTATC with TOBDO = X or errors ).
    SAPDBA: 0 table(s) out of 170 was (were) analyzed
            Difference may be due to:
               - Statistics not allowed ( see DBSTATC in CCMS )
               - Tables do not exist on database and were skipped
    Detailed summary of Step 1:
    Number of Tables that needed new statistics according to DBSTATC:             1
    Number of Tables marked in DBSTATC, but non-existent on the Database:         0
    Number of Tables where the statistics flag was resetted:                      0
    ******* Creating statistics for all tables without optimizer statistics *******
    SAPDBA: Using control table DBSTATC
            for taking optimizer settings into account
    SAPDBA: 0 table(s) without statistics were found.
    SAPDBA: 0 table(s) ( out of 0 ) was (were) analyzed/refreshed.
            0 table(s) was (were) explicitely excluded or pool/cluster table(s).
    SAPDBA: 0 index(es) without statistics was (were) found.
    SAPDBA: 0 index(es) ( out of 0 ) was (were) analyzed/refreshed.
            0 index(es) was (were) explicitely excluded or pool/cluster indexe(s).
    SAPDBA: 157 table statistics from 157 tables were dropped.
            They are either explicitely excluded in DBSTATC,
            or R/3 Pool- or Cluster- tables
            that must not have CBO Statistics
    SAPDBA: The whole operation took 10 sec
    SAPDBA: Step 1 was finished unsuccessfully
    SAPDBA: Step 2 was finished successfully
    Date:   2008-10-08
    Time:   03.00.19
    *********************** End of SAPDBA statistics report ****************
    How can I replace SAPDBA with BRTOOLS? Please give me your support, thanks.
    Best Regards,
    Mr. Chen

    >     I have upgraded our database to Oracle 10.2, and my BRTOOLS version is now 7.0,
    > but I can't run analyze table and dbcheck in DB13;
    > they seem to still use SQLDBA when running analyze table and dbcheck.
    Yes, it does so, because somebody forgot to upgrade the BASIS SP as well...
    What BASIS SP are you using?
    regards
    Lars
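    For reference, once the BASIS SP is current, the DB13 statistics job calls BRCONNECT instead of SAPDBA; the rough equivalent of the old "sapdba -analyze DBSTATCO" run from the command line is something like the following (hedged — check the BRTOOLS documentation for the exact options of your release):

    ```
    brconnect -u / -c -f stats -t all
    ```

    BRCONNECT honours the DBSTATC control table in the same way SAPDBA did, so tables excluded there (pool/cluster tables) are still skipped.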

  • Oracle 11g with OPTIMIZER_MODE=RULE go faster!!

    I recently migrated from Oracle 9.2.0.8 to Oracle 11g, but the queries don't perform as I hoped.
    The same query takes approx. 3:20 min using optimizer_mode=ALL_ROWS and 0:20 using optimizer_mode=RULE or the RULE hint.
    Under the CBO the query makes a Cartesian product between the indexes of the table.
    This is one query and the "autotrace on" log on Oracle 11g:
    SELECT /*+ NO_INDEX (PK0004111303310) */MIN(BASE.ID_SCHED_TASK)+1 I
    FROM M4RJS_SCHED_TASKS BASE
    WHERE NOT EXISTS
    (SELECT BASE2.ID_SCHED_TASK
    FROM M4RJS_SCHED_TASKS BASE2
    WHERE BASE2.ID_SCHED_TASK>BASE.ID_SCHED_TASK
    AND BASE2.ID_SCHED_TASK<BASE.ID_SCHED_TASK+2)
    ORDER BY 1 ASC
    Execution Plan
    Plan hash value: 3937517195
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 14 | | 328 (2)| 00:00:04 |
    | 1 | SORT AGGREGATE | | 1 | 14 | | | |
    | 2 | MERGE JOIN ANTI | | 495 | 6930 | | 328 (2)| 00:00:04 |
    | 3 | INDEX FULL SCAN | PK0004111303310 | 49487 | 338K| | 119 (1)| 00:00:02 |
    |* 4 | FILTER | | | | | | |
    |* 5 | SORT JOIN | | 49487 | 338K| 1576K| 209 (2)| 00:00:03 |
    | 6 | INDEX FAST FULL SCAN| PK0004111303310 | 49487 | 338K| | 33 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - filter("BASE2"."ID_SCHED_TASK"<"BASE"."ID_SCHED_TASK"+2)
    5 - access("BASE2"."ID_SCHED_TASK">"BASE"."ID_SCHED_TASK")
    filter("BASE2"."ID_SCHED_TASK">"BASE"."ID_SCHED_TASK")
    Statistics
    1 recursive calls
    0 db block gets
    242 consistent gets
    8 physical reads
    0 redo size
    519 bytes sent via SQL*Net to client
    524 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Thanks to all !

    Sorry Mschnatt, I posted the wrong query; I was testing with hints. The correct query is the one you posted.
    1* I analyzed the tables and the result is the same:
    This is the query and the "autotrace on" log using OPTIMIZER_MODE=RULE on Oracle 11g:
    SQL> R
    1 SELECT MIN(BASE.ID_SCHED_TASK)+1 I
    2 FROM M4RJS_SCHED_TASKS BASE
    3 WHERE NOT EXISTS
    4 (SELECT BASE2.ID_SCHED_TASK
    5 FROM M4RJS_SCHED_TASKS BASE2
    6 WHERE BASE2.ID_SCHED_TASK>BASE.ID_SCHED_TASK
    7 AND BASE2.ID_SCHED_TASK<BASE.ID_SCHED_TASK+2)
    8* ORDER BY 1 ASC
    I
    2
    Elapsed: 00:00:00.33
    Execution Plan
    Plan hash value: 795265574
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | SORT AGGREGATE | |
    |* 2 | FILTER | |
    | 3 | TABLE ACCESS FULL | M4RJS_SCHED_TASKS |
    |* 4 | INDEX RANGE SCAN | PK0004111303310 |
    Predicate Information (identified by operation id):
    2 - filter( NOT EXISTS (SELECT 0 FROM "M4RJS_SCHED_TASKS" "BASE2"
    WHERE "BASE2"."ID_SCHED_TASK"<:B1+2 AND "BASE2"."ID_SCHED_TASK">:B2))
    4 - access("BASE2"."ID_SCHED_TASK">:B1 AND
    "BASE2"."ID_SCHED_TASK"<:B2+2)
    Note
    - rule based optimizer used (consider using cbo)
    Statistics
    0 recursive calls
    0 db block gets
    101509 consistent gets
    0 physical reads
    0 redo size
    519 bytes sent via SQL*Net to client
    524 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    This is the query and the "autotrace on" log using OPTIMIZER_MODE=ALL_ROWS on Oracle 11g:
    Elapsed: 00:03:14.78
    Execution Plan
    Plan hash value: 3937517195
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 12 | | 317 (2)| 00:00:04 |
    | 1 | SORT AGGREGATE | | 1 | 12 | | | |
    | 2 | MERGE JOIN ANTI | | 495 | 5940 | | 317 (2)| 00:00:04 |
    | 3 | INDEX FULL SCAN | PK0004111303310 | 49487 | 289K| | 119 (1)| 00:00:02 |
    |* 4 | FILTER | | | | | | |
    |* 5 | SORT JOIN | | 49487 | 289K| 1176K| 198 (3)| 00:00:03 |
    | 6 | INDEX FAST FULL SCAN| PK0004111303310 | 49487 | 289K| | 33 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - filter("BASE2"."ID_SCHED_TASK"<"BASE"."ID_SCHED_TASK"+2)
    5 - access("BASE2"."ID_SCHED_TASK">"BASE"."ID_SCHED_TASK")
    filter("BASE2"."ID_SCHED_TASK">"BASE"."ID_SCHED_TASK")
    Statistics
    0 recursive calls
    0 db block gets
    242 consistent gets
    0 physical reads
    0 redo size
    519 bytes sent via SQL*Net to client
    524 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    1 rows processed
    3* This is an example query; the problem persists in other, bigger queries.
    Thanks for your help
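    As an aside on the query itself: the NOT EXISTS pattern is hunting for the first gap in ID_SCHED_TASK. Assuming the IDs are distinct integers, one common rewrite (a sketch, not tested against this schema) avoids the anti-join entirely with a single ordered pass:

    ```sql
    -- Smallest free ID: pair each ID with its successor; the first row
    -- whose successor is not ID+1 sits just before the gap.
    SELECT MIN(id_sched_task) + 1 AS i
    FROM  (SELECT id_sched_task,
                  LEAD(id_sched_task) OVER (ORDER BY id_sched_task) AS next_id
           FROM   m4rjs_sched_tasks)
    WHERE  next_id IS NULL
       OR  next_id > id_sched_task + 1;
    ```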

  • Scaleability with Functions in SQL queries

    Hi,
    In one of our applications we have many views that use a packaged function in the where clause to filter data. This function uses SYS_CONTEXT() to set and get values. There are a couple of issues with this approach:
    1/ The deterministic function doesn't allow any scalability with PQ servers.
    2/ Another issue with this function (and also the SYS_CONTEXT function) is that they skew the estimated CBO statistics.
      CREATE TABLE TAB_I
      COLUMN1 NUMBER(16, 0) NOT NULL
    , COLUMN2 VARCHAR2(20)
    , CONSTRAINT TAB_I_PK PRIMARY KEY
        COLUMN1
      ENABLE
    CREATE TABLE TAB_V
        I_COL1     NUMBER(16,0) NOT NULL ENABLE,
        VERSION_ID NUMBER(16,0) NOT NULL ENABLE,
        CRE_DATIM TIMESTAMP (6) NOT NULL ENABLE,
        TERM_DATIM TIMESTAMP (6) NOT NULL ENABLE,
        VERSION_VALID_FROM DATE NOT NULL ENABLE,
        VERSION_VALID_TILL DATE NOT NULL ENABLE,
        CONSTRAINT TAB_V_PK PRIMARY KEY (I_COL1, VERSION_ID) USING INDEX NOCOMPRESS LOGGING ENABLE,
        CONSTRAINT COL1_FK FOREIGN KEY (I_COL1) REFERENCES TAB_I (COLUMN1) ENABLE
    CREATE OR REPLACE
    PACKAGE      app_bitemporal_rules IS
    FUNCTION f_knowledge_time RETURN TIMESTAMP DETERMINISTIC;
    END app_bitemporal_rules;
    create or replace
    PACKAGE BODY      app_bitemporal_rules IS
    FUNCTION f_knowledge_time RETURN TIMESTAMP DETERMINISTIC IS
    BEGIN
         RETURN TO_TIMESTAMP(SYS_CONTEXT ('APP_USR_CTX', 'KNOWLEDGE_TIME'),'DD.MM.YYYY HH24.MI.SSXFF');
    END f_knowledge_time;
    END app_bitemporal_rules;
    explain plan for select *
    FROM tab_i
    JOIN tab_v
    ON tab_i.column1 = tab_v.i_col1
    AND           app_bitemporal_rules.f_knowledge_time BETWEEN tab_v.CRE_DATIM AND tab_v.TERM_DATIM
    where tab_i.column1 = 11111;
    select * from table(dbms_xplan.display);
    Plan hash value: 621902595
    | Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |          |     1 |    95 |     5   (0)| 00:00:06 |
    |   1 |  NESTED LOOPS                |          |     1 |    95 |     5   (0)| 00:00:06 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| TAB_I    |     1 |    25 |     1   (0)| 00:00:02 |
    |*  3 |    INDEX UNIQUE SCAN         | TAB_I_PK |     1 |       |     1   (0)| 00:00:02 |
    |*  4 |   TABLE ACCESS FULL          | TAB_V    |     1 |    70 |     4   (0)| 00:00:05 |
    Predicate Information (identified by operation id):
       3 - access("TAB_I"."COLUMN1"=11111)
       4 - filter("TAB_V"."I_COL1"=11111 AND
                  "TAB_V"."CRE_DATIM"<="APP_BITEMPORAL_RULES"."F_KNOWLEDGE_TIME"() AND
                  "TAB_V"."TERM_DATIM">="APP_BITEMPORAL_RULES"."F_KNOWLEDGE_TIME"())
    Note
       - 'PLAN_TABLE' is old version
       - dynamic sampling used for this statement (level=2)
    explain plan for select *
    FROM tab_i
    JOIN tab_v
    ON tab_i.column1 = tab_v.i_col1
    AND           '10-OCT-2011' BETWEEN tab_v.CRE_DATIM AND tab_v.TERM_DATIM
    where tab_i.column1 = 11111;
    select * from table(dbms_xplan.display);  
    Plan hash value: 621902595
    | Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |          |   256 | 24320 |     5   (0)| 00:00:06 |
    |   1 |  NESTED LOOPS                |          |   256 | 24320 |     5   (0)| 00:00:06 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| TAB_I    |     1 |    25 |     1   (0)| 00:00:02 |
    |*  3 |    INDEX UNIQUE SCAN         | TAB_I_PK |     1 |       |     1   (0)| 00:00:02 |
    |*  4 |   TABLE ACCESS FULL          | TAB_V    |   256 | 17920 |     4   (0)| 00:00:05 |
    Predicate Information (identified by operation id):
       3 - access("TAB_I"."COLUMN1"=11111)
       4 - filter("TAB_V"."I_COL1"=11111 AND "TAB_V"."CRE_DATIM"<=TIMESTAMP'
                  2011-10-10 00:00:00.000000000' AND "TAB_V"."TERM_DATIM">=TIMESTAMP' 2011-10-10
                  00:00:00.000000000')
    Note
       - 'PLAN_TABLE' is old version
       - dynamic sampling used for this statement (level=2)
    As can be seen in the second plan, the cardinality has been guessed correctly, but not in the first case.
    I have also tried with:
    ASSOCIATE STATISTICS WITH PACKAGES app_bitemporal_rules DEFAULT COST (1000000/*246919*/,1000,0) DEFAULT SELECTIVITY 50;
    But this just leads to an increased cost, with no change in cardinality.
    Problem (1) gets solved if I directly use "TO_TIMESTAMP(SYS_CONTEXT ('APP_USR_CTX', 'KNOWLEDGE_TIME'),'DD.MM.YYYY HH24.MI.SSXFF')" in the where clause, but I am not able to find a solution for issue (2).
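    For completeness, the variant that solves (1), with SYS_CONTEXT inlined instead of wrapped in a PL/SQL function, looks like this (a sketch; the built-in SYS_CONTEXT is handled by the optimizer much like a bind value, which is what lifts the parallel-query restriction of the opaque packaged function):

    ```sql
    SELECT *
    FROM   tab_i
    JOIN   tab_v
      ON   tab_i.column1 = tab_v.i_col1
     AND   TO_TIMESTAMP(SYS_CONTEXT('APP_USR_CTX', 'KNOWLEDGE_TIME'),
                        'DD.MM.YYYY HH24.MI.SSXFF')
             BETWEEN tab_v.cre_datim AND tab_v.term_datim
    WHERE  tab_i.column1 = 11111;
    ```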
    Can you please help.
    Regards,
    Vikram R

    Hi Vikram,
    On the subject of using [url http://download.oracle.com/docs/cd/E11882_01/server.112/e26088/statements_4006.htm#i2115932]ASSOCIATE STATISTICS, having done a little investigation on 11.2.0.2, I'm having trouble adjusting selectivity via "associate statistics ... default selectivity" but no problems with adjusting default cost.
    I've also tried to do the same using an interface type and am running into other issues.
    It's not functionality that I'm overly familiar with as I try to avoid/eliminate using functions in predicates.
    Further analysis/investigation required.
    Including test case of what I've done so far in case anyone else wants to chip in.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    SQL> drop table t1;
    Table dropped.
    SQL>
    SQL> create table t1
      2  as
      3  select rownum col1
      4  from   dual
      5  connect by rownum <= 100000;
    Table created.
    SQL>
    SQL> exec dbms_stats.gather_table_stats(USER,'T1');
    PL/SQL procedure successfully completed.
    SQL>
    SQL> create or replace function f1
      2  return number
      3  as
      4  begin
      5   return 1;
      6  end;
      7  /
    Function created.
    SQL>
    SQL> create or replace function f2 (
      2   i_col1 in number
      3  )
      4  return number
      5  as
      6  begin
      7   return 1;
      8  end;
      9  /
    Function created.
    SQL>
    Created one table with 100000 rows.
    Two functions - one without arguments, one with (for later).
    With no associations:
    SQL> select * from user_associations;
    no rows selected
    SQL>
    Run a statement that uses the function:
    SQL> select count(*) from t1 where col1 >= f1;
      COUNT(*)
        100000
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  gm7ppkbzut114, child number 0
    select count(*) from t1 where col1 >= f1
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |       |       |   139 (100)|          |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| T1   |  5000 | 25000 |   139  (62)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("COL1">="F1"())
    19 rows selected.
    SQL>
    This shows the default selectivity of 5% for the predicate against the function.
    Let's try to adjust the selectivity using associate statistics - the argument for selectivity should be a percentage between 0 and 100:
    (turning off cardinality feedback for clarity/simplicity)
    SQL> alter session set "_optimizer_use_feedback" = false;
    Session altered.
    SQL>
    SQL> ASSOCIATE STATISTICS WITH FUNCTIONS f1 default selectivity 100;
    Statistics associated.
    SQL> select count(*) from t1 where col1 >= f1;
      COUNT(*)
        100000
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  gm7ppkbzut114, child number 1
    select count(*) from t1 where col1 >= f1
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |       |       |   139 (100)|          |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| T1   |  5000 | 25000 |   139  (62)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("COL1">="F1"())
    19 rows selected.
    SQL>
    It didn't make any difference to the selectivity.
    An excerpt from a 10053 trace file had the following:
    ** Performing dynamic sampling initial checks. **
    ** Dynamic sampling initial checks returning FALSE.
      No statistics type defined for function F1
      No default cost defined for function F1
    So, crucially, what's missing here is a clause saying:
      No default selectivity defined for function F1
    But there's no other information that I could see to indicate why it should be discarded.
    Moving on, adjusting the cost does happen:
    SQL>exec spflush;
    PL/SQL procedure successfully completed.
    SQL> disassociate statistics from functions f1;
    Statistics disassociated.
    SQL>
    SQL> ASSOCIATE STATISTICS WITH FUNCTIONS f1 default selectivity 100 default cost (100,5,0);
    Statistics associated.
    SQL> select count(*) from t1 where col1 >= f1;
      COUNT(*)
        100000
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  gm7ppkbzut114, child number 0
    select count(*) from t1 where col1 >= f1
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |       |       |   500K(100)|          |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| T1   |  5000 | 25000 |   500K  (1)| 00:41:41 |
    Predicate Information (identified by operation id):
       2 - filter("COL1">="F1"())
    19 rows selected.
    SQL> And we see the following in a 10053:
      No statistics type defined for function F1
      Default costs for function F1 CPU: 100, I/O: 5
    So, confirmation that default costs for the function were found and applied, but nothing else about selectivity again.
    I wondered whether the lack of arguments for function F1 made any difference, hence function F2.
    Didn't seem to:
    Vanilla:
    SQL> select count(*) from t1 where col1 >= f2(col1);
      COUNT(*)
        100000
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  2wxw32wadgc1v, child number 0
    select count(*) from t1 where col1 >= f2(col1)
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |       |       |   139 (100)|          |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| T1   |  5000 | 25000 |   139  (62)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("COL1">="F2"("COL1"))
    19 rows selected.
    SQL> Plus association:
    SQL>exec spflush;
    PL/SQL procedure successfully completed.
    SQL>
    SQL> associate statistics with functions f2 default selectivity 90 default cost (100,5,0);
    Statistics associated.
    SQL> select count(*) from t1 where col1 >= f2(col1);
      COUNT(*)
        100000
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  2wxw32wadgc1v, child number 0
    select count(*) from t1 where col1 >= f2(col1)
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |       |       |   500K(100)|          |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| T1   |  5000 | 25000 |   500K  (1)| 00:41:41 |
    Predicate Information (identified by operation id):
       2 - filter("COL1">="F2"("COL1"))
    19 rows selected.
    SQL> Just to confirm associations:
    SQL> select * from user_associations;
    OBJECT_OWNER                   OBJECT_NAME                    COLUMN_NAME                    OBJECT_TY
    STATSTYPE_SCHEMA               STATSTYPE_NAME                 DEF_SELECTIVITY DEF_CPU_COST DEF_IO_COST DEF_NET_COST
    INTERFACE_VERSION MAINTENANCE_TY
    RIMS                           F2                                                            FUNCTION
                                                                               90          100           5
                    0 USER_MANAGED
    RIMS                           F1                                                            FUNCTION
                                                                              100          100           5
                    0 USER_MANAGED
    SQL> So.... started thinking about whether using an interface type would help?
    SQL> CREATE OR REPLACE TYPE test_stats_ot AS OBJECT
      2  (dummy_attribute NUMBER
      3  ,STATIC FUNCTION ODCIGetInterfaces (
      4     ifclist                OUT SYS.ODCIObjectList
      5   ) RETURN NUMBER
      6  ,STATIC FUNCTION ODCIStatsSelectivity (
      7      pred                   IN  SYS.ODCIPredInfo,
      8      sel                    OUT NUMBER,
      9      args                   IN  SYS.ODCIArgDescList,
    10      strt                   IN  NUMBER,
    11      stop                   IN  NUMBER,
    12      --i_col1                 in  NUMBER,
    13      env                    IN  SYS.ODCIEnv
    14   ) RETURN NUMBER
    15  --,STATIC FUNCTION ODCIStatsFunctionCost (
    16  --    func                   IN  SYS.ODCIPredInfo,
    17  --    cost                   OUT SYS.ODCICost,
    18  --    args                   IN  SYS.ODCIArgDescList,
    19  --    i_col1                 in  NUMBER,
    20  --    env                    IN  SYS.ODCIEnv
    21  -- ) RETURN NUMBER
    22  );
    23  /
    Type created.
    SQL> CREATE OR REPLACE TYPE BODY test_stats_ot
      2  AS
      3   STATIC FUNCTION ODCIGetInterfaces (
      4    ifclist                OUT SYS.ODCIObjectList
      5   ) RETURN NUMBER
      6   IS
      7   BEGIN
      8    ifclist := sys.odciobjectlist(sys.odciobject('SYS','ODCISTATS2'));
      9    RETURN odciconst.success;
    10   END;
    11   STATIC FUNCTION ODCIStatsSelectivity
    12   (pred                   IN  SYS.ODCIPredInfo,
    13    sel                    OUT NUMBER,
    14    args                   IN  SYS.ODCIArgDescList,
    15    strt                   IN  NUMBER,
    16    stop                   IN  NUMBER,
    17    --i_col1                 in  NUMBER,
    18    env                    IN  SYS.ODCIEnv)
    19   RETURN NUMBER
    20   IS
    21   BEGIN
    22     sel := 90;
    23     RETURN odciconst.success;
    24   END;
    25  -- STATIC FUNCTION ODCIStatsFunctionCost (
    26  --  func                   IN  SYS.ODCIPredInfo,
    27  --  cost                   OUT SYS.ODCICost,
    28  --  args                   IN  SYS.ODCIArgDescList,
    29  --  i_col1                 in  NUMBER,
    30  --  env                    IN  SYS.ODCIEnv
    31  -- ) RETURN NUMBER
    32  -- IS
    33  -- BEGIN
    34  --  cost := sys.ODCICost(10000,5,0,0);
    35  --  RETURN odciconst.success;
    36  -- END;
    37  END;
    38  /
    Type body created.
    SQL>
    But this approach is not happy; perhaps it doesn't like the function having no arguments?
    SQL> disassociate statistics from functions f1;
    Statistics disassociated.
    SQL> ASSOCIATE STATISTICS WITH FUNCTIONS f1 USING test_stats_ot;
    Statistics associated.
    SQL> select count(*) from t1 where col1 >= f1;
    select count(*) from t1 where col1 >= f1
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-06550: line 12, column 22:
    PLS-00103: Encountered the symbol "ÀÄ" when expecting one of the following:
    ) , * & = - + < / > at in is mod remainder not rem =>
    <an exponent (**)> <> or != or ~= >= <= <> and or like like2
    like4 likec between || multiset member submultiset
    SQL>
    So, back to F2 again (uncommenting argument i_col1 in ODCIStatsSelectivity):
    SQL> disassociate statistics from functions f1;
    Statistics disassociated.
    SQL> disassociate statistics from functions f2;
    Statistics disassociated.
    SQL> ASSOCIATE STATISTICS WITH FUNCTIONS f2 USING test_stats_ot;
    Statistics associated.
    SQL> select count(*) from t1 where col1 >= f2(col1);
      COUNT(*)
        100000
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  2wxw32wadgc1v, child number 0
    select count(*) from t1 where col1 >= f2(col1)
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |       |       |   139 (100)|          |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| T1   |  5000 | 25000 |   139  (62)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("COL1">="F2"("COL1"))
    19 rows selected.
    SQL>
    Nothing obviously happening.
    You'll note also in my interface type implementation that I commented out a declaration of ODCIStatsFunctionCost.
    This post is probably too long already, so I've skipped some of the detail.
    But when ODCIStatsFunctionCost was used with function F2, I presume I've made a mistake in the implementation because I had an error in the 10053 trace as follows:
      Calling user-defined function cost function...
        predicate: "RIMS"."F2"("T1"."COL1")
      declare
         cost sys.ODCICost := sys.ODCICost(NULL, NULL, NULL, NULL);
         arg0 NUMBER := null;
        begin
          :1 := "RIMS"."TEST_STATS_OT".ODCIStatsFunctionCost(
                         sys.ODCIFuncInfo('RIMS',
                                'F2',
                                NULL,
                                1),
                         cost,
                         sys.ODCIARGDESCLIST(sys.ODCIARGDESC(2, 'T1', 'RIMS', '"COL1"', NULL, NULL, NULL))
                         , arg0,
                         sys.ODCIENV(:5,:6,:7,:8));
          if cost.CPUCost IS NULL then
            :2 := -1.0;
          else
            :2 := cost.CPUCost;
          end if;
          if cost.IOCost IS NULL then
            :3 := -1.0;
          else
            :3 := cost.IOCost;
          end if;
          if cost.NetworkCost IS NULL then
            :4 := -1.0;
          else
            :4 := cost.NetworkCost;
          end if;
          exception
            when others then
              raise;
        end;
    ODCIEnv Bind :5 Value 0
    ODCIEnv Bind :6 Value 0
    ODCIEnv Bind :7 Value 0
    ODCIEnv Bind :8 Value 4
      ORA-6550 received when calling RIMS.TEST_STATS_OT.ODCIStatsFunctionCost -- method ignored
    There was never any such feedback about ODCIStatsSelectivity.
    So, in summary, more questions than answers.
    I'll try to have another look later.

  • Mail server is too slow to deliver the mail to internal domain

    Hi,
    My mail server is fast enough when sending mail to other domains, but when I try to send mail to my own domain it is very slow; sometimes it takes 30 to 40 minutes to deliver the mail.
    Please help
    Thanks,
    Gulab Pasha

    You should use Statspack to check what the main waits are.
    Some indicators to check:
    - too many FTS / excessive IO => check SQL statements (missing index, wrong where clause)
    - explain plan for the most important queries: using CBO or RBO? If CBO, statistics should be up to date. If RBO, check the access path.
    - excessive logfile switches (> 5 per hour): increase the logfiles or disable logging
    - undo waits => not enough rollback segments (if you don't use AUM)
    - data waits => alter initrans, pctfree, pctused
    - too many chained rows => rebuild the data or rebuild the table
    - too many levels in indexes => rebuild the index
    - excessive parsing: use bind variables or alter the cursor_sharing parameter
    - too many sorts on disk => increase sort_area_size and create other temporary tablespaces on separate disks
    - too many block reads for a row => db_block_size too small or too many chained rows
    - too much LRU contention => increase latches
    - OS swapping/paging?
    Too improve performance :
    - alter and tune some parameters : optimizer_mode, sort_area_size, shared_pool_size, optimizer_index_cost_adj, db_file_multiblock_read_count...
    - keep most useful packages in memory
    - gather regularly statistics (if using cbo)
    How do your users access to the db ?
    Jean-François Léguillier
    Consultant DBA
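
    As a sketch of the last point above, CBO statistics can be refreshed with DBMS_STATS; the schema name here is an assumption, and the exact options should be adapted to the site:

    ```sql
    -- Gather optimizer statistics for one schema, letting Oracle pick
    -- the sample size; CASCADE also gathers index statistics.
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'APP_OWNER',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);
    END;
    /
    ```

    Scheduling something like this regularly (or relying on the automatic statistics job in 10g and later) keeps the CBO's estimates current.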
