Extend Statistics: ThroughputInbound, ThroughputOutbound

Hi,
We have a cluster with 2 Extend proxies and 8 storage-enabled nodes that write-through to an RDBMS. Cache data is MACHINE-SAFE. The cluster is running on a 1Gb LAN; I believe the DB is too. Extend clients may run on either 1Gb or 100Mb. I see the following statistics for the ExtendTcpProxyService MBean and I am unsure how to interpret the values.
Presumably the throughput numbers I'm seeing are some kind of averages over time rather than peak throughput? I'm concerned that the throughput numbers seem low and therefore might indicate a problem.
What is inbound/outbound throughput measured relative to? The proxy node's traffic with both Extend clients and the storage-enabled cluster, or its traffic with Extend clients only?
How large is a message? Is a message a "piece of string" depending upon the size of a cache entry, or are messages of fixed size?
Connections=98, Cpu=214826ms (0.0%), Messages=15565844, Throughput=72457.914msg/sec, AverageActiveThreadCount=0.02479306, Tasks=14267255, AverageTaskDuration=3.5923638ms, MaximumBacklog=5, BytesReceived=29.8GB, BytesSent=268GB, ThroughputInbound=117Kbps, ThroughputOutbound=1.05Mbps
Connections=90, Cpu=194426ms (0.0%), Messages=8823167, Throughput=45380.594msg/sec, AverageActiveThreadCount=0.0100068785, Tasks=8539157, AverageTaskDuration=2.4227173ms, MaximumBacklog=5, BytesReceived=13.9GB, BytesSent=185GB, ThroughputInbound=54.6Kbps, ThroughputOutbound=750Kbps
Many Thanks,
Simon

Hi Vishy,
Here's a sample code snippet. I tested this with Coherence 3.7:
INamedCache cache = CacheFactory.GetCache("near-cache");
ICacheStatistics cacheStats = ((NearCache) cache).CacheStatistics;
cache.Add("foo", "bar");
System.Console.WriteLine("Total puts: " + cacheStats.TotalPuts);
Hope this helps,
Patrick

Similar Messages

  • Problem dropping extended statistics

    I created some extended statistics on a table, but now I can't delete them. I want to delete the statistics/virtual column represented by SYS_STUDDJ802H#C$E$W27NW$#B7DO.
    SQL> SELECT *
          2 FROM   dba_stat_extensions
          3 WHERE  table_name = 'T_E2_TRANS_INFO'
    OWNER      TABLE_NAME           EXTENSION_NAME                 EXTENSION                                     CREATO DRO
    INTERFACE  T_E2_TRANS_INFO      SYS_NC00063$                   (TRUNC("TRANSACTION_DATE"))                   SYSTEM NO
    INTERFACE  T_E2_TRANS_INFO      SYS_STUDDJ802H#C$E$W27NW$#B7DO ("PARTITION_ID",TRUNC("TRANSACTION_DATE"))    USER   YES
    SQL> exec dbms_stats.drop_extended_stats('INTERFACE','T_E2_TRANS_INFO','(partition_id, trunc(transaction_date))');
    BEGIN dbms_stats.drop_extended_stats('INTERFACE','T_E2_TRANS_INFO','(partition_id, trunc(transaction_date))'); END;
    ERROR at line 1:
    ORA-20000: extension "(partition_id, trunc(transaction_date))" does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 8478
    ORA-06512: at "SYS.DBMS_STATS", line 31747
    ORA-06512: at line 1
    SQL> exec dbms_stats.drop_extended_stats('INTERFACE','T_E2_TRANS_INFO','(PARTITION_ID, trunc(TRANSACTION_DATE))');
    BEGIN dbms_stats.drop_extended_stats('INTERFACE','T_E2_TRANS_INFO','(PARTITION_ID, trunc(TRANSACTION_DATE))'); END;
    ERROR at line 1:
    ORA-20000: extension "(PARTITION_ID, trunc(TRANSACTION_DATE))" does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 8478
    ORA-06512: at "SYS.DBMS_STATS", line 31747
    ORA-06512: at line 1
    SQL> exec dbms_stats.drop_extended_stats('INTERFACE','T_E2_TRANS_INFO','(PARTITION_ID, TRUNC(TRANSACTION_DATE))');
    BEGIN dbms_stats.drop_extended_stats('INTERFACE','T_E2_TRANS_INFO','(PARTITION_ID, TRUNC(TRANSACTION_DATE))'); END;
    ERROR at line 1:
    ORA-20000: extension "(PARTITION_ID, TRUNC(TRANSACTION_DATE))" does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 8478
    ORA-06512: at "SYS.DBMS_STATS", line 31747
    ORA-06512: at line 1
    SQL> exec dbms_stats.drop_extended_stats('INTERFACE','T_E2_TRANS_INFO','("PARTITION_ID", TRUNC("TRANSACTION_DATE"))');
    BEGIN dbms_stats.drop_extended_stats('INTERFACE','T_E2_TRANS_INFO','("PARTITION_ID", TRUNC("TRANSACTION_DATE"))'); END;
    ERROR at line 1:
    ORA-20000: extension "("PARTITION_ID", TRUNC("TRANSACTION_DATE"))" does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 8478
    ORA-06512: at "SYS.DBMS_STATS", line 31747
ORA-06512: at line 1
Anybody run into this?
    Thanks!

I tried to reproduce this situation, with no luck. How did you get the ("PARTITION_ID",TRUNC("TRANSACTION_DATE")) extension?
I tried on a partitioned table, but I never get a "PARTITION_ID" into the function.
    drop table ex_stats_test purge;
    create table ex_stats_test
    (mnr number,
    mvol number)
    PARTITION BY range (mnr)
    (PARTITION PARTITION_000000  VALUES LESS THAN (MAXVALUE));
    insert into EX_STATS_TEST values (1,2);
    commit;
    SELECT DBMS_STATS.CREATE_EXTENDED_STATS('ANDY','EX_STATS_TEST','(TRUNC("MVOL"))') FROM DUAL;
    select * from  dba_stat_extensions where owner='ANDY';
    exec DBMS_STATS.DROP_EXTENDED_STATS ('ANDY','EX_STATS_TEST','(TRUNC("MVOL"))');
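One workaround worth trying (a sketch only, not verified against this exact case): instead of retyping the expression, read the EXTENSION text straight out of dba_stat_extensions and pass it to dbms_stats.drop_extended_stats verbatim, so quoting and case cannot drift:
declare
  l_ext varchar2(4000);
begin
  -- pick up the stored extension text for the virtual column shown above
  select extension
  into   l_ext
  from   dba_stat_extensions
  where  owner = 'INTERFACE'
  and    table_name = 'T_E2_TRANS_INFO'
  and    extension_name = 'SYS_STUDDJ802H#C$E$W27NW$#B7DO';
  dbms_stats.drop_extended_stats('INTERFACE', 'T_E2_TRANS_INFO', l_ext);
end;
/
If that still raises ORA-20000, the extension may have been created by another mechanism (note that SYS_NC00063$ above is marked DRO=NO, i.e. not droppable through dbms_stats).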

  • Extended statistics issue

    Hello!
    I have a problem with extended statistics on 11.2.0.3
    Here is the script I run
    drop table col_stats;
    create table col_stats as
select 1 a, 2 b
    from dual
    connect by level<=100000;
insert into col_stats
select 2, 1
from dual
connect by level<=100000;
    -- check the a,b distribution
         A          B   COUNT(1)
         2          1     100000
         1          2     100000
    -- extended stats DEFINITION
    select dbms_stats.create_extended_stats('A','COL_STATS','(A,B)') name
    from dual;
    -- set estimate_percent to 100%
    EXEC dbms_stats.SET_TABLE_prefs ('A','COL_STATS','ESTIMATE_PERCENT',100);
    -- check the changes
    select dbms_stats.get_prefs ('ESTIMATE_PERCENT','A','COL_STATS')
    from dual;
    -- NOW GATHER COLUMN STATS
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS (
        OWNNAME    => 'A',
        TABNAME    => 'COL_STATS',
        METHOD_OPT => 'FOR ALL COLUMNS' );
END;
/
    set autotrace traceonly explain
    select * from col_stats where a=1 and b=1;
    SQL> select * from col_stats where a=1 and b=1;
    Execution Plan
    Plan hash value: 1829175627
    | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |           | 50000 |   683K|   177 (2)| 00:00:03 |
    |*  1 |  TABLE ACCESS FULL| COL_STATS | 50000 |   683K|   177 (2)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter("A"=1 AND "B"=1)
    How come the optimizer expects 50000 rows?
    Thanks in advance.
    Rob

    RobK wrote:
    SQL> select * from col_stats where a=1 and b=1;
    Execution Plan
    Plan hash value: 1829175627
    | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |           | 50000 |   683K|   177 (2)| 00:00:03 |
    |*  1 |  TABLE ACCESS FULL| COL_STATS | 50000 |   683K|   177 (2)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter("A"=1 AND "B"=1)
    How come the optimizer expects 50000 rows?
    Thanks in advance.
    Rob
This is expected behavior.
When you create extended statistics, a histogram is created for the column group (which is a virtual column).
The query predicate "where a=1 and b=1" is actually an out-of-range predicate for that virtual column. In such cases the optimizer would ideally estimate the
selectivity as 0 (so cardinality 1), but instead it uses the density (here actually NewDensity) of the column (your virtual column).
Let's look at the following information (this is exactly your case):
NUM_ROWS
  200000

COLUMN_NAME                     NUM_DISTINCT    DENSITY      HISTOGRAM
A                                          2    .00000250    FREQUENCY
B                                          2    .00000250    FREQUENCY
SYS_STUNA$6DVXJXTP05EH56DTIR0X             2    .00000250    FREQUENCY

COLUMN_NAME                     ENDPOINT_NUMBER    ENDPOINT_VALUE
A                                        100000                 1
A                                        200000                 2
B                                        100000                 1
B                                        200000                 2
SYS_STUNA$6DVXJXTP05EH56DTIR0X           100000        1977102303
SYS_STUNA$6DVXJXTP05EH56DTIR0X           200000        7894566276
Your predicate "where a=1 and b=1" is equivalent to "where SYS_STUNA$6DVXJXTP05EH56DTIR0X = sys_op_combined_hash(1, 1)".
As you know, with a frequency histogram the selectivity for an equality (=) predicate is (E_endpoint - B_endpoint)/num_rows, where the predicate value lies
between the B_endpoint and E_endpoint histogram buckets (endpoint numbers). But sys_op_combined_hash(1, 1) = 7026129190895635777, so how can that value
be compared with the histogram endpoint values? The answer is that when creating the histogram Oracle does not use the exact sys_op_combined_hash(x, y)
but also applies a MOD function, so you have to compare MOD(sys_op_combined_hash(1, 1), 9999999999) (which equals 1598248696) with the endpoint values.
1598248696 does not fall on any endpoint number, so the optimizer uses NewDensity as the density.
You can clearly see this in the trace file below:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table: COL_STATS  Alias: COL_STATS
        #Rows: 200000  #Blks:  382  AvgRowLen:  18.00
    Access path analysis for COL_STATS
    SINGLE TABLE ACCESS PATH
      Single Table Cardinality Estimation for COL_STATS[COL_STATS]
      Column (#1):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      Column (#2):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      Column (#3):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      ColGroup (#1, VC) SYS_STUNA$6DVXJXTP05EH56DTIR0X
        Col#: 1 2    CorStregth: 2.00
      ColGroup Usage:: PredCnt: 2  Matches Full: #1  Partial:  Sel: 0.2500
      Table: COL_STATS  Alias: COL_STATS
        Card: Original: 200000.000000  Rounded: 50000  Computed: 50000.00  Non Adjusted: 50000.00
      Access Path: TableScan
        Cost:  107.56  Resp: 107.56  Degree: 0
          Cost_io: 105.00  Cost_cpu: 51720390
          Resp_io: 105.00  Resp_cpu: 51720390
      Best:: AccessPath: TableScan
             Cost: 107.56  Degree: 1  Resp: 107.56  Card: 50000.00  Bytes: 0
Note that for a frequency histogram NewDensity is calculated as 1/(2*num_distinct) = 1/4 = 0.25.
The CBO used the column-group statistics, and the estimated cardinality was 200000*0.25 = 50000.
Remember that these are permanent statistics; the RDBMS gathered them by analyzing the actual table data (including the column correlation).
Dynamic sampling can be a good option in your situation, because it calculates the selectivity at run time by sampling with the real predicate.
In other situations extended statistics are a great help for estimation, for example where a=2 and b=1, because that combination exists in the actual data and the corresponding information (stats/histograms) is stored in the dictionary.
    SQL>  select * from col_stats where a=2 and b=1;
    Execution Plan
    Plan hash value: 1829175627
    | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |           |   100K|   585K|   108   (3)| 00:00:02 |
    |*  1 |  TABLE ACCESS FULL| COL_STATS |   100K|   585K|   108   (3)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - filter("A"=2 AND "B"=1)
and from the trace file:
    Table Stats::
      Table: COL_STATS  Alias: COL_STATS
        #Rows: 200000  #Blks:  382  AvgRowLen:  18.00
    Access path analysis for COL_STATS
    SINGLE TABLE ACCESS PATH
      Single Table Cardinality Estimation for COL_STATS[COL_STATS]
      Column (#1):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      Column (#2):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      Column (#3):
        NewDensity:0.250000, OldDensity:0.000003 BktCnt:200000, PopBktCnt:200000, PopValCnt:2, NDV:2
      ColGroup (#1, VC) SYS_STUNA$6DVXJXTP05EH56DTIR0X
        Col#: 1 2    CorStregth: 2.00
    ColGroup Usage:: PredCnt: 2  Matches Full: #1  Partial:  Sel: 0.5000
      Table: COL_STATS  Alias: COL_STATS
        Card: Original: 200000.000000  Rounded: 100000  Computed: 100000.00  Non Adjusted: 100000.00
      Access Path: TableScan
        Cost:  107.56  Resp: 107.56  Degree: 0
          Cost_io: 105.00  Cost_cpu: 51720365
          Resp_io: 105.00  Resp_cpu: 51720365
      Best:: AccessPath: TableScan
             Cost: 107.56  Degree: 1  Resp: 107.56  Card: 100000.00  Bytes: 0
Let's calculate:
MOD(sys_op_combined_hash(2, 1), 9999999999) = 1977102303, and for it (e_endpoint - b_endpoint)/num_rows = (200000 - 100000)/200000 = 0.5,
so the resulting card = sel*num_rows (or just e_endpoint - b_endpoint) = 100000.
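You can check the endpoint arithmetic directly (a quick sketch; sys_op_combined_hash is an internal, undocumented function, so treat this as illustrative only):
-- compare these with the SYS_STUNA$6DVXJXTP05EH56DTIR0X endpoint values above
select mod(sys_op_combined_hash(1, 1), 9999999999) as hash_1_1
,      mod(sys_op_combined_hash(2, 1), 9999999999) as hash_2_1
from   dual;
hash_2_1 (1977102303) matches an endpoint of the column group, while hash_1_1 (1598248696) matches none, which is why the density fallback applies to "a=1 and b=1".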

  • Extended stats partial implementation (multi-column stats) in 10.2.0.4

    Hi everyone,
I saw a note today on ML which says that the new 11g feature - [Extended statistics|http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/stats.htm#sthref1177] - is partially available in 10.2.0.4. See [Bug #5040753 Optimal index is not picked on simple query / Column group statistics|https://metalink2.oracle.com/metalink/plsql/f?p=130:14:3330964537507892972::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,5040753.8,1,0,1,helvetica] for details. The note suggests turning this feature on using fix control, but it does not say how to actually employ it. Because the dbms_stats.create_extended_stats function is not available in 10.2.0.4, I'm curious how it is actually supposed to work in 10.2.0.4. I wrote a simple test:
    drop table t cascade constraints purge;
    create table t as select rownum id, mod(rownum, 100) x, mod(rownum, 100) y from dual connect by level <= 100000;
    exec dbms_stats.gather_table_stats(user, 't');
    explain plan for select * from t where x = :1 and y = :2;
    select * from table(dbms_xplan.display);
    disc
    conn tim/t
    drop table t cascade constraints purge;
    create table t as select rownum id, mod(rownum, 100) x, mod(rownum, 100) y from dual connect by level <= 100000;
    exec dbms_stats.gather_table_stats(user, 't');
    alter session set "_fix_control"='5765456:7';
    explain plan for select * from t where x = :1 and y = :2;
    select * from table(dbms_xplan.display);
    disc
    conn tim/t
    drop table t cascade constraints purge;
    create table t as select rownum id, mod(rownum, 100) x, mod(rownum, 100) y from dual connect by level <= 100000;
    alter session set "_fix_control"='5765456:7';
    exec dbms_stats.gather_table_stats(user, 't');
    explain plan for select * from t where x = :1 and y = :2;
select * from table(dbms_xplan.display);
In all cases the cardinality estimate was 10, as usual without extended statistics:
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |    10 |   100 |    53   (6)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| T    |    10 |   100 |    53   (6)| 00:00:01 |
    Predicate Information (identified by operation id):
   1 - filter("X"=TO_NUMBER(:1) AND "Y"=TO_NUMBER(:2))
The 10053 trace confirmed that the fix is enabled and considered by the CBO:
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
      optimizer_secure_view_merging       = false
      _optimizer_connect_by_cost_based    = false
      _fix_control_key                    = -113
      Bug Fix Control Environment
      fix  5765456 = 7        *
...
But the calculations were typical:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 100000  #Blks:  220  AvgRowLen:  10.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#2): X(NUMBER)
        AvgLen: 3.00 NDV: 101 Nulls: 0 Density: 0.009901 Min: 0 Max: 99
      Column (#3): Y(NUMBER)
        AvgLen: 3.00 NDV: 101 Nulls: 0 Density: 0.009901 Min: 0 Max: 99
      Table:  T  Alias: T    
        Card: Original: 100000  Rounded: 10  Computed: 9.80  Non Adjusted: 9.80
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  53.19  Resp: 53.19  Degree: 0
          Cost_io: 50.00  Cost_cpu: 35715232
          Resp_io: 50.00  Resp_cpu: 35715232
      Best:: AccessPath: TableScan
       Cost: 53.19  Degree: 1  Resp: 53.19  Card: 9.80  Bytes: 0
Any thoughts?

    Jonathan,
thanks for stopping by. Yes, with INDEX access forced via a hint, the bug fix disabled, and statistics gathered more carefully, the INDEX access cardinality shows 1000:
    SQL> alter session set "_fix_control"='5765456:0';
    Session altered
    SQL> exec dbms_stats.gather_table_stats(user, 't', estimate_percent=>null,method_opt=>'for all columns size 1', cascade=>true);
    PL/SQL procedure successfully completed
    SQL> explain plan for select /*+ index(t t_indx) */ * from t where x = :1 and y = :2;
    Explained
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4155885868
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |    10 |   100 |   223   (0)| 00:00:03 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T      |    10 |   100 |   223   (0)| 00:00:03 |
    |*  2 |   INDEX RANGE SCAN          | T_INDX |  1000 |       |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
   2 - access("X"=TO_NUMBER(:1) AND "Y"=TO_NUMBER(:2))
but it still isn't correct for the TABLE ACCESS. With default statistics-gathering settings, dbms_stats gathered a FREQUENCY histogram on both the X and Y columns, and the cardinality computations were not correct:
    SQL> alter session set "_fix_control"='5765456:0';
    Session altered
    SQL> exec dbms_stats.gather_table_stats(user, 't', cascade=>true);
    PL/SQL procedure successfully completed
    SQL> explain plan for select /*+ index(t t_indx) */ * from t where x = :1 and y = :2;
    Explained
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4155885868
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |    10 |   100 |     4   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T      |    10 |   100 |     4   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | T_INDX |    10 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
   2 - access("X"=TO_NUMBER(:1) AND "Y"=TO_NUMBER(:2))
I believe this happens because the CBO computes the INDEX selectivity using a different formula in that case:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 100000  #Blks:  220  AvgRowLen:  10.00
    Index Stats::
      Index: T_INDX  Col#: 2 3
        LVLS: 1  #LB: 237  #DK: 100  LB/K: 2.00  DB/K: 220.00  CLUF: 22000.00
        User hint to use this index
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#2): X(NUMBER)
        AvgLen: 3.00 NDV: 100 Nulls: 0 Density: 0.003424 Min: 0 Max: 99
        Histogram: Freq  #Bkts: 100  UncompBkts: 5403  EndPtVals: 100
      Column (#3): Y(NUMBER)
        AvgLen: 3.00 NDV: 100 Nulls: 0 Density: 0.003424 Min: 0 Max: 99
        Histogram: Freq  #Bkts: 100  UncompBkts: 5403  EndPtVals: 100
      Table:  T  Alias: T    
        Card: Original: 100000  Rounded: 10  Computed: 10.00  Non Adjusted: 10.00
      END   Single Table Cardinality Estimation
      Access Path: index (AllEqRange)
        Index: T_INDX
        resc_io: 4.00  resc_cpu: 33236
        ix_sel: 1.0000e-004  ix_sel_with_filters: 1.0000e-004
        Cost: 4.00  Resp: 4.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: T_INDX
       Cost: 4.00  Degree: 1  Resp: 4.00  Card: 10.00  Bytes: 0
To give a complete picture, this is the 10053 excerpt for "normal stats" + disabled bug fix + forced index:
      BEGIN Single Table Cardinality Estimation
      Column (#2): X(NUMBER)
        AvgLen: 3.00 NDV: 100 Nulls: 0 Density: 0.01 Min: 0 Max: 99
      Column (#3): Y(NUMBER)
        AvgLen: 3.00 NDV: 100 Nulls: 0 Density: 0.01 Min: 0 Max: 99
      Table:  T  Alias: T    
        Card: Original: 100000  Rounded: 10  Computed: 10.00  Non Adjusted: 10.00
      END   Single Table Cardinality Estimation
      Access Path: index (AllEqRange)
        Index: T_INDX
        resc_io: 223.00  resc_cpu: 1978931
        ix_sel: 0.01  ix_sel_with_filters: 0.01
        Cost: 223.18  Resp: 223.18  Degree: 1
      Best:: AccessPath: IndexRange  Index: T_INDX
       Cost: 223.18  Degree: 1  Resp: 223.18  Card: 10.00  Bytes: 0
And this one for "normal stats" + enabled bug fix:
      BEGIN Single Table Cardinality Estimation
      Column (#2): X(NUMBER)
        AvgLen: 3.00 NDV: 100 Nulls: 0 Density: 0.01 Min: 0 Max: 99
      Column (#3): Y(NUMBER)
        AvgLen: 3.00 NDV: 100 Nulls: 0 Density: 0.01 Min: 0 Max: 99
      ColGroup (#1, Index) T_INDX
        Col#: 2 3    CorStregth: 100.00
      ColGroup Usage:: PredCnt: 2  Matches Full: #0  Partial:  Sel: 0.0100
      Table:  T  Alias: T    
        Card: Original: 100000  Rounded: 1000  Computed: 1000.00  Non Adjusted: 1000.00
      END   Single Table Cardinality Estimation
      ColGroup Usage:: PredCnt: 2  Matches Full: #0  Partial:  Sel: 0.0100
      ColGroup Usage:: PredCnt: 2  Matches Full: #0  Partial:  Sel: 0.0100
      Access Path: index (AllEqRange)
        Index: T_INDX
        resc_io: 223.00  resc_cpu: 1978931
        ix_sel: 0.01  ix_sel_with_filters: 0.01
        Cost: 223.18  Resp: 223.18  Degree: 1
      Best:: AccessPath: IndexRange  Index: T_INDX
       Cost: 223.18  Degree: 1  Resp: 223.18  Card: 1000.00  Bytes: 0
Tests were performed on the same machine (10.2.0.4).
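A side note on reproducing this: the trace lines above show the column group coming from the index (ColGroup (#1, Index) T_INDX), so the 10.2.0.4 behaviour appears to depend on a multi-column index existing on the predicate columns rather than on dbms_stats.create_extended_stats. A minimal sketch of the setup used in this thread (table t and index t_indx as above):
create index t_indx on t(x, y);
exec dbms_stats.gather_table_stats(user, 't', estimate_percent=>null, method_opt=>'for all columns size 1', cascade=>true);
alter session set "_fix_control"='5765456:7';
explain plan for select * from t where x = :1 and y = :2;
select * from table(dbms_xplan.display);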

  • Data pump + statistics + dba_tab_modifications

Database 11.2. The people involved with replicating data are using Data Pump and importing the statistics (that's all I know about their process). When I look at this query:
    select /*mods.table_owner,
           mods.table_name,*/
           mods.inserts,
           mods.updates,
           mods.deletes,
           mods.timestamp,
           tabs.num_rows,
           tabs.table_lock,
           tabs.monitoring,
           tabs.sample_size,
           tabs.last_analyzed
    from   sys.dba_tab_modifications mods join dba_tables tabs
           on mods.table_owner = tabs.owner and mods.table_name = tabs.table_name
where  mods.table_name = &tab;
I see this:
INSERTS    UPDATES  DELETES  TIMESTAMP         NUM_ROWS   TABLE_LOCK  MONITORING  SAMPLE_SIZE  LAST_ANALYZED
119333320  0        0        11/22/2011 19:27  116022939  ENABLED     YES         116022939    10/24/2011 23:10
As we can see, the source database last gathered stats on 10/24 and the data was loaded into the destination on 11/22.
    The database is giving bad execution plans as indicated in previous thread: Re: Understanding results from dbms_xplan.display_cursor
My first inclination is to run the following, but since they imported the stats, shouldn't those already be "good", with dba_tab_modifications merely out of sync? What gives?
    exec dbms_stats.gather_schema_stats(
         ownname          => 'SCHEMA_NAME',
         options          => 'GATHER AUTO'
       )

    In your previous post you mentioned that the explain plan has 197 records. That is one big SQL statement, so the CBO has plenty of opportunity to mess it up.
    That said, it is a good idea to verify that your statistics are fine. One way to accomplish that is to gather statistics into a pending area and compare the pending area stats with what is currently in the dictionary using DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING.
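For reference, a minimal sketch of that pending-area comparison (schema and table names here are placeholders):
-- gather into the pending area instead of publishing
exec dbms_stats.set_table_prefs('SCHEMA_NAME', 'SOME_TABLE', 'PUBLISH', 'FALSE');
exec dbms_stats.gather_table_stats('SCHEMA_NAME', 'SOME_TABLE');
-- report the differences between pending and currently published stats
select * from table(dbms_stats.diff_table_stats_in_pending('SCHEMA_NAME', 'SOME_TABLE'));
-- then publish or discard the pending stats:
-- exec dbms_stats.publish_pending_stats('SCHEMA_NAME', 'SOME_TABLE');
-- exec dbms_stats.delete_pending_stats('SCHEMA_NAME', 'SOME_TABLE');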
As mentioned by Tubby in your previous post, extended stats are a powerful and easy way to improve the quality of the CBO's plans. Note that in 11.2.0.2 you can create extended stats specifically tailored to your SQLs (based on AWR or on the live system) - http://iiotzov.wordpress.com/2011/11/01/get-the-max-of-oracle-11gr2-right-from-the-start-create-relevant-extended-statistics-as-a-part-of-the-upgrade/
    Iordan Iotzov
    http://iiotzov.wordpress.com/

Statspack package compilation error

Hi. I am tuning my Oracle database and need to install the Statspack package. When I compiled the package, it returned the following error.
    Warning: Package Body created with compilation errors.
    SQL> show error
    Errors for PACKAGE BODY STATSPACK:
    LINE/COL ERROR
    2045/3 PLS-00201: identifier 'SYS.DBMS_SHARED_POOL' must be declared
    2045/3 PL/SQL: Statement ignored
How do I solve this error? Please check the following package and help me.
Note: I am connected as the PERFSTAT user, not as SYS.
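(For what it's worth: PLS-00201 on SYS.DBMS_SHARED_POOL usually means that package has not been installed, or that PERFSTAT has no EXECUTE privilege on it. A typical remedy, assuming a standard installation, is to run as SYS:
@?/rdbms/admin/dbmspool.sql
grant execute on sys.dbms_shared_pool to perfstat;
and then recompile the STATSPACK package body.)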
    create or replace package body STATSPACK as
    /* Define package variables.
Variables prefixed with p_ are package variables. */
    p_snap_id integer; /* snapshot id */
    p_instance_number number; /* instance number */
    p_instance_name varchar2(16); /* instance name */
    p_startup_time date; /* instance startup time */
    p_parallel varchar2(3); /* parallel server */
    p_version varchar2(17); /* Oracle release */
    p_dbid number; /* database id */
    p_host_name varchar2(64); /* host instance is on */
    p_name varchar2(9); /* database name */
    p_new_sga integer; /* Instance bounced since last snap? */
    tmp_int integer; /* initialise defaults */
    p_def_snap_level number default 5; /* default snapshot lvl */
    p_def_session_id number default 0; /* default session id */
    p_def_ucomment varchar2(160) default null;
    p_def_pin_statspack varchar2(10) default 'TRUE';
    p_def_last_modified date default SYSDATE;
    /* Below are the default threshold (_th) values for choosing SQL statements
    to store in the stats$sqlsummary table - these statements will typically
be the statements using the most resources. */
    p_def_num_sql number default 50; /* Num. SQL statements */
    p_def_executions_th number default 100; /* Num. executions */
    p_def_parse_calls_th number default 1000; /* Num. parse calls */
    p_def_disk_reads_th number default 1000; /* Num. disk reads */
    p_def_buffer_gets_th number default 10000; /* Num. buf gets */
    p_def_sharable_mem_th number default 1048576; /* Sharable memory */
    p_def_version_count_th number default 20; /* Child Cursors */
    p_def_all_init varchar2(10) default 'FALSE';
    cursor get_instance is
    select instance_number, instance_name
    , startup_time, parallel, version
    , host_name
    from v$instance;
    cursor get_db is
    select dbid, name
    from v$database;
    procedure SNAP
    (i_snap_level in number default null
    ,i_session_id in number default null
    ,i_ucomment in varchar2 default null
    ,i_num_sql in number default null
    ,i_executions_th in number default null
    ,i_parse_calls_th in number default null
    ,i_disk_reads_th in number default null
    ,i_buffer_gets_th in number default null
    ,i_sharable_mem_th in number default null
    ,i_version_count_th in number default null
    ,i_all_init in varchar2 default null
    ,i_pin_statspack in varchar2 default null
    ,i_modify_parameter in varchar2 default 'FALSE'
)
is
    /* Takes a snapshot by calling the SNAP function, and discards
    the snapshot id. This is useful when automating taking
snapshots from dbms_job */
    l_snap_id number;
    begin
    l_snap_id := statspack.snap ( i_snap_level, i_session_id, i_ucomment
    , i_num_sql
    , i_executions_th
    , i_parse_calls_th
    , i_disk_reads_th
    , i_buffer_gets_th
    , i_sharable_mem_th
    , i_version_count_th
    , i_all_init
    , i_pin_statspack
    , i_modify_parameter);
    end SNAP;
    procedure MODIFY_STATSPACK_PARAMETER
    ( i_dbid in number default null
    , i_instance_number in number default null
    , i_snap_level in number default null
    , i_session_id in number default null
    , i_ucomment in varchar2 default null
    , i_num_sql in number default null
    , i_executions_th in number default null
    , i_parse_calls_th in number default null
    , i_disk_reads_th in number default null
    , i_buffer_gets_th in number default null
    , i_sharable_mem_th in number default null
    , i_version_count_th in number default null
    , i_all_init in varchar2 default null
    , i_pin_statspack in varchar2 default null
    , i_modify_parameter in varchar2 default 'TRUE'
)
is
    /* Calls QAM with the modify flag, and discards the
output variables */
    l_snap_level number;
    l_session_id number;
    l_ucomment varchar2(160);
    l_num_sql number;
    l_executions_th number;
    l_parse_calls_th number;
    l_disk_reads_th number;
    l_buffer_gets_th number;
    l_sharable_mem_th number;
    l_version_count_th number;
    l_all_init varchar2(5);
    l_pin_statspack varchar2(10);
    begin
    statspack.qam_statspack_parameter( i_dbid
    , i_instance_number
    , i_snap_level
    , i_session_id
    , i_ucomment
    , i_num_sql
    , i_executions_th
    , i_parse_calls_th
    , i_disk_reads_th
    , i_buffer_gets_th
    , i_sharable_mem_th
    , i_version_count_th
    , i_all_init
    , i_pin_statspack
    , 'TRUE'
    , l_snap_level
    , l_session_id
    , l_ucomment
    , l_num_sql
    , l_executions_th
    , l_parse_calls_th
    , l_disk_reads_th
    , l_buffer_gets_th
    , l_sharable_mem_th
    , l_version_count_th
    , l_all_init
    , l_pin_statspack);
/* As we have explicitly been requested to change the parameters,
independently of taking a snapshot, commit */
    commit;
    end MODIFY_STATSPACK_PARAMETER;
    procedure QAM_STATSPACK_PARAMETER
    ( i_dbid in number default null
    , i_instance_number in number default null
    , i_snap_level in number default null
    , i_session_id in number default null
    , i_ucomment in varchar2 default null
    , i_num_sql in number default null
    , i_executions_th in number default null
    , i_parse_calls_th in number default null
    , i_disk_reads_th in number default null
    , i_buffer_gets_th in number default null
    , i_sharable_mem_th in number default null
    , i_version_count_th in number default null
    , i_all_init in varchar2 default null
    , i_pin_statspack in varchar2 default null
    , i_modify_parameter in varchar2 default 'FALSE'
    , o_snap_level out number
    , o_session_id out number
    , o_ucomment out varchar2
    , o_num_sql out number
    , o_executions_th out number
    , o_parse_calls_th out number
    , o_disk_reads_th out number
    , o_buffer_gets_th out number
    , o_sharable_mem_th out number
    , o_version_count_th out number
    , o_all_init out varchar2
    , o_pin_statspack out varchar2
)
is
    /* Query And Modify statspack parameter procedure, allows query
    and/or user modification of the statistics collection parameters
    for an instance. If there are no pre-existing parameters for
an instance, insert the Oracle defaults. */
    l_instance_number number;
    l_dbid number;
    ui_all_init varchar2(5);
    l_params_exist varchar2(1);
    begin
    if ((i_dbid is null ) or (i_instance_number is null)) then
    l_dbid := p_dbid;
    l_instance_number := p_instance_number;
    else
    l_dbid := i_dbid;
    l_instance_number := i_instance_number;
    end if;
    /* Upper case any input vars which are inserted */
    ui_all_init := upper(i_all_init);
    if ( (i_modify_parameter is null)
    or (upper(i_modify_parameter) = 'FALSE') ) then
    /* Query values, if none exist, insert the defaults tempered
    with variables supplied */
    begin
    select nvl(i_session_id, session_id)
    , nvl(i_snap_level, snap_level)
    , nvl(i_ucomment, ucomment)
    , nvl(i_num_sql, num_sql)
    , nvl(i_executions_th, executions_th)
    , nvl(i_parse_calls_th, parse_calls_th)
    , nvl(i_disk_reads_th, disk_reads_th)
    , nvl(i_buffer_gets_th, buffer_gets_th)
    , nvl(i_sharable_mem_th, sharable_mem_th)
    , nvl(i_version_count_th, version_count_th)
    , nvl(ui_all_init, all_init)
    , nvl(i_pin_statspack, pin_statspack)
    into o_session_id
    , o_snap_level
    , o_ucomment
    , o_num_sql
    , o_executions_th
    , o_parse_calls_th
    , o_disk_reads_th
    , o_buffer_gets_th
    , o_sharable_mem_th
    , o_version_count_th
    , o_all_init
    , o_pin_statspack
    from stats$statspack_parameter
    where instance_number = l_instance_number
    and dbid = l_dbid;
    exception
    when NO_DATA_FOUND then
    insert into stats$statspack_parameter
    ( dbid
    , instance_number
    , session_id
    , snap_level
    , ucomment
    , num_sql
    , executions_th
    , parse_calls_th
    , disk_reads_th
    , buffer_gets_th
    , sharable_mem_th
    , version_count_th
    , all_init
    , pin_statspack
    , last_modified
)
values
    ( l_dbid
    , l_instance_number
    , p_def_session_id
    , p_def_snap_level
    , p_def_ucomment
    , p_def_num_sql
    , p_def_executions_th
    , p_def_parse_calls_th
    , p_def_disk_reads_th
    , p_def_buffer_gets_th
    , p_def_sharable_mem_th
    , p_def_version_count_th
    , p_def_all_init
    , p_def_pin_statspack
    , SYSDATE
)
returning nvl(i_session_id, p_def_session_id)
    , nvl(i_snap_level, p_def_snap_level)
    , nvl(i_ucomment, p_def_ucomment)
    , nvl(i_num_sql, p_def_num_sql)
    , nvl(i_executions_th, p_def_executions_th)
    , nvl(i_parse_calls_th, p_def_parse_calls_th)
    , nvl(i_disk_reads_th, p_def_disk_reads_th)
    , nvl(i_buffer_gets_th, p_def_buffer_gets_th)
    , nvl(i_sharable_mem_th, p_def_sharable_mem_th)
    , nvl(i_version_count_th, p_def_version_count_th)
    , nvl(ui_all_init, p_def_all_init)
    , nvl(i_pin_statspack, p_def_pin_statspack)
    into o_session_id
    , o_snap_level
    , o_ucomment
    , o_num_sql
    , o_executions_th
    , o_parse_calls_th
    , o_disk_reads_th
    , o_buffer_gets_th
    , o_sharable_mem_th
    , o_version_count_th
    , o_all_init
    , o_pin_statspack;
    end; /* don't modify parameter values */
    elsif upper(i_modify_parameter) = 'TRUE' then
    /* modify values, if none exist, insert the defaults tempered
    with the variables supplied */
    begin
    update stats$statspack_parameter
    set session_id = nvl(i_session_id, session_id)
    , snap_level = nvl(i_snap_level, snap_level)
    , ucomment = nvl(i_ucomment, ucomment)
    , num_sql = nvl(i_num_sql, num_sql)
    , executions_th = nvl(i_executions_th, executions_th)
    , parse_calls_th = nvl(i_parse_calls_th, parse_calls_th)
    , disk_reads_th = nvl(i_disk_reads_th, disk_reads_th)
    , buffer_gets_th = nvl(i_buffer_gets_th, buffer_gets_th)
    , sharable_mem_th = nvl(i_sharable_mem_th, sharable_mem_th)
    , version_count_th = nvl(i_version_count_th, version_count_th)
    , all_init = nvl(ui_all_init, all_init)
    , pin_statspack = nvl(i_pin_statspack, pin_statspack)
    where instance_number = l_instance_number
    and dbid = l_dbid
    returning session_id
    , snap_level
    , ucomment
    , num_sql
    , executions_th
    , parse_calls_th
    , disk_reads_th
    , buffer_gets_th
    , sharable_mem_th
    , version_count_th
    , all_init
    , pin_statspack
    into o_session_id
    , o_snap_level
    , o_ucomment
    , o_num_sql
    , o_executions_th
    , o_parse_calls_th
    , o_disk_reads_th
    , o_buffer_gets_th
    , o_sharable_mem_th
    , o_version_count_th
    , o_all_init
    , o_pin_statspack;
    if SQL%ROWCOUNT = 0 then
    insert into stats$statspack_parameter
    ( dbid
    , instance_number
    , session_id
    , snap_level
    , ucomment
    , num_sql
    , executions_th
    , parse_calls_th
    , disk_reads_th
    , buffer_gets_th
    , sharable_mem_th
    , version_count_th
    , all_init
    , pin_statspack
    , last_modified
)
values
    ( l_dbid
    , l_instance_number
    , nvl(i_session_id, p_def_session_id)
    , nvl(i_snap_level, p_def_snap_level)
    , nvl(i_ucomment, p_def_ucomment)
    , nvl(i_num_sql, p_def_num_sql)
    , nvl(i_executions_th, p_def_executions_th)
    , nvl(i_parse_calls_th, p_def_parse_calls_th)
    , nvl(i_disk_reads_th, p_def_disk_reads_th)
    , nvl(i_buffer_gets_th, p_def_buffer_gets_th)
    , nvl(i_sharable_mem_th, p_def_sharable_mem_th)
    , nvl(i_version_count_th, p_def_version_count_th)
    , nvl(ui_all_init, p_def_all_init)
    , nvl(i_pin_statspack, p_def_pin_statspack)
    , SYSDATE
)
returning session_id
    , snap_level
    , ucomment
    , num_sql
    , executions_th
    , parse_calls_th
    , disk_reads_th
    , buffer_gets_th
    , sharable_mem_th
    , version_count_th
    , all_init
    , pin_statspack
    into o_session_id
    , o_snap_level
    , o_ucomment
    , o_num_sql
    , o_executions_th
    , o_parse_calls_th
    , o_disk_reads_th
    , o_buffer_gets_th
    , o_sharable_mem_th
    , o_version_count_th
    , o_all_init
    , o_pin_statspack;
    end if;
    end; /* modify values */
    else
    /* error */
    raise_application_error
    (-20100,'QAM_STATSPACK_PARAMETER i_modify_parameter value is invalid');
    end if; /* modify */
    end QAM_STATSPACK_PARAMETER;
    procedure STAT_CHANGES
    /* Returns a set of differences of the values from corresponding pairs
    of rows in STATS$SYSSTAT, STATS$LIBRARYCACHE and STATS$WAITSTAT,
    based on the begin and end (bid, eid) snapshot id's specified.
    This procedure is the only call to STATSPACK made by the statsrep
    report.
Modified to include multi-db support. */
    ( bid IN number
    , eid IN number
    , db_ident IN number
    , inst_num IN number
    , parallel IN varchar2
    , lhtr OUT number, bfwt OUT number
    , tran OUT number, chng OUT number
    , ucal OUT number, urol OUT number
    , rsiz OUT number
    , phyr OUT number, phyrd OUT number
    , phyrdl OUT number
    , phyw OUT number, ucom OUT number
    , prse OUT number, hprse OUT number
    , recr OUT number, gets OUT number
    , rlsr OUT number, rent OUT number
    , srtm OUT number, srtd OUT number
    , srtr OUT number, strn OUT number
    , lhr OUT number, bc OUT varchar2
    , sp OUT varchar2, lb OUT varchar2
    , bs OUT varchar2, twt OUT number
    , logc OUT number, prscpu OUT number
    , tcpu OUT number, exe OUT number
    , prsela OUT number
    , bspm OUT number, espm OUT number
    , bfrm OUT number, efrm OUT number
    , blog OUT number, elog OUT number
    , bocur OUT number, eocur OUT number
    , dmsd OUT number, dmfc OUT number -- begin OPS
    , dfcms OUT number, dfcmr OUT number
    , dmsi OUT number, dmrv OUT number
    , dynal OUT number, dynares OUT number
    , pmrv OUT number, pmpt OUT number
    , npmrv OUT number, npmpt OUT number
    , scma OUT number, scml OUT number
    , pinc OUT number, picrnc OUT number
    , picc OUT number, picrcc OUT number
    , pbc OUT number, pbcrc OUT number
    , pcba OUT number, pccrba OUT number
    , pcrbpi OUT number
    , dynapres OUT number, dynapshl OUT number
    , prcma OUT number, prcml OUT number
    , pwrm OUT number, pfpim OUT number
    , pwnm OUT number
    , dpms OUT number, dnpms OUT number
    , glsg OUT number, glag OUT number
    , glgt OUT number, glsc OUT number
    , glac OUT number, glct OUT number
    , glrl OUT number
    , gcge OUT number, gcgt OUT number
    , gccv OUT number, gcct OUT number
    , gccrrv OUT number, gccrrt OUT number
    , gccurv OUT number, gccurt OUT number
    , gccrsv OUT number
    , gccrbt OUT number, gccrft OUT number
    , gccrst OUT number, gccusv OUT number
    , gccupt OUT number, gccuft OUT number
    , gccust OUT number -- end OPS
    ) is
    bval number;
    eval number;
    l_b_session_id number; /* begin session id */
    l_b_serial# number; /* begin serial# */
    l_e_session_id number; /* end session id */
    l_e_serial# number; /* end serial# */
    function LIBRARYCACHE_HITRATIO RETURN number is
    /* Returns Library cache hit ratio for the begin and end (bid, eid)
snapshot id's specified */
    cursor LH (i_snap_id number) is
    select sum(pins), sum(pinhits)
    from stats$librarycache
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num;
    bpsum number;
    bhsum number;
    epsum number;
    ehsum number;
    begin
    if not LH%ISOPEN then open LH (bid); end if;
    fetch LH into bpsum, bhsum;
    if LH%NOTFOUND then
    raise_application_error
    (-20100,'Missing start value for stats$librarycache');
    end if; close LH;
    if not LH%ISOPEN then open LH (eid); end if;
    fetch LH into epsum, ehsum;
    if LH%NOTFOUND then
    raise_application_error
    (-20100,'Missing end value for stats$librarycache');
    end if; close LH;
    return (ehsum - bhsum) / (epsum - bpsum);
    end LIBRARYCACHE_HITRATIO;
    function GET_PARAM (i_name varchar2) RETURN varchar2 is
    /* Returns the value for the init.ora parameter for the snapshot
specified. */
    cursor PARAMETER is
    select value
    from stats$parameter
    where snap_id = eid
    and dbid = db_ident
    and instance_number = inst_num
    and name = i_name;
    par_value varchar2(512);
    begin
    if not PARAMETER%ISOPEN then open PARAMETER; end if;
    fetch PARAMETER into par_value;
    if PARAMETER%NOTFOUND then
    raise_application_error
    (-20100,'Missing Init.ora parameter '|| i_name);
    end if; close PARAMETER;
    return par_value;
    end GET_PARAM;
    function GET_SYSSTAT (i_name varchar2, i_beid number) RETURN number is
    /* Returns the value for the System Statistic for the snapshot
specified. */
    cursor SYSSTAT is
    select value
    from stats$sysstat
    where snap_id = i_beid
    and dbid = db_ident
    and instance_number = inst_num
    and name = i_name;
    stat_value varchar2(512);
    begin
    if not SYSSTAT%ISOPEN then open SYSSTAT; end if;
    fetch SYSSTAT into stat_value;
    if SYSSTAT%NOTFOUND then
    raise_application_error
    (-20100,'Missing System Statistic '|| i_name);
    end if; close SYSSTAT;
    return stat_value;
    end GET_SYSSTAT;
    function BUFFER_WAITS RETURN number is
    /* Returns the total number of waits for all buffers in the interval
specified by the begin and end snapshot id's (bid, eid) */
    cursor BW (i_snap_id number) is
    select sum(wait_count)
    from stats$waitstat
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num;
    bbwsum number; ebwsum number;
    begin
    if not BW%ISOPEN then open BW (bid); end if;
    fetch BW into bbwsum;
    if BW%NOTFOUND then
    raise_application_error
    (-20100,'Missing start value for stats$waitstat');
    end if; close BW;
    if not BW%ISOPEN then open BW (eid); end if;
    fetch BW into ebwsum;
    if BW%NOTFOUND then
    raise_application_error
    (-20100,'Missing end value for stats$waitstat');
    end if; close BW;
    return ebwsum - bbwsum;
    end BUFFER_WAITS;
    function TOTAL_EVENT_TIME RETURN number is
    /* Returns the total amount of time waited for events for
    the interval specified by the begin and end snapshot id's
    (bid, eid) by foreground processes. This excludes idle
wait events. */
    cursor WAITS (i_snap_id number) is
    select sum(time_waited_micro)
    from stats$system_event
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num
    and event not in (select event from stats$idle_event);
    bwaittime number;
    ewaittime number;
    begin
    if not WAITS%ISOPEN then open WAITS (bid); end if;
    fetch WAITS into bwaittime;
    if WAITS%NOTFOUND then
    raise_application_error
    (-20100,'Missing start value for stats$system_event');
    end if; close WAITS;
    if not WAITS%ISOPEN then open WAITS (eid); end if;
    fetch WAITS into ewaittime;
    if WAITS%NOTFOUND then
    raise_application_error
    (-20100,'Missing end value for stats$system_event');
    end if; close WAITS;
    return ewaittime - bwaittime;
    end TOTAL_EVENT_TIME;
    function LATCH_HITRATIO return NUMBER is
    /* Returns the latch hit ratio specified by the begin and
end snapshot id's (bid, eid) */
    cursor GETS_MISSES (i_snap_id number) is
    select sum(gets), sum(misses)
    from stats$latch
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num;
    blget number; -- beginning latch gets
    blmis number; -- beginning latch misses
    elget number; -- end latch gets
    elmis number; -- end latch misses
    begin
    if not GETS_MISSES%ISOPEN then open GETS_MISSES (bid); end if;
    fetch GETS_MISSES into blget, blmis;
    if GETS_MISSES%NOTFOUND then
    raise_application_error
    (-20100,'Missing start value for STATS$LATCH gets and misses');
    end if; close GETS_MISSES;
    if not GETS_MISSES%ISOPEN then open GETS_MISSES (eid); end if;
    fetch GETS_MISSES into elget, elmis;
    if GETS_MISSES%NOTFOUND then
    raise_application_error
    (-20100,'Missing end value for STATS$LATCH gets and misses');
    end if; close GETS_MISSES;
    return ( ( elmis - blmis ) / ( elget - blget ) );
    end LATCH_HITRATIO;
    function SGASTAT (i_name varchar2, i_beid number) RETURN number is
    /* Returns the bytes used by i_name in the shared pool
for the begin or end snapshot (bid, eid) specified */
    cursor bytes_used is
    select bytes
    from stats$sgastat
    where snap_id = i_beid
    and dbid = db_ident
    and instance_number = inst_num
    and pool in ('shared pool', 'all pools')
    and name = i_name;
    total_bytes number;
    begin
    if i_name = 'total_shared_pool' then
    select sum(bytes)
    into total_bytes
    from stats$sgastat
    where snap_id = i_beid
    and dbid = db_ident
    and instance_number = inst_num
    and pool in ('shared pool','all pools');
    else
    open bytes_used; fetch bytes_used into total_bytes;
    if bytes_used%notfound then
    raise_application_error
    (-20100,'Missing value for SGASTAT: '||i_name);
    end if;
    close bytes_used;
    end if;
    return total_bytes;
    end SGASTAT;
    function SYSDIF (i_name varchar2) RETURN number is
    /* Returns the difference between statistics for the statistic
    name specified for the interval between the begin and end
snapshot id's (bid, eid) */
    cursor SY (i_snap_id number) is
    select value
    from stats$sysstat
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num
    and name = i_name;
    begin
    /* Get start value */
    open SY (bid); fetch SY into bval;
    if SY%notfound then
    raise_application_error
    (-20100,'Missing start value for statistic: '||i_name);
    end if; close SY;
    /* Get end value */
    open SY (eid); fetch SY into eval;
    if SY%notfound then
    raise_application_error
    (-20100,'Missing end value for statistic: '||i_name);
    end if; close SY;
    /* Return difference */
    return eval - bval;
    end SYSDIF;
    function SESDIF (st_name varchar2) RETURN number is
    /* Returns the difference between statistics values for the
    statistic name specified for the interval between the begin and end
    snapshot id's (bid, eid), for the session monitored for that
snapshot */
    cursor SE (i_snap_id number) is
    select ses.value
    from stats$sysstat sys
    , stats$sesstat ses
    where sys.snap_id = i_snap_id
    and ses.snap_id = i_snap_id
    and ses.dbid = db_ident
    and sys.dbid = db_ident
    and ses.instance_number = inst_num
    and sys.instance_number = inst_num
    and ses.statistic# = sys.statistic#
    and sys.name = st_name;
    begin
    /* Get start value */
    open SE (bid); fetch SE into bval;
    if SE%notfound then
    eval :=0;
    end if; close SE;
    /* Get end value */
    open SE (eid); fetch SE into eval;
    if SE%notfound then
    eval :=0;
    end if; close SE;
    /* Return difference */
    return eval - bval;
    end SESDIF;
    function DLMDIF (i_name varchar2) RETURN number is
    /* Returns the difference between statistics for the statistic
    name specified for the interval between the begin and end
snapshot id's (bid, eid) */
    cursor DLM (i_snap_id number) is
    select value
    from stats$dlm_misc
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num
    and name = i_name;
    begin
    /* Get start value */
    open DLM (bid); fetch DLM into bval;
    if DLM%notfound then
    raise_application_error
    (-20100,'Missing start value for statistic: '||i_name);
    end if; close DLM;
    /* Get end value */
    open DLM (eid); fetch DLM into eval;
    if DLM%notfound then
    raise_application_error
    (-20100,'Missing end value for statistic: '||i_name);
    end if; close DLM;
    /* Return difference */
    return eval - bval;
    end DLMDIF;
    begin /* main procedure body of STAT_CHANGES */
    lhtr := LIBRARYCACHE_HITRATIO;
    bfwt := BUFFER_WAITS;
    lhr := LATCH_HITRATIO;
    chng := SYSDIF('db block changes');
    ucal := SYSDIF('user calls');
    urol := SYSDIF('user rollbacks');
    ucom := SYSDIF('user commits');
    tran := ucom + urol;
    rsiz := SYSDIF('redo size');
    phyr := SYSDIF('physical reads');
    phyrd := SYSDIF('physical reads direct');
    phyrdl := SYSDIF('physical reads direct (lob)');
    phyw := SYSDIF('physical writes');
    hprse := SYSDIF('parse count (hard)');
    prse := SYSDIF('parse count (total)');
    gets := SYSDIF('session logical reads');
    recr := SYSDIF('recursive calls');
    rlsr := SYSDIF('redo log space requests');
    rent := SYSDIF('redo entries');
    srtm := SYSDIF('sorts (memory)');
    srtd := SYSDIF('sorts (disk)');
    srtr := SYSDIF('sorts (rows)');
    logc := SYSDIF('logons cumulative');
    prscpu := SYSDIF('parse time cpu');
    prsela := SYSDIF('parse time elapsed');
    tcpu := SYSDIF('CPU used by this session');
    exe := SYSDIF('execute count');
    bs := GET_PARAM('db_block_size');
    bc := GET_PARAM('db_block_buffers') * bs;
    if bc = 0 then
    bc := GET_PARAM('db_cache_size')
    + GET_PARAM('db_keep_cache_size')
    + GET_PARAM('db_recycle_cache_size')
    + GET_PARAM('db_2k_cache_size')
    + GET_PARAM('db_4k_cache_size')
    + GET_PARAM('db_8k_cache_size')
    + GET_PARAM('db_16k_cache_size')
    + GET_PARAM('db_32k_cache_size');
    end if;
    sp := GET_PARAM('shared_pool_size');
    lb := GET_PARAM('log_buffer');
    twt := TOTAL_EVENT_TIME; -- total wait time for all non-idle events
    bspm := SGASTAT('total_shared_pool', bid);
    espm := SGASTAT('total_shared_pool', eid);
    bfrm := SGASTAT('free memory', bid);
    efrm := SGASTAT('free memory', eid);
    blog := GET_SYSSTAT('logons current', bid);
    elog := GET_SYSSTAT('logons current', eid);
    bocur := GET_SYSSTAT('opened cursors current', bid);
    eocur := GET_SYSSTAT('opened cursors current', eid);
    /* Do we want to report on cluster-specific statistics? Check
in procedure variable "parallel". */
    if parallel = 'YES' then
    dmsd := DLMDIF('messages sent directly');
    dmfc := DLMDIF('messages flow controlled');
    dmsi := DLMDIF('messages sent indirectly');
    dmrv := DLMDIF('messages received');
    dfcms := DLMDIF('flow control messages sent');
    dfcmr := DLMDIF('flow control messages received');
    dynal := DLMDIF('dynamically allocated enqueues');
    dynares := DLMDIF('dynamically allocated resources');
    pmrv := DLMDIF('gcs msgs received');
    pmpt := DLMDIF('gcs msgs process time(ms)');
    npmrv := DLMDIF('ges msgs received');
    npmpt := DLMDIF('ges msgs process time(ms)');
    scma := DLMDIF('gcs side channel msgs actual');
    scml := DLMDIF('gcs side channel msgs logical');
    pinc := DLMDIF('gcs immediate (null) converts');
    picrnc := DLMDIF('gcs immediate cr (null) converts');
    picc := DLMDIF('gcs immediate (compatible) converts');
    picrcc := DLMDIF('gcs immediate cr (compatible) converts');
    pbc := DLMDIF('gcs blocked converts');
    pbcrc := DLMDIF('gcs blocked cr converts');
    pcba := DLMDIF('gcs compatible basts');
    pccrba := DLMDIF('gcs compatible cr basts');
    pcrbpi := DLMDIF('gcs cr basts to PIs');
    dynapres := DLMDIF('dynamically allocated gcs resources');
    dynapshl := DLMDIF('dynamically allocated gcs shadows');
    prcma := DLMDIF('gcs recovery claim msgs actual');
    prcml := DLMDIF('gcs recovery claim msgs logical');
    pwrm := DLMDIF('gcs write request msgs');
    pfpim := DLMDIF('gcs flush pi msgs');
    pwnm := DLMDIF('gcs write notification msgs');
    dpms := SYSDIF('gcs messages sent');
    dnpms := SYSDIF('ges messages sent');
    glsg := SYSDIF('global lock sync gets');
    glag := SYSDIF('global lock async gets');
    glgt := SYSDIF('global lock get time');
    glsc := SYSDIF('global lock sync converts');
    glac := SYSDIF('global lock async converts');
    glct := SYSDIF('global lock convert time');
    glrl := SYSDIF('global lock releases');
    gcge := SYSDIF('global cache gets');
    gcgt := SYSDIF('global cache get time');
    gccv := SYSDIF('global cache converts');
    gcct := SYSDIF('global cache convert time');
    gccrrv := SYSDIF('global cache cr blocks received');
    gccrrt := SYSDIF('global cache cr block receive time');
    gccurv := SYSDIF('global cache current blocks received');
    gccurt := SYSDIF('global cache current block receive time');
    gccrsv := SYSDIF('global cache cr blocks served');
    gccrbt := SYSDIF('global cache cr block build time');
    gccrft := SYSDIF('global cache cr block flush time');
    gccrst := SYSDIF('global cache cr block send time');
    gccusv := SYSDIF('global cache current blocks served');
    gccupt := SYSDIF('global cache current block pin time');
    gccuft := SYSDIF('global cache current block flush time');
    gccust := SYSDIF('global cache current block send time');
    end if;
    /* Determine if we want to report on session-specific statistics.
Check that the session is the same one for both snapshots. */
    select session_id
    , serial#
    into l_b_session_id
    , l_b_serial#
    from stats$snapshot
    where snap_id = bid
    and dbid = db_ident
    and instance_number = inst_num;
    select session_id
    , serial#
    into l_e_session_id
    , l_e_serial#
    from stats$snapshot
    where snap_id = eid
    and dbid = db_ident
    and instance_number = inst_num;
    if ( (l_b_session_id = l_e_session_id)
    and (l_b_serial# = l_e_serial#)
    and (l_b_session_id != 0) ) then
    /* we have a valid comparison - it is the
    same session - get number of tx performed
    by this session */
    strn := SESDIF('user rollbacks') + SESDIF('user commits');
    if strn = 0 then
    /* No new transactions */
    strn := 1;
    end if;
    else
    /* No valid comparison can be made */
    strn :=1;
    end if;
    end STAT_CHANGES;
    function SNAP
    (i_snap_level in number default null
    ,i_session_id in number default null
    ,i_ucomment in varchar2 default null
    ,i_num_sql in number default null
    ,i_executions_th in number default null
    ,i_parse_calls_th in number default null
    ,i_disk_reads_th in number default null
    ,i_buffer_gets_th in number default null
    ,i_sharable_mem_th in number default null
    ,i_version_count_th in number default null
    ,i_all_init in varchar2 default null
    ,i_pin_statspack in varchar2 default null
    ,i_modify_parameter in varchar2 default 'FALSE'
)
RETURN integer IS
    /* This function performs a snapshot of the v$ views into the
    stats$ tables, and returns the snapshot id.
    If parameters are passed, these are the values used, otherwise
the values stored in the stats$statspack_parameter table are used. */
    l_snap_id integer;
    l_snap_level number;
    l_session_id number;
    l_serial# number;
    l_ucomment varchar2(160);
    l_num_sql number;
    l_executions_th number;
    l_parse_calls_th number;
    l_disk_reads_th number;
    l_buffer_gets_th number;
    l_sharable_mem_th number;
    l_version_count_th number;
    l_all_init varchar2(5);
    l_pin_statspack varchar2(10);
    l_sql_stmt varchar2(3000);
    l_slarti varchar2(20);
    l_threshold number;
    l_total_sql number := 0;
    l_total_sql_mem number := 0;
    l_single_use_sql number := 0;
    l_single_use_sql_mem number := 0;
    l_text_subset varchar2(31);
    l_sharable_mem number;
    l_version_count number;
    l_sorts number;
    l_module varchar2(64);
    l_loaded_versions number;
    l_executions number;
    l_loads number;
    l_invalidations number;
    l_parse_calls number;
    l_disk_reads number;
    l_buffer_gets number;
    l_rows_processed number;
    l_address raw(8);
    l_hash_value number;
    l_max_begin_time date;
    cursor GETSERIAL is
    select serial#
    from v$session
    where sid = l_session_id;
    PROCEDURE snap_sql IS
    begin
    /* Gather summary statistics */
    insert into stats$sql_statistics
    ( snap_id
    , dbid
    , instance_number
    , total_sql
    , total_sql_mem
    , single_use_sql
, single_use_sql_mem
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , count(1)
    , sum(sharable_mem)
    , sum(decode(executions, 1, 1, 0))
    , sum(decode(executions, 1, sharable_mem, 0))
    from stats$v$sqlxs
    where is_obsolete = 'N';
    /* Gather SQL statements which exceed any threshold,
excluding obsolete parent cursors */
    insert into stats$sql_summary
    ( snap_id
    , dbid
    , instance_number
    , text_subset
    , sharable_mem
    , sorts
    , module
    , loaded_versions
    , executions
    , loads
    , invalidations
    , parse_calls
    , disk_reads
    , buffer_gets
    , rows_processed
    , command_type
    , address
    , hash_value
    , version_count
    , cpu_time
    , elapsed_time
    , outline_sid
, outline_category
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , substr(sql_text,1,31)
    , sharable_mem
    , sorts
    , module
    , loaded_versions
    , executions
    , loads
    , invalidations
    , parse_calls
    , disk_reads
    , buffer_gets
    , rows_processed
    , command_type
    , address
    , hash_value
    , version_count
    , cpu_time
    , elapsed_time
    , outline_sid
    , outline_category
    from stats$v$sqlxs
    where is_obsolete = 'N'
    and ( buffer_gets > l_buffer_gets_th
    or disk_reads > l_disk_reads_th
    or parse_calls > l_parse_calls_th
    or executions > l_executions_th
    or sharable_mem > l_sharable_mem_th
or version_count > l_version_count_th
);
    /* Insert the SQL Text for hash_values captured in the snapshot
    into stats$sqltext if it's not already there. Identify SQL which
exceeded the threshold by querying stats$sql_summary for this
snapid and database instance */
    insert into stats$sqltext
    ( hash_value
    , text_subset
    , piece
    , sql_text
    , address
    , command_type
, last_snap_id
)
    select st1.hash_value
    , ss.text_subset
    , st1.piece
    , st1.sql_text
    , st1.address
    , st1.command_type
    , ss.snap_id
    from v$sqltext st1
    , stats$sql_summary ss
    where ss.snap_id = l_snap_id
    and ss.dbid = p_dbid
    and ss.instance_number = p_instance_number
    and st1.hash_value = ss.hash_value
    and st1.address = ss.address
    and not exists (select 1
    from stats$sqltext st2
    where st2.hash_value = ss.hash_value
and st2.text_subset = ss.text_subset
);
    IF l_snap_level >= 6 THEN
/* Identify SQL which exceeded the threshold by querying
stats$sql_summary for this snapid and database instance.
Capture the plans which were used for the high-load SQL if we
don't already have this data.
Omit capturing plan usage information for cursors which
have a zero plan hash value.
Currently this is captured in a level 6 (or greater)
snapshot, however this may be integrated into the level 5
snapshot at a later date.
hl - high load */
    insert into stats$sql_plan_usage
    ( hash_value
    , text_subset
    , plan_hash_value
    , cost
    , snap_id
    , address
, optimizer
)
    select hl.hash_value
    , hl.text_subset
    , hl.plan_hash_value
    , hl.cost
    , max(hl.snap_id)
    , max(hl.address)
    , max(hl.optimizer)
    from (select /*+ ordered use_nl(sq) index(sq) */
    ss.hash_value
    , ss.text_subset
    , sq.plan_hash_value
    , nvl(sq.optimizer_cost,-9) cost
    , ss.snap_id snap_id
    , ss.address
    , sq.optimizer_mode optimizer
    from stats$sql_summary ss
    , v$sql sq
    where ss.snap_id = l_snap_id
    and ss.dbid = p_dbid
    and ss.instance_number = p_instance_number
    and sq.hash_value = ss.hash_value
    and sq.address = ss.address
    and sq.plan_hash_value > 0
    ) hl
where not exists (select /*+ no_unnest */ 1
    from stats$sql_plan_usage spu
    where spu.hash_value = hl.hash_value
    and spu.text_subset = hl.text_subset
    and spu.plan_hash_value
    = hl.plan_hash_value
and spu.cost = hl.cost
)
    group by hl.hash_value
    , hl.text_subset
    , hl.plan_hash_value
    , hl.cost
    , hl.optimizer;
    /* For all new hash_value, plan_hash_value, cost combinations
    just captured, get the optimizer plans, if we don't already
    have them. Note that the plan (and hence the plan hash value)
    comprises the access path and the join order (and not
variable factors such as the cardinality). */
    insert into stats$sql_plan
    ( plan_hash_value
    , id
    , operation
    , options
    , object_node
    , object#
    , object_owner
    , object_name
    , optimizer
    , parent_id
    , depth
    , position
    , cost
    , cardinality
    , bytes
    , other_tag
    , partition_start
    , partition_stop
    , partition_id
    , other
    , distribution
    , cpu_cost
    , io_cost
    , temp_space
, snap_id
)
    select /*+ ordered use_nl(s) use_nl(sp.p) */
    new_plan.plan_hash_value
    , sp.id
    , max(sp.operation)
    , max(sp.options)
    , max(sp.object_node)
    , max(sp.object#)
    , max(sp.object_owner)
    , max(sp.object_name)
    , max(sp.optimizer)
    , max(sp.parent_id)
    , max(sp.depth)
    , max(sp.position)
    , max(sp.cost)
    , max(sp.cardinality)
    , max(sp.bytes)
    , max(sp.other_tag)
    , max(sp.partition_start)
    , max(sp.partition_stop)
    , max(sp.partition_id)
    , max(sp.other)
    , max(sp.distribution)
    , max(sp.cpu_cost)
    , max(sp.io_cost)
    , max(sp.temp_space)
    , max(new_plan.snap_id)
    from (select /*+ index(spu) */
    distinct
    spu.plan_hash_value
    , spu.hash_value
    , spu.address
    , spu.text_subset
    , spu.snap_id
    from stats$sql_plan_usage spu
    where spu.snap_id = l_snap_id
    and not exists (select /*+ nl_aj */ *
    from stats$sql_plan ssp
    where ssp.plan_hash_value
= spu.plan_hash_value)
    ) new_plan
    , v$sql s
    , v$sql_plan sp
    where sp.hash_value = new_plan.hash_value
    and sp.address = new_plan.address
    and s.hash_value = new_plan.hash_value
    and s.address = new_plan.address
    and s.hash_value = sp.hash_value
    and s.address = sp.address
    and s.child_number = sp.child_number
    group by
    new_plan.plan_hash_value
    , sp.id;
    END IF; /* snap level >=6 */
    END snap_sql;
    begin /* Function SNAP */
    /* Get instance parameter defaults from stats$statspack_parameter,
    or use supplied parameters.
If all parameters are specified, use them; otherwise get values
for the parameters not specified from stats$statspack_parameter. */
    statspack.qam_statspack_parameter
    ( p_dbid
    , p_instance_number
    , i_snap_level, i_session_id, i_ucomment, i_num_sql
    , i_executions_th, i_parse_calls_th
    , i_disk_reads_th, i_buffer_gets_th, i_sharable_mem_th
    , i_version_count_th, i_all_init
    , i_pin_statspack
    , i_modify_parameter
    , l_snap_level, l_session_id, l_ucomment, l_num_sql
    , l_executions_th, l_parse_calls_th
    , l_disk_reads_th, l_buffer_gets_th, l_sharable_mem_th
    , l_version_count_th, l_all_init
    , l_pin_statspack);
    /* Generate a snapshot id */
    select stats$snapshot_id.nextval
    into l_snap_id
    from dual
    where rownum = 1;
    /* Determine the serial# of the session to maintain stats for,
if this was requested. */
    if l_session_id > 0 then
    if not GETSERIAL%ISOPEN then open GETSERIAL; end if;
    fetch GETSERIAL into l_serial#;
    if GETSERIAL%NOTFOUND then
    /* Session has already disappeared - don't gather
    statistics for this session in this snapshot */
    l_session_id := 0;
    l_serial# := 0;
    end if; close GETSERIAL;
    else
    l_serial# := 0;
    end if;
    /* The instance has been restarted since the last snapshot */
    if p_new_sga = 0
    then
    begin
    p_new_sga := 1;
    /* Get the instance startup time, and other characteristics */
    insert into stats$database_instance
    ( dbid
    , instance_number
    , startup_time
    , snap_id
    , parallel
    , version
    , db_name
    , instance_name
, host_name
)
    select p_dbid
    , p_instance_number
    , p_startup_time
    , l_snap_id
    , p_parallel
    , p_version
    , p_name
    , p_instance_name
    , p_host_name
    from sys.dual;
    commit;
    end;
    end if; /* new SGA */
    /* Work out the max undo stat time, used for gathering undo stat data */
    select nvl(max(begin_time), to_date('01011900','DDMMYYYY'))
    into l_max_begin_time
    from stats$undostat
    where dbid = p_dbid
    and instance_number = p_instance_number;
    /* Save the snapshot characteristics */
    insert into stats$snapshot
    ( snap_id, dbid, instance_number
    , snap_time, startup_time
    , session_id, snap_level, ucomment
    , executions_th, parse_calls_th, disk_reads_th
    , buffer_gets_th, sharable_mem_th
    , version_count_th, serial#, all_init)
    values
    ( l_snap_id, p_dbid, p_instance_number
    , SYSDATE, p_startup_time
    , l_session_id, l_snap_level, l_ucomment
    , l_executions_th, l_parse_calls_th, l_disk_reads_th
    , l_buffer_gets_th, l_sharable_mem_th
    , l_version_count_th, l_serial#, l_all_init);
    /* Begin gathering statistics */
    insert into stats$filestatxs
    ( snap_id
    , dbid
    , instance_number
    , tsname
    , filename
    , phyrds
    , phywrts
    , singleblkrds
    , readtim
    , writetim
    , singleblkrdtim
    , phyblkrd
    , phyblkwrt
    , wait_count
, time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , tsname
    , filename
    , phyrds
    , phywrts
    , singleblkrds
    , readtim
    , writetim
    , singleblkrdtim
    , phyblkrd
    , phyblkwrt
    , wait_count
    , time
    from stats$v$filestatxs;
    insert into stats$tempstatxs
    ( snap_id
    , dbid
    , instance_number
    , tsname
    , filename
    , phyrds
    , phywrts
    , singleblkrds
    , readtim
    , writetim
    , singleblkrdtim
    , phyblkrd
    , phyblkwrt
    , wait_count
, time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , tsname
    , filename
    , phyrds
    , phywrts
    , singleblkrds
    , readtim
    , writetim
    , singleblkrdtim
    , phyblkrd
    , phyblkwrt
    , wait_count
    , time
    from stats$v$tempstatxs;
    insert into stats$librarycache
    ( snap_id
    , dbid
    , instance_number
    , namespace
    , gets
    , gethits
    , pins
    , pinhits
    , reloads
    , invalidations
    , dlm_lock_requests
    , dlm_pin_requests
    , dlm_pin_releases
    , dlm_invalidation_requests
, dlm_invalidations
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , namespace
    , gets
    , gethits
    , pins
    , pinhits
    , reloads
    , invalidations
    , dlm_lock_requests
    , dlm_pin_requests
    , dlm_pin_releases
    , dlm_invalidation_requests
    , dlm_invalidations
    from v$librarycache;
    insert into stats$buffer_pool_statistics
    ( snap_id
    , dbid
    , instance_number
    , id
    , name
    , block_size
    , set_msize
    , cnum_repl
    , cnum_write
    , cnum_set
    , buf_got
    , sum_write
    , sum_scan
    , free_buffer_wait
    , write_complete_wait
    , buffer_busy_wait
    , free_buffer_inspected
    , dirty_buffers_inspected
    , db_block_change
    , db_block_gets
    , consistent_gets
    , physical_reads
, physical_writes
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , id
    , name
    , block_size
    , set_msize
    , cnum_repl
    , cnum_write
    , cnum_set
    , buf_got
    , sum_write
    , sum_scan
    , free_buffer_wait
    , write_complete_wait
    , buffer_busy_wait
    , free_buffer_inspected
    , dirty_buffers_inspected
    , db_block_change
    , db_block_gets
    , consistent_gets
    , physical_reads
    , physical_writes
    from v$buffer_pool_statistics;
    insert into stats$rollstat
    ( snap_id
    , dbid
    , instance_number
    , usn
    , extents
    , rssize
    , writes
    , xacts
    , gets
    , waits
    , optsize
    , hwmsize
    , shrinks
    , wraps
    , extends
    , aveshrink
, aveactive
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , usn
    , extents
    , rssize
    , writes
    , xacts
    , gets
    , waits
    , optsize
    , hwmsize
    , shrinks
    , wraps
    , extends
    , aveshrink
    , aveactive
    from v$rollstat;
    insert into stats$rowcache_summary
    ( snap_id
    , dbid
    , instance_number
    , parameter
    , total_usage
    , usage
    , gets
    , getmisses
    , scans
    , scanmisses
    , scancompletes
    , modifications
    , flushes
    , dlm_requests
    , dlm_conflicts
, dlm_releases
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , parameter
    , sum("COUNT")
    , sum(usage)
    , sum(gets)
    , sum(getmisses)
    , sum(scans)
    , sum(scanmisses)
    , sum(scancompletes)
    , sum(modifications)
    , sum(flushes)
    , sum(dlm_requests)
    , sum(dlm_conflicts)
    , sum(dlm_releases)
    from v$rowcache
    group by l_snap_id, p_dbid, p_instance_number, parameter;
    /* Collect parameters every snapshot, to cater for dynamic
parameters changeable while the instance is running */
    if l_all_init = 'FALSE' then
    insert into stats$parameter
    ( snap_id
    , dbid
    , instance_number
    , name
    , value
    , isdefault
, ismodified
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , name
    , value
    , isdefault
    , ismodified
    from v$system_parameter;
    else
    insert into stats$parameter
    ( snap_id
    , dbid
    , instance_number
    , name
    , value
    , isdefault
, ismodified
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , i.ksppinm
    , sv.ksppstvl
    , sv.ksppstdf
    , decode(bitand(sv.ksppstvf,7),1,'MODIFIED',4,'SYSTEM_MOD','FALSE')
    from stats$x$ksppi i
    , stats$x$ksppsv sv
    where i.indx = sv.indx;
    end if;
    /* To cater for variable size SGA - insert on each snapshot */
    insert into stats$sga
    ( snap_id
    , dbid
    , instance_number
    , name
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , name
    , value
    from v$sga;
    /* Get current allocation of memory in the SGA */
    insert into stats$sgastat
    ( snap_id
    , dbid
    , instance_number
    , pool
    , name
, bytes
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , pool
    , name
    , bytes
    from v$sgastat;
    insert into stats$system_event
    ( snap_id
    , dbid
    , instance_number
    , event
    , total_waits
    , total_timeouts
, time_waited_micro
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , event
    , total_waits
    , total_timeouts
    , time_waited_micro
    from v$system_event;
    insert into stats$bg_event_summary
    ( snap_id
    , dbid
    , instance_number
    , event
    , total_waits
    , total_timeouts
, time_waited_micro
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , e.event
    , sum(e.total_waits)
    , sum(e.total_timeouts)
    , sum(e.time_waited_micro)
    from v$session_event e
    where e.sid in (select s.sid from v$session s where s.type = 'BACKGROUND')
    group by l_snap_id, p_dbid, p_instance_number, e.event;
    insert into stats$sysstat
    ( snap_id
    , dbid
    , instance_number
    , statistic#
    , name
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , statistic#
    , name
    , value
    from v$sysstat;
    insert into stats$waitstat
    ( snap_id
    , dbid
    , instance_number
    , class
    , wait_count
, time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , class
    , "COUNT"
    , time
    from v$waitstat;
    insert into stats$enqueue_stat
    ( snap_id
    , dbid
    , instance_number
    , eq_type
    , total_req#
    , total_wait#
    , succ_req#
    , failed_req#
, cum_wait_time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , eq_type
    , total_req#
    , total_wait#
    , succ_req#
    , failed_req#
    , cum_wait_time
    from v$enqueue_stat
    where total_req# != 0;
    insert into stats$latch
    ( snap_id
    , dbid
    , instance_number
    , name
    , latch#
    , level#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
, wait_time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , name
    , latch#
    , level#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
    , wait_time
    from v$latch;
    insert into stats$latch_misses_summary
    ( snap_id
    , dbid
    , instance_number
    , parent_name
    , where_in_code
    , nwfail_count
    , sleep_count
, wtr_slp_count
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , parent_name
    , "WHERE"
    , sum(nwfail_count)
    , sum(sleep_count)
    , sum(wtr_slp_count)
    from v$latch_misses
    where sleep_count > 0
    group by l_snap_id, p_dbid, p_instance_number
    , parent_name, "WHERE";
    insert into stats$resource_limit
    ( snap_id
    , dbid
    , instance_number
    , resource_name
    , current_utilization
    , max_utilization
    , initial_allocation
, limit_value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , resource_name
    , current_utilization
    , max_utilization
    , initial_allocation
    , limit_value
    from v$resource_limit
    where limit_value != ' UNLIMITED'
    and max_utilization > 0;
    insert into stats$undostat
    ( begin_time
    , end_time
    , dbid
    , instance_number
    , snap_id
    , undotsn
    , undoblks
    , txncount
    , maxquerylen
    , maxconcurrency
    , unxpstealcnt
    , unxpblkrelcnt
    , unxpblkreucnt
    , expstealcnt
    , expblkrelcnt
    , expblkreucnt
    , ssolderrcnt
, nospaceerrcnt
)
    select begin_time
    , end_time
    , p_dbid
    , p_instance_number
    , l_snap_id
    , undotsn
    , undoblks
    , txncount
    , maxquerylen
    , maxconcurrency
    , unxpstealcnt
    , unxpblkrelcnt
    , unxpblkreucnt
    , expstealcnt
    , expblkrelcnt
    , expblkreucnt
    , ssolderrcnt
    , nospaceerrcnt
    from v$undostat
    where begin_time > l_max_begin_time
    and begin_time + (1/(24*6)) <= end_time;
    insert into stats$db_cache_advice
    ( snap_id
    , dbid
    , instance_number
    , id
    , name
    , block_size
    , buffers_for_estimate
    , advice_status
    , size_for_estimate
    , estd_physical_read_factor
, estd_physical_reads
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , id
    , name
    , block_size
    , buffers_for_estimate
    , advice_status
    , size_for_estimate
    , estd_physical_read_factor
    , estd_physical_reads
    from v$db_cache_advice
    where advice_status = 'ON';
    insert into stats$pgastat
    ( snap_id
    , dbid
    , instance_number
    , name
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , name
    , value
    from v$pgastat;
    insert into stats$instance_recovery
    ( snap_id
    , dbid
    , instance_number
    , recovery_estimated_ios
    , actual_redo_blks
    , target_redo_blks
    , log_file_size_redo_blks
    , log_chkpt_timeout_redo_blks
    , log_chkpt_interval_redo_blks
    , fast_start_io_target_redo_blks
    , target_mttr
    , estimated_mttr
, ckpt_block_writes
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , recovery_estimated_ios
    , actual_redo_blks
    , target_redo_blks
    , log_file_size_redo_blks
    , log_chkpt_timeout_redo_blks
    , log_chkpt_interval_redo_blks
    , fast_start_io_target_redo_blks
    , target_mttr
    , estimated_mttr
    , ckpt_block_writes
    from v$instance_recovery;
    if p_parallel = 'YES' then
    insert into stats$dlm_misc
    ( snap_id
    , dbid
    , instance_number
    , statistic#
    , name
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , statistic#
    , name
    , value
    from v$dlm_misc;
    end if; /* parallel */
    /* Begin gathering Extended Statistics */
    IF l_snap_level >= 5 THEN
    snap_sql;
    END IF; /* snap level >=5 */
    IF l_snap_level >= 10 THEN
    insert into stats$latch_children
    ( snap_id
    , dbid
    , instance_number
    , latch#
    , child#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
, wait_time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , latch#
    , child#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
    , wait_time
    from v$latch_children;
    insert into stats$latch_parent
    ( snap_id
    , dbid
    , instance_number
    , latch#
    , level#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
, wait_time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , latch#
    , level#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
    , wait_time
    from v$latch_parent;
    END IF; /* snap level >=10 */
    /* Record level session-granular statistics if a specific session
has been requested */
    if l_session_id > 0
    then
    insert into stats$sesstat
    ( snap_id
    , dbid
    , instance_number
    , statistic#
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , statistic#
    , value
    from v$sesstat
    where sid = l_session_id;
    insert into stats$session_event
    ( snap_id
    , dbid
    , instance_number
    , event
    , total_waits
    , total_timeouts
    , time_waited_micro
, max_wait
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , event
    , total_waits
    , total_timeouts
    , time_waited_micro
    , max_wait
    from v$session_event
    where sid = l_session_id;
    end if;
    commit work;
    RETURN l_snap_id;
    end SNAP; /* Function SNAP */
    begin /* STATSPACK body */
    /* Query the database id, instance_number, database name, instance
name and startup time for the instance we are working on */
    /* Get information about the current instance */
    open get_instance;
    fetch get_instance into
    p_instance_number, p_instance_name
    , p_startup_time, p_parallel, p_version
    , p_host_name;
    close get_instance;
    /* Select the database info for the db connected to */
    open get_db;
    fetch get_db into p_dbid, p_name;
    close get_db;
/* Keep the package */
    sys.dbms_shared_pool.keep('PERFSTAT.STATSPACK', 'P');
/* Determine if the instance has been restarted since the previous snapshot */
    begin
    select 1
    into p_new_sga
    from stats$database_instance
    where startup_time = p_startup_time
    and dbid = p_dbid
    and instance_number = p_instance_number;
    exception
    when NO_DATA_FOUND then
    p_new_sga := 0;
    end;
    end STATSPACK;

When I compiled the package, it compiled with the following errors.
    Warning: Package Body created with compilation errors.
    SQL> show error
    Errors for PACKAGE BODY STATSPACK:
    LINE/COL ERROR
    2045/3 PLS-00201: identifier 'SYS.DBMS_SHARED_POOL' must be declared
    2045/3 PL/SQL: Statement ignored
    SQL>
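
For what it's worth, PLS-00201 on SYS.DBMS_SHARED_POOL usually means that package is not installed or not visible to the schema compiling STATSPACK. A minimal sketch of the usual fix, assuming SYS access, a standard ORACLE_HOME layout, and the conventional PERFSTAT owner:
SQL> REM As SYS: create DBMS_SHARED_POOL if it is not installed yet
SQL> @?/rdbms/admin/dbmspool.sql
SQL> REM Still as SYS: make the package visible to the STATSPACK owner
SQL> grant execute on sys.dbms_shared_pool to perfstat;
SQL> REM Then, connected as PERFSTAT, recompile the package body
SQL> alter package statspack compile body;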

  • Oracle Performance tuning general question

    Hi,
Below is a list of the areas of an Oracle DB for which tuning activities are done. You are invited to comment on whether this list is complete or needs additions or deletions. As I'm learning performance tuning for Oracle these days, I want to expand my knowledge by sharing what I'm learning and what I still need to learn.
So comment with open hearts on it, especially the experts and gurus.
Here is the list:
1- Planning for performance, including storage considerations (whether it is SAN, NAS, or DAS), network planning, and host OS planning with proper configuration for running Oracle.
2- Database design (not under-normalized and not over-normalized, with proper usage of indexes, views, and stored procedures).
3- Instance tuning (memory structures + background processes).
4- Session tuning.
5- Segment space tuning.
6- SQL tuning.
This is what I've learned up to now. If anything needs to be added, kindly tell me what it is. Please also provide links (good and precise ones) to performance tuning tutorials on the web. Also note that I'm discussing this w.r.t. a single-instance, non-RAC DB.
Looking forward to good suggestions.
    Regards,
    Abbasi

    Hello,
These are the Oracle course contents:
    Contents
    Preface
    1 Introduction
    Course Objectives 1-2
    Organization 1-3
    Agenda 1-4
    What Is Not Included 1-6
    Who Tunes? 1-7
    What Does the DBA Tune? 1-8
    How to Tune 1-10
    Tuning Methodology 1-11
    Effective Tuning Goals 1-13
    General Tuning Session 1-15
    Summary 1-17
    2 Basic Tuning Tools
    Objectives 2-2
    Performance Tuning Diagnostics 2-3
    Performance Tuning Tools 2-4
    Tuning Objectives 2-5
    Top Wait Events 2-6
    DB Time 2-7
    CPU and Wait Time Tuning Dimensions 2-8
    Time Model: Overview 2-9
    Time Model Statistics Hierarchy 2-10
    Time Model Example 2-12
    Dynamic Performance Views 2-13
    Dynamic Performance Views: Usage Examples 2-14
    Dynamic Performance Views: Considerations 2-15
    Statistic Levels 2-16
    Statistics and Wait Events 2-18
    System Statistic Classes 2-19
    Displaying Statistics 2-20
    Displaying SGA Statistics 2-22
    Wait Events 2-23
    Using the V$EVENT_NAME View 2-24
    Wait Classes 2-25
    Displaying Wait Event Statistics 2-26
    Commonly Observed Wait Events 2-28
    Using the V$SESSION_WAIT View 2-29
    Precision of System Statistics 2-31
    Using Features of the Packs 2-32
    Accessing the Database Home Page 2-34
    Enterprise Manager Performance Pages 2-35
    Viewing the Alert Log 2-37
    Using Alert Log Information as an Aid in Tuning 2-38
    User Trace Files 2-40
    Background Processes Trace Files 2-41
    Summary 2-42
    Practice 2 Overview: Using Basic Tools 2-43
    3 Using Automatic Workload Repository
    Objectives 3-2
    Automatic Workload Repository: Overview 3-3
    Automatic Workload Repository Data 3-4
    Workload Repository 3-5
    Database Control and AWR 3-6
    AWR Snapshot Purging Policy 3-7
    AWR Snapshot Settings 3-8
    Manual AWR Snapshots 3-9
    Managing Snapshots with PL/SQL 3-10
    Generating AWR Reports in EM 3-11
    Generating AWR Reports in SQL*Plus 3-12
    Reading the AWR Report 3-13
    Snapshots and Periods Comparisons 3-14
    Compare Periods: Benefits 3-15
    Compare Periods: Results 3-16
    Compare Periods: Report 3-17
    Compare Periods: Load Profile 3-18
    Compare Periods: Top Events 3-19
    Summary 3-20
    Practice 3 Overview: Using AWR-Based Tools 3-21
    4 Defining Problems
    Objectives 4-2
    Defining the Problem 4-3
    Limit the Scope 4-4
    Setting the Priority 4-5
    Top Wait Events 4-6
    Setting the Priority: Example 4-7
    Top SQL Reports 4-8
    Common Tuning Problems 4-9
    Tuning Life Cycle Phases 4-11
    Tuning During the Life Cycle 4-12
    Application Design and Development 4-13
    Testing: Database Configuration 4-14
    Deployment 4-15
    Production 4-16
    Migration, Upgrade, and Environment Changes 4-17
    ADDM Tuning Session 4-18
    Performance Versus Business Requirements 4-19
    Performance Tuning Resources 4-20
    Filing a Performance Service Request 4-21
    RDA Report 4-22
    Monitoring and Tuning Tool: Overview 4-23
    Summary 4-25
    Practice 4 Overview: Identifying the Problem 4-26
    5 Using Metrics and Alerts
    Objectives 5-2
    Metrics, Alerts, and Baselines 5-3
    Limitation of Base Statistics 5-4
    Typical Delta Tools 5-5
    Oracle Database 11g Solution: Metrics 5-6
    Benefits of Metrics 5-7
    Viewing Metric History Information 5-8
    Using EM to View Metric Details 5-9
    Statistic Histograms 5-10
    Histogram Views 5-11
    Server-Generated Alerts 5-12
    Database Control Usage Model 5-13
    Setting Thresholds 5-14
    Creating and Testing an Alert 5-15
    Metric and Alert Views 5-16
    View User-Defined SQL Metrics 5-17
    Create User-Defined SQL Metrics 5-18
    View User-Defined Host Metrics 5-19
    Create User-Defined Host Metrics 5-20
    Summary 5-21
    Practice Overview 5: Working with Metrics 5-22
    6 Baselines
    Objectives 6-2
    Comparative Performance Analysis with AWR Baselines 6-3
    Automatic Workload Repository Baselines 6-4
    Moving Window Baseline 6-5
    Baselines in Performance Page Settings 6-6
    Baseline Templates 6-7
    AWR Baselines 6-8
    Creating AWR Baselines 6-9
    Single AWR Baseline 6-10
    Creating a Repeating Baseline Template 6-11
    Managing Baselines with PL/SQL 6-12
    Generating a Baseline Template for a Single Time Period 6-13
    Creating a Repeating Baseline Template 6-14
    Baseline Views 6-15
    Performance Monitoring and Baselines 6-17
    Defining Alert Thresholds Using a Static Baseline 6-19
    Using EM to Quickly Configure Adaptive Thresholds 6-20
    Changing Adaptive Threshold Settings 6-22
    Summary 6-23
    Practice 6: Overview Using AWR Baselines 6-24
    7 Using AWR-Based Tools
    Objectives 7-2
    Automatic Maintenance Tasks 7-3
    Maintenance Windows 7-4
    Default Maintenance Plan 7-5
    Automated Maintenance Task Priorities 7-6
    Tuning Automatic Maintenance Tasks 7-7
    ADDM Performance Monitoring 7-8
    ADDM and Database Time 7-9
    DBTime-Graph and ADDM Methodology 7-10
    Top Performance Issues Detected 7-12
    Database Control and ADDM Findings 7-13
    ADDM Analysis Results 7-14
    ADDM Recommendations 7-15
    Database Control and ADDM Task 7-16
    Changing ADDM Attributes 7-17
    Retrieving ADDM Reports by Using SQL 7-18
    Active Session History: Overview 7-19
    Active Session History: Mechanics 7-20
    ASH Sampling: Example 7-21
    Accessing ASH Data 7-22
    Dump ASH to File 7-23
    Analyzing the ASH Data 7-24
    Generating ASH Reports 7-25
    ASH Report Script 7-26
    ASH Report: General Section 7-27
    ASH Report Structure 7-28
    ASH Report: Activity Over Time 7-29
    Summary 7-30
    Practice 7 Overview: Using AWR-Based Tools 7-31
    8 Monitoring an Application
    Objectives 8-2
    What Is a Service? 8-3
    Service Attributes 8-4
    Service Types 8-5
    Creating Services 8-6
    Managing Services in a Single-Instance Environment 8-7
    Everything Switches to Services 8-8
    Using Services with Client Applications 8-9
    Using Services with the Resource Manager 8-10
    Services and Resource Manager with EM 8-11
    Services and the Resource Manager: Example 8-12
    Using Services with the Scheduler 8-13
    Services and the Scheduler with EM 8-14
    Services and the Scheduler: Example 8-16
    Using Services with Parallel Operations 8-17
    Using Services with Metric Thresholds 8-18
    Changing Service Thresholds by Using EM 8-19
    Services and Metric Thresholds: Example 8-20
    Service Aggregation and Tracing 8-21
    Top Services Performance Page 8-22
    Service Aggregation Configuration 8-23
    Service Aggregation: Example 8-24
    Client Identifier Aggregation and Tracing 8-25
    trcsess Utility 8-26
    Service Performance Views 8-27
    Summary 8-29
    Practice 8 Overview: Using Services 8-30
    9 Identifying Problem SQL Statements
    Objectives 9-2
    SQL Statement Processing Phases 9-3
    Parse Phase 9-4
    SQL Storage 9-5
    Cursor Usage and Parsing 9-6
    SQL Statement Processing Phases: Bind 9-8
    SQL Statement Processing Phases: Execute and Fetch 9-9
    Processing a DML Statement 9-10
    COMMIT Processing 9-12
    Role of the Oracle Optimizer 9-13
    Identifying Bad SQL 9-15
    TOP SQL Reports 9-16
    What Is an Execution Plan? 9-17
    Methods for Viewing Execution Plans 9-18
    Uses of Execution Plans 9-19
    DBMS_XPLAN Package: Overview 9-20
    EXPLAIN PLAN Command 9-22
    EXPLAIN PLAN Command: Example 9-23
    EXPLAIN PLAN Command: Output 9-24
    Reading an Execution Plan 9-25
    Using the V$SQL_PLAN View 9-26
    V$SQL_PLAN Columns 9-27
    Querying V$SQL_PLAN 9-28
    V$SQL_PLAN_STATISTICS View 9-29
    Querying the AWR 9-30
    SQL*Plus AUTOTRACE 9-32
    Using SQL*Plus AUTOTRACE 9-33
    SQL*Plus AUTOTRACE: Statistics 9-34
    SQL Trace Facility 9-35
    How to Use the SQL Trace Facility 9-37
    Initialization Parameters 9-38
    Enabling SQL Trace 9-40
    Disabling SQL Trace 9-41
    Formatting Your Trace Files 9-42
    TKPROF Command Options 9-43
    Output of the TKPROF Command 9-45
    TKPROF Output with No Index: Example 9-50
    TKPROF Output with Index: Example 9-51
    Generate an Optimizer Trace 9-52
    Summary 9-53
    Practice Overview 9: Using Execution Plan Utilities 9-54
    10 Influencing the Optimizer
    Objectives 10-2
    Functions of the Query Optimizer 10-3
    Selectivity 10-5
    Cardinality and Cost 10-6
    Changing Optimizer Behavior 10-7
    Using Hints 10-8
    Optimizer Statistics 10-9
    Extended Statistics 10-10
    Controlling the Behavior of the Optimizer with Parameters 10-11
    Enabling Query Optimizer Features 10-13
    Influencing the Optimizer Approach 10-14
    Optimizing SQL Statements 10-15
    Access Paths 10-16
    Choosing an Access Path 10-17
    Full Table Scans 10-18
    Row ID Scans 10-20
    Index Operations 10-21
    B*Tree Index Operations 10-22
    Bitmap Indexes 10-23
    Bitmap Index Access 10-24
    Combining Bitmaps 10-25
    Bitmap Operations 10-26
    Join Operations 10-27
    Join Methods 10-28
    Nested Loop Joins 10-29
    Hash Joins 10-31
    Sort-Merge Joins 10-32
    Join Performance 10-34
    How the Query Optimizer Chooses Execution Plans for Joins 10-35
    Sort Operations 10-37
    Tuning Sort Performance 10-38
    Reducing the Cost 10-39
    Index Maintenance 10-40
    Dropping Indexes 10-42
    Creating Indexes 10-43
    SQL Access Advisor 10-44
    Table Maintenance for Performance 10-45
    Table Reorganization Methods 10-46
    Summary 10-47
    Practice 10 Overview: Influencing the Optimizer 10-48
    11 Using SQL Performance Analyzer
    Objectives 11-2
    Real Application Testing: Overview 11-3
    Real Application Testing: Use Cases 11-4
    SQL Performance Analyzer: Process 11-5
    Capturing the SQL Workload 11-7
    Creating a SQL Performance Analyzer Task 11-8
    SQL Performance Analyzer: Tasks 11-9
    Optimizer Upgrade Simulation 11-10
    SQL Performance Analyzer Task Page 11-11
    Comparison Report 11-12
    Comparison Report SQL Detail 11-13
    Tuning Regressing Statements 11-14
    Preventing Regressions 11-16
    Parameter Change Analysis 11-17
    Guided Workflow Analysis 11-18
    SQL Performance Analyzer: PL/SQL Example 11-19
    SQL Performance Analyzer: Data Dictionary Views 11-21
    Summary 11-22
    Practice 11: Overview 11-23
    12 SQL Performance Management
    Objectives 12-2
    Maintaining SQL Performance 12-3
    Maintaining Optimizer Statistics 12-4
    Automated Maintenance Tasks 12-5
    Statistic Gathering Options 12-6
    Setting Statistic Preferences 12-7
    Restore Statistics 12-9
    Deferred Statistics Publishing: Overview 12-10
    Deferred Statistics Publishing: Example 12-12
    Automatic SQL Tuning: Overview 12-13
    SQL Statement Profiling 12-14
    Plan Tuning Flow and SQL Profile Creation 12-15
    SQL Tuning Loop 12-16
    Using SQL Profiles 12-17
    SQL Tuning Advisor: Overview 12-18
    Using the SQL Tuning Advisor 12-19
    SQL Tuning Advisor Options 12-20
    SQL Tuning Advisor Recommendations 12-21
    Using the SQL Tuning Advisor: Example 12-22
    Using the SQL Access Advisor 12-23
    View Recommendations 12-25
    View Recommendation Details 12-26
    SQL Plan Management: Overview 12-27
    SQL Plan Baseline: Architecture 12-28
    Loading SQL Plan Baselines 12-30
    Evolving SQL Plan Baselines 12-31
    Important Baseline SQL Plan Attributes 12-32
    SQL Plan Selection 12-34
    Possible SQL Plan Manageability Scenarios 12-36
    SQL Performance Analyzer and SQL Plan Baseline Scenario 12-37
    Loading a SQL Plan Baseline Automatically 12-38
    Purging SQL Management Base Policy 12-39
    Enterprise Manager and SQL Plan Baselines 12-40
    Summary 12-41
    Practice 12: Overview Using SQL Plan Management 12-42
    13 Using Database Replay
    Objectives 13-2
    Using Database Replay 13-3
    The Big Picture 13-4
    System Architecture: Capture 13-5
    System Architecture: Processing the Workload 13-7
    System Architecture: Replay 13-8
    Capture Considerations 13-9
    Replay Considerations: Preparation 13-10
    Replay Considerations 13-11
    Replay Options 13-12
    Replay Analysis 13-13
    Database Replay Workflow in Enterprise Manager 13-15
    Capturing Workload with Enterprise Manager 13-16
    Capture Wizard: Plan Environment 13-17
    Capture Wizard: Options 13-18
    Capture Wizard: Parameters 13-19
    Viewing Capture Progress 13-20
    Viewing Capture Report 13-21
    Export Capture AWR Data 13-22
    Viewing Workload Capture History 13-23
    Processing Captured Workload 13-24
    Using the Preprocess Captured Workload Wizard 13-25
    Using the Replay Workload Wizard 13-26
    Replay Workload: Prerequisites 13-27
    Replay Workload: Choose Initial Options 13-28
    Replay Workload: Customize Options 13-29
    Replay Workload: Prepare Replay Clients 13-30
    Replay Workload: Client Connections 13-31
    Replay Workload: Replay Started 13-32
    Viewing Workload Replay Progress 13-33
    Viewing Workload Replay Statistics 13-34
    Packages and Procedures 13-36
    Data Dictionary Views: Database Replay 13-37
    Database Replay: PL/SQL Example 13-38
    Calibrating Replay Clients 13-40
    Summary 13-41
    Practice 13: Overview 13-42
    14 Tuning the Shared Pool
    Objectives 14-2
    Shared Pool Architecture 14-3
    Shared Pool Operation 14-4
    The Library Cache 14-5
    Latch and Mutex 14-7
    Latch and Mutex: Views and Statistics 14-9
    Diagnostic Tools for Tuning the Shared Pool 14-11
    AWR/Statspack Indicators 14-13
    Load Profile 14-14
    Instance Efficiencies 14-15
    Top Waits 14-16
    Time Model 14-17
    Library Cache Activity 14-19
    Avoid Hard Parses 14-20
    Are Cursors Being Shared? 14-21
    Sharing Cursors 14-23
    Adaptive Cursor Sharing: Example 14-25
    Adaptive Cursor Sharing Views 14-27
    Interacting with Adaptive Cursor Sharing 14-28
    Avoiding Soft Parses 14-29
    Sizing the Shared Pool 14-30
    Shared Pool Advisory 14-31
    Shared Pool Advisor 14-33
    Avoiding Fragmentation 14-34
    Large Memory Requirements 14-35
    Tuning the Shared Pool Reserved Space 14-37
    Keeping Large Objects 14-39
    Data Dictionary Cache 14-41
    Dictionary Cache Misses 14-42
    SQL Query Result Cache: Overview 14-43
    Managing the SQL Query Result Cache 14-44
    Using the RESULT_CACHE Hint 14-46
    Using the DBMS_RESULT_CACHE Package 14-47
    Viewing SQL Result Cache Dictionary Information 14-48
    SQL Query Result Cache: Considerations 14-49
    UGA and Oracle Shared Server 14-50
    Large Pool 14-51
    Tuning the Large Pool 14-52
    Summary 14-53
    Practice Overview 14: Tuning the Shared Pool 14-54
    15 Tuning the Buffer Cache
    Objectives 15-2
    Oracle Database Architecture 15-3
    Buffer Cache: Highlights 15-4
    Database Buffers 15-5
    Buffer Hash Table for Lookups 15-6
    Working Sets 15-7
    Tuning Goals and Techniques 15-9
    Symptoms 15-11
    Cache Buffer Chains Latch Contention 15-12
    Finding Hot Segments 15-13
    Buffer Busy Waits 15-14
    Calculating the Buffer Cache Hit Ratio 15-15
    Buffer Cache Hit Ratio Is Not Everything 15-16
    Interpreting Buffer Cache Hit Ratio 15-17
    Read Waits 15-19
    Free Buffer Waits 15-21
    Solutions 15-22
    Sizing the Buffer Cache 15-23
    Buffer Cache Size Parameters 15-24
    Dynamic Buffer Cache Advisory Parameter 15-25
    Buffer Cache Advisory View 15-26
    Using the V$DB_CACHE_ADVICE View 15-27
    Using the Buffer Cache Advisory with EM 15-28
    Caching Tables 15-29
    Multiple Buffer Pools 15-30
    Enabling Multiple Buffer Pools 15-32
    Calculating the Hit Ratio for Multiple Pools 15-33
    Multiple Block Sizes 15-35
    Multiple Database Writers 15-36
    Multiple I/O Slaves 15-37
    Use Multiple Writers or I/O Slaves 15-38
    Private Pool for I/O Intensive Operations 15-39
    Automatically Tuned Multiblock Reads 15-40
    Flushing the Buffer Cache (for Testing Only) 15-41
    Summary 15-42
    Practice 15: Overview Tuning the Buffer Cache 15-43
    16 Tuning PGA and Temporary Space
    Objectives 16-2
    SQL Memory Usage 16-3
    Performance Impact 16-4
    Automatic PGA Memory 16-5
    SQL Memory Manager 16-6
    Configuring Automatic PGA Memory 16-8
    Setting PGA_AGGREGATE_TARGET Initially 16-9
    Monitoring SQL Memory Usage 16-10
    Monitoring SQL Memory Usage: Examples 16-12
    Tuning SQL Memory Usage 16-13
    PGA Target Advice Statistics 16-14
    PGA Target Advice Histograms 16-15
    Automatic PGA and Enterprise Manager 16-16
    Automatic PGA and AWR Reports 16-17
    Temporary Tablespace Management: Overview 16-18
    Temporary Tablespace: Best Practice 16-19
    Configuring Temporary Tablespace 16-20
    Temporary Tablespace Group: Overview 16-22
    Temporary Tablespace Group: Benefits 16-23
    Creating Temporary Tablespace Groups 16-24
    Maintaining Temporary Tablespace Groups 16-25
    View Tablespace Groups 16-26
    Monitoring Temporary Tablespace 16-27
    Temporary Tablespace Shrink 16-28
    Tablespace Option for Creating Temporary Table 16-29
    Summary 16-30
    Practice Overview 16: Tuning PGA Memory 16-31
    17 Automatic Memory Management
    Objectives 17-2
    Oracle Database Architecture 17-3
    Dynamic SGA 17-4
    Granule 17-5
    Memory Advisories 17-6
    Manually Adding Granules to Components 17-7
    Increasing the Size of an SGA Component 17-8
    Automatic Shared Memory Management: Overview 17-9
    SGA Sizing Parameters: Overview 17-10
    Dynamic SGA Transfer Modes 17-11
    Memory Broker Architecture 17-12
    Manually Resizing Dynamic SGA Parameters 17-13
    Behavior of Auto-Tuned SGA Parameters 17-14
    Behavior of Manually Tuned SGA Parameters 17-15
    Using the V$PARAMETER View 17-16
    Resizing SGA_TARGET 17-17
    Disabling Automatic Shared Memory Management 17-18
    Configuring ASMM 17-19
    SGA Advisor 17-20
    Monitoring ASMM 17-21
    Automatic Memory Management: Overview 17-22
    Oracle Database Memory Parameters 17-24
    Automatic Memory Parameter Dependency 17-25
    Enabling Automatic Memory Management 17-26
    Monitoring Automatic Memory Management 17-27
    DBCA and Automatic Memory Management 17-29
    Summary 17-30
    Practice 17: Overview Using Automatic Memory Tuning 17-31
    18 Tuning Segment Space Usage
    Objectives 18-2
    Space Management 18-3
    Extent Management 18-4
    Locally Managed Extents 18-5
    Large Extents: Considerations 18-6
    How Table Data Is Stored 18-8
    Anatomy of a Database Block 18-9
    Minimize Block Visits 18-10
    The DB_BLOCK_SIZE Parameter 18-11
    Small Block Size: Considerations 18-12
    Large Block Size: Considerations 18-13
    Block Allocation 18-14
    Free Lists 18-15
    Block Space Management 18-16
    Block Space Management with Free Lists 18-17
    Automatic Segment Space Management 18-19
    Automatic Segment Space Management at Work 18-20
    Block Space Management with ASSM 18-22
    Creating an Automatic Segment Space Management Segment 18-23
    Migration and Chaining 18-24
    Guidelines for PCTFREE and PCTUSED 18-26
    Detecting Migration and Chaining 18-27
    Selecting Migrated Rows 18-28
    Eliminating Migrated Rows 18-29
    Shrinking Segments: Overview 18-31
    Shrinking Segments: Considerations 18-32
    Shrinking Segments by Using SQL 18-33
    Segment Shrink: Basic Execution 18-34
    Segment Shrink: Execution Considerations 18-35
    Using EM to Shrink Segments 18-36
    Table Compression: Overview 18-37
    Table Compression Concepts 18-38
    Using Table Compression 18-39
    Summary 18-40
    19 Tuning I/O
    Objectives 19-2
    I/O Architecture 19-3
    File System Characteristics 19-4
    I/O Modes 19-5
    Direct I/O 19-6
    Bandwidth Versus Size 19-7
    Important I/O Metrics for Oracle Databases 19-8
    I/O Calibration and Enterprise Manager 19-10
    I/O Calibration and the PL/SQL Interface 19-11
    I/O Statistics: Overview 19-13
    I/O Statistics and Enterprise Manager 19-14
    Stripe and Mirror Everything 19-16
    Using RAID 19-17
    RAID Cost Versus Benefits 19-18
    Should I Use RAID 1 or RAID 5? 19-20
    Diagnostics 19-21
    Database I/O Tuning 19-22
    What Is Automatic Storage Management? 19-23
    Tuning ASM 19-24
    How Many Disk Groups per Database 19-25
    Which RAID Configuration for Best Availability? 19-26
    ASM Mirroring Guidelines 19-27
    ASM Striping Granularity 19-28
    What Type of Striping Works Best? 19-29
    ASM Striping Only 19-30
    Hardware RAID Striped LUNs 19-31
    ASM Guidelines 19-32
    ASM Instance Initialization Parameters 19-33
    Dynamic Performance Views 19-34
    Monitoring Long-Running Operations by Using V$ASM_OPERATION 19-36
    ASM Instance Performance Diagnostics 19-37
    ASM Performance Page 19-38
    Database Instance Parameter Changes 19-39
    ASM Scalability 19-40
    Summary 19-41
    20 Performance Tuning Summary
    Objectives 20-2
    Necessary Initialization Parameters with Little Performance Impact 20-3
    Important Initialization Parameters with Performance Impact 20-4
    Sizing Memory Initially 20-6
    Database High Availability: Best Practices 20-7
    Undo Tablespace: Best Practices 20-8
    Temporary Tablespace: Best Practices 20-9
    General Tablespace: Best Practices 20-11
    Internal Fragmentation Considerations 20-12
    Block Size: Advantages and Disadvantages 20-13
    Automatic Checkpoint Tuning 20-14
    Sizing the Redo Log Buffer 20-15
    Sizing Redo Log Files 20-16
    Increasing the Performance of Archiving 20-17
    Automatic Statistics Gathering 20-19
    Automatic Statistics Collection: Considerations 20-20
    Commonly Observed Wait Events 20-21
    Additional Statistics 20-22
    Top 10 Mistakes Found in Customer Systems 20-23
    Summary 20-25
    Appendix A: Practices and Solutions
    Appendix B: Using Statspack
    Index

  • Behaviour of default value of METHOD_OPT

    Hello,
I was trying to test the impact of the extended statistics feature of 11g when I was puzzled by another observation.
I created a table (from the ALL_OBJECTS view). The data in this table was such that it had lots of rows where OWNER = 'PUBLIC'
and lots of rows where OBJECT_TYPE = 'JAVA CLASS', but no rows where OWNER = 'PUBLIC' AND OBJECT_TYPE = 'JAVA CLASS'.
I also created an index on the combination of (OWNER, OBJECT_TYPE).
Now, after collecting statistics on the table and index, I queried the table for the above condition (OWNER = 'PUBLIC' AND OBJECT_TYPE = 'JAVA CLASS').
To my surprise (or not), the query used the index.
Then I re-collected the statistics on the table and index, and now the same query started to do a full table scan.
Only the creation of extended statistics ensured that the plan subsequently changed back to indexed access. While this proved the use of extended stats,
I am not sure how Oracle was able to use the indexed access path initially but not afterwards.
Is this due to column usage monitoring info? Can anybody help?
    Here is my test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    5 rows selected.
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_capture_sql_plan_baselines boolean     FALSE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      11.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    optimizer_use_invisible_indexes      boolean     FALSE
    optimizer_use_pending_statistics     boolean     FALSE
    optimizer_use_sql_plan_baselines     boolean     TRUE
    SQL> create table t1 nologging as select * from all_objects ;
    Table created.
    SQL> exec dbms_stats.gather_table_stats(user, 'T1', no_invalidate=>false) ;
    PL/SQL procedure successfully completed.
    SQL> select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS' ;
    no rows selected
    SQL> select * from table(dbms_xplan.display_cursor) ;
    PLAN_TABLE_OUTPUT
    SQL_ID  bnrj3cac3upfd, child number 0
    select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS'
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |   226 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   |   155 | 15190 |   226   (1)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter(("OBJECT_TYPE"='JAVA CLASS' AND "OWNER"='PUBLIC'))
    18 rows selected.
    SQL> create index t1_idx on t1(owner, object_type) nologging ;
    Index created.
    SQL> select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS' ;
    no rows selected
    SQL> select * from table(dbms_xplan.display_cursor) ;
    PLAN_TABLE_OUTPUT
    SQL_ID  bnrj3cac3upfd, child number 0
    select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS'
    Plan hash value: 546753835
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |       |       |    23 (100)|          |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1     |   633 | 62034 |    23   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | T1_IDX |   633 |       |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("OWNER"='PUBLIC' AND "OBJECT_TYPE"='JAVA CLASS')
    19 rows selected.
    SQL> REM This shows that CBO decided to use the index even when there are no extended statistics
    SQL> REM Now, we will gather statistics on the table again and see what happens
    SQL> exec dbms_stats.gather_table_stats(user, 'T1', no_invalidate=>false) ;
    PL/SQL procedure successfully completed.
    SQL> select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS' ;
    no rows selected
    SQL> select * from table(dbms_xplan.display_cursor) ;
    PLAN_TABLE_OUTPUT
    SQL_ID  bnrj3cac3upfd, child number 0
    select * from t1 where owner = 'PUBLIC' and object_type = 'JAVA CLASS'
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |   226 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   | 11170 |  1069K|   226   (1)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter(("OBJECT_TYPE"='JAVA CLASS' AND "OWNER"='PUBLIC'))
    18 rows selected.
    SQL> REM And the plan changes to Full Table scan. Why?

user503699 wrote:
Hemant K Chitale wrote:
A change in statistics drives a change in expected cardinality, which drives a change in plan.
In that case, how does one explain the same execution plan but a huge difference in cardinalities between the first and third executions?
1) Oracle can sometimes use index statistics. This most likely explains the difference in cardinality estimates between the 1st and 2nd statements.
2) You didn't specify estimate_percent - and that is not good practice. You can get different sets of statistics from different DBMS_STATS runs even when the data hasn't changed.
3) As already pointed out previously, histograms make the CBO behave quite differently. Most likely histograms were present for the 3rd statement, quite possibly as a result of not specifying estimate_percent.
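
For reference, a minimal sketch (against the same T1 test table) of the extended-statistics step the original poster alludes to; the method_opt setting shown is just one reasonable choice, not the only one:
SQL> REM Create a column group so the CBO can see the (OWNER, OBJECT_TYPE) correlation
SQL> select dbms_stats.create_extended_stats(user, 'T1', '(OWNER,OBJECT_TYPE)') from dual;
SQL> REM Re-gather statistics so the new column group gets statistics of its own
SQL> exec dbms_stats.gather_table_stats(user, 'T1', method_opt => 'for all columns size auto', no_invalidate => false);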

  • How can I query signal strength FROM an Aironet 1200 AP?

    We are trying to query the signal strength of each connected client on Aironet 1200 APs from the AP.
    Does anyone know of a way to achieve this?
We have examined the MIB documentation for an SNMP solution and have not found the required OID/MIB. This solution would be preferable, but at this point we are open to ideas.
    Thank you.

    Thank you and a follow up ...
Why is no data displayed in the Aironet 1200 SNMP MIB browse web interface when querying this OID?
When I do a MIB browse from the SNMP web interface on an Aironet 1200 to OID 1.3.6.1.4.1.522.3.12.5.1.9, I do not receive any data in the MIB browse display. Extended statistics in awcTpFdbTable are enabled.
    Thanks in advance
    -Bill
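
For anyone who prefers the command line to the web MIB browser, a rough equivalent using net-snmp would be the following (the community string and AP address are placeholders, and SNMP must be enabled on the AP):
snmpwalk -v2c -c public <ap-address> 1.3.6.1.4.1.522.3.12.5.1.9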

  • DYNAMIC_SAMPLING Hint Ignored when CONCATENATION operation is used (10g)

Using 10g (10.2.0.3), I ran into an interesting scenario that really messed up query performance. I found that when the optimizer chose to use CONCATENATION, it ignored the DYNAMIC_SAMPLING hint. Has anyone else run into this, or does anyone know of a work-around?
We legitimately use the DYNAMIC_SAMPLING hint because we have some correlated columns that are used in the WHERE clause, and that is how we get the right cardinality.
    The easiest way to test this is to use this pattern in your WHERE clause:
    WHERE
    text_column like 'A%'
    or text_column like 'B%'
and any_correlated_column = <some value>
    That pattern makes the query eligible for the CONCATENATION operation. To force it for testing purposes you have to use the USE_CONCAT hint.
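As a sketch only (the table and column names below are invented for illustration, not taken from the original system), a test statement of that shape would look like:
select /*+ dynamic_sampling(e 4) use_concat */ *
from encounters e
where (e.diagnosis_code like 'A%'
or e.diagnosis_code like 'B%')
and e.patient_type = 'INPATIENT';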
    Our specific data had to do with healthcare. Specifically looking at a specific diagnosis code for a given patient type (inpatient, emergency, observation, etc.)
    The diagnosis code was heavily correlated with the patient type so initial cardinality estimates were off. The DYNAMIC_SAMPLING hint fixed that until the optimizer chose to use CONCATENATION in which case it drastically underestimated the cardinality and used nested loops where it should not have and query performance was hurt.
    Of course the work-around is to use the NO_MERGE hint, but I really don't like using hints that "force" specific access paths because at some point the data may change such that a merge would be appropriate.
Has anyone tried this on 11g, and is the DYNAMIC_SAMPLING hint preserved there?

    Dion, Thanks for your reply.
    We have a lot of analysts who execute their queries directly (OLAP); that is a conscious choice, and we accept the loss of control over performance and over tuning individual queries with hints.
    I mention that because, in my mind, if Oracle is the one that transforms a single-table query into a CONCATENATION operation, it should pass the DYNAMIC_SAMPLING hint on to each of the factored queries.
    I agree, there is a lot that could be done with hints, but I really try to avoid hints that lock Oracle into a specific execution path. What I like about DYNAMIC_SAMPLING is that I can tell Oracle to challenge its assumptions when I know there are heavily correlated columns in the query.
    When Oracle then samples the data, it still has all of its options as far as valid access paths and join types, so that is a hint I will not necessarily have to revisit. With 11g I could use extended statistics instead, which should yield the same effect; maybe I'll just have to wait for us to upgrade. I really don't want to propagate a lot of hints that I have to rethink on every version upgrade.
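    For illustration, a sketch of that 11g alternative (table and column names are hypothetical, borrowed from the healthcare example above):
    -- Create a column group so the optimizer sees the correlation
    SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
             ownname   => USER,
             tabname   => 'ENCOUNTERS',
             extension => '(DIAGNOSIS_CODE, PATIENT_TYPE)'
           ) AS extension_name
    FROM   dual;
    -- Regather stats so the new extension gets populated
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ENCOUNTERS')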
    Again, thanks for adding to this discussion.

  • Looking for upgrade document

    I know many people have done this upgrade successfully, but I have yet to find a complete upgrade document for Solaris.
    Can anyone send me an upgrade document for going from 10gR2 to 11gR2 (using any method)?
    thanks

    This is quite a comprehensive upgrade document - http://oukc.oracle.com/static11/opn/core11/oracle9i_database/96804//072011_96804_source/index.htm
    One thing that is rarely included in the upgrade documentation, but it is quite useful for a smooth upgrade, is how to start utilizing the new 11g features from the very start. For instance, 11g introduced extended statistics, a very powerful feature, but the upgrade documentation does not say what to do to effectively use that feature in your upgrade. DBAs eventually get to them, but only after being prompted by slow SQLs. It does not have to be like that.
    You can get step-by-step information on how to start using Oracle Extended stats right after the upgrade to 11gR2 on my blog - http://iiotzov.wordpress.com/2011/11/01/get-the-max-of-oracle-11gr2-right-from-the-start-create-relevant-extended-statistics-as-a-part-of-the-upgrade
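    Roughly, the 11gR2 workflow is along these lines (my sketch, not taken verbatim from the blog; 'SALES' is a placeholder table):
    -- Let the optimizer record which column groups the workload would benefit from
    EXEC DBMS_STATS.SEED_COL_USAGE(NULL, NULL, 3600)   -- monitor for the next hour
    -- Review what was captured for a given table ...
    SELECT DBMS_STATS.REPORT_COL_USAGE(USER, 'SALES') FROM dual;
    -- ... then create the recommended column-group extensions and regather stats
    SELECT DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'SALES') FROM dual;
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'SALES')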
    Iordan Iotzov
    http://iiotzov.wordpress.com/

  • Oracle Database Upgradation 10g to 11g in HP unix

    Oracle Database upgrade from 10g to 11g (standalone DB, with ASM implemented) on HP-UX?
    Kindly furnish good links and screenshot documents.
    Edited by: user11937849 on Apr 9, 2012 4:04 AM

    As part of the upgrade, you might research how new Oracle 11g features can help your system run better.
    Your upgraded system can effectively start using extended statistics, a very powerful new 11g feature, immediately after the upgrade. I have a detailed blog post on how to do this with OEM - http://iiotzov.wordpress.com/2011/11/01/get-the-max-of-oracle-11gr2-right-from-the-start-create-relevant-extended-statistics-as-a-part-of-the-upgrade
    Iordan Iotzov
    http://iiotzov.wordpress.com/

  • SRM EXTENDED CLASSIC - INVOICE QTY in Statistics Section of Item

    SRM Gurus,
    I need to select the invoiced quantity of a PO, which is shown in the Statistics section of the item details on the Process Purchase Order SRM screen. The invoices are created in ECC, not in SRM (Extended Classic Scenario).
    Can anyone let me know which table has this info, or how I can get it in the SRM system?
    Will definitely assign points for answers. Thank you.

    Dear Prasad,
    To work on Extended Classic Scenario, you just have to maintain the following configuration in SPRO:
    SAP Supplier Relationship Management > SRM Server > Cross-Application Basic Settings > Activate Extended Classic Scenario
    Check the flag "Extended Classic Scenario Active".
    You don't have to manipulate the "BE_LOG_SYSTEM" field to choose your scenario.
    Regards
    Thiago Salvador

  • Have you tried statistics? (CS3 Extended)...

    a.k.a. the people remover.
    An article in the March Photoshop user talks of Statistics, (File>Scripts>Statistics).
    In it, the author Jim DiVitale took multiple images of an airport corridor/concourse with people randomly walking through the scene. The images, six in all, were put into a folder and the script chosen from the File menu -- in the resulting dialog, the folder with the six images was chosen from the Use: drop-down menu, Attempt to Automatically Align Source Images was checked, and Median was chosen under Stack Mode.
    The images are processed and whatever differs between the images is eliminated (the result is a Smart Object). Pretty amazing. As a photographer I can see some practical use for this.
    Here is a web link for a page I found that (better) describes what I'm talking about.
    http://www.creativetechs.com/iq/photoshop_cs3s_automatic_people_remover.html
    Check it out. Pretty cool.
    Edited by ps1: guess the link (href tag still not usable)

    Try resetting your Photoshop preferences by holding down the control-alt-shift keys simultaneously on the way into Photoshop.
    Try doing a System Restore back to a day and time just prior to when this issue first occurred.
    Try uninstalling and reinstalling Photoshop.

  • Selective XML Index feature is not supported for the current database version , SQL Server Extended Events , Optimizing Reading from XML column datatype

    Team, thanks for looking into this.
    As a last resort in optimizing my stored procedure (below) I wanted to create a selective XML index (normal XML indexes don't seem to improve performance as needed), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version." However, EXECUTE sys.sp_db_selective_xml_index; returns 1, stating that selective XML indexes are enabled on my current database.
    Is there ANY alternative way I can optimize the stored proc below?
    Thanks in advance for your response(s)!
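    Before the proc, one sanity check worth running (my assumption, not something established in this thread: since #XML_Hold is a temp table, the selective XML index would be created in tempdb, so tempdb's compatibility level -- not the user database's -- may be what the error is about):
    SELECT name, compatibility_level
    FROM   sys.databases
    WHERE  name IN (DB_NAME(), N'tempdb');   -- selective XML indexes need 110+
    EXEC sys.sp_db_selective_xml_index;      -- returns 1 when enabled in the current DB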
    /****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- EXEC [dbo].[MN_Process_DDLSchema_Changes]
    ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
    AS
    BEGIN
    SET NOCOUNT ON -- Doesn't have an impact (maybe not on the SQL Server Extended Events sessions being created on the server(s)/DBs)
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    select getdate() as getdate_0
    DECLARE @XML XML , @Prev_Insertion_time DATETIME
    -- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
    SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
    -- PRINT '1'
    CREATE TABLE #Temp
    (
    EventName VARCHAR(100),
    Time_Stamp_EE DATETIME,
    ObjectName VARCHAR(100),
    ObjectType VARCHAR(100),
    DbName VARCHAR(100),
    ddl_Phase VARCHAR(50),
    ClientAppName VARCHAR(2000),
    ClientHostName VARCHAR(100),
    server_instance_name VARCHAR(100),
    ServerPrincipalName VARCHAR(100),
    nt_username VARCHAR(100),
    SqlText NVARCHAR(MAX)
    )
    CREATE TABLE #XML_Hold
    (
    ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY, -- PK necessity for indexing on XML col
    BufferXml XML
    )
    select getdate() as getdate_01
    INSERT INTO #XML_Hold (BufferXml)
    SELECT
    CAST(target_data AS XML) AS BufferXml -- Buffer storage from SQL Extended Event(s); looks like there is a limitation on XML size?? Need to research.
    FROM sys.dm_xe_session_targets xet
    INNER JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
    WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session created within SQL Server Extended Events
    --RETURN
    --SELECT * FROM #XML_Hold
    select getdate() as getdate_1
    -- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
    CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
    FOR
    (
    PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
    )
    --RETURN
    --CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
    --SELECT GETDATE() AS GETDATE_2
    -- RYelugu 03/10/2015 - Creating a secondary XML index doesn't significantly help the query optimizer; instead, creation takes more time. A primary index alone should be good here
    --CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
    --USING XML INDEX [IX_XML_Hold]
    ---- FOR VALUE
    -- --FOR PROPERTY
    -- FOR PATH
    --SELECT GETDATE() AS GETDATE_3
    --PRINT '2'
    -- RETURN
    SELECT GETDATE() GETDATE_3
    INSERT INTO #Temp
    (
    EventName,
    Time_Stamp_EE,
    ObjectName,
    ObjectType,
    DbName,
    ddl_Phase,
    ClientAppName,
    ClientHostName,
    server_instance_name,
    nt_username,
    ServerPrincipalName,
    SqlText
    )
    SELECT
    p.q.value('@name[1]','varchar(100)') AS eventname,
    p.q.value('@timestamp[1]','datetime') AS timestampvalue,
    p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
    p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
    p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
    p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
    p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
    p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
    p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
    p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
    p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
    p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
    FROM #XML_Hold
    CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
    WHERE -- Ryelugu 03/05/2015 - Perf optimize - filtering the buffered XML so as not to look up previously loaded records in the stage table
    p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
    AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every Event records a begin version and a commit version into Buffer ( XML ) we need the committed version
    AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records captured by Replication Monitor ??
    SELECT GETDATE() GETDATE_4
    -- SELECT * FROM #TEMP
    -- SELECT COUNT(*) FROM #TEMP
    -- SELECT GETDATE()
    -- RETURN
    -- PRINT '3'
    --RETURN
    INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
    (
    [UserName]
    ,[DbName]
    ,[ObjectName]
    ,[client_app_name]
    ,[ClientHostName]
    ,[ServerName]
    ,[SQL_TEXT]
    ,[EE_Time_Stamp]
    ,[Event_Name]
    )
    SELECT
    CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
    ELSE T.nt_username
    END
    ,T.DbName
    ,T.objectname
    ,T.clientappname
    ,t.ClientHostName
    ,T.server_instance_name
    ,T.sqltext
    ,T.Time_Stamp_EE
    ,T.eventname
    FROM
    #TEMP T
    /** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from the BUFFER, i.e. on the XML
    -- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
    WHERE ddl_Phase = 'Commit'
    AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - SQL Server may internally create statistics for #Temp tables; we do not want CREATE STATISTICS statements to be logged
    AND ObjectName NOT LIKE '%#%' -- Extended Events also captures the creation statement of any temp table created inside a stored proc; we don't need it though
    AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance optimize
    AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
    WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu: server name needs to be added to the XML (events in session)
    AND MN.[DbName] = T.DbName
    AND MN.[Event_Name] = T.EventName
    AND MN.[ObjectName] = T.ObjectName
    AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
    AND MN.[SQL_TEXT] = T.SqlText -- Ryelugu 03/05/2015 This is a comparison metric as well, but needs a decision on
    -- the performance factor here; will take advice from Lance on whether comparing varchar(max) is a good idea
    )
    **/
    --SELECT GETDATE()
    --PRINT '4'
    --RETURN
    SELECT
    top 100
    [EE_Time_Stamp]
    ,[ServerName]
    ,[DbName]
    ,[Event_Name]
    ,[ObjectName]
    ,[UserName]
    ,[SQL_TEXT]
    ,[client_app_name]
    ,[Created_Date]
    ,[ClientHostName]
    FROM
    [dbo].[MN_DDLSchema_Changes_log]
    ORDER BY [EE_Time_Stamp] desc
    -- select getdate()
    -- ** DELETE EVENTS after logging into Physical table
    -- NEED TO identify whether this @XML can be updated into a physical system table such that previously loaded events are left untouched
    -- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
    -- SELECT @XML
    SELECT GETDATE() GETDATE_5
    END
    GO
    Rajkumar Yelugu

    @@Version:
    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
        May 14 2014 18:34:29
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    (1 row(s) affected)
    Compatibility level is set to 110.
    One of the documented limitations states: XML columns with a depth of more than 128 nested nodes are not supported.
    How do I verify this? Thanks.
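    One rough way to probe it (a sketch: assumes #XML_Hold is in scope and populated, counts element depth only, and uses dynamic SQL because exist() requires a literal XQuery string):
    DECLARE @depth INT = 1, @found INT = 1, @probe NVARCHAR(MAX);
    WHILE @found = 1 AND @depth <= 130
    BEGIN
        -- probe ever-deeper wildcard paths: /*, /*/*, /*/*/*, ...
        SET @probe = N'SELECT @f = MAX(CAST(BufferXml.exist(N'''
                   + REPLICATE(N'/*', @depth)
                   + N''') AS INT)) FROM #XML_Hold;';
        EXEC sp_executesql @probe, N'@f INT OUTPUT', @f = @found OUTPUT;
        IF @found = 1 SET @depth += 1;
    END;
    SELECT @depth - 1 AS max_observed_element_depth;  -- anything over 128 hits the documented limit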
    Rajkumar Yelugu

Maybe you are looking for

  • How can I get the last 18 numbers from this string

    Hi, I'm querying this field from a table. How can I get only the last 18 numbers? The numbers look like this: 500000000818118

  • 1080p HDTV only receiving 720p signal thru HDTV DVR

    I just installed a second HDTV DVR via a 1,000 MHz 2-way splitter.  Everything works fine except the new VIZIO 1080p HDTV is only receiving a 720p signal from the HDTV DVR.  The VIZIO TV auto-senses the incoming signal so there is no setting on the T

  • Still wont open (iTunes 8)

    Since the upgrade, the iTunes icon bounces but won't open. I've tried the different tips posted on the discussions but it still won't open. I have no plugins and I've removed the itunes.plist. Can someone help me or direct me to a topic with the solutio

  • How to cleanup / remove RAC installation (CRS,RDBMS,ASM)

    Hi all, how do I remove or clean up a RAC installation? After I ran OUI deinstall it couldn't finish and got an error... because 1 node of 3 was off (the box). ACS

  • Compile error in hidden module: modMainRoutines

    Dear All, when I generate an EWA report from Solution Manager, MS Word generates the error message below: compile error in hidden module: modMainRoutines. And it only generates up to the System Configuration section. Has anyone had this problem before? Best Regards