Bad CBO statistics
Hi All ,
I have a few questions regarding statistics gathering. Could you please try to answer them?
If query performance is acceptable, should the underlying table's stats be left ungathered (since re-gathering could go either way, i.e. improve or degrade query performance; the existence of lock_table_stats seems to support this)? Do you agree?
How can I confirm that queries are performing slowly because of bad CBO statistics?
Could you please elaborate on "Test with the RULE hint" from [Burleson's post|http://www.dba-oracle.com/t_sql_tuning_tricks.htm]?
Thanks in advance,
Uday
The last thing I would recommend you read about any Oracle topic is something from dba-oracle.com. To better understand this point google the following:
"Kyte" and "Burleson"My generic advice, because in Oracle there are very few absolutes, is that before you make decisions with respect to stats and stats collection you determine how Oracle is using the stats. Not collecting stats works well right up until the point-in-time when the table changes enough that the plan it is generating because a problem rather than a solution. Collecting stats always works provided you collect them properly and don't hit a bug.
The only people whose advice I would recommend you take on this question, Exadata or not, are Jonathan Lewis, Christian Antognini, Tanel Poder, and a few other members of the Oak Table.
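On the question of confirming that bad CBO statistics are the cause of slow queries: a minimal sketch for 10g and later is to check whether the statistics are missing or stale, then compare the optimizer's row estimates with actual rows. The owner and table names below are placeholders.

```sql
-- Are the stats for the tables in the slow query missing or stale?
SELECT owner, table_name, num_rows, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'APP_OWNER'
AND    table_name IN ('ORDERS', 'ORDER_LINES');

-- Compare estimated vs. actual cardinalities: run the slow statement
-- once with the GATHER_PLAN_STATISTICS hint, then display the plan
-- with E-Rows next to A-Rows.
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

If E-Rows and A-Rows differ by orders of magnitude on the driving plan steps, the plan was costed from bad or missing statistics.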
Similar Messages
-
Create new CBO statistics for the tables
Dear All,
I am facing bad performance on the server. In SM50 I see that the read and delete process on table D010LINC takes
a long time. How do I create new CBO statistics for the tables D010TAB and D010INC? Please suggest.
Regards,
Kumar
Hi,
I am facing a problem when saving/activating, so SAP has told me to create new CBO statistics for the tables D010TAB and D010INC.
Now, as you suggested, in transaction DB20:
Table D010LINC
the error "Table D010LINC does not exist in the ABAP Dictionary" appears
Table D010TAB
Statistics are current (|Changes| < 50 %)
New Method C
New Sample Size
Old Method C Date 10.03.2010
Old Sample Size Time 07:39:37
Old Number 51,104,357 Deviation Old -> New 0 %
New Number 51,168,679 Deviation New -> Old 0 %
Inserted Rows 160,770 Percentage Too Old 0 %
Changed Rows 0 Percentage Too Old 0 %
Deleted Rows 96,448 Percentage Too New 0 %
Use O
Active Flag P
Analysis Method C
Sample Size
Please suggest
Regards,
Kumar -
Hi,
In which tables are the CBO statistics stored in Oracle?
OS:solaris 10
Version: 10.2.0.4
In addition (and more importantly):
user_tab_col_statistics
user_part_col_statistics (for partitions)
user_subpart_col_statistics (for subpartitions)
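These views can be queried directly; a quick sketch (the table name is a placeholder):

```sql
-- Column-level statistics for one table in the current schema
SELECT column_name, num_distinct, num_nulls, density, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'EMP';

-- Table- and index-level statistics live in the companion views
SELECT table_name, num_rows, blocks, last_analyzed FROM user_tab_statistics;
SELECT index_name, blevel, clustering_factor FROM user_ind_statistics;
```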
Regards,
Greg Rahn
http://structureddata.org -
Change settings for CBO statistics
Hi ,
Sorry to post this here, but I didn't find the right topic for Oracle issues.
We have a lot of processes here that use 5 indexes, and each time those processes run we first have to drop the existing indexes and re-create them. As 4 of the indexes belong to an InfoCube, the re-creation takes parameters from CBO statistics and from statistics generated via the RSA1 transaction. The last index takes info only from CBO statistics.
What I want to do is generate all indexes using only CBO statistics. I've already looked on OSS, and the only approach I found is to put the new parameters into table DDSTORAGE and use transaction SE14 to regenerate the indexes. But my problem is that I have to do this twice a day, and the info I put in DDSTORAGE is deleted after the first creation.
My question is:
1. Is it normal for DDSTORAGE to behave this way?
2. How can I change the system to always create those indexes with the parameters I want? (I only want to change the INITIAL, NEXT, and MAXEXTENTS parameters, whether via SE14 or RSA1.)
Many thanks to all !!
Daniela Godoi
It depends on which app; it always appears in the menu of the game, and it also depends on what type of setting you are talking about. Greetings, and I hope this answer works.
-
Hi,
in an ECC 6.0 system with Oracle 10.2.0.4 on a Solaris 10 SPARC box, the AFKO table is without statistics. The system was just installed as a homogeneous system copy; during SAPinst, statistics collection was performed and all other tables got statistics.
We have already tried to calculate statistics with the RSANAORA report using "collect"; we also tried "delete" and then "collect", without success. RSANAORA ends in a few seconds with "collect".
If we use BRTOOLS we get the same results, but if we use:
ANALYZE TABLE AFKO COMPUTE STATISTICS;
then AFKO has statistics.
Have you got any idea?
Regards.
Hi,
even though the issue is already "solved", it is possible that the DBSTATC table in your system contains "wrong" information.
In Oracle 10g, ALL tables are supposed to have statistics; the Oracle rule-based optimizer is no longer supported. For that reason, the control table DBSTATC has to be initialized.
I assume that you have an entry in this table that causes BRCONNECT not to calculate statistics on this (and maybe other) tables.
Please review the table; there should not be any entry with the "active" field set to "N" or "R". If there are tables with such a status, someone should know the reason (or the upgrade to 10g was not done following the SAP upgrade guide). You can initialize it as described in the 10g upgrade guide with the script updDBSTATC10.sql from note 819830. -
Best practices for gathering statistics in 10g
I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has automatic statistics gathering, but that doesn't seem to be very effective, as I see some table stats are way out of date.
I have recommended that we have at least a weekly job that gathers stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep them up to date? Are index stats included when using CASCADE?
Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.
Hi,
> Is this the right approach to generate object stats for a schema and keep it up to date?
The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned ANALYZE TABLE and DBMS_UTILITY methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we know, the CBO uses object statistics to choose the best execution plan for all SQL statements.
I spoke with Andrew Holsworth of Oracle Corp SQL Tuning group, and he says that Oracle recommends taking a single, deep sample and keep it, only re-analyzing when there is a chance that would make a difference in execution plans (not the default 20% re-analyze threshold).
I have my detailed notes here:
http://www.dba-oracle.com/art_otn_cbo.htm
As to system stats, oh yes!
By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
No Workload (NW) stats:
CPUSPEEDNW - CPU speed
IOSEEKTIM - The I/O seek time in milliseconds
IOTFRSPEED - The I/O transfer speed in bytes per millisecond
I have my notes here:
http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
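As a concrete sketch of both calls (the schema name and the workload window are placeholders; adjust to your site):

```sql
-- Object statistics for a whole schema, indexes included via CASCADE
BEGIN
  DBMS_STATS.gather_schema_stats(
    ownname          => 'APP_OWNER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/

-- Workload system statistics: bracket a representative period
EXEC DBMS_STATS.gather_system_stats('START');
-- ... let a typical workload run for an hour or more ...
EXEC DBMS_STATS.gather_system_stats('STOP');
```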
Hope this helps. . . .
Don Burleson
Oracle Press author
Author of “Oracle Tuning: The Definitive Reference”
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm -
Hi everybody.
I would like to understand where what we are missing here.
We have several installations running 10g, and its default value for OPTIMIZER_MODE is ALL_ROWS. Well, if there are no statistics for the application tables, the RDBMS uses the rule-based optimizer by default. Ok.
After the statistics are generated (automatic Oracle job), the RDBMS switches to the cost-based optimizer. I can understand that.
The problem is: why do several queries run much slower when using the CBO? When we analyze the execution plans, we see the wrong indexes being used.
The solution I have for now is to set OPTIMIZER_MODE=RULE. Then everything runs smoothly again.
Why does this happen? Shouldn't the CBO, once statistics are generated, find the best execution plan possible? I really can't use the CBO on our sites, because performance is so much worse...
Thanks in advance.
Carlos Inglez
Hi Carlos,
> The solution I have for now is set OPTIMIZER_MODE=RULE. Then everything runs smoothly again.
It's almost always an issue with CBO parameters or CBO statistics.
There are several issues in 10g CBO, and here are my notes:
http://www.dba-oracle.com/t_slow_performance_after_upgrade.htm
Oracle has improved the cost-based Oracle optimizer in 9.0.5 and again in 10g, so you need to take a close look at your environmental parameter settings (init.ora parms) and your optimizer statistics.
- Check optimizer parameters - Ensure that you are using the proper optimizer_mode (default is all_rows) and check optimal settings for optimizer_index_cost_adj (lower from the default of 100) and optimizer_index_caching (set to a higher value than the default).
- Re-set optimizer costing - Consider unsetting CPU-based optimizer costing (the 10g default, a change from 9i). CPU costing is best if you see CPU in your top-5 timed events in your STATSPACK/AWR report, and the 10g default of _optimizer_cost_model=cpu will try to minimize CPU by invoking more full scans, especially in tablespaces with large blocksizes. To return to the 9i CBO I/O-based costing, set the hidden parameter "_optimizer_cost_model"=io
- Verify deprecated parameters - set optimizer_features_enable = 10.2.0.2 and optimizer_mode = FIRST_ROWS_n (or ALL_ROWS for a warehouse), and remove the 9i CHOOSE default.
- Verify quality of CBO statistics - Oracle 10g does automatic statistics collection and your original customized dbms_stats job (with your customized parameters) will be overlaid. You may also see a statistics deficiency (i.e. not enough histograms) causing performance issues. Re-analyze object statistics using dbms_stats and make sure that you collect system statistics.
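To see where those parameters currently stand before changing anything, a simple read-only check is:

```sql
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('optimizer_mode',
                'optimizer_features_enable',
                'optimizer_index_cost_adj',
                'optimizer_index_caching');
```

ISDEFAULT shows whether a value has been altered from the shipped default.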
Hope this helps. . .
Donald K. Burleson
Oracle Press author -
Warnings Pool or cluster table selected to check/collect statistics
Dear all,
I am getting an error in the DB13 backup.
We are using SAP ECC 5 and Oracle 9i on Windows 2003.
On the production server, the UpdateStats job in DB13 suddenly ended with Return code: 0001 Success with warnings
BR0819I Number of pool and cluster tables found in DDNTT for owner SAPPRD: 169
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXB
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXC
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLSP
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLTP
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KAPOL
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KOCLU
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.M_IFLM
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBCLU
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBFCL
And in db02
Missing in R/3 DDIC 11 index
MARA_MEINS
MARA_ZEINR
MCHA_VFDAT
VBRP_ARKTX
VBRP_CHARG
VBRP_FKIMG
VBRP_KZWI1
VBRP_MATKL
VBRP_MATNR
VBRP_SPART
VBRP_WERKS
Please guide me through the steps to build the indexes and to resolve the pool/cluster table problem.
Thanks,
Kumar
> BR0819I Number of pool and cluster tables found in DDNTT for owner SAPPRD: 169
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXB
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXC
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLSP
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLTP
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KAPOL
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KOCLU
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.M_IFLM
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBCLU
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBFCL
Up to Oracle 9i, the rule-based optimizer was still used for pool/cluster tables for reasons of plan stability (e.g. always take the index).
To ensure that this is the case, these tables/indexes must not have CBO statistics.
Therefore these tables are usually excluded from CBO statistics collection via a DBSTATC entry. You can modify this setting in transaction DB21.
> And in db02
>
>
Missing in R/3 DDIC 11 index
> MARA_MEINS
> MARA_ZEINR
> MCHA_VFDAT
> VBRP_ARKTX
> VBRP_CHARG
> VBRP_FKIMG
> VBRP_KZWI1
> VBRP_MATKL
> VBRP_MATNR
> VBRP_SPART
> VBRP_WERKS
Well, these indexes have been set up directly in the database and not (as intended) via SE11. As the indexes use a naming scheme that is not supported by the ABAP Dictionary, the easiest way to get rid of the warnings is to check which columns are covered by the indexes, drop the indexes at the DB level, and recreate them via SE11.
Best regards,
Lars -
One DB instead of two? (performance issue)
Hi guys,
I am currently working on a system that has a not-so-good architecture (in my opinion), and I am trying to find arguments to convince the others that my way is better (or to convince myself that it is not :)).
The big question is this: we have two DBs, one used by an OLTP system and the other by a batch-processing system with a huge amount of data and full table scans (FTS). The first needs no data from the second, but the second needs almost all data from the first (via FTS). Should these two systems share one DB, or should they be on separate DBs so that the huge FTS does not interfere with the OLTP system, extracting the needed data through DB links and then working by itself? (Here I think I may be wrong: any FTS done from the batch DB against the OLTP DB will first load the blocks into the OLTP buffer cache, then move the data over the DB link and put it into the batch DB's buffer cache. Correct?)
Current situation:
We have two databases: CUST (customers) and RATE (calls).
- CUST has a contracts table, contract details, addresses, a lot of configuration tables (ref..., system_settings), and payments. There are around 200 k customers with their related details, and the configuration tables have around 200 rows.
- The RATE DB has the CALLS table, which has around 100 000 mil records.
The DBs communicate with each other using DB links and they are on the same machine
- On CUST DB we have a OLTP system like (this is not using any data from RATE): contracts creations, payments..etc.
- On the RATE DB we have the RATING module, which runs continuously using big amounts of data. The RATE DB needs almost all tables from CUST: full table scans over a DB link.
This is a telephony billing system.
This architecture was done way back, and the idea was to have multiple CUST DBs, each with its own RATE DB. We never reached the point of having multiple CUST DBs, and I believe that now we have other ways to scale: RAC.
Wouldn't it be a better idea to merge these two, so we would not have issues with DB-link waits, CBO statistics over DB links, and execution plans over DB links, given that some data (the entire CUST DB) is used by both DBs?
I believe that the FTS on CUST tables done by the RATE process will not be such a big issue, because the buffers are already in the buffer cache thanks to the OLTP activity. But what about the FTS on the huge CALLS table? Will it interfere with the rest of the system? Would it be a good idea to put the huge table in a different cache?
What do you think?
Edited by: alinux on Jan 9, 2012 5:33 PM
alinux wrote:
> I am not quite sure that I understand how a full table scan works over a db link. Let's say I go to RATE and I issue a FTS on a table from the CUST DB:
> are those blocks put in memory on RATE, on both CUST and RATE, or just in CUST? I thought they were only put in RATE and the buffer cache for CUST was not affected at all. I think I am wrong and the CUST buffer is affected. So you are right, CUST is already affected by this, and grouping the DBs will help in this case because we will not duplicate that table in two memories. We will use data from a buffer cache that is already used by the OLTP system. Correct?
A single large cache is better than multiple smaller caches. It scales and performs better.
If for example, a FTS was "bad" for other sessions, then why does Oracle not implement a buffer cache per session? Or why does the kernel not implement a file system buffer cache per process?
How many SQL shared pool caches does an Oracle instance use? One for all sessions.
I have yet to see server s/w designed to create a small private or dedicated cache per client - as this does not scale. Also, clients often deal with the same data and can benefit from each other's caches. For that basic reason, servers implement a single cache for all clients.
If someone says that this does not suffice and there can be conflict of some kind - then in my mind, that needs to be proven beyond any shadow of a doubt.
And at the same time, reasons must also be supplied that if 2 instances are required due to potential buffer cache conflict, why 2 servers are also not needed to address CPU conflict, memory conflict, conflict for server resources such as mutexes and semaphores and so on.
Why would the conflict (if it exists), be limited to only the buffer cache?
> Merging these two DBs means that I need to increase memory structures.
In what way? It means fewer memory structures. It means a single shared pool. A single buffer cache. A single library cache.
Multiple instances mean duplicating these (fairly sophisticated) memory structures - and as there are now more of these, it means using smaller structures. And a smaller buffer cache for example, is less capable than a larger one. Multiple ones mean more overheads are required to manage access to the memory structure (e.g. semaphores and mutexes).
> in the past I remember that I had issues with the OLTP system in case of too much memory. Do you see an issue here?
Only if there are conflicts in the Oracle instance configuration settings. The buffer cache does not care if the SQL is OLAP or OLTP. It serves as a buffer to the physical data on disk.
I'm not saying that beyond a shadow of a doubt, you should be using a single instance.
I'm saying that a single instance is the norm and is what makes technical sense in terms of server performance and scalability. And if 2 instances are to be used, then there must be sound and factual reasons for that exception. -
Hello,
The customer wants to know whether it's time to reorganize our SAP NetWeaver 7.0 database.
The thing is, the growth of the database is not much, hardly 1 GB per month, but maybe internally the database is fragmented (a Windows term).
So how do we decide whether it is necessary to reorg the database? I know we need to look in transaction DB02, but what exactly in DB02?
Kindly suggest.
Thanks.
> But there is no mention of Database Reorganization anywhere
The EarlyWatch report does mention bad storage quality, both for tables and indexes.
It lists the top 20 tables and indexes with the worst storage quality. I am not sure why your EWA report does not show that, but in our reports (both for BW 3.5 and BI 7) we get this information.
By the way, what are your ST-PI and ST-A/PI versions? There is a bug in 2008_1_640, but after applying one note correction we now get this information again.
Some portion of my latest EWA is given below.
0.0.1 Auxiliary Storage Quality Information
The following table shows the top "regular" (not partitioned, empty, index organized) tables (max. 20) that have more than 1000 blocks with the lowest storage quality (based on the available CBO statistics).
TABLES WITH FRAGMENTATION
Table Name   Rows        Average Row Length   Blocks      Kb Used     Kb Needed    Kb Wasted
BALDAT       10050500    196                  10873585    76235087    1933543      74301543
UCL2040      2707167     218                  1286381     9018862     578974       8439888
DBTABLOG     56460000    271                  2617504     18351413    14997188     3354225
Action: Reorganization
The last column (Kb Wasted) shows how much disk space can be recovered through reorganization.
Caution:
The Oracle bug 5842686 (detailed in SAP Note 821687) may cause calculations to be incorrect for tables with long raw fields. This bug can be fixed as of Oracle Release 10.2.0.2.
The following table shows the top "type normal" indexes (max. 20) that have more than 1000 blocks with the lowest storage quality (based on the available CBO statistics).
Even if your EWA does not provide this information, in my last post there is a link. Click on it and you will get one SQL statement with which you can get this information.
Edited by: Anindya Bose on Aug 25, 2009 6:07 PM -
Oracle SQL Select query takes long time than expected.
Hi,
I am facing a problem with a SQL SELECT statement: it takes a long time to return results from the database.
The query is as follows.
select /*+ rule */ f1.id, f1.fdn, p1.attr_name, p1.attr_value
from fdnmappingtable f1, parametertable p1
where p1.id = f1.id
and f1.object_type = 'ne_sub_type.780'
and f1.id in (select id from fdnmappingtable
              where fdn like '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')
order by f1.id asc
This query takes more than 4 seconds to return results on a system where the DB has been running for more than a month.
The same query takes only a few milliseconds (50-100 ms) on a system where the DB is freshly installed, and the data in the tables is the same in both systems.
Kindly advise what is going wrong.
Regards,
Purushotham
SQL> @/alcatel/omc1/data/query.sql
2 ;
9 rows selected.
Execution Plan
Plan hash value: 3745571015
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT ORDER BY | |
| 2 | NESTED LOOPS | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS FULL | PARAMETERTABLE |
|* 5 | TABLE ACCESS BY INDEX ROWID| FDNMAPPINGTABLE |
|* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
|* 7 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
|* 8 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
Predicate Information (identified by operation id):
5 - filter("F1"."OBJECT_TYPE"='ne_sub_type.780')
6 - access("P1"."ID"="F1"."ID")
7 - filter("FDN" LIKE '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#
8 - access("F1"."ID"="ID")
Note
- rule based optimizer used (consider using cbo)
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
0 bytes sent via SQL*Net to client
0 bytes received via SQL*Net from client
0 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9 rows processed
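Since the plan Note above shows the rule-based optimizer was used (the /*+ rule */ hint forces it), one hedged suggestion is to gather current statistics on the two tables named in the query and re-test without the hint, so the CBO can cost the LIKE filter. A minimal sketch, assuming the tables belong to the current user:

```sql
BEGIN
  DBMS_STATS.gather_table_stats(ownname => USER,
                                tabname => 'FDNMAPPINGTABLE',
                                cascade => TRUE);
  DBMS_STATS.gather_table_stats(ownname => USER,
                                tabname => 'PARAMETERTABLE',
                                cascade => TRUE);
END;
/
```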
SQL> -
Can I write this query in another way (preferably in an optimized manner)?
My database version:
[oracle@localhost ~]$ uname -a
Linux localhost.localdomain 2.6.18-194.17.1.0.1.el5 #1 SMP Wed Sep 29 15:40:03 EDT 2010 i686 i686 i386 GNU/Linux
[oracle@localhost ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Fri Aug 12 04:44:21 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> SELECT * FROM v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL>
Introduction to data and logic:
I have one table called inv_leg_dummy. The main columns to consider are arrival_airport and departure_airport. Say a flight starts from Kolkata (KOL) -> goes to Delhi (DEL) -> goes to Hong Kong (HKG) -> goes to Taiwan (TPE). So in total: KOL -> DEL -> HKG -> TPE
Data will be like:
Arrival Airport Departure Airport
HKG TPE
KOL DEL
DEL HKG
Please note that the order is not as expected; that means the leg where the flight starts (Kolkata) cannot be determined straight away from the arrangement or from any kind of flag.
The main logic is: I first take arrival airport HKG and see if any departure airport exists as HKG; then I take the next one, KOL, and see if any departure airport exists as KOL. You can notice that KOL is only present as an arrival airport, so this is the first leg of the flight journey. By the same logic, I can determine the next leg, DEL (because the flight goes from KOL to DEL)...
I need output like :
ARRIVAL_AIRPORT DEPARTURE_AIRPORT SEQ
HKG TPE 1
DEL HKG 2
KOL DEL 3
KOL 4
So the starting point KOL has the highest sequence (arrival is null), then KOL to DEL, DEL to HKG, and finally HKG to TPE (sequence 1). The sequence may look like reverse order.
Create table and insert scripts:
CREATE TABLE inv_leg_dummy (
carrier VARCHAR2(3) not null,
flt_num VARCHAR2(4) not null,
flt_num_suffix VARCHAR2(1) default ' ' not null,
flt_date DATE not null,
arrival_airport VARCHAR2(5),
departure_airport VARCHAR2(5) not null
);
alter table inv_leg_dummy
add constraint XPKINV_LEG primary key (carrier,flt_num,flt_num_suffix,flt_date,departure_airport);
TRUNCATE table inv_leg_dummy;
INSERT INTO inv_leg_dummy VALUES ('KA',1,' ',to_date('05/23/2011','mm/dd/rrrr'),'HKG','TPE');
INSERT INTO inv_leg_dummy VALUES ('KA',1,' ',to_date('05/23/2011','mm/dd/rrrr'),'KOL','DEL');
INSERT INTO inv_leg_dummy VALUES ('KA',1,' ',to_date('05/23/2011','mm/dd/rrrr'),'DEL','HKG');
INSERT INTO inv_leg_dummy VALUES ('CX',1,' ',to_date('05/22/2011','mm/dd/rrrr'),'HKG','BNE');
INSERT INTO inv_leg_dummy VALUES ('CX',1,' ',to_date('05/22/2011','mm/dd/rrrr'),'BNE','CNS');
Now it's time to show you what I have done:
SQL> ed
Wrote file afiedt.buf
1 SELECT Carrier,
2 Flt_Num,
3 Flt_Date,
4 Flt_num_Suffix,
5 arrival_airport,
6 departure_airport,
7 RANK() OVER(partition by Carrier, Flt_Num, Flt_Date, Flt_num_Suffix ORDER BY Carrier, Flt_Num, Flt_Date, Flt_num_Suffix, SEQ ASC NULLS LAST) SEQ,
8 /* Fetching Maximum leg Seq No excluding Dummy Leg*/
9 max(seq) over(partition by carrier, flt_num, flt_date, flt_num_suffix order by carrier, flt_num, flt_date, flt_num_suffix) max_seq
10 FROM (SELECT k.Carrier,
11 k.Flt_Num,
12 k.Flt_Date,
13 k.Flt_num_Suffix,
14 k.departure_airport,
15 k.arrival_airport,
16 level seq
17 FROM (SELECT
18 l.Carrier,
19 l.Flt_Num,
20 l.Flt_Date,
21 l.Flt_num_Suffix,
22 l.departure_airport,
23 l.arrival_airport
24 FROM inv_leg_dummy l) k
25 START WITH k.departure_airport = case when
26 (select count(*)
27 FROM inv_leg_dummy ifl
28 WHERE ifl.arrival_airport = k.departure_airport
29 AND ifl.flt_num = k.flt_num
30 AND ifl.carrier = k.carrier
31 AND ifl.flt_num_suffix = k.Flt_num_Suffix) = 0 then k.departure_airport end
32 CONNECT BY prior k.arrival_airport = k.departure_airport
33 AND prior k.carrier = k.carrier
34 AND prior k.flt_num = k.flt_num
35 AND prior TRUNC(k.flt_date) =
36 TRUNC(k.flt_date)
37 UNION ALL
38 /* Fetching Dummy Last Leg Information for Leg_Seq No*/
39 SELECT ofl.Carrier,
40 ofl.Flt_Num,
41 ofl.Flt_Date,
42 ofl.Flt_num_Suffix,
43 ofl.arrival_airport as departure_airport,
44 NULL arrival_airport,
45 NULL seq
46 FROM inv_leg_dummy ofl
47 where NOT EXISTS (SELECT 1
48 FROM inv_leg_dummy ifl
49 WHERE ofl.arrival_airport = ifl.departure_airport
50 AND ifl.flt_num = ofl.flt_num
51 AND ifl.carrier = ofl.carrier
52 AND ifl.flt_num_suffix =ofl.Flt_num_Suffix))
53* ORDER BY 1, 2, 3, 4,7
SQL> /
CAR FLT_ FLT_DATE F ARRIV DEPAR SEQ MAX_SEQ
CX 1 22-MAY-11 BNE CNS 1 2
CX 1 22-MAY-11 HKG BNE 2 2
CX 1 22-MAY-11 HKG 3 2
KA 1 23-MAY-11 HKG TPE 1 3
KA 1 23-MAY-11 DEL HKG 2 3
KA 1 23-MAY-11 KOL DEL 3 3
KA 1 23-MAY-11 KOL 4 3
7 rows selected.
SQL> The code is giving the right output, but I feel I have done it in a hard way. Is there any easier/optimized approach to solve the problem?
Hello
I thought I'd run all 3 methods twice with autotrace to get an overview of the execution plans and basic performance metrics. The results are interesting.
OPs method
SQL> set autot on
SQL> SELECT Carrier,
2 Flt_Num,
3 Flt_Date,
4 Flt_num_Suffix,
5 arrival_airport,
6 departure_airport,
7 RANK() OVER(partition by Carrier, Flt_Num, Flt_Date, Flt_num_Suffix ORDER BY Carrier, Flt_Num,
53 ORDER BY 1, 2, 3, 4,7
54 /
CAR FLT_ FLT_DATE F ARRIV DEPAR SEQ MAX_SEQ
CX 1 22-MAY-11 BNE CNS 1 2
CX 1 22-MAY-11 HKG BNE 2 2
CX 1 22-MAY-11 HKG 3 2
KA 1 23-MAY-11 HKG TPE 1 3
KA 1 23-MAY-11 DEL HKG 2 3
KA 1 23-MAY-11 KOL DEL 3 3
KA 1 23-MAY-11 KOL 4 3
7 rows selected.
Execution Plan
Plan hash value: 3680289985
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | WINDOW SORT | |
| 2 | VIEW | |
| 3 | UNION-ALL | |
|* 4 | CONNECT BY WITH FILTERING | |
|* 5 | FILTER | |
|* 6 | TABLE ACCESS FULL | INV_LEG_DUMMY |
| 7 | SORT AGGREGATE | |
|* 8 | TABLE ACCESS BY INDEX ROWID| INV_LEG_DUMMY |
|* 9 | INDEX RANGE SCAN | XPKINV_LEG |
| 10 | NESTED LOOPS | |
| 11 | CONNECT BY PUMP | |
| 12 | TABLE ACCESS BY INDEX ROWID | INV_LEG_DUMMY |
|* 13 | INDEX RANGE SCAN | XPKINV_LEG |
|* 14 | FILTER | |
| 15 | TABLE ACCESS FULL | INV_LEG_DUMMY |
|* 16 | INDEX RANGE SCAN | XPKINV_LEG |
Predicate Information (identified by operation id):
4 - access("L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPORT" AND
"L"."CARRIER"=PRIOR "L"."CARRIER" AND "L"."FLT_NUM"=PRIOR "L"."FLT
_NUM"
AND INTERNAL_FUNCTION(PRIOR TRUNC(INTERNAL_FUNCTION("L"."FLT_DATE"
)))=TR
UNC(INTERNAL_FUNCTION("L"."FLT_DATE")))
5 - filter("L"."DEPARTURE_AIRPORT"=CASE WHEN ( (SELECT COUNT(*)
FROM "INV_LEG_DUMMY" "IFL" WHERE "IFL"."FLT_NUM_SUFFIX"=:B1 AND
"IFL"."FLT_NUM"=:B2 AND "IFL"."CARRIER"=:B3 AND
"IFL"."ARRIVAL_AIRPORT"=:B4)=0) THEN "L"."DEPARTURE_AIRPORT" END )
6 - access("L"."CARRIER"=PRIOR "L"."CARRIER")
8 - filter("IFL"."ARRIVAL_AIRPORT"=:B1)
9 - access("IFL"."CARRIER"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
"IFL"."FLT_NUM_SUFFIX"=:B3)
13 - access("L"."CARRIER"=PRIOR "L"."CARRIER" AND "L"."FLT_NUM"=PRIOR
"L"."FLT_NUM" AND "L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPO
RT")
filter("L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPORT" AND
INTERNAL_FUNCTION(PRIOR TRUNC(INTERNAL_FUNCTION("L"."FLT_DATE")))=
TRUNC(
INTERNAL_FUNCTION("L"."FLT_DATE")))
14 - filter( NOT EXISTS (SELECT 0 FROM "INV_LEG_DUMMY" "IFL" WHERE
"IFL"."FLT_NUM_SUFFIX"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
"IFL"."CARRIER"=:B3 AND "IFL"."DEPARTURE_AIRPORT"=:B4))
16 - access("IFL"."CARRIER"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
"IFL"."FLT_NUM_SUFFIX"=:B3 AND "IFL"."DEPARTURE_AIRPORT"=:B4)
filter("IFL"."DEPARTURE_AIRPORT"=:B1)
Note
- rule based optimizer used (consider using cbo)
Statistics
1 recursive calls
0 db block gets
33 consistent gets
0 physical reads
0 redo size
877 bytes sent via SQL*Net to client
886 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
7 rows processed
SQL> /
CAR FLT_ FLT_DATE F ARRIV DEPAR SEQ MAX_SEQ
CX 1 22-MAY-11 BNE CNS 1 2
CX 1 22-MAY-11 HKG BNE 2 2
CX 1 22-MAY-11 HKG 3 2
KA 1 23-MAY-11 HKG TPE 1 3
KA 1 23-MAY-11 DEL HKG 2 3
KA 1 23-MAY-11 KOL DEL 3 3
KA 1 23-MAY-11 KOL 4 3
7 rows selected.
Execution Plan
Plan hash value: 3680289985
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | WINDOW SORT | |
| 2 | VIEW | |
| 3 | UNION-ALL | |
|* 4 | CONNECT BY WITH FILTERING | |
|* 5 | FILTER | |
|* 6 | TABLE ACCESS FULL | INV_LEG_DUMMY |
| 7 | SORT AGGREGATE | |
|* 8 | TABLE ACCESS BY INDEX ROWID| INV_LEG_DUMMY |
|* 9 | INDEX RANGE SCAN | XPKINV_LEG |
| 10 | NESTED LOOPS | |
| 11 | CONNECT BY PUMP | |
| 12 | TABLE ACCESS BY INDEX ROWID | INV_LEG_DUMMY |
|* 13 | INDEX RANGE SCAN | XPKINV_LEG |
|* 14 | FILTER | |
| 15 | TABLE ACCESS FULL | INV_LEG_DUMMY |
|* 16 | INDEX RANGE SCAN | XPKINV_LEG |
Predicate Information (identified by operation id):
4 - access("L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPORT" AND
"L"."CARRIER"=PRIOR "L"."CARRIER" AND "L"."FLT_NUM"=PRIOR "L"."FLT
_NUM"
AND INTERNAL_FUNCTION(PRIOR TRUNC(INTERNAL_FUNCTION("L"."FLT_DATE"
)))=TR
UNC(INTERNAL_FUNCTION("L"."FLT_DATE")))
5 - filter("L"."DEPARTURE_AIRPORT"=CASE WHEN ( (SELECT COUNT(*)
FROM "INV_LEG_DUMMY" "IFL" WHERE "IFL"."FLT_NUM_SUFFIX"=:B1 AND
"IFL"."FLT_NUM"=:B2 AND "IFL"."CARRIER"=:B3 AND
"IFL"."ARRIVAL_AIRPORT"=:B4)=0) THEN "L"."DEPARTURE_AIRPORT" END )
6 - access("L"."CARRIER"=PRIOR "L"."CARRIER")
8 - filter("IFL"."ARRIVAL_AIRPORT"=:B1)
9 - access("IFL"."CARRIER"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
"IFL"."FLT_NUM_SUFFIX"=:B3)
13 - access("L"."CARRIER"=PRIOR "L"."CARRIER" AND "L"."FLT_NUM"=PRIOR
"L"."FLT_NUM" AND "L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPO
RT")
filter("L"."DEPARTURE_AIRPORT"=PRIOR "L"."ARRIVAL_AIRPORT" AND
INTERNAL_FUNCTION(PRIOR TRUNC(INTERNAL_FUNCTION("L"."FLT_DATE")))=
TRUNC(
INTERNAL_FUNCTION("L"."FLT_DATE")))
14 - filter( NOT EXISTS (SELECT 0 FROM "INV_LEG_DUMMY" "IFL" WHERE
"IFL"."FLT_NUM_SUFFIX"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
"IFL"."CARRIER"=:B3 AND "IFL"."DEPARTURE_AIRPORT"=:B4))
16 - access("IFL"."CARRIER"=:B1 AND "IFL"."FLT_NUM"=:B2 AND
"IFL"."FLT_NUM_SUFFIX"=:B3 AND "IFL"."DEPARTURE_AIRPORT"=:B4)
filter("IFL"."DEPARTURE_AIRPORT"=:B1)
Note
- rule based optimizer used (consider using cbo)
Statistics
0 recursive calls
0 db block gets
33 consistent gets
0 physical reads
0 redo size
877 bytes sent via SQL*Net to client
886 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
7 rows processed
My method
SQL> SELECT
2 carrier,
3 flt_num,
4 flt_num_suffix,
5 flt_date,
6 arrival_airport,
7 departure_airport,
8 COUNT(*) OVER(PARTITION BY carrier,
9 flt_num
10 ) - LEVEL + 1 seq,
11 COUNT(*) OVER(PARTITION BY carrier,
12 flt_num
13 ) - 1 max_seq
57 /
CAR FLT_ F FLT_DATE ARRIV DEPAR SEQ MAX_SEQ
CX 1 22-MAY-11 BNE CNS 1 2
CX 1 22-MAY-11 HKG BNE 2 2
CX 1 22-MAY-11 HKG 3 2
KA 1 23-MAY-11 HKG TPE 1 3
KA 1 23-MAY-11 DEL HKG 2 3
KA 1 23-MAY-11 KOL DEL 3 3
KA 1 23-MAY-11 KOL 4 3
7 rows selected.
Execution Plan
Plan hash value: 921778235
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT ORDER BY | |
| 2 | WINDOW SORT | |
|* 3 | CONNECT BY NO FILTERING WITH START-WITH| |
| 4 | COUNT | |
| 5 | VIEW | |
| 6 | UNION-ALL | |
| 7 | TABLE ACCESS FULL | INV_LEG_DUMMY |
|* 8 | FILTER | |
| 9 | TABLE ACCESS FULL | INV_LEG_DUMMY |
|* 10 | INDEX RANGE SCAN | XPKINV_LEG |
Predicate Information (identified by operation id):
3 - access("ARRIVAL_AIRPORT"=PRIOR "DEPARTURE_AIRPORT" AND
"CARRIER"=PRIOR "CARRIER" AND "FLT_NUM"=PRIOR "FLT_NUM" AND
TRUNC(INTERNAL_FUNCTION("FLT_DATE"))=INTERNAL_FUNCTION(PRIOR
TRUNC(INTERNAL_FUNCTION("FLT_DATE"))))
filter("ARRIVAL_AIRPORT" IS NULL)
8 - filter( NOT EXISTS (SELECT 0 FROM "INV_LEG_DUMMY" "DL" WHERE
"DL"."FLT_NUM"=:B1 AND "DL"."CARRIER"=:B2 AND
"DL"."DEPARTURE_AIRPORT"=:B3 AND "DL"."FLT_DATE"=:B4))
10 - access("DL"."CARRIER"=:B1 AND "DL"."FLT_NUM"=:B2 AND
"DL"."FLT_DATE"=:B3 AND "DL"."DEPARTURE_AIRPORT"=:B4)
filter("DL"."DEPARTURE_AIRPORT"=:B1 AND "DL"."FLT_DATE"=:B2)
Note
- rule based optimizer used (consider using cbo)
Statistics
1 recursive calls
0 db block gets
19 consistent gets
0 physical reads
0 redo size
877 bytes sent via SQL*Net to client
338 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
7 rows processed
SQL> /
CAR FLT_ F FLT_DATE ARRIV DEPAR SEQ MAX_SEQ
CX 1 22-MAY-11 BNE CNS 1 2
CX 1 22-MAY-11 HKG BNE 2 2
CX 1 22-MAY-11 HKG 3 2
KA 1 23-MAY-11 HKG TPE 1 3
KA 1 23-MAY-11 DEL HKG 2 3
KA 1 23-MAY-11 KOL DEL 3 3
KA 1 23-MAY-11 KOL 4 3
7 rows selected.
Execution Plan
Plan hash value: 921778235
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT ORDER BY | |
| 2 | WINDOW SORT | |
|* 3 | CONNECT BY NO FILTERING WITH START-WITH| |
| 4 | COUNT | |
| 5 | VIEW | |
| 6 | UNION-ALL | |
| 7 | TABLE ACCESS FULL | INV_LEG_DUMMY |
|* 8 | FILTER | |
| 9 | TABLE ACCESS FULL | INV_LEG_DUMMY |
|* 10 | INDEX RANGE SCAN | XPKINV_LEG |
Predicate Information (identified by operation id):
3 - access("ARRIVAL_AIRPORT"=PRIOR "DEPARTURE_AIRPORT" AND
"CARRIER"=PRIOR "CARRIER" AND "FLT_NUM"=PRIOR "FLT_NUM" AND
TRUNC(INTERNAL_FUNCTION("FLT_DATE"))=INTERNAL_FUNCTION(PRIOR
TRUNC(INTERNAL_FUNCTION("FLT_DATE"))))
filter("ARRIVAL_AIRPORT" IS NULL)
8 - filter( NOT EXISTS (SELECT 0 FROM "INV_LEG_DUMMY" "DL" WHERE
"DL"."FLT_NUM"=:B1 AND "DL"."CARRIER"=:B2 AND
"DL"."DEPARTURE_AIRPORT"=:B3 AND "DL"."FLT_DATE"=:B4))
10 - access("DL"."CARRIER"=:B1 AND "DL"."FLT_NUM"=:B2 AND
"DL"."FLT_DATE"=:B3 AND "DL"."DEPARTURE_AIRPORT"=:B4)
filter("DL"."DEPARTURE_AIRPORT"=:B1 AND "DL"."FLT_DATE"=:B2)
Note
- rule based optimizer used (consider using cbo)
Statistics
0 recursive calls
0 db block gets
19 consistent gets
0 physical reads
0 redo size
877 bytes sent via SQL*Net to client
338 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
7 rows processed
Salim Chelabi's method
SQL> WITH t AS
2 (SELECT k.*, LEVEL lvl
3 FROM inv_leg_dummy k
4 CONNECT BY PRIOR k.arrival_airport = k.departure_airport
5 AND PRIOR k.flt_date = k.flt_date
6 AND PRIOR k.carrier = k.carrier
7 AND PRIOR k.flt_num = k.flt_num)
8 SELECT carrier, flt_num, flt_num_suffix, flt_date, arrival_airport,
9 departure_airport, MAX (lvl) seq,
10 MAX (MAX (lvl)) OVER (PARTITION BY carrier, flt_num, flt_num_suffix)
11 max_seq
12 FROM t
13 GROUP BY carrier,
14 flt_num,
15 flt_num_suffix,
16 flt_date,
17 arrival_airport,
18 departure_airport
19 UNION ALL
20 SELECT carrier, flt_num, flt_num_suffix, flt_date, NULL,
21 MAX (arrival_airport), MAX (lvl) + 1 seq, MAX (lvl) max_seq
22 FROM t
23 GROUP BY carrier, flt_num, flt_num_suffix, flt_date
24 ORDER BY 1, 2, 3, seq, arrival_airport NULLS LAST;
CAR FLT_ F FLT_DATE ARRIV DEPAR SEQ MAX_SEQ
CX 1 22/05/2011 00:00:00 BNE CNS 1 2
CX 1 22/05/2011 00:00:00 HKG BNE 2 2
CX 1 22/05/2011 00:00:00 HKG 3 2
KA 1 23/05/2011 00:00:00 HKG TPE 1 3
KA 1 23/05/2011 00:00:00 DEL HKG 2 3
KA 1 23/05/2011 00:00:00 KOL DEL 3 3
KA 1 23/05/2011 00:00:00 KOL 4 3
7 rows selected.
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 2360206974
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | TEMP TABLE TRANSFORMATION | |
| 2 | LOAD AS SELECT | |
|* 3 | CONNECT BY WITHOUT FILTERING| |
| 4 | TABLE ACCESS FULL | INV_LEG_DUMMY |
| 5 | SORT ORDER BY | |
| 6 | UNION-ALL | |
| 7 | WINDOW BUFFER | |
| 8 | SORT GROUP BY | |
| 9 | VIEW | |
| 10 | TABLE ACCESS FULL | SYS_TEMP_0FD9FE280_59EF9B75 |
| 11 | SORT GROUP BY | |
| 12 | VIEW | |
| 13 | TABLE ACCESS FULL | SYS_TEMP_0FD9FE280_59EF9B75 |
Predicate Information (identified by operation id):
3 - access("K"."DEPARTURE_AIRPORT"=PRIOR "K"."ARRIVAL_AIRPORT" AND
"K"."FLT_DATE"=PRIOR "K"."FLT_DATE" AND "K"."CARRIER"=PRIOR
"K"."CARRIER" AND "K"."FLT_NUM"=PRIOR "K"."FLT_NUM")
Note
- rule based optimizer used (consider using cbo)
Statistics
57 recursive calls
10 db block gets
25 consistent gets
1 physical reads
1556 redo size
877 bytes sent via SQL*Net to client
338 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
7 rows processed
SQL> /
CAR FLT_ F FLT_DATE ARRIV DEPAR SEQ MAX_SEQ
CX 1 22/05/2011 00:00:00 BNE CNS 1 2
CX 1 22/05/2011 00:00:00 HKG BNE 2 2
CX 1 22/05/2011 00:00:00 HKG 3 2
KA 1 23/05/2011 00:00:00 HKG TPE 1 3
KA 1 23/05/2011 00:00:00 DEL HKG 2 3
KA 1 23/05/2011 00:00:00 KOL DEL 3 3
KA 1 23/05/2011 00:00:00 KOL 4 3
7 rows selected.
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 4065363664
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | TEMP TABLE TRANSFORMATION | |
| 2 | LOAD AS SELECT | |
|* 3 | CONNECT BY WITHOUT FILTERING| |
| 4 | TABLE ACCESS FULL | INV_LEG_DUMMY |
| 5 | SORT ORDER BY | |
| 6 | UNION-ALL | |
| 7 | WINDOW BUFFER | |
| 8 | SORT GROUP BY | |
| 9 | VIEW | |
| 10 | TABLE ACCESS FULL | SYS_TEMP_0FD9FE281_59EF9B75 |
| 11 | SORT GROUP BY | |
| 12 | VIEW | |
| 13 | TABLE ACCESS FULL | SYS_TEMP_0FD9FE281_59EF9B75 |
Predicate Information (identified by operation id):
3 - access("K"."DEPARTURE_AIRPORT"=PRIOR "K"."ARRIVAL_AIRPORT" AND
"K"."FLT_DATE"=PRIOR "K"."FLT_DATE" AND "K"."CARRIER"=PRIOR
"K"."CARRIER" AND "K"."FLT_NUM"=PRIOR "K"."FLT_NUM")
Note
- rule based optimizer used (consider using cbo)
Statistics
2 recursive calls
8 db block gets
15 consistent gets
1 physical reads
604 redo size
877 bytes sent via SQL*Net to client
338 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
7 rows processed
SQL> Personally I think Salim's method is very succinct, and I had expected more of a difference in performance metrics between it and my attempt, but it appears there's not much between the two, although Salim's method generates redo as a result of the temp table created by the subquery factoring. I'd be interested to see the results of a full trace of both.
Either way though, there are two alternatives which seem a fair bit more optimal than the original SQL so it's quids in I guess! :-)
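For anyone wanting to run that full-trace comparison, a minimal sketch using the standard 10046 event at level 8 (the tracefile identifier is just an illustrative label, not anything from the thread):

```sql
-- Trace one method, then repeat with the other in a fresh session.
ALTER SESSION SET tracefile_identifier = 'connect_by_test';  -- illustrative label
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- ... run the candidate query here ...
ALTER SESSION SET events '10046 trace name context off';
-- then format the trace file from user_dump_dest with tkprof
```

tkprof on the resulting trace file shows per-call CPU, elapsed time and wait events, which autotrace does not.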
David
Edited by: Bravid on Aug 12, 2011 3:24 PM
Edited by: Bravid on Aug 12, 2011 3:27 PM
Updated the comparison with Salim's additional column.
High library cache load lock waits in AWR
Hi All,
Today I faced a significant performance problem related to the shared pool. I made some observations and thought it would be a good idea to share them with Oracle experts. Please feel free to add your observations/recommendations and correct me where I am wrong.
Here are the excerpts from AWR report created for the problem timing. Database server is on 10.2.0.3 and running with 2*16 configuration. DB cache size is 4,000M and shared pool size is of 3008M.
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 9994 29-Jun-09 10:00:07 672 66.3
End Snap: 10001 29-Jun-09 17:00:49 651 64.4
Elapsed: 420.70 (mins)
DB Time: 4,045.34 (mins)
-- Very poor response time, visible from the difference between DB Time and elapsed time.
Load Profile
Per Second Per Transaction
Redo size: 248,954.70 23,511.82
Logical reads: 116,107.04 10,965.40
Block changes: 1,357.13 128.17
Physical reads: 125.49 11.85
Physical writes: 51.49 4.86
User calls: 224.69 21.22
Parses: 235.22 22.21
Hard parses: 4.83 0.46
Sorts: 102.94 9.72
Logons: 1.12 0.11
Executes: 821.11 77.55
Transactions: 10.59
-- User calls and parse count are almost the same, meaning most of the calls are parse calls. Most of the parses are soft. 22 parses per transaction is a very high figure.
-- Not much disk I/O activity; most reads are being satisfied from memory.
Instance Efficiency
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.92 In-memory Sort %: 100.00
Library Hit %: 98.92 Soft Parse %: 97.95
Execute to Parse %: 71.35 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 16.82 % Non-Parse CPU: 91.41
-- The low execute-to-parse ratio shows the CPU is significantly busy parsing. Soft Parse % shows most parses are soft, so we should concentrate on soft parsing activity.
-- Parse CPU to Parse Elapsed % is quite low, meaning there is some bottleneck related to parsing. It could be a side effect of heavy parsing pressure, e.g. CPU cycles not being available.
Shared Pool Statistics
Begin End
Memory Usage %: 81.01 81.92
% SQL with executions>1: 88.51 86.93
% Memory for SQL w/exec>1: 86.16 86.76
-- Shared pool memory usage seems OK (in the 80% range).
-- 88% of the SQLs are repeated ones. That's a good sign.
Top 5 Timed Events
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
library cache load lock 24,243 64,286 2,652 26.5 Concurrency
db file sequential read 1,580,769 42,267 27 17.4 User I/O
CPU time 33,039 13.6
latch: library cache 53,013 29,194 551 12.0 Concurrency
db file scattered read 151,669 13,550 89 5.6 User I/O
Problem 1: Contention on the library cache. This may be due to an undersized shared pool, incorrect parameters, or poor application design. But since we already observed that most parses are soft and shared pool usage is around 80%, the problem seems related to cursor caching; open_cursors/session_cached_cursors are the red flags.
Problem 2: User I/O, which may be due to poor SQL, the I/O subsystem, or poor physical design (the db file sequential reads suggest the wrong indexes are being used).
Wait Class
Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
Concurrency 170,577 44.58 109,020 639 0.64
User I/O 2,001,978 0.00 59,662 30 7.49
System I/O 564,771 0.00 8,069 14 2.11
Application 145,106 1.25 6,352 44 0.54
Commit 176,671 0.37 4,528 26 0.66
Other 27,557 6.31 2,532 92 0.10
Network 6,862,704 0.00 696 0 25.68
Configuration 3,858 3.71 141 37 0.01
Wait Events
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
library cache load lock 24,243 83.95 64,286 2652 0.09
db file sequential read 1,580,769 0.00 42,267 27 5.91
latch: library cache 53,013 0.00 29,194 551 0.20
db file scattered read 151,669 0.00 13,550 89 0.57
latch: shared pool 25,403 0.00 12,969 511 0.10
log file sync 176,671 0.37 4,528 26 0.66
enq: TM - contention 1,455 90.93 3,975 2732 0.01
Instance Activity Stats
opened cursors cumulative 5,290,760 209.60 19.80
parse count (failures) 6,181 0.24 0.02
parse count (hard) 121,841 4.83 0.46
parse count (total) 5,937,336 235.22 22.21
parse time cpu 283,787 11.24 1.06
parse time elapsed 1,687,096 66.84 6.31
Latch Activity
library cache 85,042,375 0.15 0.43 29194 304,831 7.16
library cache load lock 257,089 0.00 1.20 0 69,065 0.00
library cache lock 41,467,300 0.02 0.07 6 2,714 0.07
library cache lock allocation 730,422 0.00 0.44 0 0
library cache pin 28,453,986 0.01 0.16 8 167 0.00
library cache pin allocation 509,000 0.00 0.38 0 0
Init.ora parameters
cursor_sharing= EXACT
open_cursors= 3000
session_cached_cursors= 0
-- The open_cursors value is too high. I have checked that the maximum usage by a single session is 12%.
-- session_cached_cursors is 0, which makes every soft parse go back to the library cache. 500-600 is a good number to start with.
-- cursor_sharing=EXACT may cause hard parses, but here hard parsing is comparatively small, so we can ignore it.
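A sketch of the change suggested above (the value 500 is the starting point mentioned in the thread, not a tested recommendation):

```sql
-- session_cached_cursors is not dynamically changeable at system level in 10g,
-- so this takes effect on the next instance restart.
ALTER SYSTEM SET session_cached_cursors = 500 SCOPE = SPFILE;

-- Afterwards, compare these session statistics to see how often the cache is hit:
SELECT n.name, s.value
  FROM v$statname n JOIN v$sesstat s ON s.statistic# = n.statistic#
 WHERE s.sid = SYS_CONTEXT('USERENV', 'SID')
   AND n.name IN ('parse count (total)', 'session cursor cache hits');
```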
From v$librarycache
NAMESPACE GETS GETHITS GETHITRATIO PINS PINHITRATIO RELOADS INVALIDATIONS
SQL AREA 162827 25127 .154317159 748901435 .999153087 107941 81886
-- High invalidation count due to DDL-like activities.
-- High reloads due to a small library cache.
-- Hit ratio is too small.
-- We need to pin frequently executed objects into the library cache.
P.S. The same question was asked on Oracle-L, but for formatting reasons I am pasting duplicate content here.
Regards,
Neeraj Bhatia
Edited by: Neeraj.Bhatia2 on Jul 13, 2009 6:51 AM
Thanks Charles. I really appreciate your efforts to diagnose the issue.
I agree with you that the performance issue is caused by soft parsing, which can be addressed by caching cursors (session_cached_cursors). It may also be due to an oversized shared pool, which causes delays when searching for child cursors.
My second thought is that the large number of reloads can be due to an undersized shared pool: if invalidation activity (CBO statistics collection, DDL, etc.) is not the cause, cursors are being flushed frequently.
CPU utilization is continuously high (above 90%). I am pasting additional information from the same AWR report.
Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invalidations
BODY 225,345 0.76 4,965,541 0.15 5,533 0
CLUSTER 1,278 1.41 2,542 1.73 26 0
INDEX 5,982 9.31 13,922 7.35 258 0
SQL AREA 141,465 54.10 27,831,235 1.21 69,863 19,085
Latch Miss Sources
Latch Name Where NoWait Misses Sleeps Waiter Sleeps
library cache lock kgllkdl: child: no lock handle 0 8,250 5,792
Time Model Statistics
Statistic Name Time (s) % of DB Time
sql execute elapsed time 206,979.31 85.27
PL/SQL execution elapsed time 94,651.78 39.00
DB CPU 33,039.29 13.61
parse time elapsed 22,635.47 9.33
inbound PL/SQL rpc elapsed time 14,763.48 6.08
hard parse elapsed time 14,136.77 5.82
connection management call elapsed time 1,625.07 0.67
PL/SQL compilation elapsed time 760.76 0.31
repeated bind elapsed time 664.81 0.27
hard parse (sharing criteria) elapsed time 500.11 0.21
Java execution elapsed time 252.95 0.10
failed parse elapsed time 167.23 0.07
hard parse (bind mismatch) elapsed time 124.11 0.05
sequence load elapsed time 23.34 0.01
DB time 242,720.12
background elapsed time 11,645.52
background cpu time 247.25
According to this, DB CPU is at 65% utilization ((DB CPU + background CPU) / total available CPU seconds), while at the same time the DB host was 95% utilized (confirmed from DBA_HIST_SYSMETRIC_SUMMARY).
Operating System Statistics
Statistic Total
BUSY_TIME 3,586,030
IDLE_TIME 1,545,064
IOWAIT_TIME 22,237
NICE_TIME 0
SYS_TIME 197,661
USER_TIME 3,319,452
LOAD 11
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 867,180
NUM_CPUS 2
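As a cross-check on those figures, the host-busy fraction over the whole snapshot window can be computed directly from the BUSY_TIME and IDLE_TIME values above (hard-coded here from the report):

```sql
SELECT ROUND(3586030 / (3586030 + 1545064) * 100, 1) AS pct_busy FROM dual;
-- about 70% busy averaged over the 7-hour window; sustained 95% peaks
-- within it are still consistent with this average
```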
How to replace sapdba with brtools after upgrade database to 10.2
All,
I have updated our database to Oracle 10.2; my BRTOOLS version is now 7.0,
but I can't run analyze table and dbcheck in DB13;
they seem to still use SQLDBA when running analyze table and dbcheck.
Please refer to the information below.
Detail log: 0810080300.aly
***** SAPDBA - SAP Database Administration for ORACLE *****
SAPDBA V6.10 Analyze tables
SAPDBA release: 6.10
Patch level : 1
Patch date : 2001-05-25
ORACLE_SID : PRD
ORACLE_HOME : /oracle/PRD/102_64
ORACLE VERSION: 10.2.0.2.0
Database state: 'open'
SAPPRD : 46C
SAPDBA DB USER: (-u option)
OS login user : prdadm
OS eff. user : prdadm
SYSDBA priv.: not checked
SYSOPER priv.: not checked
Command line : sapdba -u / -analyze DBSTATCO
HOST NAME : sapprd1
OS SYSTEM : HP-UX
OS RELEASE : B.11.31
OS VERSION : U
MACHINE : ia64
Log file : '/oracle/PRD/sapcheck/0810080300.aly'
Log start date: '2008-10-08'
Log start time: '03.00.09'
----- Start of deferred log ---
SAPDBA: Can't find the executable for SQLDBA/SVRMGR. Please, install one of
them or enter one of them in the SAPDBA profile (parameter
sqldba_path).
(2008-10-08 03.00.06)
SAPDBA: Error - running temporary sql script
'/oracle/PRD/sapreorg/dbacmd.sql' with contents:
CONNECT /******** AS SYSDBA
SAPDBA: Couldn't check SYSDBA privilege.
SAPDBA: Can't find the executable for SQLDBA/SVRMGR. Please, install one of
them or enter one of them in the SAPDBA profile (parameter
sqldba_path).
(2008-10-08 03.00.06)
SAPDBA: Error - running temporary sql script
'/oracle/PRD/sapreorg/dbacmd.sql' with contents:
CONNECT /******** AS SYSOPER
SAPDBA: Couldn't check SYSOPER privilege.
----- End of deferred log ---
Analyze parameters:
Object: All tables in table DBSTATC ( for DB optimization run )
Method: E ( Default )
Option: P10 ( Default )
Time frame: 100 hours
Refresh : All objects
Option: DBSTATCO ( for the DB optimizer: Tables with Flag DBSTATC-TOBDO = 'X' )
** Refresh Statistics according control table DBSTATC **
Total Number of Tables in DBSTATC to be analyzed: 170
Number of Tables with forced statistics update (ACTIV = 'U'): 0
SAPDBA: SELECT USER# FROM SYS.USER$ WHERE NAME='SAPPRD'
ORA-00942: table or view does not exist
(2008-10-08 03.00.09)
SAPDBA: Error - getting size of segment 'SAPPRD.D010INC'
SAPDBA: Error - during table analysis - table name: ->D010INC
SAPDBA: No tables analyzed ( No entries in DBSTATC with TOBDO = X or errors ).
SAPDBA: 0 table(s) out of 170 was (were) analyzed
Difference may be due to:
- Statistics not allowed ( see DBSTATC in CCMS )
- Tables do not exist on database and were skipped
Detailed summary of Step 1:
Number of Tables that needed new statistics according to DBSTATC: 1
Number of Tables marked in DBSTATC, but non-existent on the Database: 0
Number of Tables where the statistics flag was resetted: 0
******* Creating statistics for all tables without optimizer statistics *******
SAPDBA: Using control table DBSTATC
for taking optimizer settings into account
SAPDBA: 0 table(s) without statistics were found.
SAPDBA: 0 table(s) ( out of 0 ) was (were) analyzed/refreshed.
0 table(s) was (were) explicitely excluded or pool/cluster table(s).
SAPDBA: 0 index(es) without statistics was (were) found.
SAPDBA: 0 index(es) ( out of 0 ) was (were) analyzed/refreshed.
0 index(es) was (were) explicitely excluded or pool/cluster indexe(s).
SAPDBA: 157 table statistics from 157 tables were dropped.
They are either explicitely excluded in DBSTATC,
or R/3 Pool- or Cluster- tables
that must not have CBO Statistics
SAPDBA: The whole operation took 10 sec
SAPDBA: Step 1 was finished unsuccessfully
SAPDBA: Step 2 was finished successfully
Date: 2008-10-08
Time: 03.00.19
*********************** End of SAPDBA statistics report ****************
How do I replace sapdba with brtools? Please give me support, thanks.
Best Regards,
Mr.chen
> I have update our database to oracle 10.2,my brtools version is 7.0 now ,
> but can't run analyze table and dbcheck in db13,
> seems to still use SQLDBA when run analyze table and dbcheck.
Yes, it does so, because somebody forgot to upgrade the BASIS SP as well...
What BASIS SP are you using?
regards
Lars -
Oracle 11g with OPTIMIZER_MODE=RULE go faster!!
I recently migrated from Oracle 9.2.0.8 to Oracle 11g, but the queries don't perform as I hoped.
The same query takes approx. 3:20 min using optimizer_mode=ALL_ROWS and 0:20 using optimizer_mode=RULE or the RULE hint.
Under the CBO, the query joins the table's primary key index against itself (the plan shows a MERGE JOIN ANTI over two scans of the same index).
This is one query and the "autotrace on" log on Oracle 11g:
SELECT /*+ NO_INDEX (PK0004111303310) */MIN(BASE.ID_SCHED_TASK)+1 I
FROM M4RJS_SCHED_TASKS BASE
WHERE NOT EXISTS
(SELECT BASE2.ID_SCHED_TASK
FROM M4RJS_SCHED_TASKS BASE2
WHERE BASE2.ID_SCHED_TASK>BASE.ID_SCHED_TASK
AND BASE2.ID_SCHED_TASK<BASE.ID_SCHED_TASK+2)
ORDER BY 1 ASC
Execution Plan
Plan hash value: 3937517195
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 14 | | 328 (2)| 00:00:04 |
| 1 | SORT AGGREGATE | | 1 | 14 | | | |
| 2 | MERGE JOIN ANTI | | 495 | 6930 | | 328 (2)| 00:00:04 |
| 3 | INDEX FULL SCAN | PK0004111303310 | 49487 | 338K| | 119 (1)| 00:00:02 |
|* 4 | FILTER | | | | | | |
|* 5 | SORT JOIN | | 49487 | 338K| 1576K| 209 (2)| 00:00:03 |
| 6 | INDEX FAST FULL SCAN| PK0004111303310 | 49487 | 338K| | 33 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter("BASE2"."ID_SCHED_TASK"<"BASE"."ID_SCHED_TASK"+2)
5 - access("BASE2"."ID_SCHED_TASK">"BASE"."ID_SCHED_TASK")
filter("BASE2"."ID_SCHED_TASK">"BASE"."ID_SCHED_TASK")
Statistics
1 recursive calls
0 db block gets
242 consistent gets
8 physical reads
0 redo size
519 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
Thanks to all!
Sorry Mschnatt, I posted the wrong query; I was testing with hints. The correct query is the one you posted.
1* I analyzed the tables and the result is the same:
This is the query and "autotrace on" log using OPTIMIZER_MODE=RULE on Oracle 11g:
SQL> R
1 SELECT MIN(BASE.ID_SCHED_TASK)+1 I
2 FROM M4RJS_SCHED_TASKS BASE
3 WHERE NOT EXISTS
4 (SELECT BASE2.ID_SCHED_TASK
5 FROM M4RJS_SCHED_TASKS BASE2
6 WHERE BASE2.ID_SCHED_TASK>BASE.ID_SCHED_TASK
7 AND BASE2.ID_SCHED_TASK<BASE.ID_SCHED_TASK+2)
8* ORDER BY 1 ASC
I
2
Elapsed: 00:00:00.33
Execution Plan
Plan hash value: 795265574
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
|* 2 | FILTER | |
| 3 | TABLE ACCESS FULL | M4RJS_SCHED_TASKS |
|* 4 | INDEX RANGE SCAN | PK0004111303310 |
Predicate Information (identified by operation id):
2 - filter( NOT EXISTS (SELECT 0 FROM "M4RJS_SCHED_TASKS" "BASE2"
WHERE "BASE2"."ID_SCHED_TASK"<:B1+2 AND "BASE2"."ID_SCHED_TASK">:B2))
4 - access("BASE2"."ID_SCHED_TASK">:B1 AND
"BASE2"."ID_SCHED_TASK"<:B2+2)
Note
- rule based optimizer used (consider using cbo)
Statistics
0 recursive calls
0 db block gets
101509 consistent gets
0 physical reads
0 redo size
519 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
This is the query and "autotrace on" log using OPTIMIZER_MODE=ALL_ROWS on Oracle 11g:
Elapsed: 00:03:14.78
Execution Plan
Plan hash value: 3937517195
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 12 | | 317 (2)| 00:00:04 |
| 1 | SORT AGGREGATE | | 1 | 12 | | | |
| 2 | MERGE JOIN ANTI | | 495 | 5940 | | 317 (2)| 00:00:04 |
| 3 | INDEX FULL SCAN | PK0004111303310 | 49487 | 289K| | 119 (1)| 00:00:02 |
|* 4 | FILTER | | | | | | |
|* 5 | SORT JOIN | | 49487 | 289K| 1176K| 198 (3)| 00:00:03 |
| 6 | INDEX FAST FULL SCAN| PK0004111303310 | 49487 | 289K| | 33 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter("BASE2"."ID_SCHED_TASK"<"BASE"."ID_SCHED_TASK"+2)
5 - access("BASE2"."ID_SCHED_TASK">"BASE"."ID_SCHED_TASK")
filter("BASE2"."ID_SCHED_TASK">"BASE"."ID_SCHED_TASK")
Statistics
0 recursive calls
0 db block gets
242 consistent gets
0 physical reads
0 redo size
519 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
3* This is an example query; the problem persists in other, bigger queries.
Thanks for your help
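Since the tables were analyzed but ALL_ROWS still chooses the slow plan, it may be worth re-gathering with DBMS_STATS, since the CBO is designed around DBMS_STATS-gathered statistics rather than ANALYZE. A hedged sketch (table name taken from the query above; the options shown are illustrative defaults, not a tested recommendation):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'M4RJS_SCHED_TASKS',
    cascade          => TRUE,  -- gather index statistics too
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```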