Complex SQL performance issue in 11g
We are facing performance issues with a complex SQL statement in version 11.2, although in 11.1 this SQL performed well.
We don't have the previous version's SQL plan for this statement.
We don't want to rewrite the SQL, as it performed well in previous versions.
Do we need to change optimizer parameters so that the query is tuned automatically?
Please help me out.
If you're licensed for the SQL Tuning Pack, you could run the query through the SQL Tuning Advisor to see whether it can improve the plan.
It will probably recommend a SQL profile which you can accept.
You could also hint the SQL directly - you might just want to try /*+ OPT_PARAM('OPTIMIZER_FEATURES_ENABLE','11.1.0.x') */, depending on your exact 11.1 version.
You could alter the session to set optimizer_features_enable and capture a SQL profile manually for the 11.1 plan.
You could do some root cause analysis as suggested by Karthick.
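A minimal sketch of the optimizer_features_enable approaches above (the version string '11.1.0.7' and the table/column/bind names are placeholders; substitute your exact 11.1 patch level and your actual SQL):

```sql
-- Session-wide: plan everything this session parses as 11.1 did
ALTER SESSION SET optimizer_features_enable = '11.1.0.7';

-- Statement-scoped alternative via hint (placeholder query shown)
SELECT /*+ OPT_PARAM('OPTIMIZER_FEATURES_ENABLE','11.1.0.7') */
       col1
FROM   some_table
WHERE  some_col = :b1;
```

If either form brings back the old plan, capture it as a SQL profile or SQL plan baseline so the parameter does not have to stay changed.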
See threads:
How to post a SQL tuning request: HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long: When your query takes too long ...
For some recent relevant examples of identifying plan changes following an upgrade, see also Coskan Gundogar's blog:
http://coskan.wordpress.com/2011/02/14/plan-stability-through-upgrade-why-is-my-plan-changed-bugfixes-1/
http://coskan.wordpress.com/2011/02/15/plan-stability-through-upgrade-why-is-my-plan-changed-bugfixes-2/
http://coskan.wordpress.com/2011/02/17/plan-stability-through-upgrade-why-is-my-plan-changed-new-optimizer-parameters/
Similar Messages
-
SQL Performance issue: Using user defined function with group by
Hi everyone,
I'm new here and could really use some help on a weird performance issue. I hope this is the right topic for SQL performance issues.
OK, so I created a function for converting a date from time zone GMT to a specified time zone.
CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
IS
tz_name VARCHAR2(100);
date_out date;
BEGIN
SELECT
to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
INTO date_out
FROM dual;
RETURN date_out;
END fnc_user_rep_date_to_local;

The following statement is just an example; the real statement is much more complex. I select some date values from a table and aggregate a little.
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp

This statement selects ~70,000 rows and takes ~70 ms.
If I use the function, it selects the same number of rows ;-) but takes ~4 seconds ...
select
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin')

I understand that the DB has to execute the function for each row.
But if I execute the following statement, it takes only ~90ms ...
select
fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
noi
from (
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp
)

The execution plan for all three statements is EXACTLY the same!!!
Usually I would say that I should just use the third statement and the world would be in order. BUT I'm working on a BI project with a tool called Business Objects, which generates the SQL, so my hands are tied and I can't make the tool generate it as a subselect.
My questions are:
Why is the second statement so much slower than the third?
and
How can I force the optimizer to do whatever it is doing to make the third statement so fast?
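For reference, one commonly suggested workaround when the generated SQL cannot be rewritten is to declare the function DETERMINISTIC, which allows Oracle to cache results for repeated input values. This is a sketch, and it is only safe if the function's result depends purely on its arguments (which holds for a pure time-zone conversion); it also drops the unnecessary SELECT ... FROM dual, avoiding a SQL-to-PL/SQL context switch per call:

```sql
-- Sketch: same conversion, declared DETERMINISTIC so Oracle may cache
-- results per distinct (date_in, tz_name_in) pair. Verify the function
-- reads no session state before relying on this.
CREATE OR REPLACE FUNCTION fnc_user_rep_date_to_local (
  date_in    IN DATE,
  tz_name_in IN VARCHAR2
) RETURN DATE
  DETERMINISTIC
IS
BEGIN
  RETURN CAST(FROM_TZ(CAST(date_in AS TIMESTAMP), 'GMT')
                AT TIME ZONE tz_name_in AS DATE);
END fnc_user_rep_date_to_local;
/
```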
I would really appreciate some help on this really weird issue.
Thanks in advance,
Andi

Hi,
The execution plan for all three statements is EXACTLY the same!!!

Not exactly. The plans are the same, true, but they use slightly different approaches to calling the function. See:
drop table t cascade constraints purge;
create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
exec dbms_stats.gather_table_stats(user, 't');
create or replace function test_fnc(p_int number) return number is
begin
return trunc(p_int);
end;
explain plan for select id from t group by id;
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from t group by test_fnc(id);
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from (select id from t group by id);
select * from table(dbms_xplan.display(null,null,'advanced'));

Output:
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL>
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "TEST_FNC"("ID")[22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$F5BB74E1
2 - SEL$F5BB74E1 / T@SEL$2
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
MERGE(@"SEL$2")
OUTLINE_LEAF(@"SEL$F5BB74E1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
37 rows selected.

Note the Column Projection sections: in the second statement the function appears in the HASH GROUP BY projection ("TEST_FNC"("ID")[22]), so it is evaluated while grouping every input row, while in the third statement the merged view groups plain "ID" first and the function is applied only to the grouped result.
-
Ignore - SQL PERFORMANCE ANALYZER of 11g (duplicate question)
I am using 11g on Windows 2000, and I want to run SQL Performance Analyzer to see the impact of init.ora parameter changes on some SQLs. Currently I am using it in a test environment, but eventually I want to apply it to the production environment.
Let us say I want to see the effect of different values for db_file_multiblock_read_count.
When I run this in my database, will the changed values impact only the session where I am running SQL Performance Analyzer, or will they impact other sessions accessing the same database instance during the analysis period? I think it impacts only the session where SQL Performance Analyzer is being run, but I want to make sure that is the case. I am not making any changes to parameters myself using ALTER statements; Oracle is changing the parameters behind the scenes as part of its analysis.
Appreciate your feedback.
Message was edited by:
user632098

Analyzer analyzes.
When you change an init parameter, you change it for everybody. -
Oracle Advanced Compression deletion performance issue in 11g R1
Hi,
We have implemented OAC in our data warehouse environment to enable table and index compression. We tested it on our test machine and gained almost 600 GB thanks to Advanced Compression, without any issues, and all the Informatica loads ran fine. Hence we implemented the same in production, but unfortunately two sessions that involve deletion of data are now taking more time (3x their usual run time) to complete, which affects our production environment.
The tables causing the issue are all non-partitioned.
I need to know whether Oracle Advanced Compression can decrease delete performance, and whether there is any way to disable Advanced Compression on those particular tables.
Our environment details:
DB earlier version: 11.1.0.6
DB current version : Oracle 11.1.0.7
Applied PSU: 11.1.0.7.6
Operating system: Solaris 5.9
Syntax used for compression:
ALTER TABLE TABLE_NAME MOVE COMPRESS FOR ALL OPERATIONS;
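To answer the question about disabling compression on particular tables, a hedged sketch (TABLE_NAME and IX_TABLE_NAME are placeholders): compression can be removed by moving the table without it, after which every index on the table must be rebuilt, because MOVE changes rowids and marks them UNUSABLE:

```sql
-- Sketch: remove advanced/OLTP compression from one table
ALTER TABLE table_name MOVE NOCOMPRESS;

-- MOVE invalidates the table's indexes, so rebuild each one:
ALTER INDEX ix_table_name REBUILD;

-- Verify the result:
SELECT table_name, compression, compress_for
FROM   user_tables
WHERE  table_name = 'TABLE_NAME';
```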
Thanks in Advance.

Hi,
Thanks for your reply.
The note is for the update performance issue, and I have already applied the necessary patches for improving update performance.
The update sessions are all working fine; only the deletion sessions are causing problems.
Could someone help me clear up this problem?
Thanks,
VBK -
Please go through the checklist/guidelines below to identify the issue in any performance case and resolve it in no time.
Checklist for Quick Performance problem Resolution
· Get the trace, code, and other information for the given PE case
- Latest code from the production env
- Trace (SQL queries, statistics, row source operations with row counts, explain plan, all wait events)
- Program parameters and their frequently used values
- Run frequency of the program
- Existing run-time/response time in production
- Business purpose
· Identify the most time-consuming SQL, taking more than 60% of program time, using trace and code analysis
· Check that all mandatory parameters/bind variables map directly to index columns of large transaction tables without any functions
· Identify the most time-consuming operation(s) using the Row Source Operation section
· Study the program parameter input directly mapped to the SQL
· Identify all input bind parameters being used in the SQL
· Is the SQL query returning a large number of records for the given inputs?
· What are the large tables, and which of their columns are mapped to input parameters?
· Which operation scans the highest number of records in the Row Source Operation/Explain Plan?
· Is the Oracle cost-based optimizer using the right driving table for the given SQL?
· Check the time-consuming index on the large table and measure index selectivity
· Study the WHERE clause for input parameters mapped to tables and their columns to find the correct/optimal usage of indexes
· Is the correct index being used for all large tables?
· Is there any full table scan on large tables?
· Is there any unwanted table being used in the SQL?
· Evaluate join conditions on large tables and their columns
· Is the FTS on a large table because non-indexed columns are used?
· Is there any implicit or explicit conversion causing an index not to be used?
· Are the statistics of all large tables up to date?
Quick Resolution tips
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
2) Use Data Caching Technique/Options to cache static data
3) Use Pipe Line Table Functions whenever possible
4) Use Global Temporary Table, Materialized view to process complex records
5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
6) Use an EXTERNAL table to build the interface rather than creating a custom table and program to load and validate the data
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
8) Follow Oracle PL/SQL Best Practices
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
12) Review Join condition on existing query explain plan
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
14) Avoid applying SQL functions on index columns
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
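For reference, item 1's bulk pattern can be sketched as follows (table, column, and collection names are placeholders):

```sql
-- Sketch: batch fetch with BULK COLLECT ... LIMIT, batch DML with FORALL
DECLARE
  CURSOR c IS SELECT s.id FROM src_table s;
  TYPE t_ids IS TABLE OF src_table.id%TYPE;
  l_ids t_ids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 1000;   -- fetch in batches
    EXIT WHEN l_ids.COUNT = 0;
    FORALL i IN 1 .. l_ids.COUNT                  -- one round trip per batch
      UPDATE dst_table d SET d.processed = 'Y' WHERE d.id = l_ids(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```

Even so, a single set-based statement (e.g. UPDATE ... WHERE EXISTS) is usually faster than any PL/SQL loop when it can express the same logic.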
Thanks
Praful

I understand you were trying to post something helpful to people, but sorry, this list is appalling.
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
No, use pure SQL.
2) Use Data Caching Technique/Options to cache static data
No, use pure SQL, and the database and operating system will handle caching.
3) Use Pipe Line Table Functions whenever possible
No, use pure SQL
4) Use Global Temporary Table, Materialized view to process complex records
No, use pure SQL
5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
No, use pure SQL
6) Use an EXTERNAL table to build the interface rather than creating a custom table and program to load and validate the data
Makes no sense.
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
What about using the execution trace?
8) Follow Oracle PL/SQL Best Practices
Which are?
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
You mean design your database and queries properly? And table scanning is not always bad.
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
It depends if that is necessary or not.
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
No, consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan. There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of data, as well as the volumes of data being searched and the most common search requirements.
12) Review Join condition on existing query explain plan
Well, if you don't have your join conditions right then your query won't work, so that's obvious.
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
No. Oracle recommends you do not use hints for query optimization (it says so in the documentation). Only certain hints such as APPEND etc. which are more related to certain operations such as inserting data etc. are acceptable in general. Oracle recommends you use the query optimization tools to help optimize your queries rather than use hints.
14) Avoid applying SQL functions on index columns
Why? If there's a need for a function based index, then it should be used.
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
See 13.
In short, there are no silver bullets for dealing with performance. Each situation is different and needs to be evaluated on its own merits. -
Performance issue with 11g vs. 10gR2
Experts,
One of my customers did some performance comparisons between oracle 10gR2 and 11gR2.
While in general, they are quite happy with improved performance in 11g over 10g, there is one query of their test sequence which performs approx. 20% slower.
The query in question is of the type "select count(*) from emp, emp, emp, emp, ..." which - I think - all of us have already used to do a quick and dirty test of DB performance, CPU speed or similar.
The customer is fully aware that this test is very artificial and not representative, but they still want to understand why it consistently runs significantly slower, during peak hours as well as off-peak and on several platforms, in both 11.1 and 11.2 than it did in 10.2.
Thank you for any hints
Erwin

Hello!
My company is the customer Erwin mentioned in his first post.
We've now carried out additional testing and here are the results:
This script was used:
set lines 120
set autotrace off
set timing off
select * from v$version;
drop table cpt;
create table cpt (rn number, text varchar2(10)) ;
insert into cpt values(1, 'alpha');
insert into cpt values(10,'kappa');
commit;
exec dbms_stats.gather_table_stats(ownname => 'SYSTEM', tabname => 'CPT', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS');
set timing on
set autotrace traceonly explain statistics
select count(*) from cpt, cpt, cpt, cpt, cpt, cpt, cpt, cpt;
set autotrace off
ALTER SESSION SET tracefile_identifier='CPT_PERF';
exec dbms_monitor.session_trace_enable(null, null, true, true);
select count(*) from cpt, cpt, cpt, cpt, cpt, cpt, cpt, cpt;
exec dbms_monitor.session_trace_disable(null, null);

For the 10g database we get:
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
NLSRTL Version 10.2.0.4.0 - Production
[...preparation...]
Elapsed: 00:00:07.80
[...first execution plan...]
Statistics
1 recursive calls
0 db block gets
24 consistent gets
0 physical reads
0 redo size
522 bytes sent via SQL*Net to client
488 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
7 sorts (memory)
0 sorts (disk)
1 rows processed
PROFAHR:SQL> /
Elapsed: 00:00:08.03
Execution Plan
Plan hash value: 2108355742
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 3675K (1)| 14:17:31 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | MERGE JOIN CARTESIAN | | 100M| 3675K (1)| 14:17:31 |
| 3 | MERGE JOIN CARTESIAN | | 10M| 367K (1)| 01:25:46 |
| 4 | MERGE JOIN CARTESIAN | | 1000K| 36760 (1)| 00:08:35 |
| 5 | MERGE JOIN CARTESIAN | | 100K| 3683 (1)| 00:00:52 |
| 6 | MERGE JOIN CARTESIAN | | 10000 | 374 (0)| 00:00:06 |
| 7 | MERGE JOIN CARTESIAN | | 1000 | 42 (0)| 00:00:01 |
| 8 | MERGE JOIN CARTESIAN| | 100 | 7 (0)| 00:00:01 |
| 9 | TABLE ACCESS FULL | CPT | 10 | 2 (0)| 00:00:01 |
| 10 | BUFFER SORT | | 10 | 5 (0)| 00:00:01 |
| 11 | TABLE ACCESS FULL | CPT | 10 | 1 (0)| 00:00:01 |
| 12 | BUFFER SORT | | 10 | 42 (0)| 00:00:01 |
| 13 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 14 | BUFFER SORT | | 10 | 374 (0)| 00:00:06 |
| 15 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 16 | BUFFER SORT | | 10 | 3683 (1)| 00:00:52 |
| 17 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 18 | BUFFER SORT | | 10 | 36759 (1)| 00:08:35 |
| 19 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 20 | BUFFER SORT | | 10 | 367K (1)| 01:25:46 |
| 21 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 22 | BUFFER SORT | | 10 | 3675K (1)| 14:17:31 |
| 23 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
Statistics
0 recursive calls
0 db block gets
24 consistent gets
0 physical reads
0 redo size
522 bytes sent via SQL*Net to client
488 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
7 sorts (memory)
0 sorts (disk)
1 rows processed
COUNT(*)
100000000
Elapsed: 00:00:24.56
-- trace file output (profahr_ora_33161638_CPT_PERF.trc)
Dump file /oracle/oradata/PROFAHR/admin/udump/profahr_ora_33161638_CPT_PERF.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /oracle/product/10204
System name: AIX
Node name: as07902
Release: 1
Version: 6
Machine: 00C52E704C00
Instance name: PROFAHR
Redo thread mounted by this instance: 1
Oracle process number: 20
Unix process pid: 33161638, image: oracle@as07902 (TNS V1-V3)
*** 2011-06-14 10:34:29.296
*** SESSION ID:(45.14300) 2011-06-14 10:34:29.296
*** SERVICE NAME:(SYS$USERS) 2011-06-14 10:34:29.296
*** MODULE NAME:(SQL*Plus) 2011-06-14 10:34:29.296
*** ACTION NAME:() 2011-06-14 10:34:29.296
PARSING IN CURSOR #24 len=59 dep=0 uid=5 oct=3 lid=5 tim=7913167696462 hv=1413788073 ad='16668590'
select count(*) from cpt, cpt, cpt, cpt, cpt, cpt, cpt, cpt
END OF STMT
PARSE #24:c=10000,e=14470,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=7913167696460
BINDS #24:
EXEC #24:c=0,e=51,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=7913167696555
WAIT #24: nam='SQL*Net message to client' ela= 0 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=7913167696614
FETCH #24:c=19020000,e=23797054,p=0,cr=24,cu=0,mis=0,r=1,dep=0,og=1,tim=7913191493688
WAIT #24: nam='SQL*Net message from client' ela= 381 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=7913191494477
FETCH #24:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=7913191494529
WAIT #24: nam='SQL*Net message to client' ela= 3 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=7913191494547
WAIT #24: nam='SQL*Net message from client' ela= 435 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=7913191494997
STAT #24 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=24 pr=0 pw=0 time=23797048 us)'
STAT #24 id=2 cnt=100000000 pid=1 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=24 pr=0 pw=0 time=246 us)'
STAT #24 id=3 cnt=10000000 pid=2 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=21 pr=0 pw=0 time=220 us)'
STAT #24 id=4 cnt=1000000 pid=3 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=18 pr=0 pw=0 time=165 us)'
STAT #24 id=5 cnt=100000 pid=4 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=15 pr=0 pw=0 time=146 us)'
STAT #24 id=6 cnt=10000 pid=5 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=12 pr=0 pw=0 time=127 us)'
STAT #24 id=7 cnt=1000 pid=6 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=9 pr=0 pw=0 time=106 us)'
STAT #24 id=8 cnt=100 pid=7 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=6 pr=0 pw=0 time=183 us)'
STAT #24 id=9 cnt=10 pid=8 pos=1 obj=53251 op='TABLE ACCESS FULL CPT (cr=3 pr=0 pw=0 time=181 us)'
STAT #24 id=10 cnt=100 pid=8 pos=2 obj=0 op='BUFFER SORT (cr=3 pr=0 pw=0 time=42 us)'
STAT #24 id=11 cnt=10 pid=10 pos=1 obj=53251 op='TABLE ACCESS FULL CPT (cr=3 pr=0 pw=0 time=11 us)'
STAT #24 id=12 cnt=1000 pid=7 pos=2 obj=0 op='BUFFER SORT (cr=3 pr=0 pw=0 time=55 us)'
STAT #24 id=13 cnt=10 pid=12 pos=1 obj=53251 op='TABLE ACCESS FULL CPT (cr=3 pr=0 pw=0 time=6 us)'
STAT #24 id=14 cnt=10000 pid=6 pos=2 obj=0 op='BUFFER SORT (cr=3 pr=0 pw=0 time=489 us)'
STAT #24 id=15 cnt=10 pid=14 pos=1 obj=53251 op='TABLE ACCESS FULL CPT (cr=3 pr=0 pw=0 time=10 us)'
STAT #24 id=16 cnt=100000 pid=5 pos=2 obj=0 op='BUFFER SORT (cr=3 pr=0 pw=0 time=4588 us)'
STAT #24 id=17 cnt=10 pid=16 pos=1 obj=53251 op='TABLE ACCESS FULL CPT (cr=3 pr=0 pw=0 time=9 us)'
STAT #24 id=18 cnt=1000000 pid=4 pos=2 obj=0 op='BUFFER SORT (cr=3 pr=0 pw=0 time=46335 us)'
STAT #24 id=19 cnt=10 pid=18 pos=1 obj=53251 op='TABLE ACCESS FULL CPT (cr=3 pr=0 pw=0 time=5 us)'
STAT #24 id=20 cnt=10000000 pid=3 pos=2 obj=0 op='BUFFER SORT (cr=3 pr=0 pw=0 time=459465 us)'
STAT #24 id=21 cnt=10 pid=20 pos=1 obj=53251 op='TABLE ACCESS FULL CPT (cr=3 pr=0 pw=0 time=7 us)'
STAT #24 id=22 cnt=100000000 pid=2 pos=2 obj=0 op='BUFFER SORT (cr=3 pr=0 pw=0 time=4616973 us)'
STAT #24 id=23 cnt=10 pid=22 pos=1 obj=53251 op='TABLE ACCESS FULL CPT (cr=3 pr=0 pw=0 time=9 us)'

And now for the 11g database:
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
[...preparation...]
Elapsed: 00:00:09.51
[...first execution plan...]
Statistics
1 recursive calls
0 db block gets
16 consistent gets
0 physical reads
0 redo size
526 bytes sent via SQL*Net to client
520 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
7 sorts (memory)
0 sorts (disk)
1 rows processed
PDIGIT:SQL> /
Elapsed: 00:00:09.08
Execution Plan
Plan hash value: 2108355742
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 3020K (1)| 10:04:10 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | MERGE JOIN CARTESIAN | | 100M| 3020K (1)| 10:04:10 |
| 3 | MERGE JOIN CARTESIAN | | 10M| 302K (1)| 01:00:26 |
| 4 | MERGE JOIN CARTESIAN | | 1000K| 30218 (1)| 00:06:03 |
| 5 | MERGE JOIN CARTESIAN | | 100K| 3029 (1)| 00:00:37 |
| 6 | MERGE JOIN CARTESIAN | | 10000 | 308 (1)| 00:00:04 |
| 7 | MERGE JOIN CARTESIAN | | 1000 | 35 (0)| 00:00:01 |
| 8 | MERGE JOIN CARTESIAN| | 100 | 6 (0)| 00:00:01 |
| 9 | TABLE ACCESS FULL | CPT | 10 | 2 (0)| 00:00:01 |
| 10 | BUFFER SORT | | 10 | 4 (0)| 00:00:01 |
| 11 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 12 | BUFFER SORT | | 10 | 35 (0)| 00:00:01 |
| 13 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 14 | BUFFER SORT | | 10 | 308 (1)| 00:00:04 |
| 15 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 16 | BUFFER SORT | | 10 | 3028 (1)| 00:00:37 |
| 17 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 18 | BUFFER SORT | | 10 | 30217 (1)| 00:06:03 |
| 19 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 20 | BUFFER SORT | | 10 | 302K (1)| 01:00:26 |
| 21 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
| 22 | BUFFER SORT | | 10 | 3020K (1)| 10:04:10 |
| 23 | TABLE ACCESS FULL | CPT | 10 | 0 (0)| 00:00:01 |
Statistics
0 recursive calls
0 db block gets
16 consistent gets
0 physical reads
0 redo size
526 bytes sent via SQL*Net to client
520 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
7 sorts (memory)
0 sorts (disk)
1 rows processed
COUNT(*)
100000000
Elapsed: 00:00:28.85
-- trace file output (PDIGIT_ora_27066482_CPT_PERF.trc)
Trace file /oracle/oradata/PDIGIT/diag/rdbms/pdigit/PDIGIT/trace/PDIGIT_ora_27066482_CPT_PERF.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /oracle/product/11202
System name: AIX
Node name: as07902
Release: 1
Version: 6
Machine: 00C52E704C00
Instance name: PDIGIT
Redo thread mounted by this instance: 1
Oracle process number: 31
Unix process pid: 27066482, image: oracle@as07902 (TNS V1-V3)
*** 2011-06-14 10:24:50.724
*** SESSION ID:(331.21951) 2011-06-14 10:24:50.724
*** CLIENT ID:() 2011-06-14 10:24:50.724
*** SERVICE NAME:(SYS$USERS) 2011-06-14 10:24:50.724
*** MODULE NAME:(SQL*Plus) 2011-06-14 10:24:50.724
*** ACTION NAME:() 2011-06-14 10:24:50.724
PARSING IN CURSOR #4578508928 len=59 dep=0 uid=5 oct=3 lid=5 tim=8102505154564 hv=1413788073 ad='70000002e203ff8' sqlid='fqkgnaja49cd9'
select count(*) from cpt, cpt, cpt, cpt, cpt, cpt, cpt, cpt
END OF STMT
PARSE #4578508928:c=10000,e=18215,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=2108355742,tim=8102505154562
EXEC #4578508928:c=0,e=65,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2108355742,tim=8102505154716
WAIT #4578508928: nam='SQL*Net message to client' ela= 6 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=8102505154838
FETCH #4578508928:c=21270000,e=28828643,p=0,cr=16,cu=0,mis=0,r=1,dep=0,og=1,plh=2108355742,tim=8102533983509
STAT #4578508928 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=16 pr=0 pw=0 time=28828629 us)'
STAT #4578508928 id=2 cnt=100000000 pid=1 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=16 pr=0 pw=0 time=73243303 us cost=3020823 size=0 card=100000000)'
STAT #4578508928 id=3 cnt=10000000 pid=2 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=14 pr=0 pw=0 time=8010638 us cost=302092 size=0 card=10000000)'
STAT #4578508928 id=4 cnt=1000000 pid=3 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=12 pr=0 pw=0 time=818447 us cost=30218 size=0 card=1000000)'
STAT #4578508928 id=5 cnt=100000 pid=4 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=10 pr=0 pw=0 time=86497 us cost=3029 size=0 card=100000)'
STAT #4578508928 id=6 cnt=10000 pid=5 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=8 pr=0 pw=0 time=9531 us cost=308 size=0 card=10000)'
STAT #4578508928 id=7 cnt=1000 pid=6 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=6 pr=0 pw=0 time=1519 us cost=35 size=0 card=1000)'
STAT #4578508928 id=8 cnt=100 pid=7 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=327 us cost=6 size=0 card=100)'
STAT #4578508928 id=9 cnt=10 pid=8 pos=1 obj=22700 op='TABLE ACCESS FULL CPT (cr=2 pr=0 pw=0 time=353 us cost=2 size=0 card=10)'
STAT #4578508928 id=10 cnt=100 pid=8 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 time=176 us cost=4 size=0 card=10)'
STAT #4578508928 id=11 cnt=10 pid=10 pos=1 obj=22700 op='TABLE ACCESS FULL CPT (cr=2 pr=0 pw=0 time=27 us cost=0 size=0 card=10)'
STAT #4578508928 id=12 cnt=1000 pid=7 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 time=436 us cost=35 size=0 card=10)'
STAT #4578508928 id=13 cnt=10 pid=12 pos=1 obj=22700 op='TABLE ACCESS FULL CPT (cr=2 pr=0 pw=0 time=8 us cost=0 size=0 card=10)'
STAT #4578508928 id=14 cnt=10000 pid=6 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 time=2941 us cost=308 size=0 card=10)'
STAT #4578508928 id=15 cnt=10 pid=14 pos=1 obj=22700 op='TABLE ACCESS FULL CPT (cr=2 pr=0 pw=0 time=8 us cost=0 size=0 card=10)'
STAT #4578508928 id=16 cnt=100000 pid=5 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 time=27468 us cost=3028 size=0 card=10)'
STAT #4578508928 id=17 cnt=10 pid=16 pos=1 obj=22700 op='TABLE ACCESS FULL CPT (cr=2 pr=0 pw=0 time=8 us cost=0 size=0 card=10)'
STAT #4578508928 id=18 cnt=1000000 pid=4 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 time=240715 us cost=30217 size=0 card=10)'
STAT #4578508928 id=19 cnt=10 pid=18 pos=1 obj=22700 op='TABLE ACCESS FULL CPT (cr=2 pr=0 pw=0 time=7 us cost=0 size=0 card=10)'
STAT #4578508928 id=20 cnt=10000000 pid=3 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 time=2462884 us cost=302092 size=0 card=10)'
STAT #4578508928 id=21 cnt=10 pid=20 pos=1 obj=22700 op='TABLE ACCESS FULL CPT (cr=2 pr=0 pw=0 time=7 us cost=0 size=0 card=10)'
STAT #4578508928 id=22 cnt=100000000 pid=2 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 time=21633068 us cost=3020822 size=0 card=10)'
STAT #4578508928 id=23 cnt=10 pid=22 pos=1 obj=22700 op='TABLE ACCESS FULL CPT (cr=2 pr=0 pw=0 time=9 us cost=0 size=0 card=10)'
WAIT #4578508928: nam='SQL*Net message from client' ela= 513 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=8102533985627
FETCH #4578508928:c=0,e=7,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=2108355742,tim=8102533985752
WAIT #4578508928: nam='SQL*Net message to client' ela= 7 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=8102533985803
WAIT #4578508928: nam='SQL*Net message from client' ela= 485 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=8102533986331
CLOSE #4578508928:c=0,e=35,dep=0,type=0,tim=8102533986415

Remarks:
a) I omitted the autotrace explain plan for the first execution as it is exactly the same as the second one.
b) Several irrelevant messages from SQL*Plus were also eliminated.
c) I also deleted some irrelevant lines from the trace files.
d) No tkprof output is included because it delivers no additional value.
Observations:
1) Execution times are:

Version   1st    2nd    trace
10g       7.80   8.03   24.56
11g       9.51   9.08   28.85
delta     22%    13%    17%

2) Execution time increases significantly when tracing is enabled, although the generated trace file is really tiny.
Thanks for your help!
Regards,
Daniel -
Post Upgrade SQL Performance Issue
Hello,
I Just Upgraded/Migrated my database from 11.1.0.6 SE to 11.2.0.3 EE. I did this with datapump export/import out of the 11.1.0.6 and into a new 11.2.0.3 database. Both the old and the new database are on the same Linux server. The new database has 2GB more RAM assigned to its SGA then the old one. Both DB are using AMM.
The strange part is I have a SQL statement that completes in 1 second in the Old DB and takes 30 seconds in the new one. I even moved the SQL Plan from the Old DB into the New DB so they are using the same plan.
To sum up the issue. I have one SQL statement using the same SQL Plan running at dramatically different speeds on two different databases on the same server. The databases are 11.1.0.7 SE and 11.2.0.3 EE.
Not sure what is going on or how to fix it. Any help would be great!
I have included Explains and Auto Traces from both NEW and OLD databases.
NEW DB Explain Plan (Slow)
Plan hash value: 1046170788
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 94861 | 193M| | 74043 (1)| 00:18:52 |
| 1 | SORT ORDER BY | | 94861 | 193M| 247M| 74043 (1)| 00:18:52 |
| 2 | VIEW | PBM_MEMBER_INTAKE_VW | 94861 | 193M| | 31803 (1)| 00:08:07 |
| 3 | UNION-ALL | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1889 | 173K| | 455 (1)| 00:00:07 |
|* 5 | HASH JOIN | | 1889 | 164K| | 454 (1)| 00:00:07 |
| 6 | TABLE ACCESS FULL| PBM_CODES | 2138 | 21380 | | 8 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1889 | 145K| | 446 (1)| 00:00:07 |
|* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 92972 | 9987K| | 31347 (1)| 00:08:00 |
| 10 | NESTED LOOPS OUTER| | 92972 | 8443K| | 31346 (1)| 00:08:00 |
|* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 92972 | 7989K| | 31344 (1)| 00:08:00 |
|* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("C"."CODE_ID"="MI"."STATUS_ID")
7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%' AND "MI"."CLAIM_NUMBER" IS NOT NULL)
8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%' AND "M"."THEIR_GROUP_ID" IS NOT NULL)
12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
Note
- SQL plan baseline "SYS_SQL_PLAN_a3c20fdcecd98dfe" used for this statement
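Since the Note section shows the new database is pinned to SQL plan baseline "SYS_SQL_PLAN_a3c20fdcecd98dfe", one diagnostic step (a sketch only, not something the poster ran) is to check the baseline's status and temporarily disable it, to see which plan the 11.2 optimizer would choose on its own. The sql_handle substitution variable below is a placeholder and must be looked up first.

```sql
-- Look up the baseline named in the plan's Note section.
SELECT sql_handle, plan_name, enabled, accepted, origin
FROM   dba_sql_plan_baselines
WHERE  plan_name = 'SYS_SQL_PLAN_a3c20fdcecd98dfe';

-- Temporarily disable it (sql_handle below is a placeholder).
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
         sql_handle      => '&sql_handle',
         plan_name       => 'SYS_SQL_PLAN_a3c20fdcecd98dfe',
         attribute_name  => 'ENABLED',
         attribute_value => 'NO');
END;
/
```

If the freely chosen plan is no better, re-enable the baseline the same way with attribute_value => 'YES'.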
OLD DB Explain Plan (Fast)
Plan hash value: 1046170788
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 95201 | 193M| | 74262 (1)| 00:14:52 |
| 1 | SORT ORDER BY | | 95201 | 193M| 495M| 74262 (1)| 00:14:52 |
| 2 | VIEW | PBM_MEMBER_INTAKE_VW | 95201 | 193M| | 31853 (1)| 00:06:23 |
| 3 | UNION-ALL | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1943 | 178K| | 486 (1)| 00:00:06 |
|* 5 | HASH JOIN | | 1943 | 168K| | 486 (1)| 00:00:06 |
| 6 | TABLE ACCESS FULL| PBM_CODES | 2105 | 21050 | | 7 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1943 | 149K| | 479 (1)| 00:00:06 |
|* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 93258 | 9M| | 31367 (1)| 00:06:17 |
| 10 | NESTED LOOPS OUTER| | 93258 | 8469K| | 31358 (1)| 00:06:17 |
|* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 93258 | 8014K| | 31352 (1)| 00:06:17 |
|* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("C"."CODE_ID"="MI"."STATUS_ID")
7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%')
8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%')
12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
NEW DB Auto trace (Slow)
active txn count during cleanout 0
blocks decrypted 0
buffer is not pinned count 664129
buffer is pinned count 3061793
bytes received via SQL*Net from client 3339
bytes sent via SQL*Net to client 28758
Cached Commit SCN referenced 662366
calls to get snapshot scn: kcmgss 3
calls to kcmgas 0
calls to kcmgcs 8
CCursor + sql area evicted 0
cell physical IO interconnect bytes 0
cleanout - number of ktugct calls 0
cleanouts only - consistent read gets 0
cluster key scan block gets 0
cluster key scans 0
commit cleanout failures: block lost 0
commit cleanout failures: callback failure 0
commit cleanouts 0
commit cleanouts successfully completed 0
Commit SCN cached 0
commit txn count during cleanout 0
concurrency wait time 0
consistent changes 0
consistent gets 985371
consistent gets - examination 2993
consistent gets direct 0
consistent gets from cache 985371
consistent gets from cache (fastpath) 982093
CPU used by this session 3551
CPU used when call started 3551
CR blocks created 0
cursor authentications 1
data blocks consistent reads - undo records applied 0
db block changes 0
db block gets 0
db block gets direct 0
db block gets from cache 0
db block gets from cache (fastpath) 0
DB time 3553
deferred (CURRENT) block cleanout applications 0
dirty buffers inspected 0
Effective IO time 0
enqueue releases 0
enqueue requests 0
execute count 3
file io wait time 0
free buffer inspected 0
free buffer requested 0
heap block compress 0
Heap Segment Array Updates 0
hot buffers moved to head of LRU 0
HSC Heap Segment Block Changes 0
immediate (CR) block cleanout applications 0
immediate (CURRENT) block cleanout applications 0
IMU Flushes 0
IMU ktichg flush 0
IMU Redo allocation size 0
IMU undo allocation size 0
index fast full scans (full) 2
index fetch by key 0
index scans kdiixs1 12944
lob reads 0
LOB table id lookup cache misses 0
lob writes 0
lob writes unaligned 0
logical read bytes from cache -517775360
logons cumulative 0
logons current 0
messages sent 0
no buffer to keep pinned count 10
no work - consistent read gets 982086
non-idle wait count 6
non-idle wait time 0
Number of read IOs issued 0
opened cursors cumulative 4
opened cursors current 1
OS Involuntary context switches 853
OS Maximum resident set size 0
OS Page faults 0
OS Page reclaims 2453
OS System time used 9
OS User time used 3549
OS Voluntary context switches 238
parse count (failures) 0
parse count (hard) 0
parse count (total) 1
parse time cpu 0
parse time elapsed 0
physical read bytes 0
physical read IO requests 0
physical read total bytes 0
physical read total IO requests 0
physical read total multi block requests 0
physical reads 0
physical reads cache 0
physical reads cache prefetch 0
physical reads direct 0
physical reads direct (lob) 0
physical write bytes 0
physical write IO requests 0
physical write total bytes 0
physical write total IO requests 0
physical writes 0
physical writes direct 0
physical writes direct (lob) 0
physical writes non checkpoint 0
pinned buffers inspected 0
pinned cursors current 0
process last non-idle time 0
recursive calls 0
recursive cpu usage 0
redo entries 0
redo size 0
redo size for direct writes 0
redo subscn max counts 0
redo synch time 0
redo synch time (usec) 0
redo synch writes 0
Requests to/from client 3
rollbacks only - consistent read gets 0
RowCR - row contention 0
RowCR attempts 0
rows fetched via callback 0
session connect time 0
session cursor cache count 1
session cursor cache hits 3
session logical reads 985371
session pga memory 131072
session pga memory max 0
session uga memory 392928
session uga memory max 0
shared hash latch upgrades - no wait 284
shared hash latch upgrades - wait 0
sorts (memory) 3
sorts (rows) 243
sql area evicted 0
sql area purged 0
SQL*Net roundtrips to/from client 4
switch current to new buffer 0
table fetch by rowid 1861456
table fetch continued row 9
table scan blocks gotten 0
table scan rows gotten 0
table scans (short tables) 0
temp space allocated (bytes) 0
undo change vector size 0
user calls 7
user commits 0
user I/O wait time 0
workarea executions - optimal 10
workarea memory allocated 342
OLD DB Auto trace (Fast)
active txn count during cleanout 0
buffer is not pinned count 4
buffer is pinned count 101
bytes received via SQL*Net from client 1322
bytes sent via SQL*Net to client 9560
calls to get snapshot scn: kcmgss 15
calls to kcmgas 0
calls to kcmgcs 0
calls to kcmgrs 1
cleanout - number of ktugct calls 0
cluster key scan block gets 0
cluster key scans 0
commit cleanouts 0
commit cleanouts successfully completed 0
concurrency wait time 0
consistent changes 0
consistent gets 117149
consistent gets - examination 56
consistent gets direct 115301
consistent gets from cache 1848
consistent gets from cache (fastpath) 1792
CPU used by this session 118
CPU used when call started 119
cursor authentications 1
db block changes 0
db block gets 0
db block gets from cache 0
db block gets from cache (fastpath) 0
DB time 123
deferred (CURRENT) block cleanout applications 0
Effective IO time 2012
enqueue conversions 3
enqueue releases 2
enqueue requests 2
enqueue waits 1
execute count 2
free buffer requested 0
HSC Heap Segment Block Changes 0
IMU Flushes 0
IMU ktichg flush 0
index fast full scans (full) 0
index fetch by key 101
index scans kdiixs1 0
lob writes 0
lob writes unaligned 0
logons cumulative 0
logons current 0
messages sent 0
no work - consistent read gets 117080
Number of read IOs issued 1019
opened cursors cumulative 3
opened cursors current 1
OS Involuntary context switches 54
OS Maximum resident set size 7868
OS Page faults 12
OS Page reclaims 2911
OS System time used 57
OS User time used 71
OS Voluntary context switches 25
parse count (failures) 0
parse count (hard) 0
parse count (total) 3
parse time cpu 0
parse time elapsed 0
physical read bytes 944545792
physical read IO requests 1019
physical read total bytes 944545792
physical read total IO requests 1019
physical read total multi block requests 905
physical reads 115301
physical reads cache 0
physical reads cache prefetch 0
physical reads direct 115301
physical reads prefetch warmup 0
process last non-idle time 0
recursive calls 0
recursive cpu usage 0
redo entries 0
redo size 0
redo synch writes 0
rows fetched via callback 0
session connect time 0
session cursor cache count 1
session cursor cache hits 2
session logical reads 117149
session pga memory -983040
session pga memory max 0
session uga memory 0
session uga memory max 0
shared hash latch upgrades - no wait 0
sorts (memory) 2
sorts (rows) 157
sql area purged 0
SQL*Net roundtrips to/from client 3
table fetch by rowid 0
table fetch continued row 0
table scan blocks gotten 117077
table scan rows gotten 1972604
table scans (direct read) 1
table scans (long tables) 1
table scans (short tables) 2
undo change vector size 0
user calls 5
user I/O wait time 0
workarea executions - optimal 4
Hi Srini,
Yes, the stats on the tables and indexes are current in both DBs. However, the NEW DB has system stats in sys.aux_stats$ and the OLD DB does not. The old DB has optimizer_index_caching=0 and optimizer_index_cost_adj=100. The new DB has them at optimizer_index_caching=90 and optimizer_index_cost_adj=25, but should not be using them because of the system stats.
Also, I thought none of the optimizer settings would matter because I forced in my own SQL plan using SPM.
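Given that gathered system statistics exist only on the new database, a hedged sketch of how one might compare them and, in a test environment, remove them so both databases cost I/O the same way (this assumes dropping the workload stats is acceptable for testing):

```sql
-- Compare what the optimizer sees on each database.
SELECT sname, pname, pval1
FROM   sys.aux_stats$;

-- On the NEW DB (test only): drop the gathered system statistics so costing
-- falls back to the default noworkload model, as on the old database.
BEGIN
  DBMS_STATS.DELETE_SYSTEM_STATS;
END;
/
```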
Differences in init.ora
OLD-11 _optimizer_push_pred_cost_based = FALSE
NEW-15 audit_sys_operations = FALSE
audit_trail = "DB, EXTENDED"
awr_snapshot_time_offset = 0
OLD-16 audit_sys_operations = TRUE
audit_trail = "XML, EXTENDED"
NEW-22 cell_offload_compaction = "ADAPTIVE"
cell_offload_decryption = TRUE
cell_offload_plan_display = "AUTO"
cell_offload_processing = TRUE
NEW-28 clonedb = FALSE
NEW-32 compatible = "11.2.0.0.0"
OLD-27 compatible = "11.1.0.0.0"
NEW-37 cursor_bind_capture_destination = "memory+disk"
cursor_sharing = "FORCE"
OLD-32 cursor_sharing = "EXACT"
NEW-50 db_cache_size = 4294967296
db_domain = "my.com"
OLD-44 db_cache_size = 0
NEW-54 db_flash_cache_size = 0
NEW-58 db_name = "NEWDB"
db_recovery_file_dest_size = 214748364800
OLD-50 db_name = "OLDDB"
db_recovery_file_dest_size = 8438939648
NEW-63 db_unique_name = "NEWDB"
db_unrecoverable_scn_tracking = TRUE
db_writer_processes = 2
OLD-55 db_unique_name = "OLDDB"
db_writer_processes = 1
NEW-68 deferred_segment_creation = TRUE
NEW-71 dispatchers = "(PROTOCOL=TCP) (SERVICE=NEWDBXDB)"
OLD-61 dispatchers = "(PROTOCOL=TCP) (SERVICE=OLDDBXDB)"
NEW-73 dml_locks = 5068
dst_upgrade_insert_conv = TRUE
OLD-63 dml_locks = 3652
drs_start = FALSE
NEW-80 filesystemio_options = "SETALL"
OLD-70 filesystemio_options = "none"
NEW-87 instance_name = "NEWDB"
OLD-77 instance_name = "OLDDB"
NEW-94 job_queue_processes = 1000
OLD-84 job_queue_processes = 100
NEW-104 log_archive_dest_state_11 = "enable"
log_archive_dest_state_12 = "enable"
log_archive_dest_state_13 = "enable"
log_archive_dest_state_14 = "enable"
log_archive_dest_state_15 = "enable"
log_archive_dest_state_16 = "enable"
log_archive_dest_state_17 = "enable"
log_archive_dest_state_18 = "enable"
log_archive_dest_state_19 = "enable"
NEW-114 log_archive_dest_state_20 = "enable"
log_archive_dest_state_21 = "enable"
log_archive_dest_state_22 = "enable"
log_archive_dest_state_23 = "enable"
log_archive_dest_state_24 = "enable"
log_archive_dest_state_25 = "enable"
log_archive_dest_state_26 = "enable"
log_archive_dest_state_27 = "enable"
log_archive_dest_state_28 = "enable"
log_archive_dest_state_29 = "enable"
NEW-125 log_archive_dest_state_30 = "enable"
log_archive_dest_state_31 = "enable"
NEW-139 log_buffer = 7012352
OLD-108 log_buffer = 34412032
OLD-112 max_commit_propagation_delay = 0
NEW-144 max_enabled_roles = 150
memory_max_target = 12884901888
memory_target = 8589934592
nls_calendar = "GREGORIAN"
OLD-114 max_enabled_roles = 140
memory_max_target = 6576668672
memory_target = 6576668672
NEW-149 nls_currency = "$"
nls_date_format = "DD-MON-RR"
nls_date_language = "AMERICAN"
nls_dual_currency = "$"
nls_iso_currency = "AMERICA"
NEW-157 nls_numeric_characters = ".,"
nls_sort = "BINARY"
NEW-160 nls_time_format = "HH.MI.SSXFF AM"
nls_time_tz_format = "HH.MI.SSXFF AM TZR"
nls_timestamp_format = "DD-MON-RR HH.MI.SSXFF AM"
nls_timestamp_tz_format = "DD-MON-RR HH.MI.SSXFF AM TZR"
NEW-172 optimizer_features_enable = "11.2.0.3"
optimizer_index_caching = 90
optimizer_index_cost_adj = 25
OLD-130 optimizer_features_enable = "11.1.0.6"
optimizer_index_caching = 0
optimizer_index_cost_adj = 100
NEW-184 parallel_degree_limit = "CPU"
parallel_degree_policy = "MANUAL"
parallel_execution_message_size = 16384
parallel_force_local = FALSE
OLD-142 parallel_execution_message_size = 2152
NEW-189 parallel_max_servers = 320
OLD-144 parallel_max_servers = 0
NEW-192 parallel_min_time_threshold = "AUTO"
NEW-195 parallel_servers_target = 128
NEW-197 permit_92_wrap_format = TRUE
OLD-154 plsql_native_library_subdir_count = 0
NEW-220 result_cache_max_size = 21495808
OLD-173 result_cache_max_size = 0
NEW-230 service_names = "NEWDB, NEWDB.my.com, NEW"
OLD-183 service_names = "OLDDB, OLD.my.com"
NEW-233 sessions = 1152
sga_max_size = 12884901888
OLD-186 sessions = 830
sga_max_size = 6576668672
NEW-238 shared_pool_reserved_size = 35232153
OLD-191 shared_pool_reserved_size = 53687091
OLD-199 sql_version = "NATIVE"
NEW-248 star_transformation_enabled = "TRUE"
OLD-202 star_transformation_enabled = "FALSE"
NEW-253 timed_os_statistics = 60
OLD-207 timed_os_statistics = 5
NEW-256 transactions = 1267
OLD-210 transactions = 913
NEW-262 use_large_pages = "TRUE" -
Dear Guys
I have migrated from 10.2.0.4 to 11gR2, but performance has degraded. Crystal Reports with images are taking much more time to open. So what strategy should I follow for BLOB objects? What should I do? Please suggest.
db-11gr2 standard
os - windows 2008 standard
Hi,
You should convert all your BLOBs to SecureFiles to help with performance - but I don't know why a move from 10g to 11g would really show a big difference in retrieving old-style BLOBs.
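A hedged sketch of that SecureFiles conversion (the table, column, and index names here are hypothetical): an offline rebuild with ALTER TABLE ... MOVE LOB, after which dependent indexes must be rebuilt. DBMS_REDEFINITION can do the same conversion online if downtime is a concern.

```sql
-- Rebuild a BasicFile LOB column as a SecureFile (offline; locks the table).
ALTER TABLE report_images
  MOVE LOB (image_blob) STORE AS SECUREFILE (CACHE);

-- MOVE marks the table's indexes UNUSABLE, so rebuild them afterwards.
ALTER INDEX report_images_pk REBUILD;
```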
Cheers,
Harry
http://dbaharrison.blogspot.com/ -
Hi all
we have an Oracle 11g Release 2 database (11.2.0.3) on Windows
application is using delphi to connect via ole db driver
the first time a SQL statement runs it takes time to complete
and in subsequent runs it runs faster than the previous runs
the person coding in Delphi, who has more of a SQL Server background than a DBA one, is asking why it runs faster
cursor_sharing=similar
memory_target =40gb
db_cache_size=12GB
shared_pool_size=8GB
basically, on the first run the data is not in the cache, so it is read from disk; in subsequent runs it is available in the cache, so the query runs more efficiently
any links which could help the developer understand?
Yes, it runs faster the second time because things are cached in memory instead of having to be read from disk. It could also be that the second time the query doesn't have to be parsed, which saves time as well.
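One way to give the developer hard numbers for this (a sketch; the SID and serial# below are placeholders taken from V$SESSION) is to trace the application session and compare the first and second executions:

```sql
-- Enable extended SQL trace for the Delphi session (SID/serial# hypothetical).
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,
                                    serial_num => 4567,
                                    waits      => TRUE,
                                    binds      => FALSE);
END;
/
-- ... run the statement twice from the application ...
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123,
                                     serial_num => 4567);
END;
/
```

Then format the trace file on the server, e.g. `tkprof <trace>.trc out.txt sys=no sort=exeela`; the first execution should show physical (disk) reads that disappear on the second.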
Tracing and tkprof would show you actual hard and fast numbers to explain exactly what happened. -
SQL Performance Issues after migrating to 10.2.0.3 from 9i
Hi,
We recently migrated our database from 9i (9.2.0.8) to 10.2.0.3 (test environment). We have a 32-bit Windows server on RAC+ASM. We have noticed that several of our SQL statements, which ran pretty efficiently in 9i, are now taking hours to complete. I opened an SR with Oracle Support and it is still WIP, but I would like to ask if anyone has had a similar kind of experience and what they did to resolve it? Any feedback is appreciated.
Regards
Mansoor
The first thing to do would be to take a SQL trace of the same SQL from both the 9i and 10g installations and compare the results. That should give you a clue as to where to look.
Also, it is always recommended to gather statistics after a major version upgrade. I hope that has been done already. -
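A minimal sketch of that post-upgrade statistics advice, assuming a single application schema (the owner name here is hypothetical):

```sql
BEGIN
  -- Dictionary statistics are also worth refreshing after an upgrade.
  DBMS_STATS.GATHER_DICTIONARY_STATS;
  -- Regather the application schema's object statistics, including indexes.
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_OWNER',
                                 cascade => TRUE);
END;
/
```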
Database migrated from Oracle 10g to 11g Discoverer report performance issu
Hi All,
We are now seeing an issue with Discoverer report performance: the report keeps running ever since the database was upgraded from 10g to 11g.
In the 10g database the report works fine, but the same report does not work well in 11g.
I have changed the query: I have passed the date format TO_CHAR("DD-MON-YYYY" and removed the NVL & TRUNC functions from the existing query.
The report is now working fine against the 11g database back end, but when I use the same query in Discoverer it is not working and the report keeps running.
Please advise.
Regards,
Please post exact OS, database and Discoverer versions. After the upgrade, have statistics been updated? Have you traced the Discoverer query to determine where the performance issue is?
How To Find Oracle Discoverer Diagnostic and Tracing Guides [ID 290658.1]
How To Enable SQL Tracing For Discoverer Sessions [ID 133055.1]
Discoverer 11g: Performance degradation after Upgrade to Database 11g [ID 1514929.1]
HTH
Srini -
Performance issue while wrapping the sql in pl/sql block
Hi All,
I am facing a performance issue in a query while wrapping the SQL in a PL/SQL block.
I have a complex view. While querying the view using
Select * from v_csp_tabs (the name of the view I am using), it takes 10 seconds to fetch 50,000 records.
But when I use some conditions on the view, like
Select * from v_csp_tabs where clientid = 500006 and programid = 1 and vendorid = 1, it takes more than 250 seconds to return the result set.
Now the weird part: this happens only for one programid, namely 1.
I am using Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
Can anyone please suggest what things I need to check?
I am sorry, I could not provide the explain plan, because this is in production and I do not have enough privileges.
Thank you in advance.
Thnx,
Bits
Bits wrote:
I have a complex view. while quering the view using
Select * from v_csp_tabs(Name of View I am using), it is taking 10 second to fetch 50,000 records.
But when I am using some conditions on the view, Like
Select * from v_csp_tabs where clientid = 500006 and programid = 1 and vendorid = 1, it is taking more then 250 secs. to return the result set.
That's one problem with views - you never know how they will be used in the future, nor what performance implications different usage patterns can have.
>
now the weird part is this is happening only for one programID, that is 1
Any one please suggest what are the things i need to check..
I am sorry, I could not provide you the explain plan, because this is in production and I do not have enough prevelage.
I understand what you are saying - I have worked at similar sites. HiddenName is correct in suggesting that you need to get execution plans, but sometimes getting privileges from the DBA group is simply Not Going To Happen. It's wrong, but that's the way it is. Follow through on HiddenName's suggestion to get help from somebody who has the privileges needed.
Post the query that the view is executing. Desk-checking a query is NOT ideal, but it is one thing we can do.
I don't suppose you can see V$ views on production - V$SQL and V$SQL_PLAN (probably not if you can't generate plans, but it's worth a thought) -
Performance issue with Crystal when upgrading Oracle to 11g
Dear,
I am facing a performance issue with Crystal Reports and Oracle 11g, as described below:
On the report server I have created an ODBC connection to another Oracle 11g server. Also on the report server I have created and published a folder containing all of my Crystal Reports. These reports connect to the Oracle 11g server via ODBC.
I also have a Tomcat server that runs my application, and in the application I refer to the report folder on the report server.
This setup works with SQL Server and Oracle 9i or 10g, but it has a performance issue with Oracle 11g.
Please let me know the root cause.
Notes: the report server and Tomcat server are 32-bit Windows, but Oracle is on 64-bit Windows. I have upgraded to DataDirect Connect ODBC version 6.1, but the issue is not resolved.
Please help me to solve it.
Thanks so much,
Anh
Hi Anh,
Use a third-party ODBC test tool now. SQL*Plus uses the native Oracle client, so you can't compare performance with it.
Download our old tool called SQLCON: https://smpdl.sap-ag.de/~sapidp/012002523100006252882008E/sqlcon32.zip
Connect and then click on the SQL tab and paste in the SQL from the report and time that test.
I believe the issue is that the Oracle client is 64-bit; you should install the 32-bit Oracle client. When using the 64-bit client, the driver must thunk (convert 64-bit data to 32-bit format), which takes more time.
If you can, using OLE DB or the Oracle Server (native) driver should be faster. ODBC puts another layer on top of the Oracle client, so it too takes time to communicate between the layers.
Thank you
Don -
Performance Issue After Moving To 11g From 9i
We have a process that inserts approximately 275,000 records into a table containing 22,000,000+ records. This process consistently runs in 1 hour and 20 minutes in our Oracle 9i Database (Standard Edition). This same process runs in 8+ hours in our Oracle 11g Database (Standard Edition) which is a copy of the Oracle 9i Database.
Both databases run on identical hardware running Windows Server 2000. The Servers each have 2 GB RAM.
We have noticed that the process in 11g slows down significantly after it has been running for about 30 minutes and is continuously consuming memory. We also ran a test in which we dropped all indexes on the table being inserted into except the primary key and the process still ran for 8+ hours again.
We executed another test in which the same process was run however we had it insert into a table that contained 0 records and our performance was better than on the 9i Database.
Any ideas on what might be causing the performance issue?
Welcome to the forums !
Troubleshooting performance issues is difficult when all of the factual data is absent. Please review these threads to identify the information you need to post in order to get help.
When your query takes too long:
HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long ...
HTH
Srini -
Performance issues with dynamic action (PL/SQL)
Hi!
I'm having performance issues with a dynamic action that is triggered on a button click.
I have 5 drop down lists to select columns which the users want to filter, 5 drop down lists to select an operation and 5 boxes to input values.
After that, there is a filter button that just submits the page based on the selected filters.
This part works fine, the data is filtered almost instantaneously.
After this, I have 3 column selectors and 3 boxes where users put the values they wish to update the filtered rows to.
There is an update button that calls the dynamic action (procedure that is written below).
It should be straightforward; the only performance concern could be the DECODE section, because I need to cover the cases where the user wants to set a value to NULL ('@') and where he wants to update fewer than 3 columns (he leaves '').
Hence P99_X_UC1 || ' = decode(' || P99_X_UV1 ||','''','|| P99_X_UC1 ||',''@'',null,'|| P99_X_UV1 ||')
However when I finally click the update button, my browser freezes and nothing happens on the table.
Can anyone help me solve this and improve the speed of the update?
Regards,
Ivan
P.S. The code for the procedure is below:
create or replace
PROCEDURE DWP.PROC_UPD
(P99_X_UC1 in VARCHAR2,
P99_X_UV1 in VARCHAR2,
P99_X_UC2 in VARCHAR2,
P99_X_UV2 in VARCHAR2,
P99_X_UC3 in VARCHAR2,
P99_X_UV3 in VARCHAR2,
P99_X_COL in VARCHAR2,
P99_X_O in VARCHAR2,
P99_X_V in VARCHAR2,
P99_X_COL2 in VARCHAR2,
P99_X_O2 in VARCHAR2,
P99_X_V2 in VARCHAR2,
P99_X_COL3 in VARCHAR2,
P99_X_O3 in VARCHAR2,
P99_X_V3 in VARCHAR2,
P99_X_COL4 in VARCHAR2,
P99_X_O4 in VARCHAR2,
P99_X_V4 in VARCHAR2,
P99_X_COL5 in VARCHAR2,
P99_X_O5 in VARCHAR2,
P99_X_V5 in VARCHAR2,
P99_X_CD in VARCHAR2,
P99_X_VD in VARCHAR2
) IS
l_sql_stmt varchar2(32600);
p_table_name varchar2(30) := 'DWP.IZV_SLOG_DET';
BEGIN
l_sql_stmt := 'update ' || p_table_name || ' set '
|| P99_X_UC1 || ' = decode(' || P99_X_UV1 ||','''','|| P99_X_UC1 ||',''@'',null,'|| P99_X_UV1 ||'),'
|| P99_X_UC2 || ' = decode(' || P99_X_UV2 ||','''','|| P99_X_UC2 ||',''@'',null,'|| P99_X_UV2 ||'),'
|| P99_X_UC3 || ' = decode(' || P99_X_UV3 ||','''','|| P99_X_UC3 ||',''@'',null,'|| P99_X_UV3 ||') where '||
P99_X_COL ||' '|| P99_X_O ||' ' || P99_X_V || ' and ' ||
P99_X_COL2 ||' '|| P99_X_O2 ||' ' || P99_X_V2 || ' and ' ||
P99_X_COL3 ||' '|| P99_X_O3 ||' ' || P99_X_V3 || ' and ' ||
P99_X_COL4 ||' '|| P99_X_O4 ||' ' || P99_X_V4 || ' and ' ||
P99_X_COL5 ||' '|| P99_X_O5 ||' ' || P99_X_V5 || ' and ' ||
P99_X_CD || ' = ' || P99_X_VD ;
--dbms_output.put_line(l_sql_stmt);
EXECUTE IMMEDIATE l_sql_stmt;
END;
Hi Ivan,
I do not think that the decode is performance relevant. Maybe the update hangs because some other transaction has uncommitted changes to one of the affected rows or the where clause is not selective enough and needs to update a huge amount of records.
Besides that - and I might be wrong, because I only know part of your app - the code here looks like it has a huge SQL injection vulnerability. Maybe you should consider re-writing your logic in static SQL. If that is not possible, you should make sure that the user input only contains allowed values, e.g. by white-listing the P99_X_On parameters (i.e. make sure they only contain known values like '=', '<', ...), and by using dbms_assert.enquote_name/enquote_literal on the other P99_X_nnn parameters.
Regards,
Christian
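A sketch of the hardening Christian describes (the function and parameter names here are hypothetical, not part of the original app): whitelist the operator, validate the column name with DBMS_ASSERT, and quote the user-supplied value as a literal before concatenating it into the dynamic statement.

```sql
CREATE OR REPLACE FUNCTION safe_predicate (p_col IN VARCHAR2,
                                           p_op  IN VARCHAR2,
                                           p_val IN VARCHAR2)
  RETURN VARCHAR2
IS
BEGIN
  -- Whitelist the operator: reject anything not in the known set.
  IF p_op NOT IN ('=', '<', '>', '<=', '>=', '<>', 'LIKE') THEN
    RAISE_APPLICATION_ERROR(-20001, 'operator not allowed: ' || p_op);
  END IF;
  -- SIMPLE_SQL_NAME raises an error if p_col is not a plain identifier;
  -- ENQUOTE_LITERAL wraps the value in quotes (inner quotes doubled first).
  RETURN DBMS_ASSERT.SIMPLE_SQL_NAME(p_col) || ' ' || p_op || ' '
         || DBMS_ASSERT.ENQUOTE_LITERAL(REPLACE(p_val, '''', ''''''));
END safe_predicate;
/
```

Each of the five filter predicates in PROC_UPD could then be built by a call such as safe_predicate(P99_X_COL, P99_X_O, P99_X_V) instead of raw concatenation.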