METHOD_OPT parameter
Hi,
which is the best method to set the option METHOD_OPT
execute dbms_stats.gather_schema_stats('SCOTT',CASCADE=>TRUE,ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,METHOD_OPT=>'FOR ALL INDEXED COLUMNS SIZE AUTO')
or
execute dbms_stats.gather_schema_stats('SCOTT',CASCADE=>TRUE,ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,METHOD_OPT=>'FOR ALL COLUMNS SIZE AUTO')
Which is the best one? Mine is a 9.2.0.1 database used for a banking application, with 1500 tables and a database size of 15 GB.
Please share your experience
The best method is the one that is going to make queries run faster ...
I'm not sure it's a good idea to use the same DBMS_STATS options for all objects in the same schema. In the end, this depends on your data and your queries.
The following AskTom thread might help you.
You could also consider upgrading your database, at least to 9.2.0.8 if not to 10g.
Similar Messages
-
Questions in dbms_stats parameter
Hello, I am using oracle9i.
I have a couple of questions after reading this documentation...
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461
I would appreciate if any one could answers my questions.
Question1
=======
Method_opt parameter.
- AUTO: Oracle determines the columns on which to collect histograms based on the data distribution and the workload of the columns.
- SKEWONLY: Oracle determines the columns on which to collect histograms based on the data distribution of the columns.
I understand SKEWONLY. For AUTO, Oracle determines the columns based on data distribution and the workload of the columns... What is the workload of a column? Is it based on how many queries reference that column? How does Oracle track this?
Question2
======
Method_opt Parameter.
SIZE integer
My understanding is that when we specify SIZE 1, only one bucket is created, even though we have more than one distinct value in the column.
Let us say we have 10 distinct values in a column; if we specify SIZE 5, then it creates only 5 buckets. But we have 10 distinct values... How does this work in this scenario?
Let us say we have 5 distinct values in a column; if we specify SIZE 10, then it creates 10 buckets. But we have 5 distinct values... How does this work in this scenario?
There is a table called col_usage$ that stores information about column usage. The dbms_stats package uses this to determine whether or not a histogram will be gathered on a given column. Exactly how it decides from the given data whether or not to gather a histogram, I don't know.
SIZE 1 means that there is no histogram, as all values are in a single bucket.
10 distinct values and only 5 buckets means you get a height-balanced histogram, with the sampled values spread evenly through the buckets.
You will have max and min values. The optimizer uses a formula that estimates selectivity from a range of factors, such as the min and max value in a bucket, the size of the bucket, the number of values per bucket, and how popular a given value is (the number of buckets the value appears in).
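A hedged way to see which kind of histogram was actually created is to query the dictionary (the views are standard; the table name T and column name COL are placeholders):

```sql
-- histogram type recorded per column: NONE, FREQUENCY or HEIGHT BALANCED
SELECT column_name, num_distinct, num_buckets, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T';

-- bucket endpoints; for a frequency histogram the endpoint numbers
-- are cumulative row counts per distinct value
SELECT endpoint_number, endpoint_value
FROM   user_tab_histograms
WHERE  table_name = 'T'
AND    column_name = 'COL'
ORDER  BY endpoint_number;
```

Comparing NUM_DISTINCT with NUM_BUCKETS shows which of the two scenarios above you ended up in.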
5 distinct values and 10 buckets gives you a frequency histogram. You will only have 5 buckets, each with 1 value; there is no point in having an empty bucket. Frequency histograms are more accurate, as the exact number of occurrences of a given value is known. Selectivity is calculated as the number of occurrences of a value over the total number of occurrences of all values. -
Multiple Executions Plans for the same SQL statement
Dear experts,
awrsqrpt.sql is showing multiple execution plans for a single SQL statement. How is it possible that one SQL statement has multiple execution plans within the same AWR report?
Below is the awrsqrpt's output for your reference.
WORKLOAD REPOSITORY SQL Report
Snapshot Period Summary
DB Name DB Id Instance Inst Num Release RAC Host
TESTDB 2157605839 TESTDB1 1 10.2.0.3.0 YES testhost1
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 32541 11-Oct-08 21:00:13 248 141.1
End Snap: 32542 11-Oct-08 21:15:06 245 143.4
Elapsed: 14.88 (mins)
DB Time: 12.18 (mins)
SQL Summary DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
Elapsed
SQL Id Time (ms)
51szt7b736bmg 25,131
Module: SQL*Plus
UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL,0) + NVL(ACCT_DR_BAL,
0)) FROM ACCT WHERE ACCT_TRN_DT = (:B1 ) AND TEST_ACC_NB = ACCT_ACC_NB(+)) WHERE
TEST_BATCH_DT = (:B1 )
SQL ID: 51szt7b736bmg DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> 1st Capture and Last Capture Snap IDs
refer to Snapshot IDs within the snapshot range
-> UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL,0) + NVL(AC...
Plan Hash Total Elapsed 1st Capture Last Capture
# Value Time(ms) Executions Snap ID Snap ID
1 2960830398 25,131 1 32542 32542
2 3834848140 0 0 32542 32542
Plan 1(PHV: 2960830398)
Plan Statistics DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 25,131 25,130.7 3.4
CPU Time (ms) 23,270 23,270.2 3.9
Executions 1 N/A N/A
Buffer Gets 2,626,166 2,626,166.0 14.6
Disk Reads 305 305.0 0.3
Parse Calls 1 1.0 0.0
Rows 371,735 371,735.0 N/A
User I/O Wait Time (ms) 564 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 0 N/A N/A
Invalidations 0 N/A N/A
Version Count 2 N/A N/A
Sharable Mem(KB) 26 N/A N/A
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | | | 1110 (100)| |
| 1 | UPDATE | TEST | | | | |
| 2 | TABLE ACCESS FULL | TEST | 116K| 2740K| 1110 (2)| 00:00:14 |
| 3 | TABLE ACCESS BY INDEX ROWID| ACCT | 1 | 26 | 5 (0)| 00:00:01 |
| 4 | INDEX RANGE SCAN | ACCT_DT_ACC_IDX | 1 | | 4 (0)| 00:00:01 |
Plan 2(PHV: 3834848140)
Plan Statistics DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 0 N/A 0.0
CPU Time (ms) 0 N/A 0.0
Executions 0 N/A N/A
Buffer Gets 0 N/A 0.0
Disk Reads 0 N/A 0.0
Parse Calls 0 N/A 0.0
Rows 0 N/A N/A
User I/O Wait Time (ms) 0 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 0 N/A N/A
Invalidations 0 N/A N/A
Version Count 2 N/A N/A
Sharable Mem(KB) 26 N/A N/A
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | | | 2 (100)| |
| 1 | UPDATE | TEST | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 28 | 2 (0)| 00:00:01 |
| 3 | INDEX RANGE SCAN | TEST_DT_IND | 1 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| ACCT | 1 | 26 | 4 (0)| 00:00:01 |
| 5 | INDEX RANGE SCAN | INDX_ACCT_DT | 1 | | 3 (0)| 00:00:01 |
Full SQL Text
SQL ID SQL Text
51szt7b736bm UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL, 0) +
NVL(ACCT_DR_BAL, 0)) FROM ACCT WHERE ACCT_TRN_DT = (:B1 ) AND PB
RN_ACC_NB = ACCT_ACC_NB(+)) WHERE TEST_BATCH_DT = (:B1 )
Your input is highly appreciated.
Thanks for taking your time in answering my question.
Regards
Oracle Lover3 wrote:
Dear experts,
awrsqrpt.sql is showing multiple execution plans for a single SQL statement. How is it possible that one SQL statement has multiple execution plans within the same AWR report?
If you're using bind variables and you have histograms on your columns, which can be created by default in 10g due to the "SIZE AUTO" default "method_opt" parameter of the DBMS_STATS gather procedures, it is quite normal to get different execution plans for the same SQL statement. Depending on the values passed in when the statement is hard parsed (this feature is called "bind variable peeking" and has been enabled by default since 9i), an execution plan is determined and re-used for all further executions of the same "shared" SQL statement.
If your statement now ages out of the shared pool, or is invalidated by some DDL or statistics-gathering activity, it will be re-parsed, and again the values passed in at that particular moment will determine the execution plan. If you have a skewed data distribution and a histogram in place that reflects that skewness, you might get different execution plans depending on the actual values used.
Since this "flip-flop" behaviour can sometimes be counter-productive (if you're unlucky, the values used to hard parse the statement lead to a plan that is unsuitable for the majority of values used afterwards), 11g introduced "adaptive" cursor sharing, which attempts to detect such a situation and can automatically re-evaluate the execution plan of the statement.
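To inspect all plans captured for the statement, the SQL_ID from the report can be passed to DBMS_XPLAN (available from 10g onwards):

```sql
-- display every plan stored in AWR for this SQL_ID
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('51szt7b736bmg'));

-- and check which plans are currently in the shared pool
SELECT sql_id, child_number, plan_hash_value, executions
FROM   v$sql
WHERE  sql_id = '51szt7b736bmg';
```

Different PLAN_HASH_VALUE rows for the same SQL_ID confirm the behaviour described above.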
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
How to set dbms_stats parameters for a single table
Hi,
I see that dbms_stats has the following procedure:
PROCEDURE SET_PARAM
Argument Name Type In/Out Default?
PNAME VARCHAR2 IN
PVAL VARCHAR2 IN
Is there a way to change the parameters only for a single table?
I need to set METHOD_OPT=>'FOR ALL COLUMNS SIZE 1' only for a specific table...
I'm sorry, mate. It looks like setting individual table preferences was introduced in 11g (and doesn't seem to work all that well).
You can still:
1. Explicitly specify any of the supported parameters by using DBMS_STATS.GATHER_TABLE_STATS() for the individual table and running it separately.
2. Write a PL/SQL wrapper for, say, DBMS_STATS.GATHER_SCHEMA_STATS/GATHER_DATABASE_STATS that gathers the stats for the whole schema but ignores this particular table. Then gather the stats for the table with a METHOD_OPT parameter of your choice, which can differ from the one used for the rest of the schema.
This can be achieved by locking the particular table's stats with DBMS_STATS.LOCK_TABLE_STATS and running GATHER_SCHEMA_STATS with force=>FALSE (which is the default); that setting makes the procedure ignore any tables with locked stats. As the last step of the wrapper, execute DBMS_STATS.GATHER_TABLE_STATS for the table in question with the desired METHOD_OPT and force=>TRUE.
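A minimal sketch of that wrapper (the schema name SCOTT and table name MY_TABLE are placeholders):

```sql
BEGIN
  -- 1. lock stats on the exception table so the schema-wide gather skips it
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'MY_TABLE');

  -- 2. gather the rest of the schema; force defaults to FALSE,
  --    so tables with locked stats are ignored
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');

  -- 3. gather the exception table with its own METHOD_OPT,
  --    overriding the lock with force => TRUE
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SCOTT',
    tabname    => 'MY_TABLE',
    method_opt => 'FOR ALL COLUMNS SIZE 1',
    force      => TRUE);
END;
/
```

Whether the lock is left in place afterwards or released with UNLOCK_TABLE_STATS depends on how the regular gather job is scheduled.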
It's a little more work, but may solve your problem.
Max
Edited by: Max Seleznev on Nov 28, 2012 6:21 PM
Edited by: Max Seleznev on Nov 28, 2012 6:22 PM -
Statistics gathering in 10g - Histograms
I went through some articles in the web as well as in the forum regarding stats gathering which I have posted here.
http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
In the above post author mentions that
"It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
Following one is post from Oracle forums.
Statistics
In the above post Mr Lewis mentions about adding
method_opt => 'for all columns size 1' to the DBMS job
And in the same forum post Mr Richard Foote has mentioned that
"Not only does it change from 'FOR ALL COLUMNS SIZE 1' (no histograms) to 'FOR ALL COLUMNS SIZE AUTO' (histograms for those tables that Oracle deems necessary based on data distribution and whether sql statements reference the columns), but it also generates a job by default to collect these statistics for you.
It all sounds like the ideal scenario, just let Oracle worry about it for you, except for the slight disadvantage that Oracle is not particularly "good" at determining which columns really need histograms and will likely generate many many many histograms unnecessarily while managing to still miss out on generating histograms on some of those columns that do need them."
http://richardfoote.wordpress.com/2008/01/04/dbms_stats-method_opt-default-behaviour-changed-in-10g-be-careful/
Our environment Windows 2003 server Oracle 10.2.0.3 64bit oracle
We use the following script for our analyze job.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'username',
    tabname     => 'TABLE_NAME',
    method_opt  => 'FOR ALL COLUMNS SIZE AUTO',
    granularity => 'ALL',
    cascade     => TRUE,
    degree      => DBMS_STATS.DEFAULT_DEGREE);
END;
This analyze job runs a long time (8 hrs) and we are also facing performance issues in the production environment.
Here are my questions
What is the option I should use for method_opt parameter?
I am sure there are no hard and fast rules for this and each environment is different.
But reading all the above posts kind of made me confused, and I want to be sure we are using the correct options.
I would appreciate any suggestions, insight or further readings regarding the same.
Appreciate your time
Thanks
Niki
Niki wrote:
I went through some articles in the web as well as in the forum regarding stats gathering which I have posted here.
http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
In the above post author mentions that
"It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
This analyze job runs a long time (8 hrs) and we are also facing performance issues in the production environment.
Here are my questions
What is the option I should use for method_opt parameter?
I am sure there are no hard and fast rules for this and each environment is different.
But reading all the above posts kind of made me confused, and I want to be sure we are using the correct options.
As the author of one of the posts cited, let me make some comments. First, I would always recommend starting with the defaults. All too often people "tune" their dbms_stats call only to make it run slower and gather less accurate stats than if they did absolutely nothing and let the default autostats job gather stats in the maintenance window. With your dbms_stats command I would comment that granularity => 'ALL' is rarely needed and certainly adds to the stats collection time. Also, if the data has not changed enough, why recollect stats? This is the advantage of using options => 'GATHER STALE'. You haven't mentioned what kind of application your database is used for: OLTP or data warehouse. If it is OLTP and the application uses bind values, then I would recommend disabling or manually collecting histograms (bind peeking and histograms should not be used together in 10g) using SIZE 1 or SIZE REPEAT. Histograms can be very useful in a DW where skew may be present.
The one non-default option I find myself using is degree => dbms_stats.auto_degree. This allows dbms_stats to choose a DOP for the gather based on the size of the object. It works well if you don't want to specify a fixed degree, or if you would like dbms_stats to use a different DOP than the one the table is decorated with.
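Put together, a sketch of such a call (the schema name APP_OWNER is a placeholder; GATHER STALE relies on table monitoring, which is enabled by default in 10g):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'APP_OWNER',               -- hypothetical schema name
    options => 'GATHER STALE',            -- only objects whose data changed enough
    degree  => DBMS_STATS.AUTO_DEGREE);   -- let dbms_stats pick the DOP per object
END;
/
```

All other parameters are left at their defaults, in line with the advice above.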
Hope this helps.
Regards,
Greg Rahn
http://structureddata.org -
Performance Problem with _DIFF - Views
Hi!
We have tables with several million rows under version control. Before we merge the changes in the child workspace (typically very few changes), we need to show the differences to our users.
So we tried:
exec dbms_wm.setdiffversions('LIVE', 'myworkspace');
and:
Select ID from XXX_Diff ('ID' being the primary-key column).
Unfortunately the select statement takes a very long time:
call count cpu elapsed disk query current rows
Parse 1 16.68 16.66 0 12 0 0
Execute 1 0.01 0.01 0 0 0 0
Fetch 1 74.65 152.71 785781 816676 0 1
total 3 91.35 169.40 785781 816688 0 1
As I can see from the trace, the explain plan is very bad; we get a full table scan on the XXX_LT table.
Our statistics are up to date (dbms_stats.gather_schema_stats).
WM - Version: 10.1.0.5.1
DB - Version: 10.1.0.4.0
Any ideas?
cheers,
Nothi
Hi Nothi,
It is difficult to say without a lot more information. The easiest way to fix performance issues is to file a TAR on the problem, so that all of the required information can be obtained for testing/analysis. A testcase might also be needed.
The only suggestion I have without this information would be, when gathering statistics with dbms_stats, to set the method_opt parameter to 'FOR ALL COLUMNS SIZE x', where x is greater than the number of versions in the system. That number can be obtained with the following:
SQL> select count(*) from all_version_hview ;
Also, be sure that the shared cursors get invalidated after gathering statistics. If this is not done, the old plan will continue to be used. There is a no_invalidate parameter in gather_schema_stats/gather_table_stats for this purpose.
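A hedged sketch combining both suggestions (the owner name WM_OWNER is a placeholder, and the bucket count should exceed the version count returned above; 254 is the maximum):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname       => 'WM_OWNER',                  -- hypothetical schema name
    method_opt    => 'FOR ALL COLUMNS SIZE 100',  -- pick a size > select count(*) from all_version_hview
    no_invalidate => FALSE);                      -- invalidate dependent cursors immediately
END;
/
```

With no_invalidate => FALSE the cursors are invalidated as part of the gather, so the next execution re-parses against the new statistics.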
Regards,
Ben -
The following query has been running for more than 4 hours. Could someone please suggest how to tune it?
SELECT fi_contract_id, a.cust_id, a.product_id, a.currency_cd,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 4), '4992', posted_tran_amt, 0),
2
) ftp_amt,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 4), '4992', posted_base_amt, 0),
2
) ftp_base_amt,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 4),
'4994', posted_tran_amt,
'4995', posted_tran_amt,
0
),
2
) col_amt,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 4),
'4994', posted_base_amt,
'4995', posted_base_amt,
0
),
2
) col_base_amt,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 3), '499', 0, posted_tran_amt),
2
) closing_bal,
a.ACCOUNT, a.deptid, a.business_unit,
CASE
WHEN a.ACCOUNT LIKE '499%'
THEN '990'
ELSE a.operating_unit
END operating_unit,
a.base_currency, NVL (TRIM (pf_system_code), a.SOURCE) pf_system_code,
b.setid, a.channel_id, scb_arm_code, scb_tp_product, scb_tranche_id,
CASE
WHEN pf_system_code = 'CLS'
THEN scb_bncpr_flg
ELSE NULL
END tranche_purpose,
CASE
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 1, 1)
ELSE NULL
END lc_ind,
CASE
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 2, 3)
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) NOT IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 1, 3)
ELSE NULL
END bill_branch_id,
CASE
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 5, 1)
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) NOT IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 4, 1)
ELSE NULL
END section_id,
CASE
WHEN pf_system_code = 'IFS'
THEN SUBSTR (scb_bncpr_flg, 1, 1)
ELSE NULL
END recourse_ind,
CASE
WHEN pf_system_code = 'IFS'
THEN SUBSTR (scb_bncpr_flg, 2, 1)
ELSE NULL
END disclosure_ind,
TO_CHAR (LAST_DAY (upload_date), 'DDMMYYYY')
FROM ps_fi_ildgr_f00 a,
(SELECT c.business_unit, c.fi_instrument_id, c.scb_arm_code,
c.scb_tp_product, c.scb_tranche_id, c.scb_bncpr_flg
FROM ps_fi_iother_r00 c, ps_scb_bus_unit b1
WHERE c.business_unit = b1.business_unit
AND b1.setid = 'PKSTN'
AND c.asof_dt =
(SELECT MAX (c1.asof_dt)
FROM ps_fi_iother_r00 c1
WHERE c.business_unit = c1.business_unit
AND c1.fi_instrument_id = c.fi_instrument_id)) c,
ps_scb_bus_unit b,
(SELECT upload_date - 15 upload_date
FROM stg_ftp_trans_bal_tb
WHERE setid = 'PKSTN' AND ROWNUM < 2),
(SELECT i.business_unit, i.fi_instrument_id, i.pf_system_code,
i.fi_contract_id
FROM ps_fi_instr_f00 i, ps_scb_bus_unit b1
WHERE i.business_unit = b1.business_unit
AND b1.setid = 'PKSTN'
AND (i.asof_dt) =
(SELECT MAX (i1.asof_dt)
FROM ps_fi_instr_f00 i1
WHERE i.business_unit = i1.business_unit
AND i1.fi_instrument_id = i.fi_instrument_id)) d
WHERE a.business_unit = b.business_unit
AND a.business_unit = c.business_unit
AND a.business_unit = d.business_unit
AND a.fi_instrument_id = c.fi_instrument_id(+)
AND a.fi_instrument_id = d.fi_instrument_id(+)
AND fiscal_year = TO_CHAR (upload_date, 'YYYY')
AND a.ACCOUNT != '191801'
AND a.pf_scenario_id LIKE '%M_'
AND accounting_period = TO_CHAR (upload_date, 'MM')
AND b.setid = 'PKSTN'
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 225 | | 14059 (2)| | | | | |
|* 1 | FILTER | | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10006 | 962 | 211K| | 13578 (2)| | | Q1,06 | P->S | QC (RAND) |
|* 4 | HASH JOIN | | 962 | 211K| | 13578 (2)| | | Q1,06 | PCWP | |
| 5 | PX RECEIVE | | 977 | 190K| | 4273 (2)| | | Q1,06 | PCWP | |
| 6 | PX SEND BROADCAST | :TQ10004 | 977 | 190K| | 4273 (2)| | | Q1,04 | P->P | BROADCAST |
PLAN_TABLE_OUTPUT
|* 7 | HASH JOIN | | 977 | 190K| | 4273 (2)| | | Q1,04 | PCWP | |
| 8 | BUFFER SORT | | | | | | | | Q1,04 | PCWC | |
| 9 | PX RECEIVE | | 1 | 10 | | 2 (0)| | | Q1,04 | PCWP | |
| 10 | PX SEND BROADCAST | :TQ10000 | 1 | 10 | | 2 (0)| | | | S->P | BROADCAST |
|* 11 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
| 12 | TABLE ACCESS BY LOCAL INDEX ROWID| PS_FI_INSTR_F00 | 1 | 42 | | 1 (0)| | | Q1,04 | PCWC | |
| 13 | NESTED LOOPS | | 1954 | 362K| | 4271 (2)| | | Q1,04 | PCWP | |
|* 14 | HASH JOIN | | 1954 | 282K| | 3999 (2)| | | Q1,04 | PCWP | |
| 15 | BUFFER SORT | | | | | | | | Q1,04 | PCWC | |
| 16 | PX RECEIVE | | 1 | 10 | | 2 (0)| | | Q1,04 | PCWP | |
| 17 | PX SEND BROADCAST | :TQ10001 | 1 | 10 | | 2 (0)| | | | S->P | BROADCAST |
PLAN_TABLE_OUTPUT
|* 18 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
|* 19 | HASH JOIN | | 3907 | 526K| | 3997 (2)| | | Q1,04 | PCWP | |
| 20 | PX RECEIVE | | 54702 | 4700K| | 616 (1)| | | Q1,04 | PCWP | |
| 21 | PX SEND HASH | :TQ10003 | 54702 | 4700K| | 616 (1)| | | Q1,03 | P->P | HASH |
| 22 | PX BLOCK ITERATOR | | 54702 | 4700K| | 616 (1)| 1 | 6119 | Q1,03 | PCWC | |
|* 23 | TABLE ACCESS FULL | PS_FI_ILDGR_F00 | 54702 | 4700K| | 616 (1)| 1 | 6119 | Q1,03 | PCWP | |
| 24 | BUFFER SORT | | | | | | | | Q1,04 | PCWC | |
| 25 | PX RECEIVE | | 221K| 10M| | 3380 (3)| | | Q1,04 | PCWP | |
| 26 | PX SEND HASH | :TQ10002 | 221K| 10M| | 3380 (3)| | | | S->P | HASH |
| 27 | NESTED LOOPS | | 221K| 10M| | 3380 (3)| | | | | |
| 28 | NESTED LOOPS | | 1 | 16 | | 2351 (2)| | | | | |
PLAN_TABLE_OUTPUT
| 29 | VIEW | | 1 | 6 | | 2349 (2)| | | | | |
|* 30 | COUNT STOPKEY | | | | | | | | | | |
| 31 | PARTITION LIST SINGLE | | 661K| 7755K| | 2349 (2)| KEY | KEY | | | |
| 32 | TABLE ACCESS FULL | STG_FTP_TRANS_BAL_TB | 661K| 7755K| | 2349 (2)| 2 | 2 | | | |
|* 33 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
| 34 | PARTITION LIST ITERATOR | | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
|* 35 | TABLE ACCESS FULL | PS_FI_IOTHER_R00 | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
| 36 | PARTITION LIST ITERATOR | | 1 | | | 1 (0)| KEY | KEY | Q1,04 | PCWP | |
|* 37 | INDEX RANGE SCAN | PS_FI_INSTR_F00 | 1 | | | 1 (0)| KEY | KEY | Q1,04 | PCWP | |
| 38 | VIEW | VW_SQ_1 | 5220K| 124M| | 9296 (1)| | | Q1,06 | PCWP | |
| 39 | SORT GROUP BY | | 5220K| 169M| 479M| 9296 (1)| | | Q1,06 | PCWP | |
PLAN_TABLE_OUTPUT
| 40 | PX RECEIVE | | 5220K| 169M| | 9220 (1)| | | Q1,06 | PCWP | |
| 41 | PX SEND HASH | :TQ10005 | 5220K| 169M| | 9220 (1)| | | Q1,05 | P->P | HASH |
| 42 | PX BLOCK ITERATOR | | 5220K| 169M| | 9220 (1)| 1 | 7 | Q1,05 | PCWC | |
| 43 | TABLE ACCESS FULL | PS_FI_INSTR_F00 | 5220K| 169M| | 9220 (1)| 1 | 7 | Q1,05 | PCWP | |
| 44 | SORT AGGREGATE | | 1 | 20 | | | | | | | |
| 45 | PARTITION LIST SINGLE | | 1 | 20 | | 1 (0)| KEY | KEY | | | |
|* 46 | INDEX RANGE SCAN | PS_FI_IOTHER_R00 | 1 | 20 | | 1 (0)| KEY | KEY | | | |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - filter("C"."ASOF_DT"= (SELECT /*+ */ MAX("C1"."ASOF_DT") FROM "PS_FI_IOTHER_R00" "C1" WHERE "C1"."FI_INSTRUMENT_ID"=:B1 AND
"C1"."BUSINESS_UNIT"=:B2))
4 - access("I"."ASOF_DT"="VW_COL_1" AND "I"."BUSINESS_UNIT"="BUSINESS_UNIT" AND "FI_INSTRUMENT_ID"="I"."FI_INSTRUMENT_ID")
7 - access("I"."BUSINESS_UNIT"="B1"."BUSINESS_UNIT")
11 - filter("B1"."SETID"='PKSTN')
14 - access("A"."BUSINESS_UNIT"="B"."BUSINESS_UNIT")
18 - filter("B"."SETID"='PKSTN')
19 - access("A"."BUSINESS_UNIT"="C"."BUSINESS_UNIT" AND "A"."FI_INSTRUMENT_ID"="C"."FI_INSTRUMENT_ID" AND
"FISCAL_YEAR"=TO_NUMBER(TO_CHAR("UPLOAD_DATE",'YYYY')) AND "ACCOUNTING_PERIOD"=TO_NUMBER(TO_CHAR("UPLOAD_DATE",'MM')))
23 - filter("A"."PF_SCENARIO_ID" LIKE '%M_' AND "A"."ACCOUNT"<>'191801')
PLAN_TABLE_OUTPUT
30 - filter(ROWNUM<2)
33 - filter("B1"."SETID"='PKSTN')
35 - filter("C"."BUSINESS_UNIT"="B1"."BUSINESS_UNIT")
37 - access("A"."BUSINESS_UNIT"="I"."BUSINESS_UNIT" AND "A"."FI_INSTRUMENT_ID"="I"."FI_INSTRUMENT_ID")
46 - access("C1"."BUSINESS_UNIT"=:B1 AND "C1"."FI_INSTRUMENT_ID"=:B2)
Note
- 'PLAN_TABLE' is old version
75 rows selected.
[email protected] wrote:
The following query has been running for more than 4 hours. Could someone please suggest how to tune it?
1. You can try to avoid self-joins or FILTER operations in the C and D inline views if you change the queries below to use analytic functions instead:
(SELECT c.business_unit, c.fi_instrument_id, c.scb_arm_code,
c.scb_tp_product, c.scb_tranche_id, c.scb_bncpr_flg
FROM ps_fi_iother_r00 c, ps_scb_bus_unit b1
WHERE c.business_unit = b1.business_unit
AND b1.setid = 'PKSTN'
AND c.asof_dt =
(SELECT MAX (c1.asof_dt)
FROM ps_fi_iother_r00 c1
WHERE c.business_unit = c1.business_unit
AND c1.fi_instrument_id = c.fi_instrument_id)) c,
(SELECT upload_date - 15 upload_date
FROM stg_ftp_trans_bal_tb
WHERE setid = 'PKSTN' AND ROWNUM < 2),
(SELECT i.business_unit, i.fi_instrument_id, i.pf_system_code,
i.fi_contract_id
FROM ps_fi_instr_f00 i, ps_scb_bus_unit b1
WHERE i.business_unit = b1.business_unit
AND b1.setid = 'PKSTN'
AND (i.asof_dt) =
(SELECT MAX (i1.asof_dt)
FROM ps_fi_instr_f00 i1
WHERE i.business_unit = i1.business_unit
AND i1.fi_instrument_id = i.fi_instrument_id)) d
...
Try to use something like this instead:
(select * from
(SELECT c.business_unit, c.fi_instrument_id, c.scb_arm_code,
c.scb_tp_product, c.scb_tranche_id, c.scb_bncpr_flg,
rank() over (partition by c.business_unit, c.fi_instrument_id order by c.asof_dt desc) rnk
FROM ps_fi_iother_r00 c, ps_scb_bus_unit b1
WHERE c.business_unit = b1.business_unit
AND b1.setid = 'PKSTN')
where rnk = 1) c,
...
2. This piece seems questionable, since it appears to pick the "UPLOAD_DATE" from an arbitrary row where SETID = 'PKSTN'. I assume that the UPLOAD_DATE is then the same for all these rows; otherwise this would potentially return a different UPLOAD_DATE for each execution of the query. Still, it's a questionable approach and looks like de-normalized data.
(SELECT upload_date - 15 upload_date
FROM stg_ftp_trans_bal_tb
WHERE setid = 'PKSTN' AND ROWNUM < 2),
3. Your execution plan contains some parts that are questionable and might lead to inappropriate work performed by the database if the optimizer's estimates are wrong:
a. Are you sure that the filter predicate "SETID"='PKSTN' on PS_SCB_BUS_UNIT returns only a single row? If not, the NESTED LOOPS operation below could scan the PS_FI_IOTHER_R00 table more than once, making this rather inefficient:
| 27 | NESTED LOOPS | | 221K| 10M| | 3380 (3)| | | | | |
| 28 | NESTED LOOPS | | 1 | 16 | | 2351 (2)| | | | | |
| 29 | VIEW | | 1 | 6 | | 2349 (2)| | | | | |
|* 30 | COUNT STOPKEY | | | | | | | | | | |
| 31 | PARTITION LIST SINGLE | | 661K| 7755K| | 2349 (2)| KEY | KEY | | | |
| 32 | TABLE ACCESS FULL | STG_FTP_TRANS_BAL_TB | 661K| 7755K| | 2349 (2)| 2 | 2 | | | |
|* 33 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
| 34 | PARTITION LIST ITERATOR | | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
|* 35 | TABLE ACCESS FULL | PS_FI_IOTHER_R00 | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
b. The optimizer assumes that the join below returns only 3907 rows out of the 54K and 221K source sets. This could be wrong, because the join expression contains multiple function calls and an implicit TO_NUMBER conversion you haven't mentioned in your SQL, which is bad practice in general:
19 - access("A"."BUSINESS_UNIT"="C"."BUSINESS_UNIT" AND "A"."FI_INSTRUMENT_ID"="C"."FI_INSTRUMENT_ID" AND
"FISCAL_YEAR"=TO_NUMBER(TO_CHAR("UPLOAD_DATE",'YYYY')) AND "ACCOUNTING_PERIOD"=TO_NUMBER(TO_CHAR("UPLOAD_DATE",'MM')))The conversion functions might hide from the optimizer that the join returns many more rows than estimated, because the optimizer uses default selectivities or guesses for function expressions. If you can't fix the data model to use appropriate join expressions you could try to create function based indexes on the expressions TO_NUMBER(TO_CHAR("UPLOAD_DATE",'YYYY')) and TO_NUMBER(TO_CHAR("UPLOAD_DATE",'MM')) and gather statistics on the corresponding hidden columns (method_opt parameter of DBMS_STATS.GATHER_TABLE_STATS call set to "FOR ALL HIDDEN COLUMNS"). If you're already on 11g you can achieve the same by using virtual columns.
|* 19 | HASH JOIN | | 3907 | 526K| | 3997 (2)| | | Q1,04 | PCWP | |
| 20 | PX RECEIVE | | 54702 | 4700K| | 616 (1)| | | Q1,04 | PCWP | |
| 21 | PX SEND HASH | :TQ10003 | 54702 | 4700K| | 616 (1)| | | Q1,03 | P->P | HASH |
| 22 | PX BLOCK ITERATOR | | 54702 | 4700K| | 616 (1)| 1 | 6119 | Q1,03 | PCWC | |
|* 23 | TABLE ACCESS FULL | PS_FI_ILDGR_F00 | 54702 | 4700K| | 616 (1)| 1 | 6119 | Q1,03 | PCWP | |
| 24 | BUFFER SORT | | | | | | | | Q1,04 | PCWC | |
| 25 | PX RECEIVE | | 221K| 10M| | 3380 (3)| | | Q1,04 | PCWP | |
| 26 | PX SEND HASH | :TQ10002 | 221K| 10M| | 3380 (3)| | | | S->P | HASH |
| 27 | NESTED LOOPS | | 221K| 10M| | 3380 (3)| | | | | |
| 28 | NESTED LOOPS | | 1 | 16 | | 2351 (2)| | | | | |
| 29 | VIEW | | 1 | 6 | | 2349 (2)| | | | | |
|* 30 | COUNT STOPKEY | | | | | | | | | | |
| 31 | PARTITION LIST SINGLE | | 661K| 7755K| | 2349 (2)| KEY | KEY | | | |
| 32 | TABLE ACCESS FULL | STG_FTP_TRANS_BAL_TB | 661K| 7755K| | 2349 (2)| 2 | 2 | | | |
|* 33 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
| 34 | PARTITION LIST ITERATOR | | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
|* 35 | TABLE ACCESS FULL | PS_FI_IOTHER_R00 | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
c. Due to the small number of rows estimated, mainly caused by b. above, the result of the joins is broadcast to all parallel slaves when performing the final join. This might be quite inefficient if the result is much larger than expected.
| 6 | PX SEND BROADCAST | :TQ10004 | 977 | 190K| | 4273 (2)| | | Q1,04 | P->P | BROADCAST |
Note that this join is no longer necessary / becomes obsolete if you introduce the analytic functions suggested above.
4. Your PLAN_TABLE does not match your Oracle version. If you're already on 10g or later, simply drop all PLAN_TABLEs in non-SYS schemas, since one is already provided as part of the data dictionary. Otherwise re-create them using $ORACLE_HOME/rdbms/admin/utlxplan.sql.
Note
- 'PLAN_TABLE' is old version
If you want to understand where the majority of the time is spent, you need to trace the execution. Note that your statement adds complexity because it uses parallel execution; you'll therefore end up with multiple trace files (one per parallel slave plus the query coordinator process), which makes the analysis not that straightforward.
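A common way to enable that trace for a test session (standard event syntax; level 12 includes both wait events and bind values):

```sql
-- tag the trace files so they are easy to find in user_dump_dest
ALTER SESSION SET tracefile_identifier = 'px_tuning';

-- enable extended SQL trace with waits and binds
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... execute the statement under test here ...

-- switch the trace off again
ALTER SESSION SET events '10046 trace name context off';
```

The resulting trace files (query coordinator plus one per slave) can then be processed with tkprof or read directly.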
Please read this HOW TO: Post a SQL statement tuning request - template posting, which explains how you can enable the statement trace, what you should provide when you have a SQL statement tuning question, and how to format it here so that the posted information is readable by others.
This accompanying blog post shows step-by-step instructions how to obtain that information.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Select after upgrade 9i - 10g runs slowly thousand times - urgent
I have a select which runs in around one second on Oracle 9i. After the upgrade to 10g this select runs for more than two hours. I computed statistics on all tables, all columns and all indexes. The select consists of two parts; each part on its own runs on Oracle 10g in around 0.5 seconds, but together they take more than 2 hours. When I rewrite it using a WITH clause, it runs in 2 seconds. But I don't want to rewrite selects... I want to find out why it runs slowly on 10g.
Below are original and rewrited select with their execution plans.
Any idea or recomendation?
Select:
SELECT * FROM (
SELECT DISTINCT from_fix_ident ident, from_ident_icao icao
, latitude1 latitude, longitude1 longitude, from_fix_fea_pk src
, -1 mslink
FROM l_sky_airway, l_dgn_airway
WHERE l_sky_airway.mslink=l_dgn_airway.mslink
UNION ALL
SELECT DISTINCT to_fix_ident ident, to_ident_icao icao
, latitude2 latitude, longitude2 longitude, to_fix_fea_pk src
, -2 mslink
FROM l_sky_airway, l_dgn_airway
WHERE l_sky_airway.mslink=l_dgn_airway.mslink
UNION ALL
SELECT ident, icao, latitude, longitude, 5 src, mslink FROM l_sky_navaid
UNION ALL
SELECT ident, icao, latitude, longitude, 6 src, mslink FROM l_sky_waypoint)
WHERE ident||';'||icao||';'||src IN (
SELECT ident||';'||icao||';'||src FROM (
SELECT from_fix_ident ident
, from_ident_icao icao
, latitude1 latitude
, longitude1 longitude
, from_fix_fea_pk src
FROM l_sky_airway, l_dgn_airway
WHERE l_sky_airway.mslink = l_dgn_airway.mslink
UNION ALL
SELECT to_fix_ident ident
, to_ident_icao icao
, latitude2 latitude
, longitude2 longitude
, to_fix_fea_pk src
FROM l_sky_airway, l_dgn_airway
WHERE l_sky_airway.mslink = l_dgn_airway.mslink
MINUS
SELECT ident
, icao
, latitude
, longitude
, 5 src
FROM l_sky_navaid
MINUS
SELECT ident
, icao
, latitude
, longitude
, 6 src
FROM l_sky_waypoint
ORDER BY ident, icao, src, mslink, latitude, longitude;
Execution plan:
Plan
SELECT STATEMENT ALL_ROWSCost: 2 003 Bytes: 1 572 402 240 Cardinality: 24 568 785
29 SORT ORDER BY Cost: 2 003 Bytes: 1 572 402 240 Cardinality: 24 568 785
28 FILTER
12 VIEW EUS. Cost: 825 Bytes: 3 522 880 Cardinality: 55 045
11 UNION-ALL
4 HASH UNIQUE Cost: 398 Bytes: 981 948 Cardinality: 22 317
3 HASH JOIN Cost: 142 Bytes: 981 948 Cardinality: 22 317
1 TABLE ACCESS FULL TABLE EUS.L_SKY_AIRWAY Cost: 85 Bytes: 290 121 Cardinality: 22 317
2 TABLE ACCESS FULL TABLE EUS.L_DGN_AIRWAY Cost: 56 Bytes: 691 827 Cardinality: 22 317
8 HASH UNIQUE Cost: 398 Bytes: 981 948 Cardinality: 22 317
7 HASH JOIN Cost: 143 Bytes: 981 948 Cardinality: 22 317
5 TABLE ACCESS FULL TABLE EUS.L_SKY_AIRWAY Cost: 85 Bytes: 290 121 Cardinality: 22 317
6 TABLE ACCESS FULL TABLE EUS.L_DGN_AIRWAY Cost: 56 Bytes: 691 827 Cardinality: 22 317
9 TABLE ACCESS FULL TABLE EUS.L_SKY_NAVAID Cost: 6 Bytes: 57 225 Cardinality: 1 635
10 TABLE ACCESS FULL TABLE EUS.L_SKY_WAYPOINT Cost: 22 Bytes: 324 712 Cardinality: 8 776
27 VIEW EUS. Cost: 325 Bytes: 12 042 Cardinality: 446
26 MINUS
23 MINUS
20 SORT UNIQUE Cost: 325 Bytes: 23 128 Cardinality: 446
19 UNION-ALL
15 HASH JOIN Cost: 145 Bytes: 9 812 Cardinality: 223
13 TABLE ACCESS FULL TABLE EUS.L_SKY_AIRWAY Cost: 89 Bytes: 2 899 Cardinality: 223
14 TABLE ACCESS FULL TABLE EUS.L_DGN_AIRWAY Cost: 56 Bytes: 691 827 Cardinality: 22 317
18 HASH JOIN Cost: 146 Bytes: 9 812 Cardinality: 223
16 TABLE ACCESS FULL TABLE EUS.L_SKY_AIRWAY Cost: 89 Bytes: 2 899 Cardinality: 223
17 TABLE ACCESS FULL TABLE EUS.L_DGN_AIRWAY Cost: 56 Bytes: 691 827 Cardinality: 22 317
22 SORT UNIQUE Bytes: 512 Cardinality: 16
21 TABLE ACCESS FULL TABLE EUS.L_SKY_NAVAID Cost: 6 Bytes: 512 Cardinality: 16
25 SORT UNIQUE Bytes: 2 992 Cardinality: 88
24 TABLE ACCESS FULL TABLE EUS.L_SKY_WAYPOINT Cost: 24 Bytes: 2 992 Cardinality: 88
Rewritten select which runs fast:
WITH inselect AS
(SELECT ident || ';' || icao || ';' || src
FROM (SELECT from_fix_ident ident, from_ident_icao icao,
latitude1 latitude, longitude1 longitude,
from_fix_fea_pk src
FROM l_sky_airway, l_dgn_airway
WHERE l_sky_airway.mslink = l_dgn_airway.mslink
UNION ALL
SELECT to_fix_ident ident, to_ident_icao icao,
latitude2 latitude, longitude2 longitude,
to_fix_fea_pk src
FROM l_sky_airway, l_dgn_airway
WHERE l_sky_airway.mslink = l_dgn_airway.mslink
MINUS
SELECT ident, icao, latitude, longitude, 5 src
FROM l_sky_navaid
MINUS
SELECT ident, icao, latitude, longitude, 6 src
FROM l_sky_waypoint)),
mainselect AS
(SELECT DISTINCT from_fix_ident ident, from_ident_icao icao,
latitude1 latitude, longitude1 longitude,
from_fix_fea_pk src, -1 mslink
FROM l_sky_airway, l_dgn_airway
WHERE l_sky_airway.mslink = l_dgn_airway.mslink
UNION ALL
SELECT DISTINCT to_fix_ident ident, to_ident_icao icao,
latitude2 latitude, longitude2 longitude,
to_fix_fea_pk src, -2 mslink
FROM l_sky_airway, l_dgn_airway
WHERE l_sky_airway.mslink = l_dgn_airway.mslink
UNION ALL
SELECT ident, icao, latitude, longitude, 5 src, mslink
FROM l_sky_navaid
UNION ALL
SELECT ident, icao, latitude, longitude, 6 src, mslink
FROM l_sky_waypoint)
SELECT *
FROM mainselect
WHERE ident || ';' || icao || ';' || src IN (SELECT *
FROM inselect)
ORDER BY ident, icao, src, mslink, latitude, longitude;
Plan
SELECT STATEMENT ALL_ROWSCost: 550 336 Bytes: 2 383 172 145 Cardinality: 24 568 785
31 SORT ORDER BY Cost: 550 336 Bytes: 2 383 172 145 Cardinality: 24 568 785
30 HASH JOIN Cost: 2 647 Bytes: 2 383 172 145 Cardinality: 24 568 785
17 VIEW VIEW SYS.VW_NSO_1 Cost: 1 173 Bytes: 1 472 922 Cardinality: 44 634
16 HASH UNIQUE Cost: 1 173 Bytes: 1 205 118 Cardinality: 44 634
15 VIEW EUS. Cost: 828 Bytes: 1 205 118 Cardinality: 44 634
14 MINUS
11 MINUS
8 SORT UNIQUE Cost: 828 Bytes: 2 314 600 Cardinality: 44 634
7 UNION-ALL
3 HASH JOIN Cost: 142 Bytes: 981 948 Cardinality: 22 317
1 TABLE ACCESS FULL TABLE EUS.L_SKY_AIRWAY Cost: 85 Bytes: 290 121 Cardinality: 22 317
2 TABLE ACCESS FULL TABLE EUS.L_DGN_AIRWAY Cost: 56 Bytes: 691 827 Cardinality: 22 317
6 HASH JOIN Cost: 143 Bytes: 981 948 Cardinality: 22 317
4 TABLE ACCESS FULL TABLE EUS.L_SKY_AIRWAY Cost: 85 Bytes: 290 121 Cardinality: 22 317
5 TABLE ACCESS FULL TABLE EUS.L_DGN_AIRWAY Cost: 56 Bytes: 691 827 Cardinality: 22 317
10 SORT UNIQUE Bytes: 52 320 Cardinality: 1 635
9 TABLE ACCESS FULL TABLE EUS.L_SKY_NAVAID Cost: 6 Bytes: 52 320 Cardinality: 1 635
13 SORT UNIQUE Bytes: 298 384 Cardinality: 8 776
12 TABLE ACCESS FULL TABLE EUS.L_SKY_WAYPOINT Cost: 22 Bytes: 298 384 Cardinality: 8 776
29 VIEW EUS. Cost: 825 Bytes: 3 522 880 Cardinality: 55 045
28 UNION-ALL
21 HASH UNIQUE Cost: 398 Bytes: 981 948 Cardinality: 22 317
20 HASH JOIN Cost: 142 Bytes: 981 948 Cardinality: 22 317
18 TABLE ACCESS FULL TABLE EUS.L_SKY_AIRWAY Cost: 85 Bytes: 290 121 Cardinality: 22 317
19 TABLE ACCESS FULL TABLE EUS.L_DGN_AIRWAY Cost: 56 Bytes: 691 827 Cardinality: 22 317
25 HASH UNIQUE Cost: 398 Bytes: 981 948 Cardinality: 22 317
24 HASH JOIN Cost: 143 Bytes: 981 948 Cardinality: 22 317
22 TABLE ACCESS FULL TABLE EUS.L_SKY_AIRWAY Cost: 85 Bytes: 290 121 Cardinality: 22 317
23 TABLE ACCESS FULL TABLE EUS.L_DGN_AIRWAY Cost: 56 Bytes: 691 827 Cardinality: 22 317
26 TABLE ACCESS FULL TABLE EUS.L_SKY_NAVAID Cost: 6 Bytes: 57 225 Cardinality: 1 635
27 TABLE ACCESS FULL TABLE EUS.L_SKY_WAYPOINT Cost: 22 Bytes: 324 712 Cardinality: 8 776
Reformatted
Message was edited by:
Vlada
Message was edited by:
Vlada
Vlada,
could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the \[code\] and \[/code\] tags to enhance the readability of the output:
In SQL*Plus:
SET LINESIZE 130
EXPLAIN PLAN FOR <your statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
In order to get a better understanding of where your statement spends its time, you might want to turn on SQL trace as described here:
[When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
and post the "tkprof" output here, too.
Could you also provide the information which version of 10g you're currently using? (4-digit version number). Note that 10g introduced a couple of significant changes including CPU costing enabled by default and a different default setting of DBMS_STATS regarding column histograms.
So you might want to re-gather the statistics using the method_opt parameter of the DBMS_STATS.GATHER__STATS procedures explicitly defined as "FOR ALL COLUMNS SIZE 1" to mimic the 9i behaviour.
You also might want to try the "OPTIMIZER_FEATURES_ENABLE" session setting set to "9.2.0" in order to find out if any of the new optimizer features could be causing the issue if re-gathering the statistics as suggested above doesn't make a difference.
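Putting those two suggestions together, a test session could look roughly like this (a sketch only; the EUS schema name is taken from the posted plans, and the exact defaults vary by release):

```sql
-- 1. Re-gather statistics without histograms, mimicking the 9i default
BEGIN
  DBMS_STATS.gather_schema_stats(
    ownname          => 'EUS',                       -- schema from the posted plans
    estimate_percent => DBMS_STATS.auto_sample_size,
    method_opt       => 'FOR ALL COLUMNS SIZE 1',    -- no histograms
    cascade          => TRUE
  );
END;
/

-- 2. Optionally fall back to the 9.2 optimizer behaviour for this session
ALTER SESSION SET optimizer_features_enable = '9.2.0';

-- 3. Re-run the slow statement and compare the execution plans
```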
Do you know if the tables/indexes in your 9i database also had reasonable statistics?
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Edited by: Randolf Geist on Sep 8, 2008 9:32 AM
Added some specific suggestions -
AVG_ROW_LEN not updated, even when I analyze the table with COMPUTE STATISTICS
I executed DBMS_STATS.GATHER_SCHEMA_STATS on my old table, but the statistics do not update the row length.
After that, I gave up on the gather_schema_stats stored procedure and used the SQL statement ANALYZE TABLE ... COMPUTE STATISTICS.
The statistics are still stale.
Is there anything I have missed?
San KTN wrote:
Hi Banka,
I suggest you to perform followings :-
1. Delete the table statistics first.
2. Create another table as select from this target table , in same schema and same tablespace.
3. exec dbms_stats.gather_table_stats with the METHOD_OPT parameter value as
FOR ALL INDEXED COLUMNS SIZE n (n is any number)
or FOR COLUMNS (Column list ... even if all columns in the list)
or NULL
on new and old table respectively.
4. Compare the results.
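For step 4, a comparison along these lines could be used (a sketch; OLD_TABLE and NEW_TABLE are placeholder names for your two tables):

```sql
-- Compare the stored statistics of the old and the new table
SELECT table_name, num_rows, avg_row_len, last_analyzed
FROM   user_tables
WHERE  table_name IN ('OLD_TABLE', 'NEW_TABLE');

SELECT table_name, column_name, avg_col_len
FROM   user_tab_columns
WHERE  table_name IN ('OLD_TABLE', 'NEW_TABLE')
ORDER  BY table_name, column_id;
```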
You had better paste the results here for investigation.
It's strange. The new table does get updated: AVG_ROW_LEN and AVG_COL_LEN both change.
The old table is only partially updated: AVG_COL_LEN is OK, but not AVG_ROW_LEN. -
What value should one use for method_opt while collecting stats for a database/schema/table?
It has been observed that AUTO never gives good results; in some cases "for all indexed columns 1" works fine and in some cases "for all indexed columns 254".
Please advise on the right way to collect stats.
OS : Linux AS 4
DB : 9.2.0.8 , 10.2.0.3
Thanks
Gather AUTO. Gathers all necessary statistics automatically. Oracle implicitly determines which objects need new statistics, and determines how to gather those statistics. When GATHER AUTO is specified, the only additional valid parameters are stattab, statid, objlist and statown; all other parameter settings are ignored. Returns a list of processed objects.
If you specify for all indexed columns, or specify the columns, it will gather histograms. Those are useful if you have an uneven data distribution, and are especially useful for DSS databases, where Oracle will determine, for a given data range, whether a column makes a full table scan or an index scan preferable. If uncertain about it, I suggest you not gather column statistics, as they are costly.
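To check which columns actually received histograms under a given METHOD_OPT setting, you can query the dictionary (a sketch; owner and table name are placeholders, and the HISTOGRAM column only exists from 10g on, so on 9i rely on NUM_BUCKETS > 1):

```sql
-- Columns with NUM_BUCKETS > 1 have histograms
SELECT column_name, num_distinct, num_buckets, histogram
FROM   dba_tab_col_statistics
WHERE  owner      = 'SCOTT'
AND    table_name = 'EMP'
ORDER  BY column_name;
```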
~ Madrid. -
BR0978W Database profile alert - level: WARNING, parameter: STATISTICS_LEVE
Hi,
While doing DBCheck one of my system i found this error
BR0973W Database operation alert - level: WARNING, operation: cdvkvbva.sta, time: 2007-06-07 06.05.02, condition: Last update optimizer stati
BR0976W Database message alert - level: WARNING, line: 13165, time: 2007-06-07 22.53.58, message:
BR0976W Database message alert - level: WARNING, line: 13184, time: 2007-06-07 22.55.23, message:
BR0976W Database message alert - level: WARNING, line: 13203, time: 2007-06-07 22.57.35, message:
BR0976W Database message alert - level: WARNING, line: 13590, time: 2007-06-08 00.09.04, message:
BR0978W Database profile alert - level: WARNING, parameter: STATISTICS_LEVEL, value: TYPICAL (set in parameter file)
How do I resolve this? Please help: what should I check in DB17?
Thanks
Punit
BR0973W Database operation alert - level: WARNING, operation: cdvkvbva.sta, time: 2007-06-07 06.05.02, condition:
After looking into the file, we found this error:
BR0301E SQL error -20000 in thread 4 at location stats_tab_collect-18, SQL statement:
'BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPR3"', TABNAME => '"/BI0/0P00000025"', ESTIMATE_PERCENT => 10, METHOD_OPT => 'FOR ALL COL
ORA-20000: Unable to analyze TABLE "SAPR3"."/BI0/0P00000025", insufficient privileges or does not exist
ORA-06512: at "SYS.DBMS_STATS", line 13149
ORA-06512: at "SYS.DBMS_STATS", line 13179
ORA-06512: at line 1
BR0886E Checking/collecting statistics failed for table SAPR3./BI0/0P00000025
BR0976W Database message alert - level: WARNING, line: 13165, time: 2007-06-07 22.53.58, message:
"checkpoint not complete" error
SAP 79341 - Checkpoint not complete
The note which you have suggested is not suited to my environment. What are the other possibilities for the above error I am facing on one of the servers? Is there any data loss?
Thanks, the 3rd problem is solved.
Punit -
Positional vs. named parameter passing
Is named parameter passing always better than positional when calling a PL/SQL stored procedure? What are the benefits and pitfalls of each method. Are there instances where you prefer one over the other?
Hi Roger,
I personally prefer named notation due to its much enhanced clarity. It greatly helps a subsequent developer by not forcing him/her to look up the spec of the procedure or function each time they see it referenced in the code which calls it (I have a terrible short-term memory - so this is particularly frustrating to me).
Additionally, if some whacko comes by and changes the order (but not the names) of the parameters of the referenced procedure or function, named notation protects you from having to change your code; with positional notation you MUST change your code.
Of course, named notation is not supported in normal SQL which calls a function, only positional is. But for PL/SQL I say use named notation all the way.
I've never tested a comparison of performance, but I would take an educated guess that it is roughly equivalent (within a few microseconds) between the approaches.
One example I think will show you the benefits of named over positional is a call to DBMS_STATS.GATHER_TABLE_STATS. This packaged procedure is used all of the time, but it has a large number of arguments. Changing a call to it can be challenging with positional notation if you don't remember the parameter order.
Here's the named notation call:
BEGIN
DBMS_STATS.gather_table_stats (ownname => 'SCOTT'
, tabname => 'EMP'
, partname => NULL
, estimate_percent => DBMS_STATS.auto_sample_size
, block_sample => FALSE
, method_opt => 'FOR ALL INDEXED COLUMNS SIZE 254'
, DEGREE => NULL
, granularity => 'ALL'
, CASCADE => TRUE
, no_invalidate => FALSE
);
END;
/
Here's the positional notation call:
BEGIN
DBMS_STATS.gather_table_stats ('SCOTT'
, 'EMP'
, NULL
, DBMS_STATS.auto_sample_size
, FALSE
, 'FOR ALL INDEXED COLUMNS SIZE 254'
, NULL
, 'ALL'
, TRUE
, FALSE
);
END;
/
I strongly prefer to support code like the first example over the second.
Hope this helps...
Message was edited by:
PDaddy
Logical AND in MDX Reporting Services Parameter
Hi, I would like to implement logical AND on a cube parameter. I have seen examples of hard-coded logical AND in MDX.
(http://salvoz.com/blog/2013/12/24/mdx-implementing-logical-and-on-members-of-the-same-hierarchy/)
But I'm not sure how to apply this to a parameter's MDX dataset.
Here is an example of the automatically generated MDX which uses logical OR:
This is the drop down parameter:
WITH MEMBER [Measures].[ParameterCaption] AS [Department].[Department].CURRENTMEMBER.MEMBER_CAPTION MEMBER
[Measures].[ParameterValue] AS
[Department].[Department].CURRENTMEMBER.UNIQUENAME MEMBER [Measures].[ParameterLevel] AS
[Department].[Department].CURRENTMEMBER.LEVEL.ORDINAL
SELECT {[Measures].[ParameterCaption], [Measures].[ParameterValue],
[Measures].[ParameterLevel]} ON COLUMNS
, [Department].[Department].ALLMEMBERS ON ROWS
FROM [MyCube]
And the demo report dataset is:
SELECT NON EMPTY { [Measures].[CompanyTbl Count] } ON COLUMNS,
NON EMPTY { ([Product Level No].[Product Level No].[Product Level No].ALLMEMBERS ) }
DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS FROM
( SELECT ( STRTOSET(@DepartmentDepartment, CONSTRAINED) )
ON COLUMNS FROM [MyCube]) WHERE
( IIF( STRTOSET(@DepartmentDepartment, CONSTRAINED).Count = 1,
STRTOSET(@DepartmentDepartment, CONSTRAINED), [Department].[Department].currentmember ) )
CELL PROPERTIES VALUE,
BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING,
FONT_NAME, FONT_SIZE, FONT_FLAGS
Hi,
I can see just one parameter, @DepartmentDepartment, in your script. But if you had two parameters, you could return a result set affected by both parameters. You can do it as a select from nested subselects:
Example 1
SELECT
{x} ON COLUMNS,
{ROWS_SET} ON ROWS
FROM
(SELECT StrToSet(@Param1) ON COLUMNS FROM
(SELECT StrToSet(@Param2) ON COLUMNS FROM
[CUBE_NAME]))
Or as a crossjoin between the 2 parameters:
SELECT
{x} ON COLUMNS,
{ROWS_SET} ON ROWS
FROM
(SELECT StrToSet(@Param1)*StrToSet(@Param2) ON COLUMNS FROM
[CUBE_NAME])
Jiri
Jiri Neoral -
Report RFKLBU10: Missing Parameter for Logical filename in Release ERP 2005
Hello Experts,
Report RFKLBU10 in SAP Release 4.6C has a parameter "Old dataset logical name".
The new version of this report in SAP Release ERP 2005 has no such parameter.
Is this an SAP bug?
Best regards,
Mike
There are no Export or Print events accessible for the viewer
Since it sounds like you are creating the ReportDocument object in your click event, the settings on this object go out of scope on successive postbacks executed by other events.
To get around this without major changes, you can place your "report" object in session in this event and retrieve it from session on successive postbacks. This should solve your problems around navigation, printing and exporting. What you need to do is check whether the session object exists (usually in Page_Load or Page_Init) and, if so, retrieve it from session and bind it to the viewer's report source. If the session object does not exist, do nothing (i.e. you have not yet clicked the button that retrieves the parameter values from session and loads the report). Also, in your click event you can check whether the report session object exists and, if so, remove it so that it can be re-created with your new parameter values (i.e. I'm assuming the only time you want to set parameter values is in this event).
Dan -
Unable to capture the parameter values from a PL/SQL procedure
Hi,
I'm trying to capture the parameter values of a PL/SQL procedure by calling it inside an anonymous block, but I'm getting a "reference to uninitialized collection" error (ORA-06531).
Please help me with this.
I'm using the following block for calling the procedure:
declare
err_cd varchar2(1000);
err_txt VARCHAR2(5000);
no_of_recs number;
out_sign_tab search_sign_tab_type:=search_sign_tab_type(search_sign_type(NULL,NULL,NULL,NULL,NULL));
cntr_var number:=0;
begin
rt843pq('DWS','3000552485',out_sign_tab,no_of_recs,err_cd,err_txt);
dbms_output.put_line('The error is ' ||err_cd);
dbms_output.put_line('The error is ' ||err_txt);
dbms_output.put_line('The cntr is ' ||cntr_var);
for incr in 1 .. OUT_SIGN_TAB.count
loop
cntr_var := cntr_var + 1 ;
Dbms_output.put_line(OUT_SIGN_TAB(incr).ref_no||','||OUT_SIGN_TAB(incr).ciref_no||','||OUT_SIGN_TAB(incr).ac_no||','||OUT_SIGN_TAB(incr).txn_type||','||OUT_SIGN_TAB(incr).objid);
end loop;
end;
The error is thrown on the line "for incr in 1 .. OUT_SIGN_TAB.count".
Following is some related information.
The 3rd parameter of the procedure is an OUT parameter. It is of a PL/SQL table type (SEARCH_SIGN_TAB_TYPE), which is defined in the database as follows:
TYPE "SEARCH_SIGN_TAB_TYPE" IS TABLE OF SEARCH_SIGN_TYPE
TYPE "SEARCH_SIGN_TYPE" AS OBJECT
(ref_no VARCHAR2(22),
ciref_no VARCHAR2(352),
ac_no VARCHAR2(22),
txn_type VARCHAR2(301),
objid VARCHAR2(1024))
We don't have your rt843pq procedure, but when commenting that line out, everything works:
SQL> create TYPE "SEARCH_SIGN_TYPE" AS OBJECT
2 (ref_no VARCHAR2(22),
3 ciref_no VARCHAR2(352),
4 ac_no VARCHAR2(22),
5 txn_type VARCHAR2(301),
6 objid VARCHAR2(1024))
7 /
Type created.
SQL> create type "SEARCH_SIGN_TAB_TYPE" IS TABLE OF SEARCH_SIGN_TYPE
2 /
Type created.
SQL> declare
2 err_cd varchar2(1000);
3 err_txt VARCHAR2(5000);
4 no_of_recs number;
5 out_sign_tab search_sign_tab_type:=search_sign_tab_type(search_sign_type(NULL,NULL,NULL,NULL,NULL));
6 cntr_var number:=0;
7 begin
8 -- rt843pq('DWS','3000552485',out_sign_tab,no_of_recs,err_cd,err_txt);
9 dbms_output.put_line('The error is ' ||err_cd);
10 dbms_output.put_line('The error is ' ||err_txt);
11 dbms_output.put_line('The cntr is ' ||cntr_var);
12 for incr in 1 .. OUT_SIGN_TAB.count
13 loop
14 cntr_var := cntr_var + 1 ;
15 Dbms_output.put_line(OUT_SIGN_TAB(incr).ref_no||','||OUT_SIGN_TAB(incr).ciref_no||','||OUT_SIGN_TAB(incr).ac_no||','||OUT_SIGN_TAB(incr).txn_type||','||OUT_SIGN_TAB(incr).objid);
16 end loop;
17 end;
18 /
The error is
The error is
The cntr is 0
PL/SQL procedure successfully completed.
Regards,
Rob.
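A likely cause of the ORA-06531 here: OUT parameters of a collection type start out NULL inside the called procedure, regardless of what the caller passed in. So if rt843pq returns without initializing the collection (for example, when no rows are found), OUT_SIGN_TAB.COUNT in the caller raises the error. A minimal sketch of a safe pattern (hypothetical procedure and source table names):

```sql
-- Hypothetical sketch: initialize the OUT collection before using it,
-- so the caller never receives an atomically null collection.
CREATE OR REPLACE PROCEDURE fill_signs (out_tab OUT search_sign_tab_type)
IS
BEGIN
  out_tab := search_sign_tab_type();  -- initialize the empty collection first
  FOR r IN (SELECT ref_no, ciref_no, ac_no, txn_type, objid
            FROM   some_source_table)  -- placeholder table
  LOOP
    out_tab.EXTEND;
    out_tab(out_tab.COUNT) :=
      search_sign_type(r.ref_no, r.ciref_no, r.ac_no, r.txn_type, r.objid);
  END LOOP;
END;
/
```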