Optimizer parameters
Hi,
One of our OLTP databases has default settings for the optimizer parameters
OPTIMIZER_INDEX_COST_ADJ, OPTIMIZER_MAX_PERMUTATIONS, and DB_FILE_MULTIBLOCK_READ_COUNT.
It has been in production for some time now.
What will happen if I change these parameters? How much will it cost me, and will my query plans change?
Is it advisable to change them once the database is in production?
Regards
MMU
Hi MMU,
"What will happen if I change these parameters"
What release are you on? It makes a BIG difference!
OICA is a "silver bullet" parm, one whose setting will have a profound impact on performance, both good and bad:
http://www.amazon.com/Oracle-Silver-Bullets-Performance-Focus/dp/0975913522
On 10g and later, ALWAYS attempt to address the root cause of the performance issue (usually with dbms_stats) before resorting to changing OICA. Here are my notes:
http://www.dba-oracle.com/t_global_sql_optimization.htm
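If you do decide to experiment with OICA, one low-risk approach is to trial the change at session scope first, so instance-wide plans are unaffected. A sketch (the trial value of 20 is purely illustrative):

```sql
-- Check the current setting first
SHOW PARAMETER optimizer_index_cost_adj

-- Trial a lower value for this session only (illustrative value)
ALTER SESSION SET optimizer_index_cost_adj = 20;

-- Re-run the problem queries, compare the plans, then revert
ALTER SESSION SET optimizer_index_cost_adj = 100;  -- the default
```

Only after the session-level test confirms a net improvement across your workload would you consider ALTER SYSTEM.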
Similar Messages
-
Optimizer parameters different in 10053 trace
Hello,
The optimizer settings and the ones reported in the 10053 trace do not match. Is this a known issue? The version is printed in the code snippet below.
Here, optimizer_mode is set to ALL_ROWS, but the 10053 trace reports it as first_rows_100. Similarly, optimizer_index_cost_adj is 1, but it is 25 in the trace.
The query does not use hints.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> show parameter opti
NAME TYPE VALUE
filesystemio_options string none
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 100
optimizer_index_cost_adj integer 1
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
SQL>
Contents of the 10053 trace:
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
sort_area_retained_size = 65535
optimizer_mode = first_rows_100
optimizer_index_cost_adj = 25
optimizer_index_caching = 100
*********************************
I can see the same values used here:
Content of other_xml column
===========================
db_version : 10.2.0.3
parse_schema : COT_PLUS
plan_hash : 733167152
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
OPT_PARAM('optimizer_index_cost_adj' 25)
OPT_PARAM('optimizer_index_caching' 100)
FIRST_ROWS(100)
OUTLINE_LEAF(@"SEL$5DA710D3")
UNNEST(@"SEL$2")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
FULL(@"SEL$5DA710D3" "CDW"@"SEL$1")
INDEX_RS_ASC(@"SEL$5DA710D3" "O"@"SEL$2" ("ORDERS"."STATUS_ID"))
LEADING(@"SEL$5DA710D3" "CDW"@"SEL$1" "O"@"SEL$2")
USE_NL(@"SEL$5DA710D3" "O"@"SEL$2")
END_OUTLINE_DATA
*/
Rgds,
Gokul
Edited by: Gokul Gopal on 13-Jun-2012 03:14
Gokul,
Please report the output of the following, which checks the V$SES_OPTIMIZER_ENV view for the current session:
SELECT
NAME,
VALUE,
ISDEFAULT
FROM
V$SES_OPTIMIZER_ENV
WHERE
SID=(SELECT SID FROM V$MYSTAT WHERE ROWNUM=1)
AND NAME IN ('optimizer_mode','optimizer_index_cost_adj','optimizer_index_caching')
ORDER BY
NAME;
In the same session, execute the following (your SQL statement with 1=1 added in the WHERE clause to force a hard parse):
ALTER SESSION SET TRACEFILE_IDENTIFIER='OPTIMIZER_TEST';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
select * from A where 1=1 AND col1 = (select to_char(col1) from B where status in (16,12,22));
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
Take a look in the generated 10053 trace file. Are the values for optimizer_mode, optimizer_index_cost_adj, and optimizer_index_caching in the OPTIMIZER_TEST 10053 trace file the same as those returned by the above SELECT from V$SES_OPTIMIZER_ENV?
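One common cause of this kind of mismatch is a session-level override (for example, set by a logon trigger). A quick way to spot that is to compare the session optimizer environment against the instance-wide one; a sketch, assuming access to both V$ views:

```sql
-- Sketch: list optimizer environment settings that differ between this
-- session and the instance defaults (a difference suggests a session-level
-- override, e.g. from a logon trigger)
SELECT s.name, s.value AS session_value, i.value AS instance_value
FROM   v$ses_optimizer_env s,
       v$sys_optimizer_env i
WHERE  s.name  = i.name
AND    s.sid   = (SELECT sid FROM v$mystat WHERE ROWNUM = 1)
AND    s.value <> i.value;
```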
Charles Hooper
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
FIRST ROWS OPTIMIZER HINT IN ORACLE
Hello All,
Version : Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
Can we use the FIRST_ROWS hint to get quicker response times in application programming?
The Java result set waits for the query to complete before returning the full result set.
It may seem odd to raise this in a DB forum, but I guess most applications use this pattern, and most of you will have come across this issue on the application front.
Thanks
Vijay G
Exactly the same query should, more or less, perform exactly the same (except when statistics and optimizer parameters vary!), after adjusting for client/network latency overheads.
Are you sure that your Java program is actually submitting exactly the same SELECT statement to the Oracle server as your sqlplus or TOAD session is? (By the way, TOAD shows only the first 'n' rows.)
Another question: why do you have a PARALLEL hint there? That would make sense for a full table scan -- but your requirement should NOT be for a full table scan.
Your ORDER BY clause will not guarantee that the 21st row -- i.e. the first row of the third page -- is different from the 20th row -- the last row of the second page.
See Tom Kyte's notes again.
http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html
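The top-n pagination pattern from that article can be sketched as follows (table and column names here are illustrative, not from the original question). Note that the ORDER BY ends with a unique key so the page boundaries are deterministic:

```sql
-- Sketch: fetch page 2 (rows 21..40) of a sorted result set.
-- Ending the ORDER BY with a unique key (id) makes row 20 vs. row 21
-- stable across executions, addressing the concern above.
SELECT *
FROM  (SELECT a.*, ROWNUM rnum
       FROM  (SELECT /*+ FIRST_ROWS(20) */ id, col
              FROM   t
              ORDER BY col, id) a
       WHERE ROWNUM <= 40)   -- upper bound of the page
WHERE rnum > 20;             -- lower bound of the page
```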
Hemant K Chitale
http://hemantoracledba.blogspot.com -
Question re optimize/cleanup/compress in Standard
Hello,
I have Acrobat 9 pro on one computer, and Acrobat 9 standard on another. When scanning, some combination of settings (I can't figure out why) is causing automatic compression. In pro, I just go into cleanup and make sure the entire document is not being compressed. I am trying to find a similar setting in Standard. Any ideas?
thanks
Hello,
When scanning to PDF using Acrobat Pro there is a configuration dialog that can be edited by the user.
I do not know if Acrobat Standard has this.
With Acrobat Pro -
File > Create PDF > From Scanner > Configure Presets
In the Configure Presets dialog you can edit the value for a number of parameters.
Select scanner, preset for documents (B/W | Grayscale | Color) or color images, input parameters (color mode, resolution, etc.), run OCR (and select one of the three available OCR modes) as well as configure for upfront optimization of the image.
Optimization parameters such as compression and filtering can be set.
If you do not edit the "Configure Presets" to something you want then the defaults are used.
Another place for user configuration is in the Preferences.
In the Category "Convert to PDF", select TIFF and then click the Edit button.
A dialog appears. In this dialog you can provide values for some parameters.
Again, this is with Professional.
In Standard, look to see what is available from the File menu and in the Preferences.
By providing desired upfront configuration you can significantly reduce post-processing activities.
A good, recent on demand eSeminar on Scanning and OCR that discusses the above is at AUC.
http://www.acrobatusers.com/learning_center/eseminars/on_demand
Select David Mankin's "Scanning and OCR".
Be well... -
Compress/Optimize Pdf using Adobe LiveCycle Pdf Generator in my c# console application
Hello,
I am creating a console application in C# that will optimize/compress PDFs containing high-definition images. I have PDFs over 500 MB in size that I need to compress. I have seen the option to save as an optimized PDF in Adobe Acrobat Professional, but I want to do that programmatically in my C# application.
Now I have asked this question in Acrobat SDK Forums as well and there they have told me to check Adobe LiveCycle Software Here is the link for that : http://forums.adobe.com/message/5468065.
Now, if I want to go with Adobe LiveCycle, my question is:
Can I call or use the Adobe LiveCycle PDF optimizer libraries from my C# application?
Let me know what I can do to integrate this into my C# application.
The API call is here.
http://help.adobe.com/en_US/livecycle/9.0/programLC/javadoc/index.html
optimizePDF
public OptimizePDFResult optimizePDF(Document inputDocument, String fileTypeSettings, Document inSettingsDoc) throws ConversionException, InvalidParameterException, FileFormatNotSupportedException
Optimizes the input PDF document by reducing its size. This method also converts the PDF document to the PDF version specified in the optimization parameters. This method supports the same optimization settings as Adobe Acrobat.
Parameters:
inDoc - This mandatory parameter contains an instance of the Document whose content is to be optimized. For this object, it is recommended that you either use the appropriate Document constructor if you have access to the file, or that you use the com.adobe.idp.Document.setAttribute() method if you only have access to the stream.
fileTypeSettings - Name of a file type settings instance that is defined on the LiveCycle server. The LiveCycle Administration Console window for Generate PDF lets you view the currently defined file type settings. It also lets you create custom file type settings. If the inSettingsDoc parameter specifies a non-NULL value, this parameter is ignored. If this parameter and the inSettingsDoc parameter are both null, this method uses the default file type settings instance that is defined on the LiveCycle server.
inSettingsDoc - file2pdf-settings XML file that contains file type settings to use for optimization, including the file type settings used by this method. For information about this file, see the description for the createPDF2() method.
Returns:
An object that exposes a method for getting the optimized PDF document.
Throws:
ConversionException
InvalidParameterException
FileFormatNotSupportedException
You should look here at some of the web service API calls for .NET, to see how to follow the framework of connecting via SOAP from .NET:
http://help.adobe.com/en_US/livecycle/10.0/ProgramLC/WS624e3cba99b79e12e69a9941333732bac8-7749.html -
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance has anyone encountered something similar?
The problem may be related to the behavior of an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call; the behavior differs between a fresh 10g database and the database that was upgraded from 10g to 11g. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA;
When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately, granting select privilege to a role on the table which was just created:
{code}
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
l_str := 'execute immediate "grant select on ' ||
ora_dict_obj_name ||
' to select_role";';
dbms_job.submit( l_job, replace(l_str,'"','''') );
end if;
end;
{code}
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending SYS-generated SQL, which is not issued when the same test is run on a 10g environment set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
Edited by: user8598842 on Mar 11, 2010 5:08 PM
So while this certainly isn't the most elegant of solutions, and most assuredly isn't in the realm of supported by Oracle...
I've used the DBMS_IJOB.DROP_USER_JOBS('username') procedure to remove the 194558 orphaned job entries from the JOB$ table. Don't ask, I've no clue how they all got there; but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now-wasted ~67MB of space, I've opted to create a new index on the JOB$ table to sidestep the full table scan:
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
The next option would be to try to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
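Returning to the JOB$ cleanup: the number of rows that the scheduler's recursive query has to scan can be checked before and after with a sketch like this (run as SYS; the predicate mirrors the one in the traced SQL above):

```sql
-- Sketch: count the job$ rows matched by the recursive scheduler query,
-- i.e. the population driving the 9000+ consistent gets seen in the TKPROF
SELECT COUNT(*)
FROM   sys.job$ j
WHERE  (j.field1 IS NULL OR j.field1 = 0)
AND    j.this_date IS NULL;
```

After the DBMS_IJOB.DROP_USER_JOBS cleanup, this count should drop to roughly the number of legitimately scheduled jobs.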
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Hi,
I have a small 'orcl' database on my local machine, and I did not perform heavy activity on it. Today and yesterday I performed just some simple queries, like:
SELECT COUNT(*)
FROM products p, (SELECT prod_id, AVG(unit_cost) ac FROM costs GROUP BY prod_id) c
WHERE p.prod_id = c.prod_id AND
p.prod_list_price < 1.15 * c.ac;
or
select * from products;
from the 'sh' schema. Today I ran an ADDM report, and this is the result:
ADDM Report for Task 'TASK_557'
Analysis Period
AWR snapshot range from 490 to 494.
Time period starts at 17-JUL-13 11.00.34 PM
Time period ends at 18-JUL-13 05.31.00 PM
Analysis Target
Database 'ORCL' with DB ID 1346555844.
Database version 11.2.0.3.0.
ADDM performed an analysis of instance orcl, numbered 1 and hosted at ROGER.
Activity During the Analysis Period
Total database time was 499 seconds.
The average number of active sessions was .01.
Summary of Findings
Description                              Active Sessions       Recommendations
                                         Percent of Activity
1  I/O Throughput                        .01 | 100             2
2  Hard Parse                            0 | 29.47             0
3  Hard Parse Due to Sharing Criteria    0 | 8.89              1
4  Row Lock Waits                        0 | 7.37              0
5  PL/SQL Compilation                    0 | 4.04              1
6  Unusual "User I/O" Wait Event         0 | 4.02              1
7  Commits and Rollbacks                 0 | 3.08              1
8  Shared Pool Latches                   0 | 2.78              0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
Finding 1: I/O Throughput
Impact is .01 active sessions, 100% of total activity.
The throughput of the I/O subsystem was significantly lower than expected.
Recommendation 1: Host Configuration
Estimated benefit is .01 active sessions, 100% of total activity.
Action
Consider increasing the throughput of the I/O subsystem. Oracle's
recommended solution is to stripe all data files using the SAME
methodology. You might also need to increase the number of disks for
better performance.
Rationale
During the analysis period, the average data files' I/O throughput was
1.4 K per second for reads and 1 K per second for writes. The average
response time for single block reads was 18 milliseconds.
Recommendation 2: Host Configuration
Estimated benefit is 0 active sessions, 17.55% of total activity.
Action
The performance of some data and temp files was significantly worse than
others. If striping all files using the SAME methodology is not
possible, consider striping these file over multiple disks.
Rationale
For file D:\ORACLE\APP\ORADATA\ORCL\SYSTEM01.DBF, the average response
time for single block reads was 168 milliseconds, and the total excess
I/O wait was 70 seconds.
Related Object
Database file
"D:\ORACLE\APP\ORADATA\ORCL\SYSTEM01.DBF"
Rationale
For file D:\ORACLE\APP\ORADATA\ORCL\SYSAUX01.DBF, the average response
time for single block reads was 16 milliseconds, and the total excess
I/O wait was 16 seconds.
Related Object
Database file
"D:\ORACLE\APP\ORADATA\ORCL\SYSAUX01.DBF"
Symptoms That Led to the Finding:
Wait class "User I/O" was consuming significant database time.
Impact is 0 active sessions, 30.87% of total activity.
Finding 2: Hard Parse
Impact is 0 active sessions, 29.47% of total activity.
Hard parsing of SQL statements was consuming significant database time.
Hard parsing SQL statements that encountered parse errors was not consuming
significant database time.
Hard parses due to literal usage and cursor invalidation were not consuming
significant database time.
The Oracle instance memory (SGA and PGA) was adequately sized.
No recommendations are available.
Finding 3: Hard Parse Due to Sharing Criteria
Impact is 0 active sessions, 8.89% of total activity.
SQL statements with the same text were not shared because of cursor
environment mismatch. This resulted in additional hard parses which were
consuming significant database time.
Common causes of environment mismatch are session NLS settings, SQL trace
settings and optimizer parameters.
Recommendation 1: Application Analysis
Estimated benefit is 0 active sessions, 8.89% of total activity.
Action
Look for top reason for cursor environment mismatch in
V$SQL_SHARED_CURSOR.
Symptoms That Led to the Finding:
Hard parsing of SQL statements was consuming significant database time.
Impact is 0 active sessions, 29.47% of total activity.
Finding 4: Row Lock Waits
Impact is 0 active sessions, 7.37% of total activity.
SQL statements were found waiting for row lock waits.
No recommendations are available.
Symptoms That Led to the Finding:
Wait class "Application" was consuming significant database time.
Impact is 0 active sessions, 7.78% of total activity.
Finding 5: PL/SQL Compilation
Impact is 0 active sessions, 4.04% of total activity.
PL/SQL compilation consumed significant database time.
Recommendation 1: Application Analysis
Estimated benefit is 0 active sessions, 4.04% of total activity.
Action
Investigate the appropriateness of PL/SQL compilation. PL/SQL
compilation can be caused by DDL on dependent objects.
Finding 6: Unusual "User I/O" Wait Event
Impact is 0 active sessions, 4.02% of total activity.
Wait event "Disk file operations I/O" in wait class "User I/O" was consuming
significant database time.
Recommendation 1: Application Analysis
Estimated benefit is 0 active sessions, 4.02% of total activity.
Action
Investigate the cause for high "Disk file operations I/O" waits. Refer
to Oracle's "Database Reference" for the description of this wait event.
Symptoms That Led to the Finding:
Wait class "User I/O" was consuming significant database time.
Impact is 0 active sessions, 30.87% of total activity.
Finding 7: Commits and Rollbacks
Impact is 0 active sessions, 3.08% of total activity.
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.
Recommendation 1: Host Configuration
Estimated benefit is 0 active sessions, 3.08% of total activity.
Action
Investigate the possibility of improving the performance of I/O to the
online redo log files.
Rationale
The average size of writes to the online redo log files was 21 K and the
average time per write was 7 milliseconds.
Rationale
The total I/O throughput on redo log files was 0 K per second for reads
and 0.7 K per second for writes.
Rationale
The redo log I/O throughput was divided as follows: 0% by RMAN and
recovery, 100% by Log Writer, 0% by Archiver, 0% by Streams AQ and 0% by
all other activity.
Symptoms That Led to the Finding:
Wait class "Commit" was consuming significant database time.
Impact is 0 active sessions, 3.08% of total activity.
Finding 8: Shared Pool Latches
Impact is 0 active sessions, 2.78% of total activity.
Contention for latches related to the shared pool was consuming significant
database time.
Waits for "library cache load lock" amounted to 1% of database time.
Waits for "latch: shared pool" amounted to 1% of database time.
No recommendations are available.
Symptoms That Led to the Finding:
Wait class "Concurrency" was consuming significant database time.
Impact is 0 active sessions, 3.12% of total activity.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional Information
Miscellaneous Information
Wait class "Configuration" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Wait class "Network" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
The database's maintenance windows were active during 94% of the analysis
period.
This being a small local database, my question is: do I really need to do anything? For example, for Finding 5: PL/SQL Compilation, what do I really need to do? Or for Finding 1: I/O Throughput?
Thanks.
Hi,
In ADDM, the main things to consider are the Recommendation and Estimated benefit figures.
You have generated the report for a ~18 hr time interval.
For a test/local database you can ignore this. -
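The Recommendation and Estimated benefit figures come from the advisor repository, which you can also query directly instead of re-reading the text report. A hedged sketch (the task name is a placeholder; substitute your own ADDM task name, and note that the exact columns vary slightly by release):

```sql
-- List ADDM findings for one task, largest estimated impact first.
-- 'TASK_240' is a placeholder task name.
SELECT type, impact, message
  FROM dba_advisor_findings
 WHERE task_name = 'TASK_240'
 ORDER BY impact DESC NULLS LAST;
```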
Slow down Database after upgrading to 10.2.0.5
Hi
I am having performance problems after upgrading to 10.2.0.5.
At first I thought the problem was that the SGA was too small (initially 598M, now 1408M), but even after recreating the database with the new value the problem remains.
I am sending the reports so that someone can give me an idea.
Thanks in advance!
DETAILED ADDM REPORT FOR TASK 'TASK_240' WITH ID 240
Analysis Period: 22-JUN-2011 from 08:34:06 to 16:00:13
Database ID/Instance: 2462860799/1
Database/Instance Names: DXT/DXT
Host Name: thoracle
Database Version: 10.2.0.5.0
Snapshot Range: from 71 to 78
Database Time: 6726 seconds
Average Database Load: .3 active sessions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FINDING 1: 38% impact (2540 seconds)
SQL statements consuming significant database time were found.
RECOMMENDATION 1: SQL Tuning, 26% benefit (1763 seconds)
ACTION: Investigate the SQL statement with SQL_ID "30rku9qg2y30j" for
possible performance improvements.
RELEVANT OBJECT: SQL statement with SQL_ID 30rku9qg2y30j and
PLAN_HASH 2734400036
select a.owner, a.object_name, INSTR(a.object_type, :"SYS_B_00"),
:"SYS_B_01" from sys.all_objects a where a.object_type IN
(:"SYS_B_02",:"SYS_B_03") and a.status = :"SYS_B_04" and a.owner
like:"SYS_B_05"escape:"SYS_B_06" and a.object_name
like:"SYS_B_07"escape:"SYS_B_08" union all select c.owner,
c.synonym_name, INSTR(a.object_type, :"SYS_B_09"), :"SYS_B_10" from
sys.all_objects a, sys.all_synonyms c where c.table_owner = a.owner
and c.table_name = a.object_name and a.object_type IN
(:"SYS_B_11",:"SYS_B_12") and a.status = :"SYS_B_13" and c.owner
like:"SYS_B_14"escape:"SYS_B_15" and c.synonym_name
like:"SYS_B_16"escape:"SYS_B_17" union all select distinct b.owner,
CONCAT(b.package_name, :"SYS_B_18" || b.object_name),
min(b.position), max(b.overload) from sys.all_arguments b where
b.package_name IS NOT NULL and b.owner
like:"SYS_B_19"escape:"SYS_B_20" and b.package_name
like:"SYS_B_21"escape:"SYS_B_22" group by b.owner,
CONCAT(b.package_name, :"SYS_B_23" || b.object_name) union all select
distinct c.owner, CONCAT(c.synonym_name, :"SYS_B_24" ||
b.object_name), min(b.position), max(b.overload) from
sys.all_arguments b, sys.all_synonyms c where c.table_owner = b.owner
and c.table_name = b.package_name and b.package_name IS NOT NULL and
c.owner like:"SYS_B_25"escape:"SYS_B_26" and c.synonym_name
like:"SYS_B_27"escape:"SYS_B_28" group by c.owner,
CONCAT(c.synonym_name, :"SYS_B_29" || b.object_name) union all select
distinct c.owner, c.synonym_name, min(b.position), max(b.overload)
from sys.all_arguments b, sys.all_synonyms c where c.owner = b.owner
and c.table_owner=b.package_name and c.table_name=b.object_name and
c.owner like:"SYS_B_30"escape:"SYS_B_31" and c.synonym_name
like:"SYS_B_32"escape:"SYS_B_33" group by c.owner, c.synonym_name
RATIONALE: SQL statement with SQL_ID "30rku9qg2y30j" was executed 12270
times and had an average elapsed time of 0.036 seconds.
RATIONALE: Waiting for event "cursor: pin S wait on X" in wait class
"Concurrency" accounted for 7% of the database time spent in
processing the SQL statement with SQL_ID "30rku9qg2y30j".
RECOMMENDATION 2: SQL Tuning, 23% benefit (1550 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"7yv1ba0c8y86t".
RELEVANT OBJECT: SQL statement with SQL_ID 7yv1ba0c8y86t and
PLAN_HASH 2684283631
Select WSTJ_.ROWID, WSTJ_.*, WMVD_.*
From THPR.STOJOU WSTJ_, THPR.SMVTD WMVD_ Where ((WMVD_.VCRTYP_0(+) =
WSTJ_.VCRTYP_0) AND (WMVD_.VCRNUM_0(+) = WSTJ_.VCRNUM_0) AND
(WMVD_.VCRLIN_0(+) = WSTJ_.VCRLIN_0))
And WMVD_.CCE2_0 = :1 And WSTJ_.IPTDAT_0 <= :2 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_0",:"SYS_B_1") <> :3 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_2",:"SYS_B_3") <> :4 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_4",:"SYS_B_5") <> :5 And
((WSTJ_.TRSFAM_0 = :6) Or (WSTJ_.TRSFAM_0 = :7))
Order by WSTJ_.STOFCY_0,WSTJ_.UPDCOD_0,WSTJ_.ITMREF_0,WSTJ_.IPTDAT_0
Desc,WSTJ_.MVTSEQ_0,WSTJ_.MVTIND_0
RATIONALE: SQL statement with SQL_ID "7yv1ba0c8y86t" was executed 47
times and had an average elapsed time of 32 seconds.
RECOMMENDATION 3: SQL Tuning, 14% benefit (926 seconds)
ACTION: Use bigger fetch arrays while fetching results from the SELECT
statement with SQL_ID "7yv1ba0c8y86t".
RELEVANT OBJECT: SQL statement with SQL_ID 7yv1ba0c8y86t and
PLAN_HASH 2684283631
Select WSTJ_.ROWID, WSTJ_.*, WMVD_.*
From THPR.STOJOU WSTJ_, THPR.SMVTD WMVD_ Where ((WMVD_.VCRTYP_0(+) =
WSTJ_.VCRTYP_0) AND (WMVD_.VCRNUM_0(+) = WSTJ_.VCRNUM_0) AND
(WMVD_.VCRLIN_0(+) = WSTJ_.VCRLIN_0))
And WMVD_.CCE2_0 = :1 And WSTJ_.IPTDAT_0 <= :2 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_0",:"SYS_B_1") <> :3 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_2",:"SYS_B_3") <> :4 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_4",:"SYS_B_5") <> :5 And
((WSTJ_.TRSFAM_0 = :6) Or (WSTJ_.TRSFAM_0 = :7))
Order by WSTJ_.STOFCY_0,WSTJ_.UPDCOD_0,WSTJ_.ITMREF_0,WSTJ_.IPTDAT_0
Desc,WSTJ_.MVTSEQ_0,WSTJ_.MVTIND_0
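Recommendation 3's "bigger fetch arrays" usually means raising the client-side fetch array size so each round trip returns more rows. A hedged SQL*Plus sketch (the right value depends on row size and client memory; other clients expose the same knob under names like prefetch or fetch size):

```sql
-- SQL*Plus fetches 15 rows per round trip by default;
-- a larger ARRAYSIZE cuts the number of fetch calls for big result sets.
SET ARRAYSIZE 500
```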
FINDING 2: 37% impact (2508 seconds)
Time spent on the CPU by the instance was responsible for a substantial part
of database time.
RECOMMENDATION 1: SQL Tuning, 26% benefit (1763 seconds)
ACTION: Investigate the SQL statement with SQL_ID "30rku9qg2y30j" for
possible performance improvements.
RELEVANT OBJECT: SQL statement with SQL_ID 30rku9qg2y30j and
PLAN_HASH 2734400036
select a.owner, a.object_name, INSTR(a.object_type, :"SYS_B_00"),
:"SYS_B_01" from sys.all_objects a where a.object_type IN
(:"SYS_B_02",:"SYS_B_03") and a.status = :"SYS_B_04" and a.owner
like:"SYS_B_05"escape:"SYS_B_06" and a.object_name
like:"SYS_B_07"escape:"SYS_B_08" union all select c.owner,
c.synonym_name, INSTR(a.object_type, :"SYS_B_09"), :"SYS_B_10" from
sys.all_objects a, sys.all_synonyms c where c.table_owner = a.owner
and c.table_name = a.object_name and a.object_type IN
(:"SYS_B_11",:"SYS_B_12") and a.status = :"SYS_B_13" and c.owner
like:"SYS_B_14"escape:"SYS_B_15" and c.synonym_name
like:"SYS_B_16"escape:"SYS_B_17" union all select distinct b.owner,
CONCAT(b.package_name, :"SYS_B_18" || b.object_name),
min(b.position), max(b.overload) from sys.all_arguments b where
b.package_name IS NOT NULL and b.owner
like:"SYS_B_19"escape:"SYS_B_20" and b.package_name
like:"SYS_B_21"escape:"SYS_B_22" group by b.owner,
CONCAT(b.package_name, :"SYS_B_23" || b.object_name) union all select
distinct c.owner, CONCAT(c.synonym_name, :"SYS_B_24" ||
b.object_name), min(b.position), max(b.overload) from
sys.all_arguments b, sys.all_synonyms c where c.table_owner = b.owner
and c.table_name = b.package_name and b.package_name IS NOT NULL and
c.owner like:"SYS_B_25"escape:"SYS_B_26" and c.synonym_name
like:"SYS_B_27"escape:"SYS_B_28" group by c.owner,
CONCAT(c.synonym_name, :"SYS_B_29" || b.object_name) union all select
distinct c.owner, c.synonym_name, min(b.position), max(b.overload)
from sys.all_arguments b, sys.all_synonyms c where c.owner = b.owner
and c.table_owner=b.package_name and c.table_name=b.object_name and
c.owner like:"SYS_B_30"escape:"SYS_B_31" and c.synonym_name
like:"SYS_B_32"escape:"SYS_B_33" group by c.owner, c.synonym_name
RATIONALE: SQL statement with SQL_ID "30rku9qg2y30j" was executed 12270
times and had an average elapsed time of 0.036 seconds.
RATIONALE: Waiting for event "cursor: pin S wait on X" in wait class
"Concurrency" accounted for 7% of the database time spent in
processing the SQL statement with SQL_ID "30rku9qg2y30j".
RATIONALE: Average CPU used per execution was 0.036 seconds.
RECOMMENDATION 2: SQL Tuning, 23% benefit (1550 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"7yv1ba0c8y86t".
RELEVANT OBJECT: SQL statement with SQL_ID 7yv1ba0c8y86t and
PLAN_HASH 2684283631
Select WSTJ_.ROWID, WSTJ_.*, WMVD_.*
From THPR.STOJOU WSTJ_, THPR.SMVTD WMVD_ Where ((WMVD_.VCRTYP_0(+) =
WSTJ_.VCRTYP_0) AND (WMVD_.VCRNUM_0(+) = WSTJ_.VCRNUM_0) AND
(WMVD_.VCRLIN_0(+) = WSTJ_.VCRLIN_0))
And WMVD_.CCE2_0 = :1 And WSTJ_.IPTDAT_0 <= :2 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_0",:"SYS_B_1") <> :3 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_2",:"SYS_B_3") <> :4 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_4",:"SYS_B_5") <> :5 And
((WSTJ_.TRSFAM_0 = :6) Or (WSTJ_.TRSFAM_0 = :7))
Order by WSTJ_.STOFCY_0,WSTJ_.UPDCOD_0,WSTJ_.ITMREF_0,WSTJ_.IPTDAT_0
Desc,WSTJ_.MVTSEQ_0,WSTJ_.MVTIND_0
RATIONALE: SQL statement with SQL_ID "7yv1ba0c8y86t" was executed 47
times and had an average elapsed time of 32 seconds.
RATIONALE: Average CPU used per execution was 32 seconds.
RECOMMENDATION 3: SQL Tuning, 5.8% benefit (390 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"cbtd2nt52qn1c".
RELEVANT OBJECT: SQL statement with SQL_ID cbtd2nt52qn1c and
PLAN_HASH 2897530229
Select DAE_.ROWID, DAE_.*, HAE_.*
From THPR.GACCENTRYD DAE_, THPR.GACCENTRY HAE_ Where ((HAE_.TYP_0(+)
= DAE_.TYP_0) AND (HAE_.NUM_0(+) = DAE_.NUM_0))
And HAE_.CPY_0 = :1 And HAE_.ACCDAT_0 >= :2 And HAE_.ACCDAT_0 <= :3
And DAE_.ACC_0 = :4 And HAE_.FCY_0 >= :5 And HAE_.FCY_0 <= :6
Order by DAE_.BPR_0,DAE_.CUR_0,DAE_.ACC_0
RATIONALE: SQL statement with SQL_ID "cbtd2nt52qn1c" was executed 12980
times and had an average elapsed time of 0.03 seconds.
RATIONALE: Average CPU used per execution was 0.029 seconds.
RECOMMENDATION 4: SQL Tuning, 2.1% benefit (138 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"33t7fszkr29gy".
RELEVANT OBJECT: SQL statement with SQL_ID 33t7fszkr29gy and
PLAN_HASH 2684283631
Select WSTJ_.ROWID, WSTJ_.*, WMVD_.*
From THPR.STOJOU WSTJ_, THPR.SMVTD WMVD_ Where ((WMVD_.VCRTYP_0(+) =
WSTJ_.VCRTYP_0) AND (WMVD_.VCRNUM_0(+) = WSTJ_.VCRNUM_0) AND
(WMVD_.VCRLIN_0(+) = WSTJ_.VCRLIN_0))
And WMVD_.CCE2_0 = :1 And WSTJ_.IPTDAT_0 <= :2 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_0",:"SYS_B_1") <> :3 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_2",:"SYS_B_3") <> :4 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_4",:"SYS_B_5") <> :5 And
(((WSTJ_.TRSFAM_0 = :6) Or (WSTJ_.TRSFAM_0 = :7)))
Order by WSTJ_.STOFCY_0,WSTJ_.UPDCOD_0,WSTJ_.ITMREF_0,WSTJ_.IPTDAT_0
Desc,WSTJ_.MVTSEQ_0,WSTJ_.MVTIND_0
RATIONALE: SQL statement with SQL_ID "33t7fszkr29gy" was executed 1
times and had an average elapsed time of 136 seconds.
RATIONALE: Average CPU used per execution was 138 seconds.
FINDING 3: 15% impact (1008 seconds)
SQL statements with the same text were not shared because of cursor
environment mismatch. This resulted in additional hard parses which were
consuming significant database time.
RECOMMENDATION 1: Application Analysis, 15% benefit (1008 seconds)
ACTION: Look for top reason for cursor environment mismatch in
V$SQL_SHARED_CURSOR.
ADDITIONAL INFORMATION:
Common causes of environment mismatch are session NLS settings, SQL
trace settings and optimizer parameters.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Hard parsing of SQL statements was consuming significant
database time. (20% impact [1336 seconds])
SYMPTOM: Contention for latches related to the shared pool was
consuming significant database time. (2% impact [135
seconds])
INFO: Waits for "cursor: pin S wait on X" amounted to 1% of
database time.
SYMPTOM: Wait class "Concurrency" was consuming significant
database time. (2.3% impact [154 seconds])
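For Finding 3, the mismatch reasons can be inspected per child cursor in V$SQL_SHARED_CURSOR, where each Y/N column names one reason a child could not be shared. A hedged sketch (the SQL_ID is one reported earlier in this thread; the set of mismatch columns differs slightly between releases, so check DESC V$SQL_SHARED_CURSOR first):

```sql
-- Any column showing 'Y' is a reason that child cursor was not shared.
SELECT child_number, optimizer_mismatch, bind_mismatch, language_mismatch
  FROM v$sql_shared_cursor
 WHERE sql_id = '30rku9qg2y30j';
```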
FINDING 4: 8.5% impact (570 seconds)
Wait class "User I/O" was consuming significant database time.
NO RECOMMENDATIONS AVAILABLE
ADDITIONAL INFORMATION:
Waits for I/O to temporary tablespaces were not consuming significant
database time.
The throughput of the I/O subsystem was not significantly lower than
expected.
FINDING 5: 5.3% impact (355 seconds)
The SGA was inadequately sized, causing additional I/O or hard parses.
RECOMMENDATION 1: DB Configuration, 3.2% benefit (215 seconds)
ACTION: Increase the size of the SGA by setting the parameter
"sga_target" to 1740 M.
ADDITIONAL INFORMATION:
The value of parameter "sga_target" was "1392 M" during the analysis
period.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Hard parsing of SQL statements was consuming significant
database time. (20% impact [1336 seconds])
SYMPTOM: Contention for latches related to the shared pool was
consuming significant database time. (2% impact [135
seconds])
INFO: Waits for "cursor: pin S wait on X" amounted to 1% of
database time.
SYMPTOM: Wait class "Concurrency" was consuming significant
database time. (2.3% impact [154 seconds])
SYMPTOM: Wait class "User I/O" was consuming significant database time.
(8.5% impact [570 seconds])
INFO: Waits for I/O to temporary tablespaces were not consuming
significant database time.
The throughput of the I/O subsystem was not significantly lower
than expected.
FINDING 6: 4.2% impact (281 seconds)
Cursors were getting invalidated due to DDL operations. This resulted in
additional hard parses which were consuming significant database time.
RECOMMENDATION 1: Application Analysis, 4.2% benefit (281 seconds)
ACTION: Investigate appropriateness of DDL operations.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Hard parsing of SQL statements was consuming significant
database time. (20% impact [1336 seconds])
SYMPTOM: Contention for latches related to the shared pool was
consuming significant database time. (2% impact [135
seconds])
INFO: Waits for "cursor: pin S wait on X" amounted to 1% of
database time.
SYMPTOM: Wait class "Concurrency" was consuming significant
database time. (2.3% impact [154 seconds])
FINDING 7: 4% impact (266 seconds)
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.
RECOMMENDATION 1: Host Configuration, 4% benefit (266 seconds)
ACTION: Investigate the possibility of improving the performance of I/O
to the online redo log files.
RATIONALE: The average size of writes to the online redo log files was
26 K and the average time per write was 2 milliseconds.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Wait class "Commit" was consuming significant database time.
(4% impact [266 seconds])
FINDING 8: 2.9% impact (192 seconds)
Soft parsing of SQL statements was consuming significant database time.
RECOMMENDATION 1: Application Analysis, 2.9% benefit (192 seconds)
ACTION: Investigate application logic to keep open the frequently used
cursors. Note that cursors are closed by both cursor close calls and
session disconnects.
RECOMMENDATION 2: DB Configuration, 2.9% benefit (192 seconds)
ACTION: Consider increasing the maximum number of open cursors a session
can have by increasing the value of parameter "open_cursors".
ACTION: Consider increasing the session cursor cache size by increasing
the value of parameter "session_cached_cursors".
RATIONALE: The value of parameter "open_cursors" was "800" during the
analysis period.
RATIONALE: The value of parameter "session_cached_cursors" was "20"
during the analysis period.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Contention for latches related to the shared pool was consuming
significant database time. (2% impact [135 seconds])
INFO: Waits for "cursor: pin S wait on X" amounted to 1% of database
time.
SYMPTOM: Wait class "Concurrency" was consuming significant database
time. (2.3% impact [154 seconds])
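Recommendation 2 of Finding 8 maps to two parameters that behave differently online. A hedged sketch (the values are illustrative only; in 10.2, open_cursors is changeable immediately, while session_cached_cursors is static at the system level and needs a restart to take effect):

```sql
-- open_cursors can be raised online:
ALTER SYSTEM SET open_cursors = 1000 SCOPE=BOTH;

-- session_cached_cursors is static at the system level in 10.2:
-- record it in the spfile; it takes effect at the next restart.
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE=SPFILE;
```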
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ADDITIONAL INFORMATION
Wait class "Application" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
The database's maintenance windows were active during 100% of the analysis
period.
The analysis of I/O performance is based on the default assumption that the
average read time for one database block is 10000 micro-seconds.
An explanation of the terminology used in this report is available when you
run the report with the 'ALL' level of detail.
user12023161 wrote:
I have upgraded from 10.2.0.3.0 to 10.2.0.5.0 and am facing the same issue. The database is slow in general after the upgrade compared to 10.2.0.3.0.
Try setting the OPTIMIZER_FEATURES_ENABLE parameter to 10.2.0.3.
Refer to the following link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams142.htm -
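For example (a sketch only; test at session level first, and verify the plan changes you expect before touching the system-wide setting on production):

```sql
-- Pin the optimizer to its pre-upgrade behavior while keeping the 10.2.0.5 binaries.
-- Dynamic parameter: can be set per session to test a problem query.
ALTER SESSION SET optimizer_features_enable = '10.2.0.3';

-- If the session-level test confirms the old plans are better:
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.3' SCOPE=BOTH;
```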
[oracle 10.2.0.4] My view is not merged
Hi,
I have a view which is not merged by the CBO. I mean the CBO decides to apply the filter predicate after the execution of the view.
Here is the definition of the view
CREATE OR REPLACE VIEW VUNSCP AS
SELECT X.DASFM,X.COINT,X.NUCPT,X.RGCOD,X.RGCID,X.CODEV,X.CTDEV,X.CDVRF,X.TXCHJ,X.MTNLV,X.MTVDP,
LEAD(X.MTNLV+X.MTVDP,1) OVER (PARTITION BY X.COINT,X.NUCPT,X.CDVRF,X.CTDEV,X.CODEV,X.RGCOD,X.RGCID ORDER BY X.DASFM DESC),
SUM(X.MTNLV) OVER (PARTITION BY X.COINT,X.CODEV,X.RGCOD)
FROM SFMCPT X
The query is:
explain plan for
select * from VUNSCP where dasfm='30-apr-10';
select * from table(dbms_xplan.display);
Plan hash value: 2545326530
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 13M| 1529M| | 195K (1)| 00:39:11 |
|* 1 | VIEW | VUNSCP | 13M| 1529M| | 195K (1)| 00:39:11 |
| 2 | WINDOW SORT | | 13M| 646M| 1996M| 195K (1)| 00:39:11 |
| 3 | TABLE ACCESS FULL| SFMCPT | 13M| 646M| | 27991 (4)| 00:05:36 |
Predicate Information (identified by operation id):
1 - filter("DASFM"='30-apr-10')
You can see that a full table scan is performed on SFMCPT (>1 million rows) and that the filter predicate is applied only after the view has been instantiated.
So the index on DASFM can't be used.
This query returns about 30,000 rows. We can see from the plan that the CBO is mistaken, because it reckons there are going to be 13M rows.
If I add the filter predicate directly to the view's script, I get the correct plan:
explain plan for
SELECT X.DASFM,X.COINT,X.NUCPT,X.RGCOD,X.RGCID,X.CODEV,X.CTDEV,X.CDVRF,X.TXCHJ,X.MTNLV,X.MTVDP,
LEAD(X.MTNLV+X.MTVDP,1) OVER (PARTITION BY X.COINT,X.NUCPT,X.CDVRF,X.CTDEV,X.CODEV,X.RGCOD,X.RGCID ORDER BY X.DASFM DESC),
SUM(X.MTNLV) OVER (PARTITION BY X.COINT,X.CODEV,X.RGCOD)
FROM SFMCPT X where dasfm='30-apr-10';
select * from table(dbms_xplan.display);
Plan hash value: 1865390099
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 14357 | 729K| 13271 (1)| 00:02:40 |
| 1 | WINDOW SORT | | 14357 | 729K| 13271 (1)| 00:02:40 |
| 2 | TABLE ACCESS BY INDEX ROWID| SFMCPT | 14357 | 729K| 13269 (1)| 00:02:40 |
|* 3 | INDEX RANGE SCAN | SFMCPT1 | 14357 | | 67 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("DASFM"='30-apr-10')
The index is now used and the estimated rows are closer to the actual rows.
I tried several things:
- disabling the "_OPTIMIZER_COST_BASED_TRANSFORMATION" hidden parameter
- using the MERGE hint
- alter session set optimizer_features_enable = '9.2.0.8';
None of these workarounds helps => I'm still getting the bad execution plan.
According to Jonathan Lewis's book, the 9i optimizer always merges views. But here, even if I set the optimizer_features_enable parameter to 9i, the view is not merged.
The issue is surely due to the analytic functions, but why?
Can please someone help me to understand what is going on ?
Edited by: Ahmed AANGOUR on 5 May 2010 08:41
Here is the 10053 trace file:
/oracle/app/oracle/admin/UBIXPROD/udump/ubixprod_ora_24971_10053_optimizer_trace.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /oracle/app/oracle/10.2.0
System name: Linux
Node name: lin-ubi-test1.ubitrade.com
Release: 2.6.9-78.0.1.ELsmp
Version: #1 SMP Tue Jul 22 18:01:05 EDT 2008
Machine: x86_64
Instance name: UBIXPROD
Redo thread mounted by this instance: 1
Oracle process number: 26
*** 2010-05-04 12:14:51.450
*** ACTION NAME:() 2010-05-04 12:14:51.450
*** MODULE NAME:([email protected] (TNS V1-V3)) 2010-05-04 12:14:51.450
*** SERVICE NAME:(SYS$USERS) 2010-05-04 12:14:51.450
*** SESSION ID:(135.1512) 2010-05-04 12:14:51.450
Registered qb: SEL$1 0xa9e139a8 (PARSER)
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=5 objn=297481 hint_alias="VUNSCP"@"SEL$1"
Registered qb: SEL$2 0xa9e0bdd0 (PARSER)
signature (): qb_name=SEL$2 nbfros=1 flg=0
fro(0): flg=4 objn=265023 hint_alias="X"@"SEL$2"
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
CBQT: Validity checks failed for 3xakq94fcx4td.
CVM: Considering view merge in query block SEL$1 (#0)
CVM: Checking validity of merging SEL$2 (#0)
CVM: Considering view merge in query block SEL$2 (#0)
CVM: CVM bypassed: Window functions in this view
CBQT: Validity checks failed for 3xakq94fcx4td.
Subquery Unnest
SU: Considering subquery unnesting in query block SEL$1 (#0)
Set-Join Conversion (SJC)
SJC: Considering set-join conversion in SEL$1 (#0).
Set-Join Conversion (SJC)
SJC: Considering set-join conversion in SEL$2 (#0).
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
PM: Passed validity checks.
FPD: Considering simple filter push in SEL$1 (#0)
FPD: Current where clause predicates in SEL$1 (#0) :
"VUNSCP"."DASFM"='30-apr-10'
kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
predicates with check contraints: "VUNSCP"."DASFM"='30-apr-10'
after transitive predicate generation: "VUNSCP"."DASFM"='30-apr-10'
finally: "VUNSCP"."DASFM"='30-apr-10'
JPPD: JPPD bypassed: View not on right-side of outer join
FPD: Considering simple filter push in SEL$2 (#0)
FPD: Current where clause predicates in SEL$2 (#0) :
apadrv-start: call(in-use=2936, alloc=16344), compile(in-use=38784, alloc=44568)
kkoqbc-start
: call(in-use=2936, alloc=16344), compile(in-use=40472, alloc=44568)
kkoqbc-subheap (create addr=0x2a9740c1f0)
Current SQL statement for this session:
EXPLAIN PLAN FOR select * from VUNSCP where dasfm='30-apr-10'
Peeked values of the binds in SQL statement
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
_pga_max_size = 262140 KB
cursor_sharing = similar
_optimizer_cost_based_transformation = off
Column Usage Monitoring is ON: tracking level = 1
QUERY BLOCK TEXT
EXPLAIN PLAN FOR select * from VUNSCP where dasfm='30-apr-10'
QUERY BLOCK SIGNATURE
qb name was generated
signature (optimizer): qb_name=SEL$2 nbfros=1 flg=0
fro(0): flg=0 objn=265023 hint_alias="X"@"SEL$2"
SYSTEM STATISTICS INFORMATION
Using NOWORKLOAD Stats
CPUSPEED: 2503 millions instruction/sec
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
BASE STATISTICAL INFORMATION
Table Stats::
Table: SFMCPT Alias: X
#Rows: 13036040 #Blks: 122880 AvgRowLen: 358.00
Index Stats::
Index: SFMCPT1 Col#: 2 3 4 8 5 7 118
LVLS: 2 #LB: 58758 #DK: 13013072 LB/K: 1.00 DB/K: 1.00 CLUF: 11983641.00
Index: SFMCPT2 Col#: 1
LVLS: 2 #LB: 30031 #DK: 13483987 LB/K: 1.00 DB/K: 1.00 CLUF: 2410599.00
Index: SFMCPT3 Col#: 3 4 8 5 7 2 118
LVLS: 2 #LB: 39065 #DK: 13013072 LB/K: 1.00 DB/K: 1.00 CLUF: 12583891.00
SINGLE TABLE ACCESS PATH
BEGIN Single Table Cardinality Estimation
Table: SFMCPT Alias: X
Card: Original: 13036040 Rounded: 13036040 Computed: 13036040.00 Non Adjusted: 13036040.00
END Single Table Cardinality Estimation
Access Path: TableScan
Cost: 27991.05 Resp: 27991.05 Degree: 0
Cost_io: 26881.00 Cost_cpu: 33334822147
Resp_io: 26881.00 Resp_cpu: 33334822147
Best:: AccessPath: TableScan
Cost: 27991.05 Degree: 1 Resp: 27991.05 Card: 13036040.00 Bytes: 0
OPTIMIZER STATISTICS AND COMPUTATIONS
GENERAL PLANS
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: SFMCPT[X]#0
WiF sort
SORT resource Sort statistics
Sort width: 766 Area size: 1048576 Max Area size: 134215680
Degree: 1
Blocks to Sort: 108528 Row size: 68 Total Rows: 13036040
Initial runs: 7 Merge passes: 1 IO Cost / pass: 58786
Total IO sort cost: 167314 Total CPU sort cost: 16584848017
Total Temp space used: 2093966000
Best so far: Table#: 0 cost: 195857.3183 card: 13036040.0000 bytes: 677874080
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:80000
Number of join permutations tried: 1
SORT resource Sort statistics
Sort width: 766 Area size: 1048576 Max Area size: 134215680
Degree: 1
Blocks to Sort: 108528 Row size: 68 Total Rows: 13036040
Initial runs: 7 Merge passes: 1 IO Cost / pass: 58786
Total IO sort cost: 167314 Total CPU sort cost: 16584848017
Total Temp space used: 2093966000
Final - All Rows Plan: Best join order: 1
Cost: 195857.3183 Degree: 1 Card: 13036040.0000 Bytes: 677874080
Resc: 195857.3183 Resc_io: 194195.0000 Resc_cpu: 49919670164
Resp: 195857.3183 Resp_io: 194195.0000 Resc_cpu: 49919670164
kkoipt: Query block SEL$2 (#0)
******* UNPARSED QUERY IS *******
SELECT "X"."DASFM" "DASFM","X"."COINT" "COINT","X"."NUCPT" "NUCPT","X"."RGCOD" "RGCOD","X"."RGCID" "RGCID","X"."CODEV" "CODEV","X"."CTDEV" "CTDEV","X"."CDVRF" "CDVRF","X"."TXCHJ" "TXCHJ","X"."MTNLV" "MTNLV","X"."MTVDP" "MTVDP",DECODE(COUNT(*) OVER ( PARTITION BY "X"."COINT","X"."CODEV","X"."RGCOD","X"."CTDEV","X"."NUCPT","X"."CDVRF","X"."RGCID" ORDER BY "X"."DASFM" DESC ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING ),1,FIRST_VALUE("X"."MTNLV"+"X"."MTVDP") OVER ( PARTITION BY "X"."COINT","X"."CODEV","X"."RGCOD","X"."CTDEV","X"."NUCPT","X"."CDVRF","X"."RGCID" ORDER BY "X"."DASFM" DESC ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING ),NULL) "MTUNS",SUM("X"."MTNLV") OVER ( PARTITION BY "X"."COINT","X"."CODEV","X"."RGCOD") "MTNLI" FROM "SFMCPT" "X"
kkoqbc-subheap (delete addr=0x2a9740c1f0, in-use=11856, alloc=12408)
kkoqbc-end
: call(in-use=57760, alloc=81816), compile(in-use=41096, alloc=44568)
kkoqbc-start
: call(in-use=57760, alloc=81816), compile(in-use=41184, alloc=44568)
kkoqbc-subheap (create addr=0x2a9746b058)
QUERY BLOCK TEXT
select * from VUNSCP where dasfm='30-apr-10'
QUERY BLOCK SIGNATURE
qb name was generated
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=1 objn=297481 hint_alias="VUNSCP"@"SEL$1"
SYSTEM STATISTICS INFORMATION
Using NOWORKLOAD Stats
CPUSPEED: 2503 millions instruction/sec
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
BASE STATISTICAL INFORMATION
Table Stats::
Table: VUNSCP Alias: VUNSCP (NOT ANALYZED)
#Rows: 0 #Blks: 0 AvgRowLen: 0.00
OPTIMIZER STATISTICS AND COMPUTATIONS
GENERAL PLANS
Considering cardinality-based initial join order.
Permutations for Starting Table :
Join order[1]: VUNSCP[VUNSCP]#0
Best so far: Table#: 0 cost: 195857.3183 card: 13036040.0000 bytes: 1603432920
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:80000
Number of join permutations tried: 1
Final - All Rows Plan: Best join order: 1
Cost: 195857.3183 Degree: 1 Card: 13036040.0000 Bytes: 1603432920
Resc: 195857.3183 Resc_io: 194195.0000 Resc_cpu: 49919670164
Resp: 195857.3183 Resp_io: 194195.0000 Resc_cpu: 49919670164
kkoipt: Query block SEL$1 (#0)
******* UNPARSED QUERY IS *******
SELECT "VUNSCP"."DASFM" "DASFM","VUNSCP"."COINT" "COINT","VUNSCP"."NUCPT" "NUCPT","VUNSCP"."RGCOD" "RGCOD","VUNSCP"."RGCID" "RGCID","VUNSCP"."CODEV" "CODEV","VUNSCP"."CTDEV" "CTDEV","VUNSCP"."CDVRF" "CDVRF","VUNSCP"."TXCHJ" "TXCHJ","VUNSCP"."MTNLV" "MTNLV","VUNSCP"."MTVDP" "MTVDP","VUNSCP"."MTUNS" "MTUNS","VUNSCP"."MTNLI" "MTNLI" FROM (SELECT "X"."DASFM" "DASFM","X"."COINT" "COINT","X"."NUCPT" "NUCPT","X"."RGCOD" "RGCOD","X"."RGCID" "RGCID","X"."CODEV" "CODEV","X"."CTDEV" "CTDEV","X"."CDVRF" "CDVRF","X"."TXCHJ" "TXCHJ","X"."MTNLV" "MTNLV","X"."MTVDP" "MTVDP",DECODE(COUNT(*) OVER ( PARTITION BY "X"."COINT","X"."CODEV","X"."RGCOD","X"."CTDEV","X"."NUCPT","X"."CDVRF","X"."RGCID" ORDER BY "X"."DASFM" DESC ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING ),1,FIRST_VALUE("X"."MTNLV"+"X"."MTVDP") OVER ( PARTITION BY "X"."COINT","X"."CODEV","X"."RGCOD","X"."CTDEV","X"."NUCPT","X"."CDVRF","X"."RGCID" ORDER BY "X"."DASFM" DESC ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING ),NULL) "MTUNS",SUM("X"."MTNLV") OVER ( PARTITION BY "X"."COINT","X"."CODEV","X"."RGCOD") "MTNLI" FROM "SFMCPT" "X") "VUNSCP" WHERE "VUNSCP"."DASFM"='30-apr-10'
kkoqbc-subheap (delete addr=0x2a9746b058, in-use=11544, alloc=12408)
kkoqbc-end
: call(in-use=63208, alloc=81816), compile(in-use=41688, alloc=44568)
apadrv-end: call(in-use=63208, alloc=81816), compile(in-use=42872, alloc=44568)
sql_id=3xakq94fcx4td.
Current SQL statement for this session:
EXPLAIN PLAN FOR select * from VUNSCP where dasfm='30-apr-10'
============
Plan Table
============
---------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
---------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 191K | |
| 1 | VIEW | VUNSCP | 12M | 1529M | 191K | 00:39:11 |
| 2 | WINDOW SORT | | 12M | 646M | 191K | 00:39:11 |
| 3 | TABLE ACCESS FULL | SFMCPT | 12M | 646M | 27K | 00:06:36 |
---------------------------------------+-----------------------------------+
Predicate Information:
1 - filter("DASFM"='30-apr-10')
Edited by: Ahmed AANGOUR on 5 May 2010 08:43 -
I have removed the RULE hints and tried the query below in 2 different databases, both on the same version (10g).
In one database the query completed in 4 hours; in the other it took 3 days.
Both databases have similar data.
Where am I going wrong?
Any feedback?
Any optimizer parameters that need to be set?
select /*+ RULE */ 'PO_COMMITMENT' record_type,
b.org_id,
NULL invoice_number,
a.PO_NUMBER,
a.PO_REVISION,
a.RELEASE_NUMBER,
a.CREATION_DATE,
a.APPROVED_DATE,
b.NEED_BY_DATE,
b.PROMISED_DATE,
a.BUYER_NAME,
a.VENDOR_NAME,
a.PO_LINE,
replace(a.ITEM_DESCRIPTION, chr(10),' ') item_description,
a.QUANTITY_ORDERED,
a.AMOUNT_ORDERED,
a.QUANTITY_CANCELLED,
a.AMOUNT_CANCELLED,
a.QUANTITY_DELIVERED,
a.AMOUNT_DELIVERED,
a.QUANTITY_INVOICED ,
a.AMOUNT_INVOICED*nvl(pod.rate,1) AMOUNT_INVOICED,
a.Amount_outstanding_invoice,
a.PROJECT_ID,
a.TASK_ID,
a.EXPENDITURE_ITEM_DATE,
a.ACCT_EXCHANGE_RATE,
a.denom_CURRENCY_CODE,
a.PO_HEADER_ID,
a.PO_RELEASE_ID,
pod.po_line_id REQUISITION_HEADER_ID,
a.po_line_location_id REQUISITION_LINE_ID ,
pod.po_distribution_id invoice_id,
EXPENDITURE_ORGANIZATION ,
null po_status,
pod.accrue_on_receipt_flag po_line_status,
null requisioner_name,
0 commitment_amt,
pod.po_header_id xpo_header_id,
pod.po_distribution_id xpo_distribution_id,
pod.distribution_num DISTRIBUTION_LINE_NUMBER
from pa_proj_appr_po_distributions a,
po_distributions pod,
po_line_locations b
where a.PO_LINE_LOCATION_ID = b.LINE_LOCATION_ID
and a.po_distribution_id = pod.po_distribution_id
and b.line_location_id = pod.line_location_id
and b.org_id = :p_org_id
and a.project_id > 0
UNION ALL
SELECT /*+ RULE */ 'REQ_COMMITMENT' ,
:p_org_id,
NULL,
REQ_NUMBER ,
NULL ,
NULL,
CREATION_DATE ,
to_date(null),
NEED_BY_DATE ,
to_date(null),
null,
vendor_name,
REQ_LINE ,
replace(ITEM_DESCRIPTION, chr(10), ' '),
QUANTITY ,
AMOUNT ,
0,
0,
0,
0,
0,
0,
0,
PROJECT_ID ,
TASK_ID ,
EXPENDITURE_ITEM_DATE ,
ACCT_EXCHANGE_RATE ,
denom_CURRENCY_CODE,
0,
0,
REQUISITION_HEADER_ID,
REQUISITION_LINE_ID ,
0 invoice_id,
EXPENDITURE_ORGANIZATION ,
null po_status,
null po_line_status,
REQUESTOR_NAME requisioner_name,
AMOUNT ,
0,
0,
0 DISTRIBUTION_LINE_NUMBER
FROM pa_proj_appr_req_distributions
union all
select /*+ RULE */ 'INVOICE_COMMITMENT' ,
:p_org_id,
b.INVOICE_NUM,
NULL,
NULL,
NULL,
b.creation_date,
a.INVOICE_DATE,
a.GL_DATE ,
to_date(null),
NULL,
a.VENDOR_NAME,
0,
replace(a.DESCRIPTION,chr(10),' '),
0,
0,
0,
0,
0,
0,
a.QUANTITY,
a.AMOUNT ,
0,
a.PROJECT_ID ,
a.TASK_ID ,
a.EXPENDITURE_ITEM_DATE ,
a.ACCT_EXCHANGE_RATE ,
a.denom_CURRENCY_CODE ,
0,
0,
aid.po_distribution_id,
aid.invoice_distribution_id,
a.invoice_id,
a.EXPENDITURE_ORGANIZATION ,
null,
null,
null,
a.amount ,
0,
0,
a.DISTRIBUTION_LINE_NUMBER
FROM
pa_proj_ap_inv_distributions a,
ap_invoices b,
ap_invoice_distributions aid
where a.invoice_id = b.invoice_id
and aid.invoice_id = b.invoice_id
and a.distribution_line_number = aid.distribution_line_number
Hello,
It's a duplicate post; please close this one and provide the requested information in your previous post.
Regards -
The danger of memory target in Oracle 11g - request for discussion.
Hello, everyone.
This is not a question, but kind of request for discussion.
I believe that many of you heard something about automatic memory management in Oracle 11g.
The concept is that Oracle manages the target sizes of the SGA and PGA. Yes, believe it or not, all we have to do is tell Oracle how much memory it can use.
But I have a big concern about this. The optimizer takes the PGA size into consideration when calculating the cost of sort-related operations.
So what happens when Oracle dynamically changes the PGA target size? The following is a simple demonstration of my concern.
UKJA@ukja116> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
-- Configuration
*.memory_target=350m
*.memory_max_target=350m
create table t1(c1 int, c2 char(100));
create table t2(c1 int, c2 char(100));
insert into t1 select level, level from dual connect by level <= 10000;
insert into t2 select level, level from dual connect by level <= 10000;
-- First 10053 trace
alter session set events '10053 trace name context forever, level 1';
select /*+ use_hash(t1 t2) */ count(*)
from t1, t2
where t1.c1 = t2.c1 and t1.c2 = t2.c2
alter session set events '10053 trace name context off';
-- Do aggressive hard parse to make Oracle dynamically change the size of memory segments.
declare
pat1 varchar2(1000);
pat2 varchar2(1000);
va number;
vc sys_refcursor;
vs varchar2(1000);
begin
select ksppstvl into pat1
from sys.xm$ksppi i, sys.xm$ksppcv v -- views for x$ table
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
for idx in 1 .. 10000000 loop
execute immediate 'select count(*) from t1 where rownum = ' || (idx+1)
into va;
if mod(idx, 1000) = 0 then
sys.dbms_system.ksdwrt(2, idx || 'th execution');
select ksppstvl into pat2
from sys.xm$ksppi i, sys.xm$ksppcv v -- views for x$ table
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
if pat1 <> pat2 then
sys.dbms_system.ksdwrt(2, 'yep, I got it!');
exit;
end if;
end if;
end loop;
end;
-- As to alert log file,
25000th execution
26000th execution
27000th execution
28000th execution
29000th execution
30000th execution
yep, I got it! <-- the pga target changed with 30000th hard parse
-- Second 10053 trace for same query
alter session set events '10053 trace name context forever, level 1';
select /*+ use_hash(t1 t2) */ count(*)
from t1, t2
where t1.c1 = t2.c1 and t1.c2 = t2.c2
alter session set events '10053 trace name context off';
With the above test case, I found that:
1. Oracle invalidates the query when internal pga aggregate size changes, which is quite natural.
2. With the changed pga aggregate size, Oracle recalculates the cost. These are excerpts from both of the 10053 trace files.
-- First 10053 trace file
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
_smm_max_size = 11468 KB
_smm_px_max_size = 28672 KB
optimizer_use_sql_plan_baselines = false
optimizer_use_invisible_indexes = true
-- Second 10053 trace file
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
_smm_max_size = 13107 KB
_smm_px_max_size = 32768 KB
optimizer_use_sql_plan_baselines = false
optimizer_use_invisible_indexes = true
Bug Fix Control Environment
The 10053 trace file clearly shows that Oracle recalculates the cost of the query when the internal pga aggregate target size changes. So there is a real danger of an unexpected plan change while Oracle dynamically controls the memory segments.
I believe that this is a designed behavior, but the negative side effect is not negligible.
I'd just like to hear your opinions on this behavior.
Do you think that this is acceptable? Or is this another great feature that nobody wants to use like automatic tuning advisor?
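If anyone needs a mitigation, here is a hedged sketch. It assumes the documented 11g behavior that an explicitly set PGA_AGGREGATE_TARGET (or SGA_TARGET) acts as a minimum underneath MEMORY_TARGET, so it puts a floor under the auto-tuned __pga_aggregate_target and limits how far the sort/hash cost inputs can move:

```sql
-- Hedged sketch: give the auto-tuner a floor so __pga_aggregate_target
-- (and hence _smm_max_size) cannot shrink below a known value.
alter system set memory_target = 350m scope = spfile;
alter system set pga_aggregate_target = 100m scope = spfile;
-- after a restart, verify:
-- select name, value from v$parameter
--  where name in ('memory_target', 'pga_aggregate_target');
```

The target sizes here are only illustrative; the point is that the explicit setting bounds the auto-tuning range.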
================================
Dion Cho - Oracle Performance Storyteller
http://dioncho.wordpress.com (english)
http://ukja.tistory.com (korean)
================================
I made a slight modification to my test case to get a mixed workload of hard parses and logical reads.
*.memory_target=200m
*.memory_max_target=200m
create table t3(c1 int, c2 char(1000));
insert into t3 select level, level from dual connect by level <= 50000;
declare
pat1 varchar2(1000);
pat2 varchar2(1000);
va number;
begin
select ksppstvl into pat1
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
for idx in 1 .. 1000000 loop
-- try many patterns here!
execute immediate 'select count(*) from t3 where 10 = mod('||idx||',10)+1' into va;
if mod(idx, 100) = 0 then
sys.dbms_system.ksdwrt(2, idx || 'th execution');
for p in (select ksppinm, ksppstvl
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm in ('__shared_pool_size', '__db_cache_size', '__pga_aggregate_target')) loop
sys.dbms_system.ksdwrt(2, p.ksppinm || ' = ' || p.ksppstvl);
end loop;
select ksppstvl into pat2
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
if pat1 <> pat2 then
sys.dbms_system.ksdwrt(2, 'yep, I got it! pat1=' || pat1 ||', pat2='||pat2);
exit;
end if;
end if;
end loop;
end;
/
This test case showed an expected and reasonable result, like the following:
100th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
200th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
300th execution
__shared_pool_size = 88080384
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
400th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
500th execution
__shared_pool_size = 88080384
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
1100th execution
__shared_pool_size = 92274688
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
1200th execution
__shared_pool_size = 92274688
__db_cache_size = 37748736
__pga_aggregate_target = 58720256
yep, I got it! pat1=83886080, pat2=58720256
Oracle kept bouncing memory between the shared pool and the buffer cache, and around the 1200th execution it suddenly stole some memory from the PGA target area to increase the db cache size.
(I'm still in the dark on this automatic memory target management in 11g. More research needed!)
I think this is very clear and natural behavior. I just want to point out that it could result in an unwanted catastrophe in special cases, especially combined with logic holes and bugs.
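By the way, a minimal sketch for watching this bouncing without the x$ views, assuming the 11g v$memory_dynamic_components view (the component names in the filter are assumptions and may differ by release):

```sql
-- Resize history of the auto-tuned components (11g):
select component,
       current_size / 1024 / 1024 as current_mb,
       min_size     / 1024 / 1024 as min_mb,
       max_size     / 1024 / 1024 as max_mb,
       oper_count,
       last_oper_type
from   v$memory_dynamic_components
where  component in ('shared pool', 'DEFAULT buffer cache', 'PGA Target')
order  by component;
```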
================================
Dion Cho - Oracle Performance Storyteller
http://dioncho.wordpress.com (english)
http://ukja.tistory.com (korean)
================================ -
High waits on scatter read and buffer busy wait ..
one of my systems is having a serious problem.
I checked: when some SQL queries a few large tables (30 million rows, partitioned),
system waits become significant. The ones that stand out are db file scattered read and buffer busy waits.
In the longops view you can see those sessions waiting while scanning the large table.
The buffer get ratio dropped from 90% to around 30%; the average buffer hit ratio is around 55%.
My questions are:
1. What's your view on making this problem go away?
Add physical memory, then increase the data buffer?
The table is scanned very often, but it's too large to pin in the keep pool.
2. Why buffer busy waits?
I don't fully understand why this happens,
because the sessions are only querying; nobody is modifying any data.
Will increasing freelists help?
Thanks for your ideas.
br/ricky
Have these SQL statements been added recently, or have they been around for a long time? If the latter, then it seems likely that they've changed their execution plan to start doing tablescans (hence db file scattered read waits) or index fast full scans.
One of the causes of buffer busy waits is concurrent tablescans/index fast full scans - in 10g this particular cause has been split and reported as "read by other session".
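To see how the time splits across those events system-wide, a minimal sketch against v$system_event (event names as of 10g; treat this as illustrative):

```sql
-- Which of the suspect events is actually costing time:
select event, total_waits, time_waited
from   v$system_event
where  event in ('db file scattered read',
                 'buffer busy waits',
                 'read by other session')
order  by time_waited desc;
```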
Your first action should be to see if there is a more efficient execution plan for these queries. If the problem is due to a change in execution plan, common causes are: changes in statistics on the objects, failure to change statistics on the objects, changes in input values to the queries, changes in optimizer parameters, and unlucky changes in data volumes.
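For the plan-change case, a quick hedged first check is to look for statements that currently have more than one plan cached in the shared pool (v$sql columns as of 10g):

```sql
-- Statements cached with more than one plan -- often the first
-- visible sign of an execution-plan flip:
select sql_id,
       count(distinct plan_hash_value) as plan_count
from   v$sql
group  by sql_id
having count(distinct plan_hash_value) > 1
order  by plan_count desc;
```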
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
Index not being used in group by.
Here is the scenario with examples. Big table 333 to 500 million rows in the table. Statistics are gathered. Histograms are there. Index is not being used though. Why?
CREATE TABLE "XXFOCUS"."some_huge_data_table"
( "ORG_ID" NUMBER NOT NULL ENABLE,
"PARTNERID" VARCHAR2(30) NOT NULL ENABLE,
"EDI_END_DATE" DATE NOT NULL ENABLE,
"CUSTOMER_ITEM_NUMBER" VARCHAR2(50) NOT NULL ENABLE,
"STORE_NUMBER" VARCHAR2(10) NOT NULL ENABLE,
"EDI_START_DATE" DATE,
"QTY_SOLD_UNIT" NUMBER(7,0),
"QTY_ON_ORDER_UNIT" NUMBER(7,0),
"QTY_ON_ORDER_AMT" NUMBER(10,2),
"QTY_ON_HAND_AMT" NUMBER(10,2),
"QTY_ON_HAND_UNIT" NUMBER(7,0),
"QTY_SOLD_AMT" NUMBER(10,2),
"QTY_RECEIVED_UNIT" NUMBER(7,0),
"QTY_RECEIVED_AMT" NUMBER(10,2),
"QTY_REQUISITION_RDC_UNIT" NUMBER(7,0),
"QTY_REQUISITION_RDC_AMT" NUMBER(10,2),
"QTY_REQUISITION_RCVD_UNIT" NUMBER(7,0),
"QTY_REQUISITION_RCVD_AMT" NUMBER(10,2),
"INSERTED_DATE" DATE,
"UPDATED_DATE" DATE,
"CUSTOMER_WEEK" NUMBER,
"CUSTOMER_MONTH" NUMBER,
"CUSTOMER_QUARTER" NUMBER,
"CUSTOMER_YEAR" NUMBER,
"CUSTOMER_ID" NUMBER,
"MONTH_NAME" VARCHAR2(3),
"ORG_WEEK" NUMBER,
"ORG_MONTH" NUMBER,
"ORG_QUARTER" NUMBER,
"ORG_YEAR" NUMBER,
"SITE_ID" NUMBER,
"ITEM_ID" NUMBER,
"ITEM_COST" NUMBER,
"UNIT_PRICE" NUMBER,
CONSTRAINT "some_huge_data_table_PK" PRIMARY KEY ("ORG_ID", "PARTNERID", "EDI_END_DATE", "CUSTOMER_ITEM_NUMBER", "STORE_NUMBER")
USING INDEX TABLESPACE "xxxxx" ENABLE,
CONSTRAINT "some_huge_data_table_CK_START_DATE" CHECK (edi_end_date - edi_start_date = 6) ENABLE
SQL*Plus: Release 11.2.0.2.0 Production on Fri Sep 14 12:11:16 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> SELECT num_rows FROM user_tables s WHERE s.table_name = 'some_huge_data_table';
NUM_ROWS
333338434
SQL> SELECT MAX(edi_end_date)
2 FROM some_huge_data_table p
3 WHERE p.org_id = some_number
4 AND p.partnerid = 'some_string';
MAX(EDI_E
13-MAY-12
Elapsed: 00:00:00.00
SQL> explain plan for
2 SELECT MAX(edi_end_date)
3 FROM some_huge_data_table p
4 WHERE p.org_id = some_number
5 AND p.partnerid = 'some_string';
Explained.
SQL> /
PLAN_TABLE_OUTPUT
Plan hash value: 2104157595
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 22 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 22 | | |
| 2 | FIRST ROW | | 1 | 22 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN (MIN/MAX)| some_huge_data_table_PK | 1 | 22 | 4 (0)| 00:00:01 |
SQL> explain plan for
2 SELECT MAX(edi_end_date),
3 org_id,
4 partnerid
5 FROM some_huge_data_table
6 GROUP BY org_id,
7 partnerid;
Explained.
PLAN_TABLE_OUTPUT
Plan hash value: 3950336305
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 44 | 1605K (1)| 05:21:03 |
| 1 | HASH GROUP BY | | 2 | 44 | 1605K (1)| 05:21:03 |
| 2 | TABLE ACCESS FULL| some_huge_data_table | 333M| 6993M| 1592K (1)| 05:18:33 |
-------------------------------------------------------------------------------
Why wouldn't it use the index in the GROUP BY? If I write a loop querying for each partnerid (there are only three), the whole thing takes less than a second. Any help is appreciated.
Btw, I gave the index hint too. Didn't work. Version is mentioned in the example.
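One hedged workaround sketch, reusing only the names from the example above: since the single-pair query gets the cheap INDEX RANGE SCAN (MIN/MAX) path, you can drive that probe once per (org_id, partnerid) pair with a correlated scalar subquery. The driving DISTINCT still has to scan, so in practice you would feed the pairs from a small lookup table if one exists:

```sql
-- One MIN/MAX index probe per (org_id, partnerid) pair instead of a
-- single 333M-row full scan; the correlated subquery can use the
-- INDEX RANGE SCAN (MIN/MAX) plan shown above.
select d.org_id,
       d.partnerid,
       (select max(p.edi_end_date)
          from some_huge_data_table p
         where p.org_id    = d.org_id
           and p.partnerid = d.partnerid) as max_edi_end_date
from  (select distinct org_id, partnerid
         from some_huge_data_table) d;
```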
Edited by: RPuttagunta on Sep 14, 2012 11:24 AM
Edited by: RPuttagunta on Sep 14, 2012 11:26 AM
the actual names are 'scrubbed' for obvious reasons. Don't worry, I didn't name the tables in mixed case.
Jonathan,
Thank you for your input. Forgot about this issue since ended up creating an MV since, the view was slower. But either way, I am curious. Here are the results for your questions.
SQL> SELECT last_analyzed,
2 blocks
3 FROM user_tables s
4 WHERE s.table_name = 'huge_data';
LAST_ANAL BLOCKS
14-MAY-12 5869281
SQL> SELECT last_analyzed,
2 leaf_blocks
3 FROM user_indexes i
4 WHERE i.table_name = 'huge_data';
LAST_ANAL LEAF_BLOCKS
14-MAY-12 2887925
SQL>
It looks like stale statistics from the last_analyzed date, but they really aren't. This is a development database and that was the last time it was refreshed. And the stats are right (at least the approximate number of blocks, num_rows, etc.).
No other data came into the table after.
Also,
1). I thought I didn't have any particular optimizer parameters set but, checking back, I do: _fast_full_scan_enabled = false. Could that be it?
SQL> SELECT a.name,
2 a.value,
3 a.display_value,
4 a.isdefault,
5 a.isses_modifiable
6 FROM v$parameter a
7 WHERE a.name LIKE '\_%' ESCAPE '\';
NAME VALUE DISPLAY_VALUE ISDEFAULT ISSES
_disable_fast_validate TRUE TRUE FALSE TRUE
_system_trig_enabled TRUE TRUE FALSE FALSE
_sort_elimination_cost_ratio 5 5 FALSE TRUE
_b_tree_bitmap_plans FALSE FALSE FALSE TRUE
_fast_full_scan_enabled FALSE FALSE FALSE TRUE
_index_join_enabled FALSE FALSE FALSE TRUE
_like_with_bind_as_equality TRUE TRUE FALSE TRUE
_optimizer_autostats_job FALSE FALSE FALSE FALSE
_connect_by_use_union_all OLD_PLAN_MODE OLD_PLAN_MODE FALSE TRUE
_trace_files_public TRUE TRUE FALSE FALSE
10 rows selected.
SQL>As, you might have guessed, I am not the dba for this db. Should pay more attention to these optimizer parameters.
I know why we had to set connectby_use_union_all hint (due to a bug in 11gR2).
Also, vaguely remember something about the disablefast_validate (something about another major db bug in 11gR2 again), but, not sure why those other parameters are set.
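Incidentally, rather than pattern-matching v$parameter, a hedged alternative (view and column names as of 10g) is to ask the optimizer's own environment view which non-default settings it is actually compiling with:

```sql
-- Non-default optimizer environment for the current session:
select name, value
from   v$ses_optimizer_env
where  sid = sys_context('userenv', 'sid')
and    isdefault = 'NO'
order  by name;
```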
2). Also, I have tried this
SQL> SELECT /*+ index_ss(huge_data_pk) gather_plan_statistics*/
2 MAX(edi_end_date),
3 org_id,
4 partnerid
5 FROM huge_data
6 GROUP BY org_id,
7 partnerid;
MAX(EDI_E ORG_ID PARTNERID
2 rows
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(null,null,'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
SQL_ID f3kk8skdyvz7c, child number 0
SELECT /*+ index_ss(huge_data_pk) gather_plan_statistics*/
MAX(edi_end_date), org_id, partnerid FROM huge_data GROUP BY
org_id, partnerid
Plan hash value: 3950336305
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | 1 | | 2 |00:05:11.31 | 5905K| 5897K| | | |
| 1 | HASH GROUP BY | | 1 | 2 | 2 |00:05:11.31 | 5905K| 5897K| 964K| 964K| 2304K (0)|
| 2 | TABLE ACCESS FULL| hug_DATA | 1 | 333M| 334M|00:04:31.44 | 5905K| 5897K| | | |
16 rows selected.
But then I tried this too:
SQL> alter session set "_fast_full_scan_enabled"=true;
Session altered.
SQL> SELECT MAX(edi_end_date),
2 org_id,
3 partnerid
4 FROM hug_data
5 GROUP BY org_id,
6 partnerid;
MAX(EDI_E ORG_ID PARTNERID
2 rows
And this took around 5 minutes too.
PS: This has nothing to do with the original question but, just out of curiosity, is it possible to derive the 'huge_data' table name from the sql_id?
This is my first post in this forum regarding query tuning, so my sincere apologies in advance if I have:
1) not included sufficient information,
2) included too much information,
3) not posted to the correct forum
I read through Randolf Geist's web page on instructions to post a query tuning request
and attempted to follow it as closely as possible.
I am attempting to figure out where a view I have constructed can be optimized.
It takes approx. 45 seconds to 1 minute to run; I would like to cut that down to 10 seconds if possible.
The view itself is somewhat complex; I will post the actual code if it will help you help me. Please advise.
I was under the impression that posting the code was not necessary, but if it is, let me know and I will post it.
I have been doing SQL development for a few years, but only recently in Oracle.
I have no experience in looking through the following output and being able to tell where I can improve performance,
so this will be a learning experience for me. Thanks in advance for your help - I appreciate it.
Some additional information - my view is based on tables over which I have no control - it is a third-party application
which I do reporting from. I do have the freedom to create indexes on columns within the tables if necessary.
The statement is simply
SELECT * FROM LLU_V_PRODUCTION_DETAIL_03
which is the name of my view.
here's all the information I've been able to retrieve by following Randolf's instructions:
Oracle version is 10.2.0.1.0 - 64bit
Here are optimizer parameters:
NAME TYPE VALUE
user_dump_dest string C:\ORACLE\PRODUCT\10.2.0\ADMIN
\AXIUMPRODUCTION\UDUMP
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 90
optimizer_index_cost_adj integer 20
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
NAME TYPE VALUE
db_block_size integer 8192
NAME TYPE VALUE
cursor_sharing string EXACT
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 10-29-2005 01:36
SYSSTATS_INFO DSTOP 10-29-2005 01:36
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 1298.56584
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
Here is the output of EXPLAIN PLAN:
PLAN_TABLE_OUTPUT
Plan hash value: 662813077
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 23M| 53G| | 62330 (2)| 00:12:28 |
| 1 | VIEW | LLU_V_PRODUCTION_DETAIL_03 | 23M| 53G| | 62330 (2)| 00:12:28 |
| 2 | UNION-ALL | | | | | | |
|* 3 | HASH JOIN | | 18M| 5062M| | 1525 (10)| 00:00:19 |
| 4 | VIEW | index$_join$_007 | 1725 | 25875 | | 4 (25)| 00:00:01 |
|* 5 | HASH JOIN | | | | | | |
| 6 | INDEX FAST FULL SCAN | USERS_PRIMARY | 1725 | 25875 | | 1 (0)| 00:00:01 |
| 7 | INDEX FAST FULL SCAN | USERS_PRODUCER | 1725 | 25875 | | 2 (0)| 00:00:01 |
|* 8 | HASH JOIN | | 416K| 105M| | 1399 (2)| 00:00:17 |
| 9 | TABLE ACCESS FULL | PRODUCER | 1396 | 118K| | 24 (0)| 00:00:01 |
|* 10 | HASH JOIN | | 29819 | 5183K| | 1372 (2)| 00:00:17 |
| 11 | TABLE ACCESS FULL | CLASS | 20 | 1660 | | 3 (0)| 00:00:01 |
|* 12 | TABLE ACCESS FULL | QR_PRODUCTION | 149K| 13M| | 1367 (2)| 00:00:17 |
|* 13 | FILTER | | | | | | |
|* 14 | HASH JOIN | | 16M| 5651M| | 32983 (2)| 00:06:36 |
| 15 | VIEW | index$_join$_014 | 1725 | 25875 | | 4 (25)| 00:00:01 |
|* 16 | HASH JOIN | | | | | | |
| 17 | INDEX FAST FULL SCAN | USERS_PRIMARY | 1725 | 25875 | | 1 (0)| 00:00:01 |
| 18 | INDEX FAST FULL SCAN | USERS_PRODUCER | 1725 | 25875 | | 2 (0)| 00:00:01 |
|* 19 | HASH JOIN | | 149K| 49M| | 32874 (1)| 00:06:35 |
| 20 | TABLE ACCESS FULL | CLASS | 20 | 1660 | | 3 (0)| 00:00:01 |
|* 21 | HASH JOIN | | 149K| 37M| | 32870 (1)| 00:06:35 |
| 22 | TABLE ACCESS FULL | PRODUCER | 1396 | 118K| | 24 (0)| 00:00:01 |
|* 23 | HASH JOIN | | 222K| 37M| 12M| 32844 (1)| 00:06:35 |
| 24 | TABLE ACCESS FULL | PATIENT | 188K| 10M| | 6979 (1)| 00:01:24 |
|* 25 | HASH JOIN | | 222K| 24M| | 23860 (2)| 00:04:47 |
|* 26 | TABLE ACCESS FULL | PROCEDUR | 888 | 44400 | | 11 (0)| 00:00:01 |
|* 27 | TABLE ACCESS FULL | TRX | 442K| 28M| | 23845 (2)| 00:04:47 |
|* 28 | TABLE ACCESS FULL | USERS | 1 | 11 | | 55 (0)| 00:00:01 |
| 29 | NESTED LOOPS | | 1 | 473 | | 25798 (1)| 00:05:10 |
| 30 | NESTED LOOPS | | 1 | 413 | | 25797 (1)| 00:05:10 |
| 31 | NESTED LOOPS | | 1 | 398 | | 25796 (1)| 00:05:10 |
| 32 | NESTED LOOPS | | 1 | 390 | | 25795 (1)| 00:05:10 |
|* 33 | HASH JOIN | | 1 | 303 | | 25794 (1)| 00:05:10 |
| 34 | TABLE ACCESS FULL | LLU_EVALUATION_DESCRIPTIONS | 95 | 6175 | | 3 (0)| 00:00:01 |
|* 35 | HASH JOIN | | 4630 | 1076K| | 25791 (1)| 00:05:10 |
|* 36 | HASH JOIN | | 9607 | 1623K| | 23834 (1)| 00:04:47 |
| 37 | MERGE JOIN | | 888 | 91464 | | 13 (8)| 00:00:01 |
| 38 | TABLE ACCESS BY INDEX ROWID | CLASS | 20 | 1660 | | 1 (0)| 00:00:01 |
| 39 | INDEX FULL SCAN | CLASS_PRIMARY | 20 | | | 1 (0)| 00:00:01 |
|* 40 | SORT JOIN | | 888 | 17760 | | 12 (9)| 00:00:01 |
|* 41 | TABLE ACCESS FULL | PROCEDUR | 888 | 17760 | | 11 (0)| 00:00:01 |
|* 42 | TABLE ACCESS FULL | TRX | 19125 | 1307K| | 23820 (1)| 00:04:46 |
|* 43 | TABLE ACCESS FULL | GRADITEM | 655K| 40M| | 1952 (1)| 00:00:24 |
| 44 | TABLE ACCESS BY INDEX ROWID | PRODUCER | 1 | 87 | | 1 (0)| 00:00:01 |
|* 45 | INDEX UNIQUE SCAN | PRODUCER_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
|* 46 | TABLE ACCESS BY INDEX ROWID | GRADING | 1 | 8 | | 1 (0)| 00:00:01 |
|* 47 | INDEX UNIQUE SCAN | GRADING_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
| 48 | TABLE ACCESS BY INDEX ROWID | USERS | 221 | 3315 | | 1 (0)| 00:00:01 |
|* 49 | INDEX RANGE SCAN | USERS_PRODUCER | 1 | | | 1 (0)| 00:00:01 |
| 50 | TABLE ACCESS BY INDEX ROWID | PATIENT | 1 | 60 | | 1 (0)| 00:00:01 |
|* 51 | INDEX UNIQUE SCAN | PATIENT_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
| 52 | TABLE ACCESS BY INDEX ROWID | USERS | 109 | 1635 | | 1 (0)| 00:00:01 |
| 53 | NESTED LOOPS | | 1 | 438 | | 2023 (1)| 00:00:25 |
| 54 | NESTED LOOPS | | 1 | 423 | | 2022 (1)| 00:00:25 |
| 55 | NESTED LOOPS | | 1 | 363 | | 2021 (1)| 00:00:25 |
| 56 | NESTED LOOPS | | 1 | 276 | | 2020 (1)| 00:00:25 |
| 57 | NESTED LOOPS | | 1 | 193 | | 2019 (1)| 00:00:25 |
| 58 | NESTED LOOPS | | 1 | 185 | | 2018 (1)| 00:00:25 |
| 59 | NESTED LOOPS | | 1 | 173 | | 2017 (1)| 00:00:25 |
| 60 | NESTED LOOPS | | 1 | 140 | | 2016 (1)| 00:00:25 |
|* 61 | TABLE ACCESS FULL | GRADITEM | 317 | 23141 | | 1953 (2)| 00:00:24 |
|* 62 | TABLE ACCESS BY INDEX ROWID| TRX | 1 | 67 | | 1 (0)| 00:00:01 |
|* 63 | INDEX UNIQUE SCAN | TRX_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
|* 64 | TABLE ACCESS BY INDEX ROWID | TRX | 1 | 33 | | 1 (0)| 00:00:01 |
|* 65 | INDEX UNIQUE SCAN | TRX_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
|* 66 | TABLE ACCESS BY INDEX ROWID | GRADITEM | 1 | 12 | | 1 (0)| 00:00:01 |
|* 67 | INDEX RANGE SCAN | GRADITEM_ID | 19 | | | 1 (0)| 00:00:01 |
|* 68 | TABLE ACCESS BY INDEX ROWID | GRADING | 1 | 8 | | 1 (0)| 00:00:01 |
|* 69 | INDEX UNIQUE SCAN | GRADING_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
| 70 | TABLE ACCESS BY INDEX ROWID | CLASS | 1 | 83 | | 1 (0)| 00:00:01 |
|* 71 | INDEX UNIQUE SCAN | CLASS_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
| 72 | TABLE ACCESS BY INDEX ROWID | PRODUCER | 1 | 87 | | 1 (0)| 00:00:01 |
|* 73 | INDEX UNIQUE SCAN | PRODUCER_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
| 74 | TABLE ACCESS BY INDEX ROWID | PATIENT | 1 | 60 | | 1 (0)| 00:00:01 |
|* 75 | INDEX UNIQUE SCAN | PATIENT_PRIMARY | 1 | | | 1 (0)| 00:00:01 |
|* 76 | INDEX RANGE SCAN | USERS_PRODUCER | 1 | | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("QRP"."ProviderID"="U1"."Producer")
5 - access(ROWID=ROWID)
8 - access(TRIM("QRP"."ProviderID")=TRIM("P"."Producer"))
10 - access(TRIM("QRP"."axiUm_Discipline")=TRIM("CLASS"."Class"))
12 - filter("QRP"."ProviderID" IS NOT NULL AND "QRP"."Location"<'9990' AND "QRP"."ProcedureID"<>185 AND
"QRP"."PatientFirstName"<>'NON-PATIENT' AND ("QRP"."PatientFirstName"<>'TED' OR "QRP"."PatientLastName" NOT LIKE
'CAVENDER%'))
13 - filter( NOT EXISTS (SELECT 0 FROM AXIUM."USERS" "USERS" WHERE "Custom3"='YES' AND LNNVL("User"<>:B1)) OR
TO_CHAR(INTERNAL_FUNCTION("P"."EndDate"),'YYYY')<'2011' OR TRIM(TO_CHAR(INTERNAL_FUNCTION("P"."EndDate"),'YYYY')) IS
NULL OR "TRX"."Procedure"='A0021' OR "TRX"."Procedure"='A0022')
14 - access("TRX"."Producer"="U1"."Producer")
16 - access(ROWID=ROWID)
19 - access("PROC"."Discipline"="CLASS"."Class")
21 - access("TRX"."Producer"="P"."Producer")
23 - access("TRX"."Patient"="PAT"."Patient")
25 - access("TRX"."Procedure"="PROC"."Procedure")
26 - filter("PROC"."Discipline" IS NOT NULL)
27 - filter("TRX"."Deleted"=0 AND "TRX"."Status"='C' AND "TRX"."Procedure" NOT LIKE 'D0149%' AND
"TRX"."Procedure"<>'D5001C')
28 - filter("Custom3"='YES' AND LNNVL("User"<>:B1))
33 - access(TRIM(UPPER("GI"."QuestionText"))=TRIM(UPPER("LLU"."GRADITEM_QuestionText")) AND
TRIM(UPPER("GI"."Text"))=TRIM(UPPER("LLU"."GRADITEM_Text")) AND TRIM("TRX"."Procedure")=TRIM("LLU"."ProcedureCode"))
35 - access("TRX"."Type"="GI"."Type" AND "TRX"."Id"="GI"."Id" AND "TRX"."Treatment"="GI"."Treatment")
36 - access("TRX"."Procedure"="PROC"."Procedure")
40 - access("PROC"."Discipline"="CLASS"."Class")
filter("PROC"."Discipline"="CLASS"."Class")
41 - filter("PROC"."Discipline" IS NOT NULL)
42 - filter("TRX"."Grading"<>0 AND "TRX"."Deleted"=0 AND "TRX"."Status"='C')
43 - filter("GI"."Grading"<>0)
45 - access("TRX"."Producer"="P"."Producer")
46 - filter("G"."Deleted"=0)
47 - access("TRX"."Grading"="G"."Grading")
filter("G"."Grading"<>0 AND "G"."Grading"="GI"."Grading")
49 - access("TRX"."Producer"="U1"."Producer")
51 - access("TRX"."Patient"="PAT"."Patient")
61 - filter("GI"."IsHeading"=3 AND TRIM("GI"."QuestionText")='Comments' AND "GI"."Grading"<>0)
62 - filter("TRX"."Grading"<>0 AND "TRX"."Deleted"=0 AND "TRX"."Status"='C' AND ("TRX"."Procedure"='G1002' OR
"TRX"."Procedure"='G1003'))
63 - access("GI"."Type"="TRX"."Type" AND "GI"."Id"="TRX"."Id" AND "GI"."Treatment"="TRX"."Treatment")
64 - filter("TRX"."Grading"<>0 AND "TRX"."Deleted"=0 AND "TRX"."Status"='C' AND ("TRX"."Procedure"='G1002' OR
"TRX"."Procedure"='G1003') AND "TRX"."Grading"="TRX"."Grading")
65 - access("TRX"."Type"="TRX"."Type" AND "TRX"."Id"="TRX"."Id" AND "TRX"."Treatment"="TRX"."Treatment")
66 - filter("GI"."RelValue"<>0 AND "TRX"."Type"="GI"."Type" AND "TRX"."Treatment"="GI"."Treatment")
67 - access("TRX"."Id"="GI"."Id")
68 - filter("G"."Deleted"=0)
69 - access("TRX"."Grading"="G"."Grading")
filter("G"."Grading"<>0 AND "GI"."Grading"="G"."Grading")
71 - access("TRX"."Discipline"="CLASS"."Class")
73 - access("TRX"."Producer"="P"."Producer")
75 - access("TRX"."Patient"="PAT"."Patient")
76 - access("TRX"."Producer"="U1"."Producer")
138 rows selected.
Elapsed: 00:00:00.62
631015 rows selected.
Elapsed: 00:01:49.13
Output from AUTOTRACE (I think)
NOTE: this post was too long for the forum, so I have removed a number of lines in the following output which appeared to be duplicating the above section (EXPLAIN PLAN).
Statistics
2657 recursive calls
0 db block gets
12734113 consistent gets
13499 physical reads
0 redo size
103697740 bytes sent via SQL*Net to client
69744 bytes received via SQL*Net from client
6312 SQL*Net roundtrips to/from client
76 sorts (memory)
0 sorts (disk)
631015 rows processed
The TKPROF output
select * from llu_v_production_detail_03
call count cpu elapsed disk query current rows
Parse 1 0.51 0.51 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 6312 88.09 98.01 13490 12733584 0 631015
total 6314 88.60 98.52 13490 12733584 0 631015
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 57
Rows Row Source Operation
631015 VIEW LLU_V_PRODUCTION_DETAIL_03 (cr=12733584 pr=13490 pw=0 time=92145592 us)
631015 UNION-ALL (cr=12733584 pr=13490 pw=0 time=91514573 us)
125485 HASH JOIN (cr=8099 pr=6396 pw=0 time=1523326 us)
1725 VIEW index$_join$_007 (cr=24 pr=0 pw=0 time=4777 us)
1725 HASH JOIN (cr=24 pr=0 pw=0 time=3051 us)
1725 INDEX FAST FULL SCAN USERS_PRIMARY (cr=7 pr=0 pw=0 time=50 us)(object id 55023)
1725 INDEX FAST FULL SCAN USERS_PRODUCER (cr=17 pr=0 pw=0 time=16 us)(object id 55024)
144513 HASH JOIN (cr=8075 pr=6396 pw=0 time=2326445 us)
1396 TABLE ACCESS FULL PRODUCER (cr=107 pr=0 pw=0 time=29 us)
144513 HASH JOIN (cr=7968 pr=6396 pw=0 time=2035684 us)
20 TABLE ACCESS FULL CLASS (cr=7 pr=0 pw=0 time=71 us)
151043 TABLE ACCESS FULL QR_PRODUCTION (cr=7961 pr=6396 pw=0 time=313553 us)
462685 FILTER (cr=10755862 pr=7094 pw=0 time=58570941 us)
466790 HASH JOIN (cr=145247 pr=7094 pw=0 time=11007301 us)
1725 VIEW index$_join$_014 (cr=24 pr=0 pw=0 time=6817 us)
1725 HASH JOIN (cr=24 pr=0 pw=0 time=5091 us)
1725 INDEX FAST FULL SCAN USERS_PRIMARY (cr=7 pr=0 pw=0 time=35 us)(object id 55023)
1725 INDEX FAST FULL SCAN USERS_PRODUCER (cr=17 pr=0 pw=0 time=19 us)(object id 55024)
485205 HASH JOIN (cr=145223 pr=7094 pw=0 time=10945107 us)
20 TABLE ACCESS FULL CLASS (cr=7 pr=0 pw=0 time=105 us)
507772 HASH JOIN (cr=145216 pr=7094 pw=0 time=11947534 us)
1396 TABLE ACCESS FULL PRODUCER (cr=107 pr=0 pw=0 time=18 us)
507772 HASH JOIN (cr=145109 pr=7094 pw=0 time=10924473 us)
188967 TABLE ACCESS FULL PATIENT (cr=31792 pr=0 pw=0 time=188998 us)
507772 HASH JOIN (cr=113317 pr=7094 pw=0 time=9652037 us)
895 TABLE ACCESS FULL PROCEDUR (cr=46 pr=0 pw=0 time=65 us)
509321 TABLE ACCESS FULL TRX (cr=113271 pr=7094 pw=0 time=5604567 us)
8548 TABLE ACCESS FULL USERS (cr=10610615 pr=0 pw=0 time=39053120 us)
42669 NESTED LOOPS (cr=507317 pr=0 pw=0 time=3686506 us)
42669 NESTED LOOPS (cr=421535 pr=0 pw=0 time=3217140 us)
45269 NESTED LOOPS (cr=301155 pr=0 pw=0 time=2449542 us)
45323 NESTED LOOPS (cr=210131 pr=0 pw=0 time=2134056 us)
45323 HASH JOIN (cr=119056 pr=0 pw=0 time=1635472 us)
95 TABLE ACCESS FULL LLU_EVALUATION_DESCRIPTIONS (cr=7 pr=0 pw=0 time=118 us)
98272 HASH JOIN (cr=119049 pr=0 pw=0 time=1446703 us)
46996 HASH JOIN (cr=109018 pr=0 pw=0 time=944857 us)
786 MERGE JOIN (cr=50 pr=0 pw=0 time=1528 us)
20 TABLE ACCESS BY INDEX ROWID CLASS (cr=4 pr=0 pw=0 time=99 us)
20 INDEX FULL SCAN CLASS_PRIMARY (cr=1 pr=0 pw=0 time=10 us)(object id 53850)
786 SORT JOIN (cr=46 pr=0 pw=0 time=750 us)
895 TABLE ACCESS FULL PROCEDUR (cr=46 pr=0 pw=0 time=18 us)
47196 TABLE ACCESS FULL TRX (cr=108968 pr=0 pw=0 time=805137 us)
696300 TABLE ACCESS FULL GRADITEM (cr=10031 pr=0 pw=0 time=277 us)
45323 TABLE ACCESS BY INDEX ROWID PRODUCER (cr=91075 pr=0 pw=0 time=414937 us)
45323 INDEX UNIQUE SCAN PRODUCER_PRIMARY (cr=45752 pr=0 pw=0 time=198709 us)(object id 54581)
45269 TABLE ACCESS BY INDEX ROWID GRADING (cr=91024 pr=0 pw=0 time=353081 us)
45270 INDEX UNIQUE SCAN GRADING_PRIMARY (cr=45753 pr=0 pw=0 time=173185 us)(object id 54088)
42669 TABLE ACCESS BY INDEX ROWID USERS (cr=120380 pr=0 pw=0 time=703786 us)
42669 INDEX RANGE SCAN USERS_PRODUCER (cr=46127 pr=0 pw=0 time=249186 us)(object id 55024)
42669 TABLE ACCESS BY INDEX ROWID PATIENT (cr=85782 pr=0 pw=0 time=407452 us)
42669 INDEX UNIQUE SCAN PATIENT_PRIMARY (cr=43098 pr=0 pw=0 time=198477 us)(object id 54370)
176 TABLE ACCESS BY INDEX ROWID USERS (cr=49426 pr=0 pw=0 time=1783886 us)
367 NESTED LOOPS (cr=49149 pr=0 pw=0 time=6159428 us)
190 NESTED LOOPS (cr=48953 pr=0 pw=0 time=409391 us)
190 NESTED LOOPS (cr=48569 pr=0 pw=0 time=407105 us)
190 NESTED LOOPS (cr=48185 pr=0 pw=0 time=404820 us)
191 NESTED LOOPS (cr=47991 pr=0 pw=0 time=410291 us)
193 NESTED LOOPS (cr=47603 pr=0 pw=0 time=422507 us)
193 NESTED LOOPS (cr=46979 pr=0 pw=0 time=416890 us)
193 NESTED LOOPS (cr=46396 pr=0 pw=0 time=414374 us)
14285 TABLE ACCESS FULL GRADITEM (cr=9602 pr=0 pw=0 time=85793 us)
193 TABLE ACCESS BY INDEX ROWID TRX (cr=36794 pr=0 pw=0 time=128427 us)
8218 INDEX UNIQUE SCAN TRX_PRIMARY (cr=28576 pr=0 pw=0 time=64353 us)(object id 54930)
193 TABLE ACCESS BY INDEX ROWID TRX (cr=583 pr=0 pw=0 time=2169 us)
193 INDEX UNIQUE SCAN TRX_PRIMARY (cr=390 pr=0 pw=0 time=918 us)(object id 54930)
193 TABLE ACCESS BY INDEX ROWID GRADITEM (cr=624 pr=0 pw=0 time=4840 us)
1547 INDEX RANGE SCAN GRADITEM_ID (cr=395 pr=0 pw=0 time=2910 us)(object id 54093)
191 TABLE ACCESS BY INDEX ROWID GRADING (cr=388 pr=0 pw=0 time=1887 us)
191 INDEX UNIQUE SCAN GRADING_PRIMARY (cr=197 pr=0 pw=0 time=943 us)(object id 54088)
190 TABLE ACCESS BY INDEX ROWID CLASS (cr=194 pr=0 pw=0 time=1306 us)
190 INDEX UNIQUE SCAN CLASS_PRIMARY (cr=4 pr=0 pw=0 time=551 us)(object id 53850)
190 TABLE ACCESS BY INDEX ROWID PRODUCER (cr=384 pr=0 pw=0 time=1617 us)
190 INDEX UNIQUE SCAN PRODUCER_PRIMARY (cr=194 pr=0 pw=0 time=715 us)(object id 54581)
190 TABLE ACCESS BY INDEX ROWID PATIENT (cr=384 pr=0 pw=0 time=1941 us)
190 INDEX UNIQUE SCAN PATIENT_PRIMARY (cr=194 pr=0 pw=0 time=939 us)(object id 54370)
176 INDEX RANGE SCAN USERS_PRODUCER (cr=196 pr=0 pw=0 time=1389 us)(object id 55024)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 6312 0.00 0.00
db file scattered read 1614 0.02 2.08
SQL*Net message from client 6312 0.00 10.11
SQL*Net more data to client 48662 0.00 0.70
db file sequential read 4645 0.02 7.11
latch: shared pool 7 0.00 0.00
latch: cache buffers chains 1 0.00 0.00
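One way to cross-check a wait summary like the one above while the session is still connected is to query the dynamic performance views; a minimal sketch (the `:sid` bind is a placeholder for the traced session's SID):

```sql
-- Cross-check the TKPROF wait summary from the dynamic performance views.
-- TIME_WAITED is in centiseconds, hence the division by 100.
SELECT event,
       total_waits,
       time_waited / 100 AS seconds_waited
FROM   v$session_event
WHERE  sid = :sid
ORDER  BY time_waited DESC;
```

Sorting by TIME_WAITED makes it obvious at a glance whether the time is going to I/O (db file sequential/scattered read) or to network round-trips (SQL*Net events).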
********************************************************************************
Again, I apologize if this is way more information than is necessary.
All advice/suggestions/assistance will be gratefully accepted.
Carl
Hi Rob,
Thank you for replying. Here is the view definition . . . it looks pretty convoluted, I know.
I am reporting from a database over which I have no control, other than being able to add indexes where needed.
For reporting purposes, I need to build the dataset from a number of different tables; hence all the UNION clauses.
-- CODE FOLLOWS
CREATE OR REPLACE VIEW LLU_V_PRODUCTION_DETAIL_03
AS
SELECT
'QR' AS "Source",
U1."User" AS "UserID",
QRP."ProviderID" AS "ProviderID",
P."LastName" AS "ProviderLastName",
P."FirstName" AS "ProviderFirstName",
TO_CHAR(P."EndDate",'YYYY') AS "GraduationYear",
P."PGroup",
QRP."PatientID",
QRP."ChartNumber",
QRP."PatientLastName",
QRP."PatientFirstName",
QRP."ProcedureID" || '-' || QRP."ProcedureSuffix" AS "Procedure",
QRP."ProcedureDescription",
QRP."Tooth" AS "Site",
QRP."Surface",
QRP."axiUm_Discipline" AS "Discipline",
"CLASS"."Rank" AS "DisciplineSorter",
"CLASS"."Name" AS "DisciplineName",
QRP."Points",
0 AS "Hours",
QRP."ServiceDate",
0 AS "Id",
QRP."UniqueID" AS "Grading",
0 AS "LastSort",
QRP."CategoryID",
'' AS "CPAR comments"
FROM QR_PRODUCTION QRP
INNER JOIN PRODUCER P
ON TRIM(QRP."ProviderID") = TRIM(P."Producer")
INNER JOIN CLASS
ON TRIM(QRP."axiUm_Discipline") = TRIM(CLASS."Class")
INNER JOIN USERS U1
ON QRP."ProviderID" = U1."Producer"
WHERE (QRP."Location" < '9990')
AND (QRP."ProviderID" IS NOT NULL)
AND (QRP."ProcedureID" <> 185)
-- skip the Cavender family - training patients
AND NOT (QRP."PatientLastName" LIKE 'CAVENDER%' AND QRP."PatientFirstName" = 'TED')
AND QRP."PatientFirstName" <> 'NON-PATIENT'
--ORDER BY QRP."ProcedureID"
UNION ALL
SELECT
'axiUm TRX' AS "Source",
U1."User" AS "UserID",
TRX."Producer" AS "ProviderID",
P."LastName",
P."FirstName",
TO_CHAR(P."EndDate",'YYYY') AS "GraduationYear",
P."PGroup",
PAT."Patient",
PAT."Chart",
PAT."Last" AS "PatientLastName",
PAT."First" AS "PatientFirstName",
TRX."Procedure",
PROC."Description",
TRX."Site",
TRX."Surface",
TRIM(PROC."Discipline") AS "Discipline",
"CLASS"."Rank" AS "DisciplineSorter",
"CLASS"."Name",
CASE WHEN
((TRX."Procedure" IN ('A0013','A0019','A0020','A0021','A0023','A0024','A0025','A0026','A0027','A0028','A0029','A0030','O179'))
OR
-- no points are to be awarded for any procedures performed on typodonts
(EXISTS (SELECT 1 FROM PTTYPES PTT WHERE PTT."Patient" = PAT."Patient" and PTT."PatType" = 'TYPO' and PTT."Deleted" = 0)))
AND -- except for the following procedures
(TRX."Procedure" NOT IN ('A0015','A0016','A0018','D0210'))
THEN 0 ELSE PROC."RelValue" END AS "Points",
CASE WHEN TRX."Procedure" IN ('A0013','A0019','A0020','A0021','A0023','A0024','A0025','A0026','A0027','A0028','A0029','A0030','O179')
THEN PROC."RelValue" ELSE 0 END AS "Hours",
TRX."TreatmentDate",
TRX."Id",
TRX."Grading",
-1 AS "LastSort",
-- additional link conditions added and table name changed on 1 July 2009 - cji
(SELECT "CategoryID" FROM LLU_CATEGORIES_X_PROCEDURES_03 LLU
WHERE TRX."Procedure" = LLU."ProcedureID"
AND TO_CHAR(P."EndDate",'YYYY') = LLU."GraduationYear"
AND SUBSTR(TRX."Producer",1,1) = LLU."ProviderType"
AND LLU."CategoryID" NOT IN (82)
) AS "CategoryID",
'' AS "CPAR comments"
FROM TRX
INNER JOIN PATIENT PAT
ON TRX."Patient" = PAT."Patient"
INNER JOIN USERS U1
ON TRX."Producer" = U1."Producer"
INNER JOIN PROCEDUR PROC
ON TRX."Procedure" = PROC."Procedure"
INNER JOIN CLASS
ON PROC."Discipline" = CLASS."Class"
INNER JOIN PRODUCER P
ON TRX."Producer" = P."Producer"
WHERE (TRX."Status" = 'C')
AND (TRX."Deleted" = 0)
--AND (TRX."Grading" = 0)
AND (TRX."Procedure" NOT LIKE 'D0149%')
AND (TRX."Procedure" <> 'D5001C')
-- exclude all procedures approved by Peds faculty ONLY FOR CLASS OF 2011 AND LATER
-- EXCEPT FOR the Peds Block code (A0021 and A0022) - always include those
AND NOT
(
TRX."AppUser" IN (SELECT "User" FROM USERS WHERE "Custom3" = 'YES') -- Peds faculty
AND
TO_CHAR(P."EndDate",'YYYY') >= '2011'
AND
TRIM(TO_CHAR(P."EndDate",'YYYY')) IS NOT NULL
AND
TRX."Procedure" NOT IN ('A0021','A0022')
)
UNION ALL
SELECT
'axiUm GRADING' AS "Source",
U1."User" AS "UserID",
TRX."Producer" AS "ProviderID",
P."LastName",
P."FirstName",
TO_CHAR(P."EndDate",'YYYY'), -- Graduation year
P."PGroup",
PAT."Patient",
PAT."Chart",
PAT."Last" AS "PatientLastName",
PAT."First" AS "PatientFirstName",
TRX."Procedure",
LLU."Description",
TRX."Site",
TRX."Surface",
TRIM(PROC."Discipline") AS "Discipline",
"CLASS"."Rank" AS "DisciplineSorter",
"CLASS"."Name",
LLU."Points", --LLU_POINTS_FROM_EVALUATIONS(TRX."Procedure", nvl(GI."Text",0)),
0 AS "Hours",
TRX."TreatmentDate",
GI."Id",
GI."Grading",
GI."Row" AS "LastSort",
LLU."CategoryID",
'' AS "CPAR comments"
FROM TRX
INNER JOIN PATIENT PAT
ON TRX."Patient" = PAT."Patient"
INNER JOIN PRODUCER P
ON TRX."Producer" = P."Producer"
INNER JOIN GRADING G
ON TRX."Grading" = G."Grading"
INNER JOIN GRADITEM GI
ON G."Grading" = GI."Grading"
AND TRX."Type" = GI."Type"
AND TRX."Id" = GI."Id"
AND TRX."Treatment" = GI."Treatment"
INNER JOIN PROCEDUR PROC
ON TRX."Procedure" = PROC."Procedure"
INNER JOIN CLASS
ON PROC."Discipline" = CLASS."Class"
INNER JOIN LLU_EVALUATION_DESCRIPTIONS LLU
ON (TRIM(UPPER(GI."QuestionText")) = TRIM(UPPER(LLU."GRADITEM_QuestionText")))
AND (TRIM(UPPER(GI."Text")) = TRIM(UPPER(LLU."GRADITEM_Text")))
AND (TRIM(TRX."Procedure") = TRIM(LLU."ProcedureCode"))
INNER JOIN USERS U1
ON TRX."Producer" = U1."Producer"
WHERE (TRX."Status" = 'C')
AND (TRX."Deleted" = 0)
AND (TRX."Grading" <> 0)
AND (G."Deleted" = 0)
UNION ALL
SELECT 'ClinPtsAdj',
U1."User" AS "UserID",
TRX."Producer",
P."LastName", P."FirstName",
TO_CHAR(P."EndDate",'YYYY'), -- Graduation year
P."PGroup", PAT."Patient", PAT."Chart", PAT."Last", PAT."First",
TRX."Procedure",
'Clinic points adjustment',
'' AS "Site",
'' AS "Surface",
TRIM(TRX."Discipline"),
"CLASS"."Rank",
"CLASS"."Name",
DERIVED."Points",
0 AS "Hours",
GI."Date",
GI."Id",
GI."Grading",
GI."Row",
NULL AS "CategoryID",
TRIM(REPLACE(GI."Note", CHR(13) || CHR(10), ' '))
FROM TRX
INNER JOIN PATIENT PAT
ON (TRX."Patient" = PAT."Patient")
INNER JOIN PRODUCER P
ON (TRX."Producer" = P."Producer")
INNER JOIN CLASS
ON (TRX."Discipline" = CLASS."Class")
INNER JOIN GRADING G
ON (TRX."Grading" = G."Grading")
INNER JOIN GRADITEM GI
ON (GI."Grading" = G."Grading")
AND (GI."Type" = TRX."Type")
AND (GI."Id" = TRX."Id")
AND (GI."Treatment" = TRX."Treatment")
INNER JOIN USERS U1
ON TRX."Producer" = U1."Producer"
INNER JOIN
(
SELECT TRX."Type", TRX."Id", TRX."Treatment", TRX."Grading",
CASE WHEN TRX."Procedure" = 'G1003' THEN GI."RelValue" * -1 ELSE GI."RelValue" END AS "Points"
FROM TRX, GRADITEM GI
WHERE (TRX."Type" = GI."Type")
AND (TRX."Id" = GI."Id")
AND (TRX."Treatment" = GI."Treatment")
AND (TRX."Procedure" IN ('G1002','G1003'))
AND (TRX."Status" = 'C')
AND (TRX."Deleted" = 0)
AND (TRX."Grading" <> 0)
AND (GI."RelValue" <> 0)
) DERIVED
ON (TRX."Type" = DERIVED."Type")
AND (TRX."Id" = DERIVED."Id")
AND (TRX."Treatment" = DERIVED."Treatment")
AND (TRX."Grading" = DERIVED."Grading")
WHERE (TRX."Status" = 'C')
AND (TRX."Deleted" = 0)
AND (TRX."Grading" <> 0)
AND (G."Deleted" = 0)
AND (TRX."Procedure" IN ('G1002','G1003'))
AND (GI."IsHeading" = 3)
AND (TRIM(GI."QuestionText") = 'Comments')
--ORDER BY "ProviderID", "DisciplineSorter", "Id", "Grading", "Site", "LastSort";
A couple of additional points:
The table USERS already had an index on column "Producer"; I added an index on column "Custom3" but it did not seem to make any difference.
Table USERS has 1,725 rows
Table PRODUCER (aliased as P) has 1,396 rows
Table TRX has 1,443,764 rows.
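Given row counts like these, and in line with the dbms_stats advice earlier in the thread, a hedged first step before reaching for parameter changes would be to refresh statistics on the largest table so the CBO works from current figures (the table name is from this thread; the parameter choices are illustrative):

```sql
-- Refresh optimizer statistics on the 1.4M-row TRX table, including its
-- indexes, so the CBO's cardinality estimates reflect current data.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'TRX',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);   -- also gather statistics on TRX's indexes
END;
/
```

Stale statistics are a far more common cause of bad nested-loop plans than any init.ora setting, which is why the reply above recommends exhausting dbms_stats before touching OICA.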
Any additional information you need I will be glad to provide.
Thanks a bunch; it may not be too late to teach an old dog new tricks.
And thank you all for the kind words about the posting; about the only thing I can do well is follow directions.
Carl
-
Database Performance: Large execution time.
Hi,
I have a TPC-H database of size 1GB. I am running a nested query having multiple joins between 5 tables and a group by and order by on three attributes. It took around 1 hour for this query to execute (it was also fired at the point which can be considered the center of the selectivity range).
Following is the query:
select
supp_nation,
cust_nation,
l_year,
sum(volume)
from
(
select
n1.n_name as supp_nation,
n2.n_name as cust_nation,
YEAR (l_shipdate) as l_year,
l_extendedprice * (1 - l_discount) as volume
from
supplier,
lineitem,
orders,
customer,
nation n1,
nation n2
where
s_suppkey = l_suppkey
and o_orderkey = l_orderkey
and c_custkey = o_custkey
and s_nationkey = n1.n_nationkey
and c_nationkey = n2.n_nationkey
and (
(n1.n_name = 'FRANCE' and n2.n_name = 'GERMANY')
or (n1.n_name = 'GERMANY' and n2.n_name = 'FRANCE')
)
and l_shipdate between '1995-01-01' and '1996-12-31'
and o_totalprice <= 246835
and c_acctbal <= -422.16
) as shipping
group by
supp_nation,
cust_nation,
l_year
order by
supp_nation,
cust_nation,
l_year
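One thing worth double-checking in a predicate list like this: AND binds more tightly than OR, so without parentheses around the two nation branches, the date and price filters attach only to the second branch. A tiny sketch on DUAL illustrates (the constants are illustrative):

```sql
-- AND binds tighter than OR, so these two predicates evaluate differently:
-- the first is TRUE (1=1 short-circuits the OR), the second is FALSE.
SELECT CASE WHEN 1=1 OR 1=0 AND 1=0   THEN 'true' ELSE 'false' END AS as_written,
       CASE WHEN (1=1 OR 1=0) AND 1=0 THEN 'true' ELSE 'false' END AS parenthesized
FROM dual;
```

A mis-grouped predicate can silently inflate the intermediate result set and account for a large part of the runtime gap being described below.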
Moreover, it has been observed that such queries (nested queries, subqueries, aggregations) take a very long time to execute compared to other databases. The above query took only 18 seconds to execute on an Oracle server.
The machine configuration and the database configuration are as follows:
Machine:
64-bit Windows Vista operating System.
RAM: 8GB.
CPU: 3.0 GHZ
Database:
Data Area: No. of Volumes: 1, Size of Volume: 4GB (as mentioned on wiki, for 10 GB database 4 volumes must be assigned.)
Log Area: Volume: 1, Size: 1GB
Data and Log are on same disk.
Caches:
I/O Buffer Cache: 1 GB
Data Cache: 1 GB
Catalog Cache: 30 MB
Parameters:
CacheMemorySize - 131072
ReadAheadLobThreshold- 3000
Also, we have set other optimizer parameters as required and recommended by SAPDB. Even then I am not able to get better performance.
How to increase or better the performance? Is there any other parameter that remains to be set?
> I have a TPC-H database of size 1GB. I am running a nested query having multiple joins between 5 tables and a group by and order by on three attributes. It took around 1 hour for this query to execute (it was also fired at the point which can be considered the center of the selectivity range).
> Moreover, it has been observed that such queries (nested queries, subqueries, aggregations) take a very long time to execute compared to other databases. The above query took only 18 seconds to execute on an Oracle server.
Such general statements are usually total crap.
MaxDB is running for many SAP customers, and internally at SAP, in many installations - even for BI systems.
We don't know your Oracle server, we don't know the execution plans - so there's nothing to tell why it may be the case here.
> Data Area: No. of Volumes: 1, Size of Volume: 4GB (as mentioned on wiki, for 10 GB database 4 volumes must be assigned.)
It's a rule of thumb - having just one volume is a rather bad idea since you don't get parallel I/O with that.
> Log Area: Volume: 1, Size: 1GB
> Data and Log are on same disk.
Although this is irrelevant for the query performance, it's nonsense in productive environments and a performance killer as well.
> I/O Buffer Cache: 1 GB
> Data Cache: 1 GB
Why don't you allow more cache?
> Catalog Cache: 30 MB
What for? Do you understand the catalog cache in MaxDB?
It's a per session setting...
> Also, we have set other optimizer parameters as required and recommended by SAPDB. Even then I am not able get better performance.
Can you be more specific here?
What MaxDB version are you using? What parameter settings do you use?
> How to increase or better the performance? Is there any other parameter that remains to be set?
How about showing us the execution plan for the statement and the index structure?
How should we know what MaxDB does here that takes so much time?
Did you have the DBanalyzer running while the query ran?
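For what it's worth, MaxDB prints the optimizer's chosen strategy when a statement is prefixed with EXPLAIN; a minimal sketch using tables from the query above (the two-table join shown is illustrative, not the full statement):

```sql
-- Ask MaxDB for the access strategy it would use, without running the query
EXPLAIN
SELECT s_suppkey, l_orderkey
FROM   supplier, lineitem
WHERE  s_suppkey = l_suppkey
```

The resulting plan lists, per table, the chosen strategy and estimated page counts, which is exactly the information being asked for here.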
TPC-H is a benchmark for ad-hoc, decision making support: did you enable any of the BI feature pack features of MaxDB? What about prefetching? What about table clustering, column compression, star join optimization ...?
All in all - you left us here with "MaxDB is slower than Oracle" and nothing to work on.
That's not useful in any way.
Want some answers - provide some information!
regards,
Lars