Query text and execution plan collection from a production Oracle 11g database
Hi all,
I would like to collect the query text, execution plan, and other statistics for all queries (mainly SELECT queries) from a production database.
I am doing this in OEM by clicking the Top Activity link under the Performance tab, but this gives only recent top SQL.
That approach is helpful only when I need to debug recent queries. If I need to find slow-running queries and their execution plans at the end of the day, or some time later, it is not helpful.
Any better ideas for doing this would be much appreciated.
We did the following:
1. Used awrextr.sql to export a dump file from the production database (exported snapshot IDs 331 to 560).
2. Transferred the file to the test database server.
3. Used awrload.sql to import it into the test database.
But when we use OEM and go to the Automatic Workload Repository link under the Server tab,
it does not show the snapshots of the production database (which we imported into the test database)
and shows only the snapshots that were already in the test database.
We did not find any errors in the export/import.
Do we need to do something else to display the production database's snapshots in the test database?
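A point the thread never reaches: awrload stores the imported snapshots under the source database's DBID, while the OEM AWR page shows only the local DBID, so the imported data is usually still present but invisible in that view. A sketch to verify, assuming SQL*Plus access on the test database:

```sql
-- List the DBIDs present in the AWR repository; the imported
-- production snapshots should appear under a second, non-local DBID.
SELECT dbid, MIN(snap_id) AS min_snap, MAX(snap_id) AS max_snap
FROM   dba_hist_snapshot
GROUP  BY dbid;
```

Reports against the imported DBID can then be generated with the "i" variants of the AWR scripts (e.g. @?/rdbms/admin/awrrpti.sql), which prompt for a DBID, rather than through the OEM page.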
Similar Messages
-
How query cost and execution time are releated ?
hi experts,
I am curious to know how query cost and execution time are related.
One query takes less time but shows a cost of 65%, while another takes more time but shows a cost of 0%.
How do I relate the two and improve query performance?
Thanks
I think you are referring to the cost relative to the batch (65%): when there is more than one statement, it compares the cost of each statement within the batch.
I assume it mainly uses subtree cost and I/O statistics as the cost, but in some cases I may be wrong, e.g. when there are multi-line functions and many other factors influencing the cost; I would say it depends on the query.
Cost is unit-less.
The reason these costs exist is the query optimization SQL Server does: it performs cost-based optimization, which means the optimizer formulates many different ways to execute the query, assigns a cost to each of these alternatives, and chooses the one with the least cost. The cost tagged on each alternative is heuristically calculated and is supposed to roughly reflect the amount of processing and I/O that alternative will take.
refer :
http://blogs.msdn.com/b/sqlqueryprocessing/archive/2006/10/11/what-is-this-cost.aspx
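A minimal way to see the estimated costs and the actual times side by side (the table and predicate here are hypothetical, since the original queries were not posted):

```sql
-- Show the estimated plan, including cost columns, without executing:
SET SHOWPLAN_ALL ON;
GO
SELECT * FROM dbo.Orders WHERE OrderDate >= '20120101';
GO
SET SHOWPLAN_ALL OFF;
GO

-- Execute the query and measure actual CPU and elapsed time:
SET STATISTICS TIME ON;
GO
SELECT * FROM dbo.Orders WHERE OrderDate >= '20120101';
GO
SET STATISTICS TIME OFF;
GO
```

Comparing the two shows cases like the one described above, where a low-cost plan still runs slowly (or vice versa), because the cost is only an estimate.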
Thanks
Saravana kumar C -
If I'm copying text and/or vector elements from InDesign to Photoshop, why do their pixel sizes change even though I opened a document of the same size and my InDesign file is a web file?
>my indesign file is a web file
Pardon?
Or do you mean that, when you created a new document, you chose Web as the intent?
When I restore my backup from the computer, why do my minutes, texts, and data get removed from my pay-as-you-go SIM card?
As long as it's unlocked it will work with any SIM, incl. EE SIMs.
If you buy abroad you need to ensure it is compatible with UK mobile frequencies, on which I cannot advise. -
Move database from server running Oracle 11G to server running Oracle 12c
I'm trying to find the easiest way to migrate a database from a server running Oracle 11g to a server running Oracle 12c.
I have tried using RMAN's DUPLICATE DATABASE command but ran into far too many issues when setting up both servers before running the duplicate. If someone could provide clear guidance on configuring both servers for the RMAN DUPLICATE command, that would be great.
The other thing I have tried is performing a cold backup of all the files for the database (control files, data files, etc.), copying them to the new server, and then recreating the control file with the correct locations and names for the datafiles and redo logs. After recreating the control file, the database will not start up. I suspect this is due to the version differences.
If you can provide clear (i.e. migration-for-dummies) instructions for getting either of these methods to work, it would be greatly appreciated. I'm also open to any other method that achieves what I'm trying to accomplish. Thanks, Paul Noyes
Pl do not post duplicates - Move database from server running Oracle 11G to server running Oracle 12c
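Not an answer given in the thread, but worth noting: datafiles restored from an 11g cold backup cannot simply be opened by 12c binaries; the data dictionary must be upgraded first, which is one likely reason the database would not start. A sketch of the missing step, assuming the files and control file have already been recreated under the 12c home:

```sql
-- Open the restored 11g database in upgrade mode under 12c:
STARTUP UPGRADE;

-- Then, from the OS shell, run the 12c upgrade driver
-- (paths assume a default 12c installation):
-- $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl catupgrd.sql
```

A full Data Pump export/import is the other commonly recommended route, since it avoids upgrading the dictionary in place entirely.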
-
Migrated an Oracle 10g database from Windows to Oracle 11g Linux
I have migrated an Oracle 10g database from Windows to Oracle 11g on Linux. The database is performing very slowly.
Please guide me on where to begin looking into it.
Some documents say to gather system statistics. How do I check whether system statistics are up to date?
What are the crucial initialization parameters?
Hi,
Let me just point you out to the documentation, which may concern you:
I have migrated an Oracle 10g database from Windows to Oracle 11g on Linux. The database is performing very slowly.
Managing Optimizer Statistics
How do I check whether system statistics are up to date?
Managing Optimizer Statistics
What are the crucial initialization parameters?
Configuring a Database for Performance
Configuring a Database for Performance
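On the system-statistics question specifically, a sketch (not from the reply above) of how to check when they were gathered and how to refresh them:

```sql
-- The gathering window for system statistics is recorded in the
-- DSTART/DSTOP rows of SYSSTATS_INFO:
SELECT pname, pval1, pval2
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_INFO';

-- Gather workload system statistics over a representative window
-- (the 30-minute interval is an arbitrary example):
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('INTERVAL', interval => 30);
```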
Thanks &
Best Regards, -
Steps for migrating from Oracle10g to Oracle 11g
Hi,
Please send me steps to migrate from Oracle10g to Oracle 11g on Redhat Linux x86_64 on RAC with ASM.
Regards,
Tushar
http://download.oracle.com/docs/cd/B28359_01/server.111/b28300/toc.htm
-
SQL Query C# Using Execution Plan Cache Without SP
I have a situation where I am executing a SQL query through C# code. I cannot use a stored procedure because the database is hosted by another company and I'm not allowed to create any new procedures.
If I run my query in SQL Server Management Studio, the first run takes approx. 3 seconds, then every run after that is instant. My query looks for date ranges and accounts, so if I loop through accounts, each one takes approx. 3 seconds in my code. If I close the program and run it again, the accounts that originally took 3 seconds are now instant in my code.
So my conclusion was that it is using a cached execution plan. I cannot find how to get the execution plan reused for non-stored-procedure code. I have created a SqlCommand object with my query and 3 parameters; I loop through, keeping the same command object and only changing the 3 parameters. It seems that each version with different parameters gets cached separately in the plan cache, so each is fast only for that particular combination.
My question is: how can I stop SQL Server from doing this, either by loading the execution plan or by making SQL Server treat my query as the same execution plan as the previous one? I have found multiple questions on this that pertain to stored procedures, but nothing for direct text query code.
Bob;
I ran the query with different accounts and different dates with instant results AFTER the very first query, which took the expected 3 seconds. I changed all 3 fields that I have parameters for in code, and it still remains instant in Management Studio but still remains slow in my code. I'm providing a sample of the base query I'm using.
select i.Field1, i.Field2,
d.Field3 'Field3',
ip.Field4 'Field4',
k.Field5 'Field5'
from SampleDataTable1 i,
SampleDataTable2 k,
SampleDataTable3 ip,
SampleDataTable4 d
where i.Field1 = k.Field1 and i.Field4 = ip.Field4
and i.FieldDate between '<fromdate>' and '<thrudate>'
and k.Field6 = <Account>
Obviously the field names have been altered because the database is not mine, but other than the actual names it is accurate. It works; it just takes too long in code, as described in the initial post.
My parameters are set up during initialization of the connection and the command:
sqlCmd.Parameters.Add("@FromDate", SqlDbType.DateTime);
sqlCmd.Parameters.Add("@ThruDate", SqlDbType.DateTime);
sqlCmd.Parameters.Add("@Account", SqlDbType.Decimal);
Each loop through the code changes these 3 fields:
sqlCmd.Parameters["@FromDate"].Value = dtFrom;
sqlCmd.Parameters["@ThruDate"].Value = dtThru;
sqlCmd.Parameters["@Account"].Value = sAccountNumber;
using (SqlDataReader reader = sqlCmd.ExecuteReader())
{
    while (reader.Read())
    {
        // process the row
    }
}
One thing I have noticed is that the account field is decimal(20,0), and the init I'm using defaults to decimal(10), so I'm going to change the init to:
sqlCmd.Parameters["@Account"].Precision = 20;
sqlCmd.Parameters["@Account"].Scale = 0;
I don't believe this will change anything, but at this point I'm ready to try anything to get the query running faster.
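Not from the thread, but one way to check whether each parameter-type signature is compiling its own plan (the filter uses the sample table name from the query above):

```sql
-- Multiple cache entries for the same statement text usually mean
-- the parameter type signatures differ between calls (e.g. the
-- decimal(10) vs decimal(20,0) mismatch noted above).
SELECT st.text, cp.usecounts, cp.objtype, cp.size_in_bytes
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE  st.text LIKE '%SampleDataTable1%';
```

If the fixed Precision/Scale makes the type signature match every call, the usecounts of a single entry should climb instead of new entries appearing.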
Bob; -
Explain Plan and Execution Plan in 10gR2.
Hi,
Version 10.2.0.1.0.
I have two questions:
1) If the explain plan differs from the actual execution path in this version, is it safe to assume that the statistics are stale (or not gathered at all) on the underlying tables?
2) Can you in any way make a query use the RBO instead of the CBO? (I know it doesn't make sense, since the CBO is a lot smarter, but this is for purely academic reasons.)
Thank you,
Rahul.
The rule-based optimizer is most definitely present in 10gR2. It might not be in the documentation, but it is still there.
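On question 1, which the reply does not address: a differing plan is not necessarily stale statistics; bind peeking and session settings can also cause it. The actual run-time plan can be compared against EXPLAIN PLAN with DBMS_XPLAN (the table and predicate below are only illustrative):

```sql
-- Estimated plan from EXPLAIN PLAN:
EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Plan actually used by the last statement run in this session:
SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
```

If the two differ, gathering fresh statistics is a reasonable first step, but not the only possible cause.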
C:\sql>sqlplus test/test
SQL*Plus: Release 10.2.0.2.0 - Production on Tue Oct 10 15:43:34 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
test@SVTEST> set autotrace traceonly
test@SVTEST> alter session set optimizer_mode=rule;
Session altered.
Elapsed: 00:00:00.01
test@SVTEST> select * from dual;
Elapsed: 00:00:00.03
Execution Plan
Plan hash value: 272002086
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| DUAL |
Note
- rule based optimizer used (consider using cbo)
Statistics
1 recursive calls
0 db block gets
3 consistent gets
2 physical reads
0 redo size
407 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
test@SVTEST> -
Query parse and execution order
Hi,
In the SQL below, I understand that it is parsed from right to left: first the filter condition's field name, then the syntax is verified, and then the join column names.
From the execution plan below, I believe it also executes from right to left, first filtering the data and then joining the two tables; is this correct?
Is this order fixed, or does it keep changing based on statistics or other parameters?
I would like to know how this SELECT statement is executed at run time: whether the data is joined first and then the filter condition is applied, or the other way round. Please give me more details, thank you.
SELECT * FROM EMP E, DEPT D
WHERE E.DEPTID = D.DEPTID AND
D.DEPTNAME = 'DEPT1';
Below is the execution plan,
SQL > SELECT * FROM EMP E, DEPT D
2 WHERE E.DEPTID = D.DEPTID AND
3 D.DEPTNAME = 'DEPT1';
Execution Plan
Plan hash value: 1123238657
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 143 | 5 (20)| 00:00:01 |
|* 1 | HASH JOIN | | 1 | 143 | 5 (20)| 00:00:01 |
| 2 | TABLE ACCESS FULL| EMP | 1 | 78 | 2 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| DEPT | 1 | 65 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("E"."DEPTID"="D"."DEPTID")
3 - filter("D"."DEPTNAME"='DEPT1')
Note
- dynamic sampling used for this statement
>
- Oracle does the full table scan of EMP table and makes an in-memory hash table with DEPTID as the hash key
- DEPT table is being read; as Oracle reads DEPT table, applies the filter predicate ("D"."DEPTNAME"='DEPT1'), applies the hashing function to the join key (DEPTID)
and uses it to locate the matching row from EMP
- rows are returned to the client
>
I believe that is correct for this particular query and plan only because there is only one row in each table. If the tables had many more records then the smaller of the two tables would be chosen to create the hash table and the following should apply.
The DEPT table is the smaller of the two tables so Oracle would do a full table scan of DEPT to make the in-memory hash with DEPTID as the hash key.
Then the EMP table (the larger table) is scanned and the DEPTID value used to probe the hash table to find the matching record and then the other filter predicate ("D"."DEPTNAME"='DEPT1') used to eliminate rows.
See section 11.6.4 Hash Joins in the Performance Tuning Guide
>
Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows.
This method is best used when the smaller table fits in available memory. The cost is then limited to a single read pass over the data for the two tables.
>
This example uses a copy of emp and dept with no primary key or constraints
SQL> SELECT * FROM EMP1 E, DEPT1 D
2 WHERE E.DEPTNO = D.DEPTNO AND
3 D.DNAME = 'RESEARCH';
Execution Plan
Plan hash value: 619452140
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 380 | 7 (15)| 00:00:01 |
|* 1 | HASH JOIN | | 5 | 380 | 7 (15)| 00:00:01 |
|* 2 | TABLE ACCESS FULL| DEPT1 | 1 | 30 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL| EMP1 | 14 | 644 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("E"."DEPTNO"="D"."DEPTNO")
2 - filter("D"."DNAME"='RESEARCH')
Note
- dynamic sampling used for this statement (level=2) -
Same query with different execution plan
Hello All,
I wonder why SQL Server creates different execution plans for the queries below.
Thanks.
You can look at the estimated query plan, either visually in SSMS or, alternatively, by running the query after the instruction SET SHOWPLAN_TEXT ON.
The optimizer is the component of SQL Server that determines how the query is executed. It is cost-based: it assesses different execution plans, estimates the cost of each, and then selects the cheapest. In this context, cheapest means the one with the shortest estimated runtime.
In your particular case, the estimate for the second query is that scanning just a small part of the nonclustered index and then looking up the table data for the qualifying rows is the cheapest approach, because the estimated number of qualifying rows is low.
In the first query, it estimated that looking up the many qualifying rows would be too expensive, and that it would be cheaper to simply scan the entire clustered index and filter out all unwanted rows. Note that the clustered index includes the actual table data.
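A minimal sketch of the SET SHOWPLAN_TEXT approach mentioned above (the query itself is hypothetical, since the original queries were not posted):

```sql
SET SHOWPLAN_TEXT ON;
GO
-- The plan is returned as text instead of the query executing:
SELECT * FROM dbo.Orders WHERE CustomerId = 42;
GO
SET SHOWPLAN_TEXT OFF;
GO
```

Running both of your queries this way makes the index-seek-plus-lookup vs clustered-index-scan difference visible directly.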
Gert-Jan -
HT1766 How can I transfer texts and phone history? From iPhone 4 to 5s
Does anyone know how to transfer texts and call history (ph#'s especially) from former iPhone 4 to iPhone 5s?
Make a backup of the old device and restore the new device to that backup. You will get an exact duplicate of the old phone.
-
Powershell and SCCM 2012 Collections from AD
Greetings everyone. I am trying to take a selection of groups in a certain OU within AD and make SCCM 2012 user collections from them. I have been playing around with just creating the collections based on a txt file, which would work and is probably easier. Can someone
go through this mess I have below and make it work? I found some scripts that do what I need, but they just aren't flowing together well. Thanks for any help you can give. I think the 'DOMAIN\\'$ADName isn't going to work either. Thoughts?
import-module ($Env:SMS_ADMIN_UI_PATH.Substring(0,$Env:SMS_ADMIN_UI_PATH.Length-5) + '\ConfigurationManager.psd1')
$PSD = Get-PSDrive -PSProvider CMSite
CD "$($PSD):"
$ADName = Get-Content -Path C:\fso\ADgroups.txt
New-CMUserCollection -Name $ADName -LimitingCollectionName "All Users and User Groups" -RefreshType Both
Add-CMUserCollectionQueryMembershipRule -CollectionName $ADName -QueryExpression "select * from SMS_R_User where SMS_R_User.UserGroupName = 'DOMAIN\\'$ADName" -RuleName "QueryRuleName1"
Hi,
I highly recommend asking this question in the ConfigMgr 2012 SDK/PowerShell forum:
http://social.technet.microsoft.com/Forums/en-US/home?forum=configmanagersdk&filter=alltypes&sort=lastpostdesc
This forum is meant for general PowerShell questions. Hopefully you'll get a good response from someone here, but the people who are familiar with the ConfigMgr cmdlets will be more likely to see your question in the specialized forum.
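The thread never resolved the script itself. A sketch of the likely fixes, untested and using the cmdlet names exactly as in the original post: loop over each group name (Get-Content returns an array), and build the WQL string as a single expandable string so $ADName lands inside the quoted literal:

```powershell
# Load the ConfigMgr module and switch to the site drive, as in the original.
Import-Module ($Env:SMS_ADMIN_UI_PATH.Substring(0, $Env:SMS_ADMIN_UI_PATH.Length - 5) + '\ConfigurationManager.psd1')
$PSD = Get-PSDrive -PSProvider CMSite
Set-Location "$($PSD):"

# One collection per group name in the file.
foreach ($ADName in (Get-Content -Path C:\fso\ADgroups.txt)) {
    New-CMUserCollection -Name $ADName -LimitingCollectionName "All Users and User Groups" -RefreshType Both
    # Backslash doubled for WQL; $ADName expands inside the double-quoted string.
    $query = "select * from SMS_R_User where SMS_R_User.UserGroupName = 'DOMAIN\\$ADName'"
    Add-CMUserCollectionQueryMembershipRule -CollectionName $ADName -QueryExpression $query -RuleName "QueryRule_$ADName"
}
```

The rule name is made unique per collection here ("QueryRule_$ADName" is an assumption, not from the thread).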
Don't retire TechNet! -
(Don't give up yet - 12,950+ strong and growing) -
How can I get an execution plan for a Function in oracle 10g
Hi
I have:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I would like to know if is possible to get an EXECUTION PLAN for a FUNCTION if so, how can I get it ?
Regards
You can query the AWR data if the SQL you are interested in consumes enough resources.
Here is a SQL*Plus script I call MostCPUIntensiveSQLDuringInterval.sql (nice name eh?)
You'll need to know the AWR snap_id numbers for the time period of interest, then run it like this to show the top 20 SQLs during the interval:
@MostCPUIntensiveSQLDuringInterval 20
The script outputs a statement to run when you want to look at the plan for an interesting-looking statement.
-- MostCPUIntensiveSQLDuringInterval: Report on the top n SQL statements during an AWR snapshot interval.
-- The top statements are ranked by CPU usage
col inst_no format 999 heading 'RAC|Node'
col sql_id format a16 heading 'SQL_ID'
col plan_hash_value format 999999999999 heading 'Plan|hash_value'
col parsing_schema_name format a12 heading 'Parsing|Schema'
col module format a10 heading 'Module'
col pct_of_total format 999.99 heading '% Total'
col cpu_time format 999,999,999 heading 'CPU |Time (ms)'
col elapsed_time format 999,999,999 heading 'Elapsed |Time (ms)'
col lios format 9,999,999,999 heading 'Logical|Reads'
col pios format 999,999,999 heading 'Physical|Reads'
col execs format 99,999,999 heading 'Executions'
col fetches format 99,999,999 heading 'Fetches'
col sorts format 999,999 heading 'Sorts'
col parse_calls format 999,999 heading 'Parse|Calls'
col rows_processed format 999,999,999 heading 'Rows|Processed'
col iowaits format 999,999,999,999 heading 'iowaits'
set lines 195
set pages 75
PROMPT Top &&1 SQL statements during interval
SELECT diff.*
FROM (SELECT e.instance_number inst_no
,e.sql_id
,e.plan_hash_value
,e.parsing_schema_name
,substr(trim(e.module),1,10) module
,ratio_to_report(e.cpu_time_total - b.cpu_time_total) over (partition by 1) * 100 pct_of_total
,(e.cpu_time_total - b.cpu_time_total)/1000 cpu_time
,(e.elapsed_time_total - b.elapsed_time_total)/1000 elapsed_time
,e.buffer_gets_total - b.buffer_gets_total lios
,e.disk_reads_total - b.disk_reads_total pios
,e.executions_total - b.executions_total execs
,e.fetches_total - b.fetches_total fetches
,e.sorts_total - b.sorts_total sorts
,e.parse_calls_total - b.parse_calls_total parse_calls
,e.rows_processed_total - b.rows_processed_total rows_processed
-- ,e.iowait_total - b.iowait_total iowaits
-- ,e.plsexec_time_total - b.plsexec_time_total plsql_time
FROM dba_hist_sqlstat b -- beginning snap
,dba_hist_sqlstat e -- ending snap
WHERE b.sql_id = e.sql_id
AND b.dbid = e.dbid
AND b.instance_number = e.instance_number
and b.plan_hash_value = e.plan_hash_value
AND b.snap_id = &LowSnapID
AND e.snap_id = &HighSnapID
ORDER BY e.cpu_time_total - b.cpu_time_total DESC
) diff
WHERE ROWNUM <=&&1
set define off
prompt to get the text of the SQL run the following:
prompt @id2sql &SQL_id
prompt .
prompt to obtain the execution plan for a session run the following:
prompt select * from table(DBMS_XPLAN.DISPLAY_AWR('&SQL_ID'));
prompt or
prompt select * from table(DBMS_XPLAN.DISPLAY_AWR('&SQL_ID',NULL,NULL,'ALL'));
prompt .
set define on
undefine LowSnapID
undefine HighSnapID
I guess you'll need the companion script id2sql.sql, so here it is:
set lines 190
set verify off
declare
maxDisplayLine NUMBER := 150; --max linesize to display the SQL
WorkingLine VARCHAR2(32000);
CurrentLine VARCHAR2(64);
LineBreak NUMBER;
cursor ddl_cur is
select sql_id
,sql_text
from v$sqltext_with_newlines
where sql_id='&1'
order by piece;
ddlRec ddl_cur%ROWTYPE;
begin
WorkingLine :='.';
OPEN ddl_cur;
LOOP
FETCH ddl_cur INTO ddlRec;
EXIT WHEN ddl_cur%NOTFOUND;
IF ddl_cur%ROWCOUNT = 1 THEN
dbms_output.put_line('.');
dbms_output.put_line(' sql_id: '||ddlRec.sql_id);
dbms_output.put_line('.');
dbms_output.put_line('.');
dbms_output.put_line('SQL Text');
dbms_output.put_line('----------------------------------------------------------------');
END IF;
CurrentLine := ddlRec.sql_text;
WHILE LENGTH(CurrentLine) > 1 LOOP
IF INSTR(CurrentLine,CHR(10)) > 0 THEN -- if the current line has an embeded newline
WorkingLine := WorkingLine||SUBSTR(CurrentLine,1,INSTR(CurrentLine,CHR(10))-1); -- append up to new line
CurrentLine := SUBSTR(CurrentLine,INSTR(CurrentLine,CHR(10))+1); -- strip off up through new line character
dbms_output.put_line(WorkingLine); -- print the WorkingLine
WorkingLine :=''; -- reset the working line
ELSE
WorkingLine := WorkingLine||CurrentLine; -- append the current line
CurrentLine :=''; -- the rest of the line has been processed
IF LENGTH(WorkingLine) > maxDisplayLine THEN -- the line is more than the display limit
LineBreak := instr(substr(WorkingLine,1,maxDisplayLine),' ',-1); --find the last space before the display limit
IF LineBreak = 0 THEN -- there is no space, so look for a comma instead
LineBreak := instr(substr(WorkingLine,1,maxDisplayLine),',',-1); -- find the last comma before the display limit
END IF;
IF LineBreak = 0 THEN -- no space or comma, so force the line break at maxDisplayLine
LineBreak := maxDisplayLine;
END IF;
dbms_output.put_line(substr(WorkingLine,1,LineBreak));
WorkingLine:=substr(WorkingLine,LineBreak);
END IF;
END IF;
END LOOP;
--dbms_output.put(ddlRec.sql_text);
END LOOP;
dbms_output.put_line(WorkingLine);
dbms_output.put_line('----------------------------------------------------------------');
CLOSE ddl_cur;
END;
/ -
Hi,
I copied data from a 10.2.0.4 database to 11.2.0.3, gathered stats in the same way as on the 10.2.0.4 database, and am running the same query against both databases.
Bytes shows 257 in 10.2.0.4 and 2263 in 11.2.0.3. Both plans use index access. Can someone help me understand this discrepancy?
Thank You
Sarayu
select * from TABLE_1 where column_1 = 12345
Oracle 10.2.0.4
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 1 | 257 | 3 (0)|
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_1 | 1 | 257 | 3 (0)|
|* 2 | INDEX UNIQUE SCAN | IDX_TABLE_1 | 1 | | 2 (0)|
----------------------------------------------------------------------------------
Oracle 11.2.0.3
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2263 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_1 | 1 | 2263 | 3 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | IDX_TABLE_1 | 1 | | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Edited by: user13312943 on Oct 15, 2012 8:35 AM
That would seem to imply that the 11.2.0.3 database expects one row of the table to occupy 2263 bytes, while the 10.2.0.4 database expects one row to occupy 257 bytes. Which estimate is closer to the correct value? How much space does the table occupy in each database? How many rows are in each database?
Justin
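A way to check which estimate is closer (not suggested in the thread; run in each database as the table owner):

```sql
-- The plan's Bytes figure is driven by the stored row/column length
-- statistics; comparing these between the two databases shows which
-- side is off (257 vs 2263 bytes per row).
SELECT num_rows, avg_row_len, blocks, last_analyzed
FROM   user_tables
WHERE  table_name = 'TABLE_1';
```

A large avg_row_len difference for the same data would point at how the statistics were gathered (e.g. different column or histogram settings) rather than at the optimizer itself.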