Exact time of SQL query execution.
Hi,
Is there any way to get the exact execution time of a particular SQL query?
Oracle Version : 10.2.0.4.0
OS : Sun OS.
Thx,
Gowin.
In general it's pretty hard.
Look at V$SQLSTAT.ELAPSED_TIME and DBA_HIST_SQLSTAT.ELAPSED_TIME_TOTAL/DELTA (the DBA_HIST views require a Diagnostics Pack license).
These give accurate results for a non-parallel query. For parallel queries you'll get the total time spent by all the slaves.
You can also enable tracing at either database or session level and analyze the generated trace files.
Edited by: Max Seleznev on Nov 4, 2011 12:06 PM
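A hedged sketch of what that lookup can look like in practice (it assumes the statement is still cached in the shared pool; the marker comment in the LIKE filter is hypothetical, and ELAPSED_TIME is cumulative microseconds, so divide by EXECUTIONS for a per-execution figure):

```sql
-- Per-execution elapsed time for cached statements (10g+).
-- V$SQLSTAT.ELAPSED_TIME is cumulative and in microseconds.
SELECT sql_id,
       executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_sec
FROM   v$sqlstat
WHERE  sql_text LIKE 'SELECT /* my_query */%'   -- hypothetical marker comment
ORDER  BY elapsed_time DESC;

-- Session-level tracing; analyze the resulting trace file with tkprof.
ALTER SESSION SET sql_trace = TRUE;
-- ... run the query ...
ALTER SESSION SET sql_trace = FALSE;
```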
Similar Messages
-
Execution time of SQL query differing a lot between two computers
hi
the execution time of a query on my computer, and on more than 30 other computers, is less than one second, but on one of our
customers' computers it is more than ten minutes. The databases, data, and queries are the same. I reinstalled SQL Server but the problem remains. My version is MS SQL 2008 R2.
Does anyone have an idea about this problem?
Hi mahdi,
Obviously, we can't get enough information to help you troubleshoot this issue. So, please describe your issue in more detail so that the community members can help you more efficiently.
In addition, here is a good article with a checklist for analyzing slow-running queries. Please see:
http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
And SQL Server Profiler and Performance Monitor are good tools to troubleshoot performance issues; please see:
Correlating SQL Server Profiler with Performance Monitor:
https://www.simple-talk.com/sql/database-administration/correlating-sql-server-profiler-with-performance-monitor/
Regards,
Elvis Long
TechNet Community Support -
Clustering of SQL query execution times
In doing some query execution experiments I have noted a curious (to me, anyhow) clustering of execution times around two distinct points. Across about 100 tests each running 1000 queries using (pseudo-)randomly generated IDs the following pattern emerges. The queries were run from Java using all combinations of pooled/non-pooled and thin/oci driver combinations:
100 *
90 *
R 80 *
u 70 *
n 60 *
s 50 *
40 * *
30 * *
20 * * * *
10 * * * * * *
0 100 200 300 400 500 600 700 800 900 1000 1100 1200
Time(ms)
About half of the total execution times cluster strongly around one (short) value, with a smaller but broader cluster at a significantly slower mark and zero intermediate values. The last point is the one I find most curious.
What I would have expected is something like this:
100
90
R 80
u 70
n 60
s 50
40 *
30 * * *
20 * * * * * *
10 * * * * * * * * * *
0 100 200 300 400 500 600 700 800 900 1000 1100 1200
Time(ms)
The variables I have tentatively discounted thus far:
-query differences (single query used)
-connection differences (using single pooled connection)
-garbage collection (collection spikes independent of query execution times)
-amount of data returned in bytes (single varchar2 returned and size is independent of execution time)
-driver differences (thin and oci compared, overall times differ but pattern of clustering remains)
-differences between Statement and PreparedStatement usage (both show same pattern)
I know this is a rather open-ended question, but does the described pattern seem familiar or spark any thoughts?
DB-side file I/O?
Thread time-slicing variations (client or DB-side)?
FWIW, the DB is 9.2.0.3 and the clients are running on WinXP with Java 5.0 and 9i drivers.
Thanks and regards,
M
Further context:
Are your queries only SELECT queries ?
Yes, the same SELECT query is used for all tests. The only variable is the bind variable used to identify the primary key of the selection set (i.e. SELECT a.* FROM a, b, c WHERE a.x = b.x AND b.y = c.y AND c.pk = ?) where all PKs and FKs are indexed.
Do the queries always use the same tables, the same where clauses ?
Yes, the same tables are always invoked. The where clauses invoked are identical with the exception of the single bind variable as described above.
Do your queries always use bind variables ?
A single bind variable is used in all invocations as described above.
Are your queries also running in single user mode or multi user mode (do you use SELECT FOR UPDATE ?) ?
We are not using SELECT FOR UPDATE.
Did something else run on the database/on the server hosting the database at the same time ?
I have not eliminated the idea, but the test has been repeated roughly 100 times over the course of a week and at different times of day with the same pattern emerging. I suppose it is not out of the question that a resource-hogging process is running consistently and constantly on the DB-side box.
Thanks for the input,
M -
Excel ADODB Sql Query Execution taking hours when manipulate excel tables
Hello All
I have 28000 records with 8 columns in a sheet. When I convert the sheet into an ADODB database and copy it to a new
excel file using the code below, it executes in less than a minute:
Set Tables_conn_obj = New ADODB.Connection
Tables_conn_str = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & Table_Filename & ";Extended Properties=""Excel 12.0;ReadOnly=False;HDR = Yes;IMEX=1"""
Tables_conn_obj.Open Tables_conn_str
First_Temp_sqlqry = "Select * INTO [Excel 12.0;DATABASE=C:\Prod Validation\Database\Second Acat Table.xlsb].[Sheet1] Table [first - Table$];"
Tables_conn_obj.Execute First_Temp_sqlqry
But when I change the query to manipulate one column in the current table based on another table in the same excel file
and try to copy the results to another excel file, it takes more than one hour. Why does it take this much time when both queries return the same number of rows and columns? I have spent almost one week on this and still cannot resolve the issue.
I even tried CopyFromRecordset, GetRows(), GetString(), and looping over each recordset's fields; all of them take the
same amount of time. Why is there such a huge difference in execution time?
Important note: without the INTO clause, even the query below executes in a few seconds.
select ( ''''''manipulating first column based on other table data''''''''''''''
iif(
[Second - Table$].[Policy Agent] = (select max([ACAT$].[new_Agent_number]) from [ACAT$] where [ACAT$].[new_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] = ( select MAX([ACAT$].[ACAT_EffectiveDate] ) from [ACAT$] where [ACAT$].[new_Agent_number] = [Second - Table$].[Policy Agent]and [ACAT$].[ACAT_EffectiveDate] > '2014-10-01') ) , (select max([ACAT$].[Old_Agent_number]) from [ACAT$] where [ACAT$].[new_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] = ( select MAX([ACAT$].[ACAT_EffectiveDate] ) from [ACAT$] where [ACAT$].[new_Agent_number] = [Second - Table$].[Policy Agent]and [ACAT$].[ACAT_EffectiveDate] > '2014-10-01')) ,
iif( [Second - Table$].[Policy Agent] = (select max([ACAT$].[Old_Agent_number]) from [ACAT$] where [ACAT$].[Old_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] = ( select MAX([ACAT$].[ACAT_EffectiveDate] ) from [ACAT$] where [ACAT$].[Old_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] <= '2014-10-01') ), (select max([ACAT$].[new_Agent_number]) from [ACAT$] where [ACAT$].[Old_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] = ( select MAX([ACAT$].[ACAT_EffectiveDate] ) from [ACAT$] where [ACAT$].[Old_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] <= '2014-10-01')) ,
[Second - Table$].[Policy Agent] ))) as [Policy Agent],
''''''summing up all other columns''''''''''''''
(iif(isnull(sum([Second - Table$].[Auto BW-Line Of Business Detail])),0,sum([Second - Table$].[Auto BW-Line Of Business Detail]))) as [Auto BW-Line Of Business Detail],(iif(isnull(sum([Second - Table$].[Auto Farmers])),0,sum([Second - Table$].[Auto Farmers]))) as [Auto Farmers],(iif(isnull(sum([Second - Table$].[MCA])),0,sum([Second - Table$].[MCA]))) as [MCA],(iif(isnull(sum([Second - Table$].[CEA])),0,sum([Second - Table$].[CEA]))) as [CEA],(iif(isnull(sum([Second - Table$].[Commercial P&C])),0,sum([Second - Table$].[Commercial P&C]))) as [Commercial P&C],(iif(isnull(sum([Second - Table$].[Comm WC])),0,sum([Second - Table$].[Comm WC]))) as [Comm WC],(iif(isnull(sum([Second - Table$].[Fire Farmers])),0,sum([Second - Table$].[Fire Farmers]))) as [Fire Farmers],(iif(isnull(sum([Second - Table$].[Flood])),0,sum([Second - Table$].[Flood]))) as [Flood],(iif(isnull(sum([Second - Table$].[Kraft Lake])),0,sum([Second - Table$].[Kraft Lake]))) as [Kraft Lake],(iif(isnull(sum([Second - Table$].[Life])),0,sum([Second - Table$].[Life]))) as [Life],(iif(isnull(sum([Second - Table$].[Foremost])),0,sum([Second - Table$].[Foremost]))) as [Foremost],(iif(isnull(sum([Second - Table$].[Umbrella])),0,sum([Second - Table$].[Umbrella]))) as [Umbrella],(iif(isnull(sum([Second - Table$].[MCNA])),0,sum([Second - Table$].[MCNA]))) as [MCNA]
INTO [Excel 12.0;DATABASE=C:\Prod Validation\Database\Second Acat Table.xlsb].[Sheet1]
from [Second - Table$] group by [Second - Table$].[Policy Agent];
Hi Fei,
Thank you so much for the reply. I just executed the same SQL above without the INTO clause and assigned the result to an ADODB recordset as below. If the time difference were due to the SQL query, the statements below should also run for hours,
but they execute in seconds. Copying the recordset to excel, however, again takes hours. I tried CopyFromRecordset,
GetRows(), GetString(), and looping over each recordset's fields, and all of them take the same amount of time. Please let me know why there is such a delay for this small amount of data.
I even tried to cast all columns to double or string in SQL, and the execution time still did not improve.
First_Temp_Recordset.Open sql_qry, Tables_conn_obj, adOpenStatic, adLockOptimistic ''' OR SET First_Temp_Recordset = Tables_conn_obj.Execute sql_qry -
Hi,
Facing a database performance issue while running overnight batches.
I generated tkprof output for that batch and found some SQL queries with a high elapsed time. Could anyone please tell me what the issue is? It would also be a great help if anyone could suggest how to tune these SQL queries to get better response time.
Waiting for your reply.
Affected SQL list:
INSERT INTO INVTRNEE (TRANS_SESSION, TRANS_SEQUENCE, TRANS_ORG_CHILD,
TRANS_PRD_CHILD, TRANS_TRN_CODE, TRANS_TYPE_CODE, TRANS_DATE, INV_MRPT_CODE,
INV_DRPT_CODE, TRANS_CURR_CODE, PROC_SOURCE, TRANS_REF, TRANS_REF2,
TRANS_QTY, TRANS_RETL, TRANS_COST, TRANS_VAT, TRANS_POS_EXT_TOTAL,
INNER_PK_TECH_KEY, TRANS_INNERS, TRANS_EACHES, TRANS_UOM, TRANS_WEIGHT,
TRANS_WEIGHT_UOM )
VALUES
(:B22 , :B1 , :B2 , :B3 , :B4 , :B5 , :B21 , :B6 , :B7 , :B8 , :B20 , :B19 ,
NULL, :B9 , :B10 , :B11 , 0.0, :B12 , :B13 , :B14 , :B15 , :B16 , :B17 ,
:B18 )
call count cpu elapsed disk query current rows
Parse 722 0.09 0.04 0 0 0 0
Execute 1060 7.96 83.01 11442 21598 88401 149973
Fetch 0 0.00 0.00 0 0 0 0
total 1782 8.05 83.06 11442 21598 88401 149973
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
UPDATE /*+ ROWID(TRFDTLEE) */TRFDTLEE SET TRF_STATUS = :B2
WHERE
ROWID = :B1
call count cpu elapsed disk query current rows
Parse 635 0.03 0.01 0 0 0 0
Execute 49902 14.48 271.25 41803 80704 355837 49902
Fetch 0 0.00 0.00 0 0 0 0
total 50537 14.51 271.27 41803 80704 355837 49902
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
DECLARE
var_trans_session invtrnee.trans_session%TYPE;
BEGIN
-- ADDED BY SHANKAR ON 08/29/97
-- GET THE NEXT AVAILABLE TRANS_SESSION
bastkey('trans_session',0,var_trans_session,'T');
-- MAS001
uk_trfbapuo_auto(var_trans_session,'UPLOAD','T',300);
-- MAS001 end
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 24191.23 24028.57 8172196 10533885 187888 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 24191.23 24028.57 8172196 10533885 187888 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
SELECT INNER_PK_TECH_KEY
FROM
PRDPCDEE WHERE PRD_LVL_CHILD = :B1 AND LOOSE_PACK_FLAG = 'T'
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 56081 1.90 2.03 0 0 0 0
Fetch 56081 11.07 458.58 53792 246017 0 56081
total 112163 12.98 460.61 53792 246017 0 56081
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
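As an aside on the last statement above: it filters PRDPCDEE on PRD_LVL_CHILD and LOOSE_PACK_FLAG and fetches only INNER_PK_TECH_KEY, so a covering index is one possibility worth testing (the index name is made up; verify against the actual execution plan before creating anything):

```sql
-- Hypothetical covering index for the fetch-heavy SELECT on PRDPCDEE;
-- check the real execution plan first, this is only a sketch.
CREATE INDEX prdpcdee_child_flag_ix
  ON prdpcdee (prd_lvl_child, loose_pack_flag, inner_pk_tech_key);
```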
******************
First off, be aware of the assumptions I'm making. The SQL you presented above strongly suggests (to me at least) that you have cursor FOR loops. If that's the case, you need to review what their purpose is and look to convert them into single-statement DML commands. For example, if you have something like this
DECLARE
ln_Count NUMBER;
ln_SomeValue NUMBER;
BEGIN
FOR lcr_Row IN ( SELECT pk_id,col1,col2 FROM some_table)
LOOP
SELECT
COUNT(*)
INTO
ln_COunt
FROM
target_table
WHERE
pk_id = lcr_Row.pk_id;
IF ln_Count = 0 THEN
SELECT
some_value
INTO
ln_SomeValue
FROM
some_other_table
WHERE
pk_id = lcr_Row.col1;
INSERT
INTO
target_table
( pk_id,
some_other_value,
col2 )
VALUES
( lcr_Row.col1,
ln_SomeValue,
lcr_Row.col2 );
ELSE
UPDATE
target_table
SET
some_other_value = ln_SomeValue
WHERE
pk_id = lcr_Row.col1;
END IF;
END LOOP;
END;
it could be rewritten as
DECLARE
BEGIN
MERGE INTO target_table t
USING ( SELECT
a.pk_id,
a.col2,
b.some_value
FROM
some_table a,
some_other_table b
WHERE
b.pk_id = a.col1
) e
ON (t.pk_id = e.pk_id)
WHEN MATCHED THEN
UPDATE SET t.some_other_value = e.some_value
WHEN NOT MATCHED THEN
INSERT ( t.pk_id,
t.col2,
t.some_other_value)
VALUES( e.pk_id,
e.col2,
e.some_value);
END;
It's going to take a bit of analysis and work, but the fastest and most scalable way to process data is to use SQL rather than PL/SQL. PL/SQL data processing, i.e. cursor loops, should be an option of last resort.
HTH
David -
SQL query execution in DB02 hangs if record set is more than 50000
Hi,
We are facing an issue with a report's performance. The report is using a native SQL query.
There are custom views created at database level for pricing/material and stock. The native SQL query is written on these views. The report takes around 15 mins to run in the background.
We are trying to analyse the native SQL query through DB02. I tried fetching records from one particular
custom view to work out whether it is an indexing issue or something else. When I use TOP 35000 records, the select query runs fine with this dataset or less; if I increase it to 40000, the system doesn't show anything in the SQL output, and above one lakh (100,000) records the system gives a timeout.
The count on this view gives some 10 lakh (1,000,000) records, which I don't feel is so huge that a query, native SQL at that, should take this much time.
Any help on this will be highly appreciated.
Regards
Madhu
What do you expect from that poor information?
Do you change data or only select?
If you use SAP and ABAP, then you should also use Open SQL.
Otherwise it is possible to run the SQL Trace with Native SQL; it is anyway only Native SQL that the trace sees.
Use a package size and it will probably work fine.
Siegfried -
Reg : Execution time of SQL query
Hi All,
How do I identify an Oracle SQL query's execution time?
Thanks in Advance.
Hi,
try This
SQL> set timing on;
SQL> select * from dual;
D
X
Elapsed: 00:00:00.00
SQL> set timing off;
SQL> select * from dual;
D
X
Regards
umi -
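If the timing is needed from PL/SQL rather than from the SQL*Plus prompt, one simple sketch uses DBMS_UTILITY.GET_TIME, which returns elapsed time in hundredths of a second (the COUNT on dual is just a stand-in for the query being measured):

```sql
DECLARE
  t0  PLS_INTEGER;
  cnt PLS_INTEGER;
BEGIN
  t0 := DBMS_UTILITY.GET_TIME;
  SELECT COUNT(*) INTO cnt FROM dual;   -- the query being timed
  DBMS_OUTPUT.PUT_LINE('Elapsed: ' || (DBMS_UTILITY.GET_TIME - t0) / 100 || ' sec');
END;
/
```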
Can you help me print multiple rows returned from a SQL query using JSP?
Map the ResultSet to a Collection of DTOs, use the JSTL c:forEach tag to iterate through it inside a JSP page, and use the HTML table, tr and td elements to present the data in a table.
-
SQL Query execution with LISTAGG
Hello,
I am trying to aggregate some data using LISTAGG.
The query below works fine against the table, but not against the row source that is based on the same table.
SELECT COL1, LISTAGG(COL3, ',') WITHIN GROUP (ORDER BY COL2) AS fieldName FROM IOP_Sample_RS GROUP BY COL1
It gives a parsing error:
line 1:22: expecting "FROM", found '('
(col 22 is the '(' of the LISTAGG function.)
One more question:
Do UNION and MINUS work with RSQL in IOP?
Thanks,
Sumant Chhunchha.
Edited by: Sumant on Jun 15, 2011 5:04 AM
LISTAGG, UNION, and MINUS are not supported with RSQL; instead you can massage your SQL query to load the data into IOP and then use a simple RSQL query.
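For reference, the standard Oracle (11gR2 and later) form of the aggregation that RSQL rejects here; the workaround the reply suggests is to pre-aggregate with a query like this when loading the data into IOP (table and column names are illustrative):

```sql
SELECT col1,
       LISTAGG(col3, ',') WITHIN GROUP (ORDER BY col2) AS field_name
FROM   iop_sample_table
GROUP  BY col1;
```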
-
How to reduce execution time of SQL Query
hi,
I'm working on an Oracle ERP application and I want to create an OAF page that shows some data in tables.
I've written the query but it takes a long time.
Can anybody help with this query:
SELECT *
FROM (SELECT person_id,
transaction_id,
segment1 AS TA_number,
segment9 AS Travel_Distination,
SUBSTR (segment5, 0, 10) AS Travel_Date,
creation_date AS request_date,
status,
full_name AS Current_Approver
FROM ( (SELECT PPF.PERSON_ID,
ht.TRANSACTION_ID,
pac.segment1,
pac.segment2,
pac.segment3,
pac.segment4,
pac.segment5,
pac.segment6,
pac.segment7,
pac.segment8,
pac.segment9,
pac.creation_date,
DECODE (al.approval_status,
NULL, 'Pending For Approval',
'APPROVE', 'Finally Approved',
al.approval_status)
status,
al.order_number,
almin.order_number approver_order,
ppf2.full_name
FROM HR_API_TRANSACTION_Values htv,
HR_API_TRANSACTIONS ht,
HR_API_TRANSACTION_STEPS hts,
PER_ANALYSIS_CRITERIA pac,
hr.Ame_Approvals_History ah,
per_people_f ppf,
per_people_f ppf2,
apps.fnd_user fu,
apps.AME_TEMP_OLD_APPROVER_LISTS al,
apps.AME_TEMP_OLD_APPROVER_LISTS almin
WHERE al.application_id = '-81'
AND al.transaction_id = ht.TRANSACTION_ID
AND (al.approval_status NOT LIKE '%REP%'
OR al.approval_status IS NULL)
AND al.order_number =
(SELECT MAX (ao.order_number)
FROM apps.AME_TEMP_OLD_APPROVER_LISTS ao
WHERE ao.transaction_id =
ht.TRANSACTION_ID
AND (ao.approval_status NOT LIKE
'%REP%'
OR ao.approval_status IS NULL))
AND ht.creator_person_id = PPF.PERSON_ID
AND ht.TRANSACTION_ID = ah.transaction_id
AND HT.TRANSACTION_ID = HTS.TRANSACTION_ID
AND fu.employee_id = PPF2.person_id
AND almin.order_number =
(SELECT MIN (aomin.order_number)
FROM apps.AME_TEMP_OLD_APPROVER_LISTS aomin
WHERE aomin.transaction_id =
ht.TRANSACTION_ID
AND aomin.approval_status IS NULL)
AND almin.transaction_id = ht.TRANSACTION_ID
AND almin.name = fu.user_name
AND hts.TRANSACTION_STEP_ID =
HTV.TRANSACTION_STEP_ID
AND HTV.NAME = 'P_ANALYSIS_CRITERIA_ID'
AND HTV.NUMBER_VALUE =
PAC.ANALYSIS_CRITERIA_ID
AND SYSDATE BETWEEN ppf.effective_start_date
AND ppf.effective_end_date
AND PROCESS_NAME = 'TA_AEC')
UNION
(SELECT PPF.PERSON_ID,
ht.TRANSACTION_ID,
pac.segment1,
pac.segment2,
pac.segment3,
pac.segment4,
pac.segment5,
pac.segment6,
pac.segment7,
pac.segment8,
pac.segment9,
pac.creation_date,
DECODE (al.approval_status,
NULL, 'Pending For Approval',
'APPROVE', 'Finally Approved',
al.approval_status)
status,
al.order_number,
al.order_number AS approver_order,
'' AS name
FROM HR_API_TRANSACTION_Values htv,
HR_API_TRANSACTIONS ht,
HR_API_TRANSACTION_STEPS hts,
PER_ANALYSIS_CRITERIA pac,
hr.Ame_Approvals_History ah,
per_people_f ppf,
per_people_f ppf2,
apps.fnd_user fu,
apps.AME_TEMP_OLD_APPROVER_LISTS al,
apps.AME_TEMP_OLD_APPROVER_LISTS almin
WHERE al.application_id = '-81'
AND al.approval_status IS NOT NULL
AND al.transaction_id = ht.TRANSACTION_ID
AND ht.creator_person_id = PPF.PERSON_ID
AND ht.TRANSACTION_ID = ah.transaction_id
AND HT.TRANSACTION_ID = HTS.TRANSACTION_ID
AND PROCESS_NAME = 'TA_AEC'
AND fu.employee_id = PPF2.person_id
AND al.order_number =
(SELECT MAX (ao.order_number)
FROM apps.AME_TEMP_OLD_APPROVER_LISTS ao
WHERE ao.transaction_id =
ht.TRANSACTION_ID
AND (ao.approval_status NOT LIKE
'%REP%'
OR ao.approval_status IS NULL))
AND al.name = fu.user_name
AND almin.transaction_id = ht.TRANSACTION_ID
AND hts.TRANSACTION_STEP_ID =
HTV.TRANSACTION_STEP_ID
AND HTV.NAME = 'P_ANALYSIS_CRITERIA_ID'
AND HTV.NUMBER_VALUE = PAC.ANALYSIS_CRITERIA_ID
AND SYSDATE BETWEEN ppf.effective_start_date
AND ppf.effective_end_date
AND PROCESS_NAME = 'TA_AEC'))) QRSLT
WHERE (person_id = 26773)
ORDER BY request_date DESC
see also this . .
Optimizer Environment (10053)
#   Is Default   Parameter   Current Value
1 N _sort_elimination_cost_ratio 5
2 N _pga_max_size 838860 KB
3 N _b_tree_bitmap_plans false
4 N _fast_full_scan_enabled false
5 N _like_with_bind_as_equality true
6 N optimizer_secure_view_merging false
7 Y optimizer_mode_hinted false
8 Y optimizer_features_hinted 0.0.0
9 Y parallel_execution_enabled true
10 Y parallel_query_forced_dop 0
11 Y parallel_dml_forced_dop 0
12 Y parallel_ddl_forced_degree 0
13 Y parallel_ddl_forced_instances 0
14 Y _query_rewrite_fudge 90
15 Y optimizer_features_enable 10.2.0.4
16 Y _optimizer_search_limit 5
17 Y cpu_count 4
18 Y active_instance_count 1
19 Y parallel_threads_per_cpu 2
20 Y hash_area_size 131072
21 Y bitmap_merge_area_size 1048576
22 Y sort_area_size 65536
23 Y sort_area_retained_size 0
24 Y _optimizer_block_size 8192
25 Y _sort_multiblock_read_count 2
26 Y _hash_multiblock_io_count 0
27 Y _db_file_optimizer_read_count 8
28 Y _optimizer_max_permutations 2000
29 Y pga_aggregate_target 4194304 KB
30 Y _query_rewrite_maxdisjunct 257
31 Y _smm_auto_min_io_size 56 KB
32 Y _smm_auto_max_io_size 248 KB
33 Y _smm_min_size 1024 KB
34 Y _smm_max_size 419430 KB
35 Y _smm_px_max_size 2097152 KB
36 Y _cpu_to_io 0
37 Y _optimizer_undo_cost_change 10.2.0.4
38 Y parallel_query_mode enabled
39 Y parallel_dml_mode disabled
40 Y parallel_ddl_mode enabled
41 Y optimizer_mode all_rows
42 Y sqlstat_enabled false
43 Y _optimizer_percent_parallel 101
44 Y _always_anti_join choose
45 Y _always_semi_join choose
46 Y _optimizer_mode_force true
47 Y _partition_view_enabled true
48 Y _always_star_transformation false
49 Y _query_rewrite_or_error false
50 Y _hash_join_enabled true
51 Y cursor_sharing exact
52 Y star_transformation_enabled false
53 Y _optimizer_cost_model choose
54 Y _new_sort_cost_estimate true
55 Y _complex_view_merging true
56 Y _unnest_subquery true
57 Y _eliminate_common_subexpr true
58 Y _pred_move_around true
59 Y _convert_set_to_join false
60 Y _push_join_predicate true
61 Y _push_join_union_view true
62 Y _optim_enhance_nnull_detection true
63 Y _parallel_broadcast_enabled true
64 Y _px_broadcast_fudge_factor 100
65 Y _ordered_nested_loop true
66 Y _no_or_expansion false
67 Y optimizer_index_cost_adj 100
68 Y optimizer_index_caching 0
69 Y _system_index_caching 0
70 Y _disable_datalayer_sampling false
71 Y query_rewrite_enabled true
72 Y query_rewrite_integrity enforced
73 Y _query_cost_rewrite true
74 Y _query_rewrite_2 true
75 Y _query_rewrite_1 true
76 Y _query_rewrite_expression true
77 Y _query_rewrite_jgmigrate true
78 Y _query_rewrite_fpc true
79 Y _query_rewrite_drj true
80 Y _full_pwise_join_enabled true
81 Y _partial_pwise_join_enabled true
82 Y _left_nested_loops_random true
83 Y _improved_row_length_enabled true
84 Y _index_join_enabled true
85 Y _enable_type_dep_selectivity true
86 Y _improved_outerjoin_card true
87 Y _optimizer_adjust_for_nulls true
88 Y _optimizer_degree 0
89 Y _use_column_stats_for_function true
90 Y _subquery_pruning_enabled true
91 Y _subquery_pruning_mv_enabled false
92 Y _or_expand_nvl_predicate true
93 Y _table_scan_cost_plus_one true
94 Y _cost_equality_semi_join true
95 Y _default_non_equality_sel_check true
96 Y _new_initial_join_orders true
97 Y _oneside_colstat_for_equijoins true
98 Y _optim_peek_user_binds true
99 Y _minimal_stats_aggregation true
100 Y _force_temptables_for_gsets false
101 Y workarea_size_policy auto
102 Y _smm_auto_cost_enabled true
103 Y _gs_anti_semi_join_allowed true
104 Y _optim_new_default_join_sel true
105 Y optimizer_dynamic_sampling 2
106 Y _pre_rewrite_push_pred true
107 Y _optimizer_new_join_card_computation true
108 Y _union_rewrite_for_gs yes_gset_mvs
109 Y _generalized_pruning_enabled true
110 Y _optim_adjust_for_part_skews true
111 Y _force_datefold_trunc false
112 Y statistics_level typical
113 Y _optimizer_system_stats_usage true
114 Y skip_unusable_indexes true
115 Y _remove_aggr_subquery true
116 Y _optimizer_push_down_distinct 0
117 Y _dml_monitoring_enabled true
118 Y _optimizer_undo_changes false
119 Y _predicate_elimination_enabled true
120 Y _nested_loop_fudge 100
121 Y _project_view_columns true
122 Y _local_communication_costing_enabled true
123 Y _local_communication_ratio 50
124 Y _query_rewrite_vop_cleanup true
125 Y _slave_mapping_enabled true
126 Y _optimizer_cost_based_transformation linear
127 Y _optimizer_mjc_enabled true
128 Y _right_outer_hash_enable true
129 Y _spr_push_pred_refspr true
130 Y _optimizer_cache_stats false
131 Y _optimizer_cbqt_factor 50
132 Y _optimizer_squ_bottomup true
133 Y _fic_area_size 131072
134 Y _optimizer_skip_scan_enabled true
135 Y _optimizer_cost_filter_pred false
136 Y _optimizer_sortmerge_join_enabled true
137 Y _optimizer_join_sel_sanity_check true
138 Y _mmv_query_rewrite_enabled true
139 Y _bt_mmv_query_rewrite_enabled true
140 Y _add_stale_mv_to_dependency_list true
141 Y _distinct_view_unnesting false
142 Y _optimizer_dim_subq_join_sel true
143 Y _optimizer_disable_strans_sanity_checks 0
144 Y _optimizer_compute_index_stats true
145 Y _push_join_union_view2 true
146 Y _optimizer_ignore_hints false
147 Y _optimizer_random_plan 0
148 Y _query_rewrite_setopgrw_enable true
149 Y _optimizer_correct_sq_selectivity true
150 Y _disable_function_based_index false
151 Y _optimizer_join_order_control 3
152 Y _optimizer_cartesian_enabled true
153 Y _optimizer_starplan_enabled true
154 Y _extended_pruning_enabled true
155 Y _optimizer_push_pred_cost_based true
156 Y _sql_model_unfold_forloops run_time
157 Y _enable_dml_lock_escalation false
158 Y _bloom_filter_enabled true
159 Y _update_bji_ipdml_enabled 0
160 Y _optimizer_extended_cursor_sharing udo
161 Y _dm_max_shared_pool_pct 1
162 Y _optimizer_cost_hjsmj_multimatch true
163 Y _optimizer_transitivity_retain true
164 Y _px_pwg_enabled true
165 Y _optimizer_join_elimination_enabled true
166 Y flashback_table_rpi non_fbt
167 Y _optimizer_cbqt_no_size_restriction true
168 Y _optimizer_enhanced_filter_push true
169 Y _optimizer_filter_pred_pullup true
170 Y _rowsrc_trace_level 0
171 Y _simple_view_merging true
172 Y _optimizer_rownum_pred_based_fkr true
173 Y _optimizer_better_inlist_costing all
174 Y _optimizer_self_induced_cache_cost false
175 Y _optimizer_min_cache_blocks 10
176 Y _optimizer_or_expansion depth
177 Y _optimizer_order_by_elimination_enabled true
178 Y _optimizer_outer_to_anti_enabled true
179 Y _selfjoin_mv_duplicates true
180 Y _dimension_skip_null true
181 Y _force_rewrite_enable false
182 Y _optimizer_star_tran_in_with_clause true
183 Y _optimizer_complex_pred_selectivity true
184 Y _optimizer_connect_by_cost_based true
185 Y _gby_hash_aggregation_enabled true
186 Y _globalindex_pnum_filter_enabled true
187 Y _fix_control_key 0
188 Y _optimizer_skip_scan_guess false
189 Y _enable_row_shipping false
190 Y _row_shipping_threshold 80
191 Y _row_shipping_explain false
192 Y _optimizer_rownum_bind_default 10
193 Y _first_k_rows_dynamic_proration true
194 Y _px_ual_serial_input true
195 Y _optimizer_native_full_outer_join off
196 Y _optimizer_star_trans_min_cost 0
197 Y _optimizer_star_trans_min_ratio 0
198 Y _optimizer_fkr_index_cost_bias 10
199 Y _optimizer_connect_by_combine_sw true
200 Y _optimizer_use_subheap true
201 Y _optimizer_or_expansion_subheap true
202 Y _optimizer_sortmerge_join_inequality true
203 Y _optimizer_use_histograms true
204 Y _optimizer_enable_density_improvements false -
Hello All,
I have the following query as part of three queries for a report. While the other two take less than 3 seconds to execute, this one runs for about an hour.
Environment is 9i / 11.5.9 Apps on HP-UX 11.0.
SELECT a.move_order_line_id,c.inventory_location_id
,c.segment1||'.'||c.segment2||'.'||c.segment3||'.'||c.segment4 Loc,
b.lot_number Lot,
-1*b.primary_quantity lot_qty
FROM mtl_material_transactions a,
mtl_transaction_lot_numbers b,
mtl_item_locations c
WHERE a.transaction_id = b.transaction_id
AND a.locator_id = c.inventory_location_id
UNION
SELECT a.move_order_line_id,c.inventory_location_id
,c.segment1||'.'||c.segment2||'.'||c.segment3||'.'||c.segment4 Loc,
b.lot_number Lot,
-1*b.primary_quantity lot_qty
FROM mtl_material_transactions a,
mtl_transaction_lot_numbers b,
mtl_item_locations c
WHERE a.transaction_id = b.transaction_id
AND a.locator_id = c.inventory_location_id
UNION
SELECT a.move_order_line_id,c.inventory_location_id
,c.segment1||'.'||c.segment2||'.'||c.segment3||'.'||c.segment4 Loc,
b.lot_number Lot,
b.primary_quantity lot_qty
FROM mtl_material_transactions_temp a,
mtl_transaction_lots_temp b,
mtl_item_locations c
WHERE a.transaction_temp_id = b.transaction_temp_id
AND a.locator_id = c.inventory_location_id
UNION
SELECT line_id move_order_line_id, null inventory_location_id,
null Loc,
null Lot,
quantity lot_qty
FROM mtl_txn_request_lines
WHERE line_status = 7
and quantity_detailed is null
order by 1;
None of the tables involved has more than 300K rows in it.
Would appreciate some help.
A/A
Hello,
To try to help, I need to know the answers to the following questions:
1.- Could the three queries independently return the same or repeated records?
1.a.- If that can happen, would you want only one row to appear where two results are equal?
1.b.- If duplicate rows across the three statements are impossible, then replace the 'UNION' operator with 'UNION ALL'.
The 'UNION' operator performs a 'SORT' to eliminate duplicates between the results, which penalizes performance on large data volumes.
2.- You say this is for a 'report', right? Why are the queries combined with UNION? Because they share the same columns?
3.- Do you have the 'EXPLAIN PLAN' of the statement? If you do, please post it. Awaiting your reply, regards.
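The UNION vs UNION ALL point in 1.b can be sketched outside the database. This is a minimal Java illustration of the two operators' semantics (the lot values are made up for the example): UNION must deduplicate (historically via a sort), while UNION ALL just concatenates.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UnionDemo {
    public static void main(String[] args) {
        // Hypothetical lot numbers returned by two branches of the UNION
        List<String> branch1 = List.of("LOT-1", "LOT-2");
        List<String> branch2 = List.of("LOT-2", "LOT-3");

        // UNION: concatenate, then deduplicate -- this extra sort/distinct
        // pass is the work that penalizes performance on large volumes
        List<String> union = Stream.concat(branch1.stream(), branch2.stream())
                .distinct().sorted().collect(Collectors.toList());

        // UNION ALL: concatenate only -- no sort, no duplicate elimination
        List<String> unionAll = Stream.concat(branch1.stream(), branch2.stream())
                .collect(Collectors.toList());

        System.out.println(union);    // [LOT-1, LOT-2, LOT-3]
        System.out.println(unionAll); // [LOT-1, LOT-2, LOT-2, LOT-3]
    }
}
```

If the branches are known to be disjoint, the dedup pass buys nothing, which is exactly why UNION ALL is the cheaper choice here.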
Edited by: RamonRQ on 21-oct-2011 3:54 -
SYS_REFCURSOR takes more time than direct query execution
I have a stored proc which has 4 inputs and 10 outputs, all of sys_refcursor type.
Among the 10 outputs, one cursor returns 4k+ records and all the other cursors have 3 or 4 records, with an average of 5 columns per cursor. As it stands, it takes 8 sec to complete the execution. If we query directly, it gives output in .025 sec.
I went through the code and located the issue with the cursor that returns 4k+ rows.
The cursor is opened from a temporary table (which has 4k+ records) without any filter. The inserts into the temporary table are plain direct inserts, and I found nothing to modify there.
Can anyone suggest how we can bring the results in under 3 sec? This is really a challenge since the code needs to go live next week.
Any help appreciated.
Thanks
Renjish
I've just repeated the test in SQL*Plus on my test database.
Both the ref cursor and direct SQL took 4.75 seconds.
However, that time is not the time to execute the SQL statement, but the time it took SQL*Plus in my command window to print out the 3999 rows of results.
SQL> create or replace PROCEDURE TEST_PROC (O_OUTPUT OUT SYS_REFCURSOR) is
2 BEGIN
3 OPEN O_OUTPUT FOR
4 select 11 plan_num, 22 loc_num, 'aaa' loc_nm from dual connect by level < 4000;
5 end;
6 /
Procedure created.
SQL> set timing on
SQL> set linesize 1000
SQL> set serverout on
SQL> var o_output refcursor;
SQL> exec test_proc(:o_output);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.04
SQL> print o_output;
PLAN_NUM LOC_NUM LOC
11 22 aaa
11 22 aaa
11 22 aaa
11 22 aaa
11 22 aaa
3999 rows selected.
Elapsed: 00:00:04.75
SQL> select 11 plan_num, 22 loc_num, 'aaa' loc_nm from dual connect by level < 4000;
PLAN_NUM LOC_NUM LOC
11 22 aaa
11 22 aaa
11 22 aaa
11 22 aaa
11 22 aaa
11 22 aaa
3999 rows selected.
Elapsed: 00:00:04.75
That's the result I expect to see, both taking the same amount of time to do the same thing.
Please demonstrate how you are running it and getting different results. -
Ensuring an sql query execution is successful
Hi Guys
I have a bean which handles SQL statements. At the moment the methods do not return anything to the servlet, but I was thinking: if an error occurs while the statement is running, how should I inform the user that an error has occurred? Here is one of my current methods:
public void executeUpdate(String Query) {
    try {
        stmt.executeUpdate( Query );
    } catch(SQLException e) {
        System.out.println("Exception : " + e.getMessage());
    }
}
Your method effectively just calls executeUpdate and does no additional work. Why do you do that? executeUpdate() already has a pretty clear contract:
It returns the number of affected rows. Some callers might want to check that and treat it as an error if it returns an unexpected value (0 or > 1, usually).
It throws an Exception if something is wrong with the statement.
Additionally you almost never want to use this kind of update. You almost always want to create a PreparedStatement with a (compile-time) fixed String, set the arguments and call executeUpdate() on that. This way you won't have to think about escaping the values and are automatically safe from SQL injection attacks. -
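A minimal sketch of the PreparedStatement pattern suggested above, assuming a hypothetical users table and an already-open java.sql.Connection. The main method only demonstrates (without a database) why concatenating user input into the statement string is unsafe:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UpdateDemo {
    // Compile-time fixed SQL with placeholders; the driver handles escaping
    private static final String UPDATE_SQL =
            "UPDATE users SET name = ? WHERE id = ?";

    // Returns the affected-row count so the caller can treat an
    // unexpected value (e.g. 0) as an error instead of swallowing it
    static int renameUser(Connection con, int id, String name) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(UPDATE_SQL)) {
            ps.setString(1, name);
            ps.setInt(2, id);
            return ps.executeUpdate();
        }
    }

    public static void main(String[] args) {
        // What the bean's string-concatenation approach would actually send
        // for a malicious input: the quote in the input closes the literal
        // and injects an extra column assignment into the UPDATE
        String evil = "x', admin = 'true";
        String concatenated =
                "UPDATE users SET name = '" + evil + "' WHERE id = 42";
        System.out.println(concatenated);
        // UPDATE users SET name = 'x', admin = 'true' WHERE id = 42
    }
}
```

With the placeholder version, the same input is passed as data via setString and cannot change the statement's structure.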
Explain SQL Query execution plan: Oracle
Dear Masters,
Kindly help me to understand the execution plan for an SQL statement. I have the following execution plan for a query in our system. How should I interpret it? I thank you in advance for your guidance.
SELECT STATEMENT ( Estimated Costs = 1.372.413 , Estimated #Rows = 0 )
5 NESTED LOOPS
( Estim. Costs = 1.372.413 , Estim. #Rows = 3.125 )
Estim. CPU-Costs = 55.798.978.498 Estim. IO-Costs = 1.366.482
2 TABLE ACCESS BY INDEX ROWID MSEG
( Estim. Costs = 1.326.343 , Estim. #Rows = 76.717 )
Estim. CPU-Costs = 55.429.596.575 Estim. IO-Costs = 1.320.451
Filter Predicates
1 INDEX RANGE SCAN MSEG~R
( Estim. Costs = 89.322 , Estim. #Rows = 60.069.500 )
Search Columns: 1
Estim. CPU-Costs = 2.946.739.229 Estim. IO-Costs = 89.009
Access Predicates
4 TABLE ACCESS BY INDEX ROWID MKPF
( Estim. Costs = 1 , Estim. #Rows = 1 )
Estim. CPU-Costs = 4.815 Estim. IO-Costs = 1
Filter Predicates
3 INDEX UNIQUE SCAN MKPF~0
Search Columns: 3
Estim. CPU-Costs = 3.229 Estim. IO-Costs = 0
Access Predicates
Hi Panjak,
Yeah, that's a hugely unperformant SQL statement. What I can see from this access plan is:
1. The optimizer decided to start the query on index MSEG~R, using only part of the index (a single column) with poor selectivity, paying the disk IO-Costs for this (60M records) and expecting many iterations (loops) in memory to filter; see the CPU-Costs.
So with the parameters you passed to the SQL, it starts off in a very bad way.
2. After that the program accesses MSEG driven by what was found in the first step, again with heavy loading from the DB and filtering (further WHERE criteria on MSEG fields not covered by index MSEG~R), reducing the result set to 76.717 rows.
3/4. With this, the program goes directly to the primary key index on MKPF with unique (optimized) access and then reads table MKPF itself.
5. Finally it "loops" over the result sets from MSEG and MKPF, combining the tuples to generate the final result set.
Do you want to share your SQL, the parameters you are passing, and the code which generates it with us?
Regards, Fernando Da Ró