USING DBMS_PROFILER
I want to check the performance of several PL/SQL procedures.
How can I use DBMS_PROFILER for that?
Which tables and columns are important for checking performance and statistics?
Thanks
Hope this helps :)
The PL/SQL Code Profiler, the DBMS_PROFILER package
Install the package and supporting elements
Start the profiler
Run your code
Stop the profiler
Analyze the results
Installing the Code Profiler
You will probably have to install the code yourself (in the SYS schema). Check the package specification file for documentation and guidelines
Specification: dbmspbp.sql Body: prvtpbp.plb
Files are located in Rdbms\Admin unless otherwise noted
You must install the profiler tables by running the proftab.sql script. They can be defined in a shared schema or in separate schemas
Creates three tables:
PLSQL_PROFILER_RUNS: parent table of runs
PLSQL_PROFILER_UNITS: program units executed in run
PLSQL_PROFILER_DATA: profiling data for each line in a program unit
These tables, particularly _DATA, can end up with lots of rows in them
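A minimal install-and-inspect session might look like this (a sketch assuming SQL*Plus and default script locations; profload.sql wraps the spec and body install, and the path separator should be adjusted for your platform):

```sql
-- As SYS: install the DBMS_PROFILER package (specification and body)
@?/rdbms/admin/profload.sql

-- As the profiling schema: create the repository tables and run-number sequence
@?/rdbms/admin/proftab.sql

-- After a run, join the three tables to see time per line
select r.runid, r.run_comment, u.unit_name, d.line#, d.total_occur, d.total_time
from plsql_profiler_runs r,
     plsql_profiler_units u,
     plsql_profiler_data d
where u.runid = r.runid
  and d.runid = u.runid
  and d.unit_number = u.unit_number
order by d.total_time desc;
```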
Start the profiler, Run your code and Stop the profiler
BEGIN
   DBMS_OUTPUT.PUT_LINE (
      DBMS_PROFILER.START_PROFILER (
         'showemps ' ||
         TO_CHAR (SYSDATE, 'YYYYMMDD HH24:MI:SS')));
   showemps;
   DBMS_OUTPUT.PUT_LINE (
      DBMS_PROFILER.STOP_PROFILER);
END;
Interpreting Code Profiler Results
To make it easier to analyze the data produced by the profiler, Oracle offers the following files in the $ORACLE_HOME\plsql\demo directory:
profrep.sql: Creates a number of views and a package named prof_report_utilities to help extract data from profiling tables
profsum.sql: a series of canned queries and programs using prof_report_utilities. Don't run them all; pick the ones that look most useful
update plsql_profiler_units set total_time = 0;
execute prof_report_utilities.rollup_all_runs;
set pagesize 999
set linesize 120
column unit format a20
column line# format 99999
column time_Count format a15
column text format a60
spool slowest2.txt
select
to_char(p1.total_time/10000000, '99999999') || '-' ||
TO_CHAR (p1.total_occur) as time_count,
substr(p2.unit_owner, 1, 20) || '.' ||
decode(p2.unit_name, '', '',
substr(p2.unit_name,1, 20)) as unit,
TO_CHAR (p1.line#) || '-' || p3.text text
from
plsql_profiler_data p1,
plsql_profiler_units p2,
all_source p3, plsql_profiler_grand_total p4
where
p2.unit_owner NOT IN ('SYS', 'SYSTEM') AND
p1.runID = &&firstparm AND
(p1.total_time >= p4.grand_total/100) AND
p1.runID = p2.runid and
p2.unit_number=p1.unit_number and
p3.type='PACKAGE BODY' and
p3.owner = p2.unit_owner and
p3.line = p1.line# and
p3.name=p2.unit_name
order by p1.total_time desc;
spool off
cat slowest2.txt
TIME_COUNT UNIT TEXT
304858-21760 PLG.PLGDOIR 2661- CLOSE alias_cur;
244576-10880 PLG.PLGDOIR 2659- FETCH alias_cur INTO alias_rec;
132498-10881 PLG.PLGDOIR 77- SELECT objid, owner, objname, info, doc
91790-4771 PLG.PLGGEN 3424- v_nth PLS_INTEGER := 1;
70045-27043 PLG.PLGGEN 1014- THEN
69153-79781 PLG.PLGADMIN 447- decrypted_text := decrypted_text ||
58988-4010 PLG.PLGDOIR 450- SELECT just_like
50566-3 PLG.PLGGEN 241- THEN
50404-45983 PLG.PLGSTR 50- RETURN (SUBSTR (string_in, v_start, v_numchars));
41622-1 PLG.PLGGEN 2625-
“Best of” Oracle PL/SQL
The most wonderful features of this most wonderful programming language
by Steven Feuerstein
Similar Messages
-
Grant for to use DBMS_PROFILER
Hi
What grants must I have to use DBMS_PROFILER?
I tried to use it, and it did not work:
exec dbms_profiler.start_profiler('teste_fatorial');
ERROR at line 1:
ORA-06528: Error executing PL/SQL profiler
ORA-06512: at "SYS.DBMS_PROFILER", line 123
ORA-06512: at "SYS.DBMS_PROFILER", line 132
ORA-06512: at line 1
EXECUTE on DBMS_PROFILER is granted to PUBLIC by default. And if the user did not have EXECUTE, the error would be
PLS-00201: identifier 'DBMS_PROFILER.START_PROFILER' must be declared
In order to run the procedure version of dbms_profiler.start_profiler, the script ?\rdbms\admin\proftab.sql must be run in the schema executing dbms_profiler:
SQL> exec dbms_profiler.start_profiler('teste_fatorial');
BEGIN dbms_profiler.start_profiler('teste_fatorial'); END;
ERROR at line 1:
ORA-06528: Error executing PL/SQL profiler
ORA-06512: at "SYS.DBMS_PROFILER", line 123
ORA-06512: at "SYS.DBMS_PROFILER", line 132
ORA-06512: at line 1
SQL> @?\rdbms\admin\proftab.sql
drop table plsql_profiler_data cascade constraints
ERROR at line 1:
ORA-00942: table or view does not exist
drop table plsql_profiler_units cascade constraints
ERROR at line 1:
ORA-00942: table or view does not exist
drop table plsql_profiler_runs cascade constraints
ERROR at line 1:
ORA-00942: table or view does not exist
drop sequence plsql_profiler_runnumber
ERROR at line 1:
ORA-02289: sequence does not exist
Table created.
Comment created.
Table created.
Comment created.
Table created.
Comment created.
Sequence created.
SQL> exec dbms_profiler.start_profiler('teste_fatorial');
PL/SQL procedure successfully completed.
SY. -
Using User Defined Functions in SQL
Hi
I did the following test to see how expensive it is to use user defined functions in SQL queries, and found that it is really expensive.
Calling SQRT in SQL costs less than calling a dummy function that just returns the parameter value; this has to do with context switching, but how can we get decent performance compared to Oracle-provided functions?
Any comments are welcome, especially regarding the performance of UDFs in SQL and possible solutions.
create or replace function f(i in number) return number is
begin
return i;
end;
declare
l_start number;
l_elapsed number;
n number;
begin
select to_char(sysdate, 'sssssss')
into l_start
from dual;
for i in 1 .. 20 loop
select max(rownum)
into n
from t_tdz12_a0090;
end loop;
select to_char(sysdate, 'sssssss') - l_start
into l_elapsed
from dual;
dbms_output.put_line('first: '||l_elapsed);
select to_char(sysdate, 'sssssss')
into l_start
from dual;
for i in 1 .. 20 loop
select max(sqrt(rownum))
into n
from t_tdz12_a0090;
end loop;
select to_char(sysdate, 'sssssss') - l_start
into l_elapsed
from dual;
dbms_output.put_line('second: '||l_elapsed);
select to_char(sysdate, 'sssssss')
into l_start
from dual;
for i in 1 .. 20 loop
select max(f(rownum))
into n
from t_tdz12_a0090;
end loop;
select to_char(sysdate, 'sssssss') - l_start
into l_elapsed
from dual;
dbms_output.put_line('third: '||l_elapsed);
end;
Results:
first: 303
second: 1051
third: 1515
Kind regards
Taoufik
I find that inline SQL is bad for performance but good to simplify SQL. I keep thinking that it should be possible somehow to use a function to improve performance but have never seen that happen.
Inline SQL is only bad for performance if the database design (table structure, indexes etc.) is poor or the way the SQL is written is poor.
Context switching between SQL and PL/SQL for a User defined function is definitely a way to slow down performance.
Obviously built-in Oracle functions are going to be quicker than User-defined functions because they are written into the SQL and PL/SQL engines and are optimized for the internals of those engines.
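The cheapest fix is often to eliminate the call altogether by inlining the expression in the SQL, so no context switch happens per row. For the dummy function in the test above, the third query collapses into the first:

```sql
-- Forces a SQL-to-PL/SQL context switch for every row:
select max(f(rownum)) from t_tdz12_a0090;

-- Inlining the function body (f simply returns its argument) avoids the switch:
select max(rownum) from t_tdz12_a0090;
```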
There are a few things you can do to improve function performance, shaving microseconds off execution time. Consider using the NOCOPY hint for your parameters to use pointers instead of copying values. NOCOPY is a hint rather than a directive so it may or may not work. Optimize any SQL in the called function. Don't do anything in loops that does not have to be done inside a loop.
Well, yes, but it's even better to keep all processing in SQL where possible and only resort to PL/SQL when absolutely necessary.
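A sketch of the NOCOPY hint mentioned above (the procedure name and its contents are hypothetical; DBMS_SQL.VARCHAR2_TABLE is just a convenient predefined collection type):

```sql
create or replace procedure upcase_names (
   -- NOCOPY asks the compiler to pass the collection by reference instead of
   -- copying it in and back out; it is a hint, not a guarantee
   p_names in out nocopy dbms_sql.varchar2_table
)
is
begin
   for i in 1 .. p_names.count loop
      p_names(i) := upper(p_names(i));
   end loop;
end upcase_names;
/
```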
The on-line documentation has suggested that using a DETERMINISTIC function can improve performance but I have not been able to demonstrate this, and there are notes in Metalink suggesting that this does not happen. My experience is that DETERMINISTIC functions always get executed. There's supposed to be a feature in 11g that actually caches function return values.
Deterministic functions will work well if used in conjunction with a function-based index. That can improve access times when querying data on the function results.
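A sketch of that function-based-index combination (the table and function names are hypothetical); the DETERMINISTIC declaration is what makes the index legal:

```sql
create or replace function normalize_code (p_code in varchar2)
   return varchar2
   deterministic  -- required before Oracle will build an index on this function
is
begin
   return upper(trim(p_code));
end normalize_code;
/

-- Function-based index on the deterministic function
create index items_code_fbi on items (normalize_code(item_code));

-- A query using the same expression can now use the index
select * from items where normalize_code(item_code) = 'ABC123';
```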
You can use DBMS_PROFILER to get run-time statistics for each line of your function as it is executed to help tune it.
Or code it as SQL. ;) -
Hi,
I want to tune a package. For tuning I am going to use DBMS_PROFILER, but it requires SYS user privileges and I don't have permission to use it. Can anyone help me out with how to use DBMS_PROFILER?
Why I am using DBMS_PROFILER is, I want to analyze a program unit's execution and determine its runtime behavior.
Please help me out...
Thanks
Sateesh
Hi,
First you must install the DBMS_PROFILER package in your db. You must connect as the SYS user and execute the profload.sql script.
Then you must give the appropriate grant to any db user so as to execute this package: grant execute on dbms_profiler to scott.
Afterwards, you can connect as user scott, for example, execute the sql script proftab.sql, and finally you can:
exec dbms_profiler.start_profiler
exec your package
exec dbms_profiler.stop_profiler
Select from the dbms_profiler tables in the scott schema:
plsql_profiler_data
plsql_profiler_units
plsql_profiler_runs
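The steps above, as a SQL*Plus session sketch (the package being profiled is hypothetical):

```sql
-- 1. As SYS: install the package
@?/rdbms/admin/profload.sql

-- 2. As SYS: grant execute to the user who will profile
grant execute on dbms_profiler to scott;

-- 3. As scott: create the profiler tables in his schema
@?/rdbms/admin/proftab.sql

-- 4. As scott: profile a run
exec dbms_profiler.start_profiler('my test run')
exec my_pkg.my_proc
exec dbms_profiler.stop_profiler

-- 5. Inspect the results
select runid, run_comment, run_total_time from plsql_profiler_runs;
```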
Sim -
Dbms_profiler in connection pooling environment
Hello,
I am looking for some experiences on using dbms_profiler in connection pooling environments ?
I have got this working for single user cases but I am not sure how is this going to behave in connection pooling environments ?
If you have done such testing before, please share your thoughts.
Rgds,
Gokul -
DBMS_PROFILER Query !!! Urgent
I am trying to check the performance of my code using DBMS_PROFILER.
Please tell me the unit of time in which the column TOTAL_TIME in the PLSQL_PROFILER_DATA table stores its values.
thanks in advance
rollerz
Hmmm... exclamation points in the subject, along with "Urgent".....
No mention of Oracle version....
For a question that can easily be answered by doing a Tahiti search. It took me about 30 seconds to come up w/ the answer.
Since this is urgent and you obviously need the answer immediately, I'm NOT going to tell you the answer. Instead, I'll tell you this:
1.) Point your browser to http://tahiti.oracle.com/
2.) enter 'plsql_profiler_data' into the search box and click 'Search Doc Libraries' button.
3.) Pick the relevant version for your current database version.
4.) start reading.....
I promise that all shall be revealed. The answer is there in the Oracle documentation.
Hope that helps,
-Mark -
DBMS_PROFILER timing problem
I need help for the use of the package DBSM_PROFILER. When I use DBMS_PROFILER, I get the information about the units, but no information about the time. The time is always "0". Why ?? ...
Oracle server 8.1.7 running on a Unix Alpha server.
It's already set to TRUE. I think this problem is not related to this parameter, because on another machine running W2000 with this parameter set to false, DBMS_PROFILER runs correctly. -
DBMS_PROFILER Best Practices
Can anyone share their experiences on how they have used DBMS_PROFILER for tuning their PL/SQL applications?
Your data is bigger than I run, but what I have done in the past is to restrict their accounts to a separate datafile and limit its size to the max that I want for them to use: create objects restricted to accommodate the location.
-
Units for RUN_TOTAL_TIME in DBMS_Profiler
Hi
I have run a procedure using DBMS_profiler package
example
DECLARE
l_result BINARY_INTEGER;
BEGIN
l_result := DBMS_PROFILER.start_profiler(run_comment => 'do_something: ' || SYSDATE);
do_something(p_times => 100);
l_result := DBMS_PROFILER.stop_profiler;
END;
I got example
SELECT runid,
run_date,
run_comment,
run_total_time
FROM plsql_profiler_runs
ORDER BY runid;
RUNID RUN_DATE RUN_COMMENT RUN_TOTAL_TIME
1 21-AUG-03 do_something: 21-AUG-2003 14:51:54 131072000
I am not understanding the units for RUN_TOTAL_TIME. Is it in milliseconds?
Thanks in Advance
It should be the same for min_time or max_time, right?
SELECT u.runid,
u.unit_number,
u.unit_type,
u.unit_owner,
u.unit_name,
d.line#,
d.total_occur,
(d.total_time),
d.min_time,
d.max_time
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid AND u.unit_number = d.unit_number
WHERE u.runid = 4; -
Hi folks!
I'm at the beginning of analyzing PL/SQL code. I'm using dbms_profiler. For one of our packages, dbms_profiler shows me:
Top 10 profiled source lines in terms of Total Time (plsql_profiler_data)
Top Total Time1 Times Executed Min Time2 Max Time2 Unit Owner Name Type Line Text
1 75.76 431010 0.00 0.01 1 USER package PACKAGE BODY 165 fetch c_IconsByDist bulk collect into l_IDs;
2 72.13 431010 0.00 0.01 1 USER package PACKAGE BODY 161 fetch c_CountByDist bulk collect into tmp_Nums
As you can see, this row of code fetches a cursor into a collection. Cursor:
cursor c_IconsByDist is
select i.icon_id
from (select rownum as num, column_value as icon_id
from table(tmp_IDs)) i,
(select rownum as num, column_value as dist
from table(tmp_Distances)) d,
(select column_value as dist
from table(tmp_Nums)) t
where i.num = d.num and
d.dist = t.dist
order by d.dist;
Maybe I can speed up this piece of code? Any ideas :)
Regards,
Pavel.
Thanks for the reply. Code of this function:
function get_icon(
p_Table varchar2,
p_ID number)
return number
as
l_ID number;
l_x number;
l_y number;
l_flag boolean;
l_Table varchar2(128);
arr_IDs gg_utils.TSuperNumberArray := gg_utils.TSuperNumberArray();
l_IDs TNumberArray := TNumberArray();
tmp_IDs TNumberArray := TNumberArray(null);
tmp_TypeIDs TNumberArray := TNumberArray(null);
cursor c_RegulationProt is
select p.object_id, u.type as object_type_id
from regulation_prot p, units u
where p.id = p_ID AND u.id = p.object_id;
-- more simple than regulation_ascr view
cursor c_Regulations is
SELECT DISTINCT object_id, object_type_id
FROM reg_ascr
WHERE id = p_ID;
begin
DBMS_APPLICATION_INFO.SET_MODULE(G_MODULE_NAME, 'get_icon');
l_Table := upper(p_Table);
if l_Table IN ('PROTS', 'CHEMS', 'UNITS') then
-- Some kind of Units
l_IDs := get_icon_ids(p_ID);
elsif l_Table = 'REACTS' then
-- Reactions
l_IDs := get_icon_ids(p_ID, -1);
elsif l_Table = 'GENES' then
-- Genes
l_IDs := get_icon_ids(p_ID, -2);
elsif l_Table = 'FUNCS' then
-- Enzymes (funxtions)
l_IDs := get_icon_ids(p_ID, -3);
elsif l_Table = 'CLASS' then
-- For this type of p_Table forms Collection of Collections (TSuperNumberArray)
-- First source of object types and IDs
open c_RegulationProt;
fetch c_RegulationProt bulk collect into tmp_IDs, tmp_TypeIDs;
close c_RegulationProt;
if tmp_IDs.count = 0 then
-- Second source of object types and IDs when first source is empty
open c_Regulations;
fetch c_Regulations bulk collect into tmp_IDs, tmp_TypeIDs;
close c_Regulations;
end if;
-- fill arr_IDs with collections of get_icon_ids
if tmp_IDs.count = 0 then
l_IDs := get_icon_ids(p_ID);
arr_IDs.extend;
arr_IDs(1) := l_IDs;
else
arr_IDs.extend(tmp_IDs.count);
for i in 1..tmp_IDs.count loop
l_IDs := get_icon_ids(tmp_IDs(i), tmp_TypeIDs(i));
arr_IDs(i) := l_IDs;
end loop;
end if;
else
sys_error.raise_error(sys_error.UNKNOWN_ICON_TYPE, l_Table);
end if;
l_ID := -1;
-- for 'CLASS' use collection of collections "arr_IDs" insted of single collection "l_IDs"
if l_Table = 'CLASS' then
l_x := arr_IDs.count;
else
l_x := l_IDs.count;
end if;
if l_x = 0 then
l_ID := 3;
elsif l_x = 1 then
l_ID := l_IDs(1);
else
if arr_IDs.count > 0 then
l_IDs := TNumberArray();
-- put in l_IDs only IDs, which exists in all of arrays
-- first, get array with maximum count of elements
l_x := 0;
l_y := 0;
for i in 1..arr_IDs.count loop
if arr_IDs(i).count > l_x then
l_x := arr_IDs(i).count;
l_y := i;
end if;
end loop;
-- now tmp_IDs have maximum of elements
tmp_IDs := arr_IDs(l_y);
-- trying find any elements of tmp_IDs in other arrays
for i in 1..tmp_IDs.count loop
l_flag := true;
for j in 1..arr_ids.count loop
l_flag := l_flag AND (tmp_IDs(i) member of arr_IDs(j));
exit when not l_flag;
end loop;
if l_flag then
l_IDs.extend;
l_IDs(l_IDs.LAST) := tmp_IDs(i);
end if;
end loop;
end if;
if l_IDs.count > 0 then
l_ID := l_IDs(1);
end if;
end if;
if l_ID < 0 then
l_ID := 1;
end if;
DBMS_APPLICATION_INFO.SET_MODULE(null, null);
return l_ID;
end get_icon;
And this function calls another function, as you can see:
function get_icon_ids(
p_ID integer,
p_TypeID integer default null)
return TNumberArray
as
l_TypeID integer;
is_gene integer;
l_IDs TNumberArray := TNumberArray(null);
tmp_IDs TNumberArray := TNumberArray(null);
tmp_Distances TNumberArray := TNumberArray(null);
tmp_Nums TNumberArray;
cursor c_KndUnitIcons(p_TypeID integer) is
select default_icon_id as icon_id, 1000 as dist
from knd_units
where ID = p_TypeID;
cursor c_SelfIcons is
select icon_id, 0 as dist
from proticons
where prot_id = p_ID;
cursor c_UnitIcons is
select distinct i.icon_id, r.dist
from proticons i, unitrelflat r
where r.unitg = i.prot_id and
r.unitm = p_ID and
r.dist >= 0
order by r.dist;
cursor c_IconsG is
select i.icon_id, 0 as dist
from proticons i, geneprot g
where i.prot_id = g.prot and
g.gene = p_ID;
cursor c_IconsG2 is
select i.icon_id, r.dist
from protrelflat r, proticons i, geneprot g
where r.protgrp = i.prot_id and
r.protmbr = g.prot and
r.dist >= 0 and
g.gene = p_ID
order by r.dist;
-- get distances with only icon
-->>>>Spent the most time of execution
cursor c_CountByDist is
select d.dist
from (select rownum as num, column_value as icon_id
from table(tmp_IDs)) i,
(select rownum as num, column_value as dist
from table(tmp_Distances)) d
where i.num = d.num
having count(icon_id) = 1
group by dist
order by dist;
-- found distances for only icons from c_CountByDist
-->>>>Spent the most time of execution
cursor c_IconsByDist is
select i.icon_id
from (select rownum as num, column_value as icon_id
from table(tmp_IDs)) i,
(select rownum as num, column_value as dist
from table(tmp_Distances)) d,
(select column_value as dist
from table(tmp_Nums)) t
where i.num = d.num and
d.dist = t.dist
order by d.dist;
procedure fill_by_default(
p_TypeID integer,
p_DefaultID number default 1)
as
l_ID number;
l_Distance number;
begin
open c_KndUnitIcons(p_TypeID);
fetch c_KndUnitIcons into l_ID, l_Distance;
close c_KndUnitIcons;
tmp_IDs.extend(1);
tmp_IDs(tmp_IDs.count) := nvl(l_ID, p_DefaultID);
tmp_Distances.extend(1);
tmp_Distances(tmp_Distances.count) := nvl(l_Distance, 2000);
end fill_by_default;
procedure fill_for_units(
p_TypeID integer)
as
begin
-- first, find self icon
open c_SelfIcons;
fetch c_SelfIcons bulk collect into tmp_IDs, tmp_Distances;
close c_SelfIcons;
if tmp_IDs.count = 0 then
open c_UnitIcons;
fetch c_UnitIcons bulk collect into tmp_IDs, tmp_Distances;
close c_UnitIcons;
end if;
fill_by_default(p_TypeID);
end fill_for_units;
begin
DBMS_APPLICATION_INFO.SET_MODULE(G_MODULE_NAME, 'get_icon_ids');
l_TypeID := nvl(p_TypeID, unit_service.get_type_id(p_ID));
case l_TypeID
when -2 then
-- for Genes
open c_IconsG;
fetch c_IconsG bulk collect into tmp_IDs, tmp_Distances;
close c_IconsG;
if tmp_IDs.count = 0 then
open c_IconsG2;
fetch c_IconsG2 bulk collect into tmp_IDs, tmp_Distances;
close c_IconsG2;
end if;
if tmp_IDs.count = 0 then
tmp_IDs.extend(1); -- tmp_IDs is null after empty bulk collect
tmp_IDs(1) := 1;
tmp_Distances.extend(1); -- tmp_Distances is null after empty bulk collect
tmp_Distances(1) := 1;
end if;
when -1 then
fill_by_default(l_TypeID);
when -3 then
fill_by_default(l_TypeID);
else
fill_for_units(l_TypeID);
end case;
open c_CountByDist;
fetch c_CountByDist bulk collect into tmp_Nums;
close c_CountByDist;
open c_IconsByDist;
fetch c_IconsByDist bulk collect into l_IDs;
close c_IconsByDist;
if l_IDs.count = 0 then
l_IDs.extend(1);
if p_TypeID = -1 then
l_IDs(1) := 54;
else
l_IDs(1) := 1;
end if;
end if;
DBMS_APPLICATION_INFO.SET_MODULE(null, null);
return l_IDs;
end get_icon_ids;
Now I am reviewing the code. Any help is appreciated.
Regards,
Pavel. -
PL/SQL procedure is 10x slower when running from weblogic
Hi everyone,
we've developed a PL/SQL procedure performing reporting - the original solution was written in Java but due to performance problems we've decided to switch this particular piece to PL/SQL. Everything works fine as long as we execute the procedure from SQL Developer - the batch processing 20000 items finishes in about 80 seconds, which is a serious improvement compared to the previous solution.
But once we call the very same procedure (on exactly the same data) from weblogic, the performance seriously drops - instead of 80 seconds it suddenly runs for about 23 minutes, which is 10x slower. And we don't know why this happens :-(
We've profiled the procedure (in both environments) using DBMS_PROFILER, and we've found that if the procedure is executed from Weblogic, one of the SQL statements runs noticeably slower and consumes about 800 seconds (90% of the total run time) instead of 0.9 second (2% of the total run time), but we're not sure why - in both cases this query is executed 32742-times, giving 24ms vs. 0.03ms in average.
The SQL is
SELECT personId INTO v_personId FROM (
SELECT personId FROM PersonRelations
WHERE extPersonId LIKE v_person_prefix || '%'
) WHERE rownum = 1;
Basically it returns an ID of the person according to some external ID (or the prefix of the ID). I do understand why this query might be a performance problem (the LIKE operator etc.), but I don't understand why it runs quite fast when executed from SQL Developer and 10x slower when executed from Weblogic (exactly the same data, etc.).
We're using Oracle 10gR2 with Weblogic 10, running on a separate machine - there are no other intensive tasks, so there's nothing that could interfere with the oracle process. According to the 'top' command, the wait time is below 0.5%, so there should be no serious I/O problems. We've even checked the JDBC connection pool settings in Weblogic, but I doubt this issue is related to JDBC (and everything looks fine anyway). The statistics are fresh and the results are quite consistent.
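One thing worth checking in a case like this is whether both environments really get the same execution plan: JDBC drivers can bind parameters with different types (for example NVARCHAR versus VARCHAR2) or different session settings, which can produce a different child cursor and a different plan for the same SQL text. A diagnostic sketch (the LIKE pattern is just a way to locate the statement):

```sql
-- Compare child cursors for the statement across the two environments
select sql_id, child_number, plan_hash_value, executions,
       round(elapsed_time / nullif(executions, 0) / 1000, 2) as avg_ms
from v$sql
where sql_text like 'SELECT personId%PersonRelations%';

-- Then display the plan actually used by each child
select * from table(dbms_xplan.display_cursor('&sql_id', &child_number));
```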
Edited by: user6510516 on 17.7.2009 13:46
The setup is quite simple - the database is running on a dedicated database server (development only). Generally there are no 'intensive' tasks running on this machine, especially not when the procedure I'm talking about was executed. The application server (weblogic 10) is running on a different machine so it does not interfere with the database (in this case it was my own workstation).
No, the procedure is not called 20000x - we have a table with batch of records we need to process, with a given flag (say processed=0). The procedure reads them using a cursor and processes the records one-by-one. By 'processing' I mean computing some sums, updating other table, etc. and finally switching the record to processed=1. I.e. the procedure looks like this:
CREATE PROCEDURE process_records IS
v_record records_to_process%ROWTYPE;
BEGIN
OPEN records_to_process;
LOOP
FETCH records_to_process INTO v_record;
EXIT WHEN records_to_process%NOTFOUND;
-- process the record (update table A, insert a record into B, delete from C, query table D ....)
-- and finally mark the row as 'processed=1'
END LOOP;
CLOSE records_to_process;
END process_records;
The procedure is actually part of a package and the cursor 'records_to_process' is defined in the body. One of the queries executed in the procedure is the SELECT mentioned above (the one that jumps from 2% to 90%).
So the only thing we actually do in Weblogic is
CallableStatement cstmt = connection.prepareCall("{call ProcessPkg.process_records}");
cstmt.execute();
and that's it - there is only one call to JDBC, so the network overhead shouldn't be a problem.
There are 20000 rows we use for testing - we just update them to 'processed=0' (and clear some of the other tables). So actually each run uses exactly the same data, same code paths and produces the very same results. Yet when executed from SQL developer it takes 80 seconds and when executed from Weblogic it takes 800 seconds :-(
The only difference I've just noticed is that when using SQL Developer, we're using PL/SQL notation, i.e. "BEGIN ProcessPkg.process_records; END;" instead of "{call }" but I guess that's irrelevant. And yet another difference - weblogic uses JDBC from 10gR2, while the SQL Developer is bundled with JDBC from 11g. -
How to optimize the select query that is executed in a cursor for loop?
Hi Friends,
I have executed the code below and clocked the times for every line of the code using DBMS_PROFILER.
CREATE OR REPLACE PROCEDURE TEST
AS
p_file_id NUMBER := 151;
v_shipper_ind ah_item.shipper_ind%TYPE;
v_sales_reserve_ind ah_item.special_sales_reserve_ind%TYPE;
v_location_indicator ah_item.exe_location_ind%TYPE;
CURSOR activity_c
IS
SELECT *
FROM ah_activity_internal
WHERE status_id = 30
AND file_id = p_file_id;
BEGIN
DBMS_PROFILER.start_profiler ('TEST');
FOR rec IN activity_c
LOOP
SELECT DISTINCT shipper_ind, special_sales_reserve_ind, exe_location_ind
INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
FROM ah_item --464000 rows in this table
WHERE item_id_edw IN (
SELECT item_id_edw
FROM ah_item_xref --700000 rows in this table
WHERE item_code_cust = rec.item_code_cust
AND facility_num IN (
SELECT facility_code
FROM ah_chain_div_facility --17 rows in this table
WHERE chain_id = ah_internal_data_pkg.get_chain_id (p_file_id)
AND div_id = (SELECT div_id
FROM ah_div --8 rows in this table
WHERE division = rec.division)));
END LOOP;
DBMS_PROFILER.stop_profiler;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
NULL;
WHEN TOO_MANY_ROWS
THEN
NULL;
END TEST;
The SELECT query inside the cursor FOR LOOP took 773 seconds.
I have tried using BULK COLLECT instead of cursor for loop but it did not help.
When I took out the select query separately and executed with a sample value then it gave the results in a flash of second.
All the tables have primary key indexes.
Any ideas what can be done to make this code perform better?
Thanks,
Raj.
As suggested, I'd try merging the queries into a single SQL statement. You could also rewrite your IN clauses as JOINs and see if that helps, e.g.
SELECT DISTINCT ai.shipper_ind, ai.special_sales_reserve_ind, ai.exe_location_ind
INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
FROM ah_item ai, ah_item_xref aix, ah_chain_div_facility acdf, ah_div ad
WHERE ai.item_id_edw = aix.item_id_edw
AND aix.item_code_cust = rec.item_code_cust
AND aix.facility_num = acdf.facility_code
AND acdf.chain_id = ah_internal_data_pkg.get_chain_id (p_file_id)
AND acdf.div_id = ad.div_id
AND ad.division = rec.division;
ALSO: You are calling ah_internal_data_pkg.get_chain_id (p_file_id) every time. Why not do it outside the loop and just use a variable in the inner query? That will prevent context switching and improve speed.
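Hoisting that function call out of the loop, combined with the JOIN rewrite, might look like this (a sketch; v_chain_id's declaration should match the real column type):

```sql
DECLARE
   v_chain_id NUMBER;  -- assumed type for ah_chain_div_facility.chain_id
BEGIN
   -- one call, hoisted out of the loop, instead of one per fetched row
   v_chain_id := ah_internal_data_pkg.get_chain_id (p_file_id);

   FOR rec IN activity_c
   LOOP
      SELECT DISTINCT ai.shipper_ind, ai.special_sales_reserve_ind, ai.exe_location_ind
        INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
        FROM ah_item ai, ah_item_xref aix, ah_chain_div_facility acdf, ah_div ad
       WHERE ai.item_id_edw = aix.item_id_edw
         AND aix.item_code_cust = rec.item_code_cust
         AND aix.facility_num = acdf.facility_code
         AND acdf.chain_id = v_chain_id   -- variable, not a per-row function call
         AND acdf.div_id = ad.div_id
         AND ad.division = rec.division;
   END LOOP;
END;
```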
Edited by: Dave Hemming on Dec 3, 2008 9:34 AM -
Stored Procedure is taking too long time to Execute.
Hi all,
I have a stored procedure which executes in 2 hr in one database, but the same stored procedure is taking more than 6 hour in the other database.
Both the database are in oracle 11.2
Can you please suggest what might be the reasons.
Thanks.
In most sites I've worked at it's almost impossible to trace sessions, because you don't have read permissions on the tracefile directory (or access to the server at all). My first check would therefore be to look in my session browser to see what the session is actually doing. What is the current SQL statement? What is the current wait event? What cursors has the session spent time on? If the procedure just slogs through one cursor or one INSERT statement etc. then you have a straightforward SQL tuning problem. If it's more complex then it will help to know which part is taking the time.
If you have a licence for the diagnostic pack you can query v$active_session_history, e.g. (developed for 10.2.0.3, could maybe do more in 11.2):
SELECT CAST(ash.started AS DATE) started
, ash.elapsed
, s.sql_text
, CASE WHEN ash.sql_id = :sql_id AND :status = 'ACTIVE' THEN 'Y' END AS executing
, s.executions
, CAST(NUMTODSINTERVAL(elapsed_time/NULLIF(executions,0)/1e6,'SECOND') AS INTERVAL DAY(0) TO SECOND(1)) AS avg_time
, CAST(NUMTODSINTERVAL(elapsed_time/1e6,'SECOND') AS INTERVAL DAY(0) TO SECOND(1)) AS total_time
, ROUND(s.parse_calls/NULLIF(s.executions,0),1) avg_parses
, ROUND(s.fetches/NULLIF(s.executions,0),1) avg_fetches
, ROUND(s.rows_processed/NULLIF(s.executions,0),1) avg_rows_processed
, s.module, s.action
, ash.sql_id
, ash.sql_child_number
, ash.sql_plan_hash_value
, ash.started
FROM ( SELECT MIN(sample_time) AS started
, CAST(MAX(sample_time) - MIN(sample_time) AS INTERVAL DAY(0) TO SECOND(0)) AS elapsed
, sql_id
, sql_child_number
, sql_plan_hash_value
FROM v$active_session_history
WHERE session_id = :sid
AND session_serial# = :serial#
GROUP BY sql_id, sql_child_number, sql_plan_hash_value ) ash
LEFT JOIN
( SELECT sql_id, plan_hash_value
, sql_text, SUM(executions) OVER (PARTITION BY sql_id) AS executions, module, action, rows_processed, fetches, parse_calls, elapsed_time
, ROW_NUMBER() OVER (PARTITION BY sql_id ORDER BY last_load_time DESC) AS seq
FROM v$sql ) s
ON s.sql_id = ash.sql_id AND s.plan_hash_value = ash.sql_plan_hash_value
WHERE s.seq = 1
ORDER BY 1 DESC;
:sid and :serial# come from v$session. In PL/SQL Developer I defined this as a tab named 'Session queries' in the session browser.
I have another tab named 'Object wait totals this query' containing:
SELECT LTRIM(ep.owner || '.' || ep.object_name || '.' || ep.procedure_name,'.') AS plsql_entry_procedure
, LTRIM(cp.owner || '.' || cp.object_name || '.' || cp.procedure_name,'.') AS plsql_procedure
, session_state
, CASE WHEN blocking_session_status IN ('NOT IN WAIT','NO HOLDER','UNKNOWN') THEN NULL ELSE blocking_session_status END AS blocking_session_status
, event
, wait_class
, ROUND(SUM(wait_time)/100,1) as wait_time_secs
, ROUND(SUM(time_waited)/100,1) as time_waited_secs
, LTRIM(o.owner || '.' || o.object_name,'.') AS wait_object
FROM v$active_session_history h
LEFT JOIN dba_procedures ep
ON ep.object_id = h.plsql_entry_object_id AND ep.subprogram_id = h.plsql_entry_subprogram_id
LEFT JOIN dba_procedures cp
ON cp.object_id = h.plsql_object_id AND cp.subprogram_id = h.plsql_subprogram_id
LEFT JOIN dba_objects o ON o.object_id = h.current_obj#
WHERE h.session_id = :sid
AND h.session_serial# = :serial#
AND h.user_id = :user#
AND h.sql_id = :sql_id
AND h.sql_child_number = :sql_child_number
GROUP BY
ep.owner, ep.object_name, ep.procedure_name
, cp.owner, cp.object_name, cp.procedure_name
, session_state
, CASE WHEN blocking_session_status IN ('NOT IN WAIT','NO HOLDER','UNKNOWN') THEN NULL ELSE blocking_session_status END
, event
, wait_class
, o.owner
, o.object_name
It's not perfect and the numbers aren't reliable, but it gives me an idea where the time might be going. While I'm at it, v$session_longops is worth a look, so I also have 'Longops' as:
SELECT sid
, CASE WHEN l.time_remaining> 0 OR l.sofar < l.totalwork THEN 'Yes' END AS "Active?"
, l.opname AS operation
, l.totalwork || ' ' || l.units AS totalwork
, NVL(l.target,l.target_desc) AS target
, ROUND(100 * l.sofar/GREATEST(l.totalwork,1),1) AS "Complete %"
, NULLIF(RTRIM(RTRIM(LTRIM(LTRIM(numtodsinterval(l.elapsed_seconds,'SECOND'),'+0'),' '),'0'),'.'),'00:00:00') AS elapsed
, l.start_time
, CASE
WHEN l.time_remaining = 0 THEN l.last_update_time
ELSE SYSDATE + l.time_remaining/86400
END AS est_completion
, l.sql_id
, l.sql_address
, l.sql_hash_value
FROM v$session_longops l
WHERE :sid IN (sid,qcsid)
AND l.start_time >= TO_DATE(:logon_time,'DD/MM/YYYY HH24:MI:SS')
ORDER BY l.start_time desc
and 'Longops this query' as:
SELECT sid
, CASE WHEN l.time_remaining> 0 OR l.sofar < l.totalwork THEN 'Yes' END AS "Active?"
, l.opname AS operation
, l.totalwork || ' ' || l.units AS totalwork
, NVL(l.target,l.target_desc) AS target
, ROUND(100 * l.sofar/GREATEST(l.totalwork,1),1) AS "Complete %"
, NULLIF(RTRIM(RTRIM(LTRIM(LTRIM(numtodsinterval(l.elapsed_seconds,'SECOND'),'+0'),' '),'0'),'.'),'00:00:00') AS elapsed
, l.start_time
, CASE
WHEN l.time_remaining = 0 THEN l.last_update_time
ELSE SYSDATE + l.time_remaining/86400
END AS est_completion
, l.sql_id
, l.sql_address
, l.sql_hash_value
FROM v$session_longops l
WHERE :sid IN (sid,qcsid)
AND l.start_time >= TO_DATE(:logon_time,'DD/MM/YYYY HH24:MI:SS')
AND l.sql_id = :sql_id
ORDER BY l.start_time desc
You can also get this sort of information out of OEM if you're lucky enough to have access to it - if not, ask for it!
Apart from this type of monitoring, you might try using DBMS_PROFILER (point and click in most IDEs, but you can use it from the SQL*Plus prompt), and also instrument your code with calls to DBMS_APPLICATION_INFO.SET_CLIENT_INFO so you can easily tell from v$session which section of code is being executed. -
Insert taking a very long time
I've got an insert statement that inserts into a fairly small table that is taking up to 3 minutes to complete. The table only has 52k records in it. Other tables in this database can be inserted into in less than 1 second.
Running explain plan on the insert only gives 1 line that isn't very helpful and basically says the execution should be very quick.
My question is, if I want to really get to the heart of Oracle 11g and debug it, what tools should I use or what should I look at to find out why this insert is taking so long?
user6657500 wrote:
your mention of triggers got me looking and indeed there is a trigger. The trigger clears the cache on the table any time a row is inserted or updated. I'm currently trying to figure out if this is where the problem is. I still don't have any tools that I know of where I could watch all of the things that are going on when I hit this.
Running a trace will reveal all of the SQL that is executed during the insert - that includes any SQL that is fired as a result of the triggers. So any deletes, updates or additional inserts will all appear in the trace file, and TKProf will allow you to analyse the trace file and output a report in an order that makes sense for you. For example you could order it by elapsed time, or number of executions etc. The resulting report will contain the SQL, execution plans and, importantly, the wait events (the metrics that show you exactly what your statement has been waiting for) ordered in the way you specified.
For example if your insert results in 52000 new rows but you have a row level trigger that issues a delete on another table, your trace file will contain the 1 insert statement and all of its waits, and it would also contain 52000 delete statements and their associated waits.
From a PL/SQL perspective you can profile an application to see exactly where it's spending its time using dbms_profiler
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_profil.htm
or in 11g dbms_hprof
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_hprof.htm
Make trace, TKprof and the profilers your friends! :-)
Good luck. -
Which query is running in package
Hi,
I created a package and ran it in Toad, like:
select reports_package.func_reports('ABC') from dual;
I have 10 delete and 10 insert statements in this.
How will I check which query is running in the database?
799301 wrote:
I have 10 delete and 10 insert statements in this.
How will I check which query is running in the database?
V$SESSION_LONGOPS holds information on SQL taking > 6 seconds to run, but it's not always possible to find the expected entries in the V$ views.
If you're trying to figure out what is slow, a better bet would be to perform a trace or use DBMS_PROFILER to analyze the PL/SQL.