Does a cursor hit the database every time??
Hi seniors,
I am a little confused about the concept of a cursor. I just want to know whether a cursor has anything to do with hitting the database.
Explanation :
Say, for example, I have a cursor that returns 1000 rows from multiple tables; that result set is stored in a named SQL area called a cursor.
Now my question is: when I loop over the cursor, does it get the actual data directly from the cursor, or does it just get an address from the cursor pointing back to the actual database table, or something like that?
If possible, please help me clear up this doubt.
The reason I ask is that I have created a package that moves or drops a table, with all of its objects, across all the schemas available on the database server, and it makes heavy use of cursors based on system views like ALL_TABLES, ALL_TRIGGERS, etc.
Thanks in advance.
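To make the question concrete, the kind of loop I mean is something like this (the cursor and table names here are made up for illustration):

```sql
DECLARE
  -- Illustrative names only; any query would do.
  CURSOR cur_emp IS
    SELECT employee_id, last_name
      FROM employees;
  rec cur_emp%ROWTYPE;
BEGIN
  OPEN cur_emp;             -- the query's result set is associated with the cursor here
  LOOP
    FETCH cur_emp INTO rec; -- is this FETCH reading from the cursor's work area,
                            -- or going back to the base tables each time?
    EXIT WHEN cur_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(rec.employee_id || ' ' || rec.last_name);
  END LOOP;
  CLOSE cur_emp;
END;
```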
Example:
PROCEDURE move_table_pd (
cTable in varchar2,
cFromSchema in varchar2 := 'STI_COUNTRY_USA',
cToSchema in varchar2 := 'STI_COMMON',
nVerbosity in number := 0,
nExecuteImmediate in number := 1
)
IS
BEGIN
if ((cTable is not null) AND (cFromSchema is not null) AND (cToSchema is not null)) then
if (nVerbosity <> 0) then
print_start_time_pd;
end if;
cTableName := upper(cTable);
cSourceSchema := upper(cFromSchema);
cDestinationSchema := upper(cToSchema);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',false);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',false);
-- Step 1 : Create the table at destination schema if needed.
create_table_pd(cTableName,cSourceSchema,cDestinationSchema);
-- Step 2 : Create Sequences and Triggers for the table at destination schema if needed.
create_trigger_and_sequence_pd(cTableName,cSourceSchema,cDestinationSchema);
-- Step 3 : Create Indexes for the table at destination schema if needed and then drop the rest all indexes if any.
create_index_pd(cTableName,cSourceSchema,cDestinationSchema);
-- step 4 : Insert the data at destination schema table
populateTable_pd(cTableName,cSourceSchema,cDestinationSchema);
-- The last step is to Drop the table and we need to really take care here.
-- step 5 : Drop the table from all other schema except destination schema.
drop_table_pd(cTableName,cDestinationSchema);
-- Again create the public synonyms on table
create_and_grant_synonym_pd(cTableName,cTableName);
-- Step 6 : Now execute all the statements from the statement array.
executeStatement_pd(nVerbosity,nExecuteImmediate,cDestinationSchema);
if (nVerbosity <> 0) then
print_end_time_pd;
end if;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END move_table_pd;
PROCEDURE create_table_pd (
cTableName in ALL_TABLES.TABLE_NAME%TYPE,
cSourceSchema in ALL_TABLES.OWNER%TYPE,
cDestinationSchema in ALL_TABLES.OWNER%TYPE
)
IS
BEGIN
-- Step 1 : Create or drop the table depending on the table's schema.
if ((cTableName is not null) AND (cSourceSchema is not null)) then
FOR REC_TABLE IN cur_get_create_table_detail(cTableName,cSourceSchema)
LOOP
BEGIN
if (REC_TABLE.OWNER = cSourceSchema) then
--Get the DDL of the table
cSqlStatement := getObjectDDL_fd('TABLE',cTableName,REC_TABLE.OWNER);
-- As this SQL statement contains the source table's schema name, we need to replace it with the destination schema
-- and then create the table.
cSqlStatement := FindAndReplace_fd(cSqlStatement,cSourceSchema,cDestinationSchema);
-- First check whether the same table already exists in the destination schema; if it does, there is no need to create it.
nObjectFound := isTableAlreadyExist_fd(cTableName,cDestinationSchema);
if (nObjectFound = 0) then
-- Now we are assured that the same table does not exist at cDestinationSchema
-- So now we can push the statement to be executed in statements array.
pushStatement_pd(cSqlStatement);
cSqlStatement := null;
end if;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END;
END LOOP;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END create_table_pd;
PROCEDURE create_trigger_and_sequence_pd (
cTableName in ALL_TABLES.TABLE_NAME%TYPE,
cSourceSchema in ALL_TRIGGERS.OWNER%TYPE,
cDestinationSchema in ALL_TRIGGERS.OWNER%TYPE
)
IS
-- Procedure local variables.
-- for triggers details
cTriggerSchema ALL_TRIGGERS.owner%TYPE;
cDescription ALL_TRIGGERS.description%TYPE;
cTriggerBody ALL_TRIGGERS.trigger_body%TYPE;
cTriggerName ALL_TRIGGERS.trigger_name%TYPE;
-- for sequence details
cSequenceOwner ALL_SEQUENCES.sequence_owner%TYPE ;
cSequenceName ALL_SEQUENCES.sequence_name%TYPE ;
-- Check Trigger count on table
cTriggerCount number :=0;
BEGIN
-- Step 2 : Create the sequences, triggers and their synonyms and grants on the table's schema.
if ((cTableName is not null) AND (cSourceSchema is not null) and (cDestinationSchema is not null)) then
FOR REC_TRIGGER IN cur_get_create_trigger_detail(cTableName,cSourceSchema)
LOOP
BEGIN
cTriggerSchema := REC_TRIGGER.owner ;
cDescription := REC_TRIGGER.description ;
cTriggerBody := REC_TRIGGER.trigger_body;
cTriggerName := REC_TRIGGER.trigger_name;
if (cTriggerSchema = cSourceSchema) then
-- check the sequences for that trigger if any then create the same
FOR REC_SEQUENCE IN cur_get_create_sequence_detail(cTriggerName,cSourceSchema) LOOP
cSequenceOwner := REC_SEQUENCE.sequence_owner;
cSequenceName := REC_SEQUENCE.sequence_name;
BEGIN
if ((cSequenceName is not null) AND (cSequenceOwner = cSourceSchema)) then
--Get the DDL of the sequence
cSqlStatement := getObjectDDL_fd('SEQUENCE',cSequenceName,cSequenceOwner);
-- As this SQL statement contains the source sequence's schema name, we need to replace it with the destination schema
-- and then create the sequence.
cSqlStatement := FindAndReplace_fd(cSqlStatement,cSourceSchema,cDestinationSchema);
-- First check whether the same sequence already exists in the destination schema; if it does, there is no need to create it.
nObjectFound := isSequenceAlreadyExist_fd(cSequenceName,cDestinationSchema);
if (nObjectFound = 0) then
-- Now we are assured that the same sequence does not exist at cDestinationSchema
-- So now we can push the statement to be executed in statements array.
pushStatement_pd(cSqlStatement);
cSqlStatement := null;
-- First drop synonym and then create
drop_synonym_pd(cSequenceName,cDestinationSchema);
-- Create the public synonym for sequence and give the grants to the sequence
-- As we know this sequence is part of the trigger, we do not need
-- to create the synonyms and grants for it
--create_and_grant_synonym_pd(cSequenceName,cSequenceName);
-- And now drop this existing sequences
drop_sequence_pd(cSequenceName,cSourceSchema,cDestinationSchema);
end if;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END;
END LOOP;
-- First check whether the same trigger already exists in the destination schema; if it does, there is no need to create it.
nObjectFound := isTriggerAlreadyExist_fd(cTriggerName,cDestinationSchema);
if (nObjectFound = 0) then
-- Now we are assured that the same trigger does not exist at cDestinationSchema
-- So now we can push the statement to be executed in statements array.
-- Instead, we can create the trigger a different way, as shown below
-- Create the trigger using a different approach
cDescription := FindAndReplace_fd(UPPER(cDescription),cTableName,cDestinationSchema||'.'||cTableName);
cSqlStatement :='CREATE OR REPLACE TRIGGER '||cDescription||UPPER(cTriggerBody);
pushStatement_pd(cSqlStatement);
cSqlStatement := null;
end if;
-- Now drop the existing synonyms on triggers if any
-- As we do not create synonyms for triggers, we do not have to drop them
--drop_synonym_pd(cTriggerName,cDestinationSchema);
-- Now drop the existing triggers from other schema
-- We do not need to drop the triggers manually as it gets dropped along with the table.
--drop_trigger_pd(cTriggerName,cSourceSchema,cDestinationSchema);
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END;
END LOOP;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END create_trigger_and_sequence_pd;
PROCEDURE create_index_pd (
cTableName in ALL_INDEXES.TABLE_NAME%TYPE,
cSourceSchema in ALL_INDEXES.OWNER%TYPE,
cDestinationSchema in ALL_INDEXES.OWNER%TYPE
)
IS
BEGIN
--cur_get_create_index_detail index_name
if((cTableName is not null) AND (cSourceSchema is not null) AND (cDestinationSchema is not null) ) then
FOR REC_CREATE_INDEX IN cur_get_create_index_detail(cTableName,cSourceSchema)
LOOP
BEGIN
if ((REC_CREATE_INDEX.index_name IS NOT NULL ) AND (REC_CREATE_INDEX.owner = cSourceSchema)) then
--Get the DDL of the Index
cSqlStatement := getObjectDDL_fd('INDEX',REC_CREATE_INDEX.index_name,REC_CREATE_INDEX.owner);
-- As this SQL statement contains the source index's schema name,
-- we need to replace it with the destination schema
-- and then create the index.
cSqlStatement := FindAndReplace_fd(cSqlStatement,cSourceSchema,cDestinationSchema);
-- First check whether the same index already exists in the destination schema;
-- if it does, there is no need to create it.
nObjectFound := isIndexAlreadyExist_fd(REC_CREATE_INDEX.index_name,cDestinationSchema);
if (nObjectFound = 0) then
-- Now we are assured that the same index does not exist at cDestinationSchema
-- So now we can push the statement to be executed in statements array.
pushStatement_pd(cSqlStatement);
cSqlStatement := null;
-- Now that we have queued a statement to create the index,
-- we need to check for its existing synonyms and drop them if they exist
drop_synonym_pd(REC_CREATE_INDEX.index_name,cDestinationSchema);
-- For indexes we do not need to create a public synonym, and there is no need to grant privileges on an index
--create_and_grant_synonym_pd(REC_CREATE_INDEX.index_name,REC_CREATE_INDEX.index_name);
-- And now drop this existing indexes if any
-- We do not need to drop the indexes manually as it gets dropped along with the table.
-- drop_index_pd(REC_CREATE_INDEX.index_name,cSourceSchema,cDestinationSchema);
end if;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END;
END LOOP;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END create_index_pd;
PROCEDURE populateTable_pd (
cTableName in ALL_TABLES.TABLE_NAME%TYPE,
cSourceSchema in ALL_TABLES.OWNER%TYPE,
cDestinationSchema in ALL_TABLES.OWNER%TYPE
)
IS
BEGIN
if((cTableName is not null) AND (cSourceSchema is not null) AND (cDestinationSchema is not null) ) then
nObjectFound := isTableAlreadyExist_fd(cTableName,cSourceSchema);
if (nObjectFound <> 0) then
cSqlStatement := 'INSERT INTO ' ||cDestinationSchema||'.'|| cTableName||
' SELECT * FROM '||cSourceSchema||'.'||cTableName;
pushStatement_pd(cSqlStatement);
cSqlStatement := null;
end if;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END populateTable_pd;
PROCEDURE executeStatement_pd (
nVerbosity in number := 0,
nExecuteImmediate in number := 1,
cExecuteOn in varchar2 := 'STI_COMMON'
)
IS
nTotalRecords number :=0;
l_strsql LONG;
cStmt varchar2(200);
cError varchar2(300);
cCurrentSchema varchar2(50);
BEGIN
if (aAllStatement is not null) then
cCurrentSchema := getCurrentSchema_fd;
if (nExecuteImmediate <> 0) then
--altersession_pd;
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',false);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',false);
end if;
nTotalRecords := aAllStatement.COUNT;
if (nVerbosity <> 0) then
DBMS_OUTPUT.PUT_LINE('TOTAL STATEMENTS TO BE EXECUTED :'|| nTotalRecords);
DBMS_OUTPUT.PUT_LINE('---------------------- EXECUTION BEGINS HERE -----------------');
end if;
--FOR cntr in 1..nTotalRecords
FOR cntr in aAllStatement.FIRST..aAllStatement.LAST
LOOP
BEGIN
if aAllStatement.EXISTS(cntr) then
cSqlStatement := aAllStatement(cntr);
l_strsql := dbms_lob.SUBSTR( cSqlStatement, 32765, 1 );
if (nVerbosity <> 0) then
DBMS_OUTPUT.PUT_LINE(cntr||' Now executing : '||cSqlStatement );
end if;
if (nExecuteImmediate <> 0) then
if (l_strsql is not null) then
BEGIN
--EXECUTE IMMEDIATE cSqlStatement;
EXECUTE IMMEDIATE l_strsql;
INSERT INTO gen_sql_log (t_sql_log_time,c_os_user,c_host,c_Server_Host,c_sql) VALUES (LOCALTIMESTAMP,sys_context('USERENV', 'OS_USER'),sys_context('USERENV', 'HOST'),sys_context('USERENV', 'SERVER_HOST'),l_strsql);
EXCEPTION
WHEN OTHERS THEN
cError:=substr(SQLERRM,1,300);
DBMS_OUTPUT.PUT_LINE('-------------<< ERROR >>-------------');
DBMS_OUTPUT.PUT_LINE('Error while running : '|| l_strsql);
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('Error Occurred : '|| cError);
DBMS_OUTPUT.PUT_LINE('-------------<< END OF ERROR >>-------------');
END;
end if;
end if;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END;
END LOOP;
aAllStatement.DELETE(aAllStatement.FIRST,aAllStatement.LAST);
--aAllStatement.TRIM(nTotalRecords);
nStatementCounter :=0;
-- Move back to previous session
if (nExecuteImmediate <> 0) then
--altersession_pd(cCurrentSchema);
cCurrentSchema := getCurrentSchema_fd;
if (nVerbosity <> 0) then
DBMS_OUTPUT.PUT_LINE(' CURRENT SCHEMA : '|| cCurrentSchema);
end if;
end if;
commit;
end if;
EXCEPTION
WHEN OTHERS THEN
null;
END executeStatement_pd;
Similar Messages
-
Hi,
The DBA has told us that there are 200 open cursors in the database and this number is growing every day.
How can we find the places in the code where developers have forgotten to close cursors?
Thanks
Sandy
thanks, it was useful information. Is there a way to find out which procedures are the culprit?
If you (or your DBA) are talking about v$open_cursor...
Oracle caches cursors. What you see in this view are not just pl/sql Cursor For Loops but any kind of cursor. Even a plain old select statement "select * from dual;" will show up here.
And when a cursor is closed/completed it does not immediately disappear from this view. It may remain in the cache (and in this view) until a slot is needed for a new cursor. Who knows how long that may take...
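As a starting point, a query along these lines shows, per session, what SQL text is currently held in v$open_cursor (the username filter is just an example; you need SELECT privileges on the v$ views):

```sql
SELECT s.sid,
       s.username,
       oc.sql_text
  FROM v$open_cursor oc
  JOIN v$session    s ON s.sid = oc.sid
 WHERE s.username = 'APP_USER'   -- example filter, substitute your own
 ORDER BY s.sid;
```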
Yes, it's possible you may have a bad procedure that isn't doing a proper close of an explicit cursor. But you (or your DBA) should do some research on this view before chasing after something that may not even exist. -
Bea weblogic 6.1 does not oracle Database fail over
Hi, we had Concurrency Strategy: exclusive. We have now changed it to Database for performance
reasons. Since that change, when we do an Oracle database failover, WebLogic
6.1 does not detect the failover and needs a restart.
How can we resolve this??
mt wrote:
Hi We had Concurrency Strategy:excusive . Now we change that to Database for performace
reasons. Since we change that now when we do oracle database fail over weblogic
6.1 does not detect database fail over and it need restart.
how we can resolve this ??
Are your pools set to test connections at reserve time?
Joe -
What does a package database consist of?
What does a package database consist of? Can anyone give me a complete description?
Hi,
This is Prabhuram Devar,
A package database consists of:
/var/sadm/pkg: This is a directory containing one directory entry for every package installed on the system.
/var/sadm/pkg/<packagename>/pkginfo
/var/sadm/pkg/<packagename>/save/<patchid>/undo.Z for the backout packages.
/var/sadm/pkg/<packagename>/save/pspool/<packagename>: A sparse package that is used for non-global zone install.
/var/sadm/install/contents: This file has an entry for every file in the system that has been installed through a package. Entries are added or removed automatically using the binaries installf and removef. -
Hi ALL,
I have implemented the LMR1M001 enhancement (user exit EXIT_SAPLMRMP_010), but during MIRO the exit is not hit:
I put a breakpoint in EXIT_SAPLMRMP_010, include ZXM08U16.
I created the project in the following way:
In tcode CMOD, create project ZMIRO.
In the enhancement assignment, add LMR1M001.
In the exit EXIT_SAPLMRMP_010, include ZXM08U16,
I wrote my code
and activated both the project and the include,
and I set debug mode active.
But while running MIRO, it does not hit.
In MIRO I am doing the following:
select Subsequent debit;
in the Basic data tab I enter the date;
in the Detail tab I enter the unplanned delivery cost;
I insert the purchase order number
and press Enter.
After pressing Enter, it executes directly without calling the user exit.
Please help me,
Thanks,
Hi,
Maybe this exit is not called for your scenario, or the condition to trigger the exit may not have been met.
Just activate your SQL trace and then run MIRO, and check in the trace log whether your exit is called. Otherwise you may use another one.
Regards,
Renjith Michael. -
How does the hit counter work?
hi
a while ago i made a web site with a hit counter on it
i wrote the hit counter using php and a flat text file to store the number of hits
i experienced a number of problems
1. if someone visited the site more than once, the hit counter would go up each time they visited. this is not really desired as i really only want to know how many unique hits i get
2. web bots and web crawlers would also make the counter go up
does the hit counter that iweb generates suffer from this problem?
how does it work - from total hits or only unique hits?
does anyone know if web crawlers affect the counter?
thanks!
It is a hit counter and will register even when you visit the website.
I found this annoying, so I ditched the iWeb hit counter and went to Bravenet and downloaded a visit counter instead. You can customise your design, have a visit counter instead of a hit counter, and set it to ignore your own hits - so if you want to visit your site to see how it looks, it won't register your hit. The hit counter registers every time and for every page, whereas a visit counter registers only the visit to the first page and that's it. If you use Bravenet it will give you stats too.
I am sure you can get hit counters and visit counters from lots of other sources though. -
JBuilder Personal does not allow database access ?
Hi friends,
JBuilder Personal does not allow database access ?
Thanks,
MYI
JBuilder 5 Personal doesn't allow drag-and-drop capabilities for Database Express. Borland has disabled many of the program's features in the free downloadable version.
However, there are still many ways to access a database, you just have to do it yourself. For instance you can use the JDBC-ODBC bridge to connect to a MS Access database and there is plenty of on-line documentation on how to do this.
hope this helps. -
Does the Repair Database Option Remove Duplicates?
My main question is does the repair database option also delete some duplicates? I ask because I recently had an issue with importing some duplicates. Before tackling the process of removing the dups, I decided to go ahead and repair the database since I imported a bunch of files and wanted to make sure everything was working correctly. After the repair database completed, I noticed that I had about 2k fewer images in my database. Luckily I had backed it up before I ran the repair. I opened up the backup and so far all I see missing are some of the dups. I spot checked at least 10 different projects and all the original masters seem to be there.
I wasn't aware that repair would actually remove files. I wish it would have removed all of them, but I still find this a bit odd, as it was an unexpected function. What do you guys think?
Neither of the first aid options, repair or rebuild, intentionally targets duplicates. Any files removed by either of those procedures were removed because of some inconsistency in the database.
After either of those runs you might see a new project in your library named Recovered. This contains any orphaned image files, i.e. files that Aperture could not place in a project.
Does the Universal Connector read only the changes or the whole of the database every time?
Does the Universal Connector read only the changes in the external database into the Connector View every scheduled cycle, or does it read the whole of the database into the connector view every time?
I need to write an indirect connector for a Lotus Notes database using the universal connector and universal text parser. So I want to know whether the import and dump scripts that I will have to write will have to read just the changes that took place, or the whole of the Notes DB every time.
-Aparna
If it is possible to get just the changes then do that.
-
Callable statements and cursors in database
Hi,
I am using a callable statement (from a JSP) that calls a big Oracle stored procedure. The stored procedure inserts/deletes/updates many tables.
But once I close the statement (I am sure I closed it in the finally block), when I monitor the database there are a lot of cursors open - one cursor for every insert/update/select statement in the procedure.
When I hit the same JSP again, it does not open more cursors but reuses them. But when multiple users hit the page simultaneously, it opens more cursors, and we found the count to be a multiple of the number of active connections.
Is this a feature (statement caching?)? We could understand it opening one cursor per procedure, but if we have tens of selects/updates in a procedure, why should it open that many cursors?
I need to set the number of open cursors in Oracle to a big number for this. Will it eat up my resources?
software
9ias containers 9.0.3
oracle 9i database
Thanks
Srinath
...callable statement to pass ...
Are you perhaps explicitly writing code to replace a single tick with two single ticks, and also using the setString() method? If so, each single tick would then become four ticks in the insert, which would be stored as two ticks. And then when you query it out you would have two ticks.
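Separately, on the open-cursors question: before raising the limit, it may help to compare the OPEN_CURSORS parameter against the actual per-session counts. A sketch (assumes SELECT privileges on the v$ views):

```sql
-- Current limit
SELECT value FROM v$parameter WHERE name = 'open_cursors';

-- Cursors currently held open or cached, per session
SELECT sid, COUNT(*) AS cursor_count
  FROM v$open_cursor
 GROUP BY sid
 ORDER BY cursor_count DESC;
```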
-
Unknown Query Engine Error when doing a Verify Database
Hi there,
Using Crystal 10. Sybase Server.
Have built a report that connects to a stored procedure. The stored procedure has since been updated, and new fields have been added.
Usual process for me is to simply:
1. Go to Database > Verify Database
2. Enter login credentials for server
3. Enter parameter values (as per stored proc)
4. Report is then updated and new fields display
Currently, I am able to complete step 3, but as soon as I hit OK, I get an 'Unknown Query Engine Error', to which I can only hit OK. Doing so results in no changes, and the new fields of course do not display.
Any clues on what might be going wrong? I am guessing that it is something to do with the stored proc itself - but perhaps it is some sort of permissions issue?
Grateful for any assistance you can provide. Please let me know if you need any more info from me.
Many thanks,
Corinne
Likely due to some field type being defined incorrectly or not supported. Try creating a new report off the updated connection info. If that fails too, then it's the data source causing the problem.
-
Fetch next cursor/ invalid database interruption
Hi Experts,
I would appreciate your help for the following problem:
I am trying to develop something like this:
OPEN CURSOR ... WITH HOLD
loop at ...
FETCH ... PACKAGE SIZE ... into lt_names
insert table ZZ_XXXX ... from lt_names
commit work
endloop.
In the second loop iteration, the FETCH dumps, raising the runtime error "Invalid interruption of a database selection"
Runtime Error DBIF_RSQL_INVALID_CURSOR
Exception CX_SY_OPEN_SQL_DB
The cursor is opened "with hold".
Does anybody know why it dumps?
Cheers
marmsg
Hi Thomas,
I have four entries in a custom Z-table. In the FETCH statement, I use a PACKAGE SIZE of 2.
Rough code is as follows:
OPEN CURSOR WITH HOLD w_dbcur1 FOR
SELECT * FROM ztable.
WHILE sy-subrc = 0.
FETCH NEXT CURSOR w_dbcur1 INTO TABLE i_notes PACKAGE SIZE p_pack. "Here packet size is of 2
IF sy-subrc = 0.
Calling some X BAPI Function module.
Calling BAPI_TRANSACTION_COMMIT Function module.
ENDIF.
ENDWHILE.
CLOSE CURSOR w_dbcur1.
Here the problem is: if I give a package size of 30,000, the program works fine. If I give a package size of 2, which is less than the 4 entries in the table, the program dumps either at the first execution of the FETCH NEXT CURSOR statement or at the second one. Can you provide an exact solution for this, and not just links, please? Rewards will be provided.
Thanks,
Vijayanand. -
What happens if 100 queries hit the database at one time?
I have a severe problem. I have a Java application with a connection pool of up to 150 connections. Let's say the database gets hit by 100 connections from the application AT A TIME.
My CPU goes to 99%.
Now my question is: how can I get better performance out of this?
Assumptions: all queries are well written, the network is good.
Expected solution: might increasing the SGA size improve performance?
I heard something about doing parallel queries. What are those?
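(For reference, "parallel queries" means letting Oracle split one statement's work across several parallel execution servers, e.g. via the PARALLEL hint. A minimal sketch - the table name and the degree of 4 are made up, and your edition and configuration must support parallel execution:)

```sql
-- Ask the optimizer to scan big_table with up to 4 parallel servers
SELECT /*+ PARALLEL(t, 4) */ COUNT(*)
  FROM big_table t;
```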
Thanks in advance.
I just ran my program for one session, and here is a trace for one session...
Sorry if I am violating forum rules by posting a big log....
TKPROF: Release 10.2.0.1.0 - Production on Thu Dec 21 15:44:04 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: j.txt
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
call fl_trans.validate_city(:1,:2,:3,:4,:5,:6)
call count cpu elapsed disk query current rows
Parse 36 0.00 0.00 0 0 0 0
Execute 36 0.10 0.13 2 526 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 72 0.10 0.13 2 526 0 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 70
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 36 0.00 0.00
SQL*Net message from client 36 0.00 0.04
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece
from
idl_sb4$ where obj#=:1 and part=:2 and version=:3 order by piece#
call count cpu elapsed disk query current rows
Parse 6 0.00 0.00 0 0 0 0
Execute 6 0.00 0.00 0 0 0 0
Fetch 16 0.00 0.00 0 42 0 10
total 28 0.00 0.00 0 42 0 10
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
2 TABLE ACCESS BY INDEX ROWID IDL_SB4$ (cr=6 pr=0 pw=0 time=88 us)
2 INDEX RANGE SCAN I_IDL_SB41 (cr=4 pr=0 pw=0 time=61 us)(object id 117)
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece
from
idl_ub1$ where obj#=:1 and part=:2 and version=:3 order by piece#
call count cpu elapsed disk query current rows
Parse 6 0.00 0.00 0 0 0 0
Execute 6 0.00 0.00 0 0 0 0
Fetch 15 0.00 0.00 0 46 0 11
total 27 0.00 0.00 0 46 0 11
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS BY INDEX ROWID IDL_UB1$ (cr=4 pr=0 pw=0 time=93 us)
1 INDEX RANGE SCAN I_IDL_UB11 (cr=3 pr=0 pw=0 time=57 us)(object id 114)
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece
from
idl_char$ where obj#=:1 and part=:2 and version=:3 order by piece#
call count cpu elapsed disk query current rows
Parse 6 0.00 0.00 0 0 0 0
Execute 6 0.00 0.00 0 0 0 0
Fetch 10 0.00 0.00 0 23 0 4
total 22 0.00 0.00 0 23 0 4
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS BY INDEX ROWID IDL_CHAR$ (cr=4 pr=0 pw=0 time=73 us)
1 INDEX RANGE SCAN I_IDL_CHAR1 (cr=3 pr=0 pw=0 time=50 us)(object id 115)
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece
from
idl_ub2$ where obj#=:1 and part=:2 and version=:3 order by piece#
call count cpu elapsed disk query current rows
Parse 6 0.00 0.00 0 0 0 0
Execute 6 0.00 0.00 0 0 0 0
Fetch 11 0.00 0.00 0 37 0 9
total 23 0.00 0.00 0 37 0 9
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
2 TABLE ACCESS BY INDEX ROWID IDL_UB2$ (cr=5 pr=0 pw=0 time=100 us)
2 INDEX RANGE SCAN I_IDL_UB21 (cr=3 pr=0 pw=0 time=65 us)(object id 116)
SELECT INITCAP(CITY),UPPER(STATE),ZIP
FROM
US_CITIES WHERE (CITY = :B2 AND STATE = :B1 )
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 72 0.01 0.00 0 0 0 0
Fetch 72 0.00 0.02 2 144 0 0
total 144 0.01 0.03 2 144 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 70 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 2 0.02 0.02
SELECT INITCAP(CITY_NAME),UPPER(STATE),ZIP_CODE
FROM
US_ZIP_CODES WHERE (ZIP_CODE = :B1 )
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 72 0.00 0.00 0 0 0 0
Fetch 144 0.00 0.00 0 216 0 72
total 216 0.01 0.01 0 216 0 72
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 70 (recursive depth: 1)
BEGIN fl_trans_test.truck_search_new (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,
:13,:14,:15,:16,:17) ; END;
call count cpu elapsed disk query current rows
Parse 36 0.00 0.00 0 0 0 0
Execute 36 0.07 0.07 0 240 0 36
Fetch 0 0.00 0.00 0 0 0 0
total 72 0.07 0.07 0 240 0 36
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 70
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 36 0.00 0.00
SQL*Net message from client 36 0.00 0.04
SELECT LONGITUDE,LATITUDE
FROM
US_ZIP_CODES WHERE ZIP_CODE = :B1
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 72 0.01 0.00 0 0 0 0
Fetch 72 0.00 0.00 0 216 0 72
total 144 0.01 0.01 0 216 0 72
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 70 (recursive depth: 1)
SELECT A.FROM_CITY, FROM_ST, DEST_CITY, DECODE(DEST_ST, NULL, ' ', DEST_ST),
       EQUIP_TYPE, DECODE(NOTE1, NULL, ' ', NOTE1),
       DECODE(NOTE2, NULL, ' ', NOTE2), FULL_OR_PARTIAL,
       DECODE(LD_REF_NO, NULL, ' ', LD_REF_NO), LD_COUNT, WEIGHT, EQUIP_LENGTH,
       A.COMPANY_NM, PHONE1, PHONE2, STATE, MC_NO, CREDITOR1, CREDITOR2,
       DECODE(:B11, 'Z',
              TO_CHAR(FL_TRANS_TEST.CALC_DISTANCE_BY_LATI_LONG(
                        FROM_ZIP_LONGITUDE, FROM_ZIP_LATITUDE, :B14, :B13)),
              '--') AS F_DIST,
       DECODE(POSTED_BY_ZIP_OR_ST_ZONE, 'N', '--',
              DECODE(:B7, 'Z',
                     TO_CHAR(FL_TRANS_TEST.CALC_DISTANCE_BY_LATI_LONG(
                               FROM_ZIP_LONGITUDE, FROM_ZIP_LATITUDE, :B14, :B13)),
                     '--')) AS T_DIST,
       DECODE(POSTED_BY_ZIP_OR_ST_ZONE, 'N', '--',
              FL_TRANS_TEST.CALC_DISTANCE_BY_LATI_LONG(
                FROM_ZIP_LONGITUDE, FROM_ZIP_LATITUDE,
                DEST_ZIP_LONGITUDE, DEST_ZIP_LATITUDE)) AS DISTANCE,
       AVAILABLE_DATE,
       TO_CHAR(EXEC_TIME_STAMP, 'YYYY-MM-DD HH24:MI:SS') AS TIME_STAMP,
       RATE
FROM   FL_TK_POSTING A
WHERE  A.USER_NAME <> :B15
AND    ( (    A.FROM_ZIP_LONGITUDE BETWEEN :B14 - :B12 AND :B14 + :B12
          AND A.FROM_ZIP_LATITUDE  BETWEEN :B13 - :B12 AND :B13 + :B12
          AND :B11 = 'Z' )
      OR (    UPPER(A.FROM_ST) IN
                ( SELECT UPPER(C.ZONE_STATE)
                  FROM   US_ZONES C
                  WHERE  ZONE_STATE IN (SUBSTR(:B16, 1, 2), SUBSTR(:B16, 3, 2),
                                        SUBSTR(:B16, 5, 2), SUBSTR(:B16, 7, 2),
                                        SUBSTR(:B16, 9, 2), SUBSTR(:B16, 11, 2),
                                        SUBSTR(:B16, 13, 2))
                  OR     ZONE       IN (SUBSTR(:B16, 1, 2), SUBSTR(:B16, 3, 2),
                                        SUBSTR(:B16, 5, 2), SUBSTR(:B16, 7, 2),
                                        SUBSTR(:B16, 9, 2), SUBSTR(:B16, 11, 2),
                                        SUBSTR(:B16, 13, 2)) )
          AND :B11 = 'N' ) )
AND    ( (    A.DEST_ZIP_LONGITUDE BETWEEN :B10 - :B8 AND :B10 + :B8
          AND A.DEST_ZIP_LATITUDE  BETWEEN :B9  - :B8 AND :B9  + :B8
          AND :B7 = 'Z' )
      OR (    :B6 IN
                ( SELECT D.ZONE_STATE
                  FROM   US_ZONES D
                  WHERE  D.ZONE_STATE IN (LOWER(A.DEST_ST), TK_POST_ST1,
                                          TK_POST_ST2, TK_POST_ST3, TK_POST_ST4,
                                          TK_POST_ST5, TK_POST_ST6, TK_POST_ST7,
                                          TK_POST_ST8)
                  OR     ZONE         IN (TK_POST_A0, TK_POST_A1, TK_POST_A2,
                                          TK_POST_A3, TK_POST_A4, TK_POST_A5,
                                          TK_POST_A6, TK_POST_A7, TK_POST_A8) )
          AND POSTED_BY_ZIP_OR_ST_ZONE = 'N' )
      OR 1 >= ( SELECT MIN(ROWNUM)
                FROM   US_ZONES C, US_ZONES D
                WHERE  ( C.ZONE_STATE IN (SUBSTR(:B17, 1, 2), SUBSTR(:B17, 3, 2),
                                          SUBSTR(:B17, 5, 2), SUBSTR(:B17, 7, 2),
                                          SUBSTR(:B17, 9, 2), SUBSTR(:B17, 11, 2),
                                          SUBSTR(:B17, 13, 2))
                      OR C.ZONE       IN (SUBSTR(:B17, 1, 2), SUBSTR(:B17, 3, 2),
                                          SUBSTR(:B17, 5, 2), SUBSTR(:B17, 7, 2),
                                          SUBSTR(:B17, 9, 2), SUBSTR(:B17, 11, 2),
                                          SUBSTR(:B17, 13, 2)) )
                AND    ( D.ZONE_STATE IN (LOWER(A.DEST_ST), TK_POST_ST1,
                                          TK_POST_ST2, TK_POST_ST3, TK_POST_ST4,
                                          TK_POST_ST5, TK_POST_ST6, TK_POST_ST7,
                                          TK_POST_ST8)
                      OR D.ZONE       IN (TK_POST_A0, TK_POST_A1, TK_POST_A2,
                                          TK_POST_A3, TK_POST_A4, TK_POST_A5,
                                          TK_POST_A6, TK_POST_A7, TK_POST_A8) )
                AND    C.ZONE_STATE = D.ZONE_STATE ) )
AND    ( DECODE(:B5, 'A', FULL_OR_PARTIAL) IN ('F', 'P')
      OR FULL_OR_PARTIAL = :B5 )
AND    (    DECODE(LTRIM(RTRIM(:B4)), 'Auto Carrier',      EQUIP_TYPE_INDEX) IN (0)
         OR DECODE(LTRIM(RTRIM(:B4)), 'B - Train',         EQUIP_TYPE_INDEX) IN (1)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Container',         EQUIP_TYPE_INDEX) IN (2)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Double Drop',       EQUIP_TYPE_INDEX) IN (3)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Dump Truck',        EQUIP_TYPE_INDEX) IN (4)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Flatbed Hazmat',    EQUIP_TYPE_INDEX) IN (7)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Reefer Hazmat',     EQUIP_TYPE_INDEX) IN (9)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Step Deck',         EQUIP_TYPE_INDEX) IN (10, 5, 6)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Tanker',            EQUIP_TYPE_INDEX) IN (11)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Van Hazmat',        EQUIP_TYPE_INDEX) IN (14)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Power Only',        EQUIP_TYPE_INDEX) = 17
         OR DECODE(LTRIM(RTRIM(:B4)), 'Van',               EQUIP_TYPE_INDEX) IN (12, 13, 15, 14)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Van Air-Ride',      EQUIP_TYPE_INDEX) IN (12, 13, 15, 14)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Vented Van',        EQUIP_TYPE_INDEX) IN (16, 8, 15)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Reefer',            EQUIP_TYPE_INDEX) IN (15, 8, 9)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Van Or Reefer',     EQUIP_TYPE_INDEX) IN (12, 16, 8, 15, 14, 9)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Flatbed',           EQUIP_TYPE_INDEX) IN (5, 6, 10, 7)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Flatbed Air-Ride',  EQUIP_TYPE_INDEX) IN (5, 6, 7)
         OR DECODE(LTRIM(RTRIM(:B4)), 'Other (see Notes)', EQUIP_TYPE_INDEX) = 18 )
AND    EQUIP_LENGTH <= :B3
AND    WEIGHT <= :B2
AND    ROWNUM < 201
AND    A.USR_POSTING_POS > -2
AND    EXEC_TIME_STAMP >= SYSDATE - ((.041667) * :B1)
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       36      0.00       0.00          0          0          0           0
Execute     36      0.03       0.03          0          0          0           0
Fetch       36      0.49       0.48          0       3469          0          72
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      108      0.53       0.52          0       3469          0          72
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 70
Rows     Row Source Operation
-------  ---------------------------------------------------
      2  COUNT STOPKEY (cr=97 pr=0 pw=0 time=8440 us)
      2   FILTER (cr=97 pr=0 pw=0 time=8424 us)
   1014    TABLE ACCESS BY INDEX ROWID FL_TK_POSTING (cr=50 pr=0 pw=0 time=15320 us)
   1296     INDEX RANGE SCAN IX_TK_POST_EXEC_TIME_STAMP (cr=5 pr=0 pw=0 time=3962 us)(object id 81233)
      0    INDEX FULL SCAN PK_US_ZONES (cr=0 pr=0 pw=0 time=0 us)(object id 81106)
      1    INDEX RANGE SCAN PK_US_ZONES (cr=16 pr=0 pw=0 time=411 us)(object id 81106)
     15    SORT AGGREGATE (cr=31 pr=0 pw=0 time=2277 us)
      0     COUNT (cr=31 pr=0 pw=0 time=2042 us)
      0      NESTED LOOPS (cr=31 pr=0 pw=0 time=1872 us)
     30       INDEX FULL SCAN PK_US_ZONES (cr=15 pr=0 pw=0 time=679 us)(object id 81106)
      0       INDEX RANGE SCAN PK_US_ZONES (cr=16 pr=0 pw=0 time=661 us)(object id 81106)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      36        0.00          0.00
  SQL*Net message from client                    35        4.86        170.13
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      108      0.01       0.01          0          0          0           0
Execute    108      0.21       0.23          2        766          0          36
Fetch       36      0.49       0.48          0       3469          0          72
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      252      0.72       0.74          2       4235          0         108
Misses in library cache during parse: 2
Misses in library cache during execute: 2
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      72        0.00          0.00
  SQL*Net message from client                   107        4.86        170.22
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       24      0.01       0.01          0          0          0           0
Execute    240      0.05       0.03          0          0          0           0
Fetch      340      0.01       0.03          2        724          0         178
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      604      0.07       0.08          2        724          0         178
Misses in library cache during parse: 4
Misses in library cache during execute: 7
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                         2        0.02          0.02
  SQL*Net message to client                      36        0.00          0.00
114 user SQL statements in session.
24 internal SQL statements in session.
138 SQL statements in session.
Trace file: j.txt
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
114 user SQL statements in trace file.
24 internal SQL statements in trace file.
138 SQL statements in trace file.
10 unique SQL statements in trace file.
2164 lines in trace file.
170 elapsed seconds in trace file.
Ref cursors in Database adapter
Hi,
Is the database adapter capable of handling ref cursors as the datatype of the output parameters of a PL/SQL procedure?
For instance, if I have the following in my package spec:
TYPE SomeRecordType IS RECORD
( record_pk           mut_table.record_pk%TYPE
, person_nr           person_table.person_nr%TYPE
, field_1             mut_table.field_1%TYPE
, field_2             mut_table.field_2%TYPE
, field_3             mut_table.field_3%TYPE
);

TYPE SomeCursorType IS REF CURSOR RETURN SomeRecordType;

PROCEDURE read_records
( cursor_out          OUT SomeCursorType
, exception_code      OUT NUMBER
, exception_message   OUT VARCHAR2
);
Can the database adapter call the read_records procedure?
I've never seen this in any doc. I know it can't handle record types (at least in 10.1.2 it couldn't, as far as I know), so I figure the above is not possible.
Thanks in advance.
Regards,
Martien

We have successfully used a SYS_REFCURSOR OUT parameter on a database procedure call, which a DBAdapter then uses to return a single result set.
At the time I remember attempting to use a strongly typed ref cursor as the parameter, which I think is what you are attempting to do. I rejected that approach because I could not get it to work; in our case the fix was as simple as using the system-defined, weakly typed ref cursor type instead.
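To make that concrete, here is roughly what the weakly typed variant would look like against the spec in the question. This is a sketch only: the package name mut_api and the join between mut_table and person_table are made up, and the error-handling convention simply mirrors the OUT parameters in the original spec.

```sql
CREATE OR REPLACE PACKAGE mut_api AS
  -- Weakly typed: SYS_REFCURSOR instead of the strongly typed SomeCursorType
  PROCEDURE read_records
  ( cursor_out        OUT SYS_REFCURSOR
  , exception_code    OUT NUMBER
  , exception_message OUT VARCHAR2
  );
END mut_api;
/

CREATE OR REPLACE PACKAGE BODY mut_api AS
  PROCEDURE read_records
  ( cursor_out        OUT SYS_REFCURSOR
  , exception_code    OUT NUMBER
  , exception_message OUT VARCHAR2
  ) IS
  BEGIN
    -- The adapter fetches from this cursor after the call returns
    OPEN cursor_out FOR
      SELECT m.record_pk, p.person_nr, m.field_1, m.field_2, m.field_3
      FROM   mut_table m
      JOIN   person_table p ON p.record_pk = m.record_pk;  -- join condition assumed
    exception_code    := 0;
    exception_message := NULL;
  EXCEPTION
    WHEN OTHERS THEN
      exception_code    := SQLCODE;
      exception_message := SUBSTR(SQLERRM, 1, 4000);
  END read_records;
END mut_api;
/
```

Because the cursor is weakly typed, the column list in the OPEN ... FOR query is what determines the row shape the adapter sees.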
The handling of the returned result set was not immediately obvious, but it can be done with a fairly simple XSL transformation into a locally defined variable of the requisite XML structure. I won't describe it in detail, as it is specific to our process; suffice to say the transformation loops over all result rows, assigning each value, via a test, to the correct field of our local variable.
e.g.
<xsl:template match="/">
<ns1:BatchRequest004>
<xsl:for-each select="/db:OutputParameters/db:P_SCHSHP_REF_CUR/db:Row">
<ns1:statusRqst>
<xsl:if test='db:Column/@name = "ID"'>
<xsl:attribute name="id">
<xsl:value-of select='db:Column[@name = "ID"]'/>
</xsl:attribute>
</xsl:if>
</ns1:statusRqst>
</xsl:for-each>
</ns1:BatchRequest004>
</xsl:template>
HTH, and I hope I haven't misidentified your problem. -
Return ref cursor from database link/stored proc? do-able?
Is it possible to return a REF CURSOR from a stored procedure that is called from a remote database via a database link?
We are trying it from Pro*Cobol on MVS (which reaches the AIX database through a db link) and get a 00993 error, which seems to say this is not possible.
Before I give up, I thought I would check with the experts....
Thanks in advance.

You can't return Java objects as stored procedure or query results.
Douglas
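For the record: as far as I know, Oracle does not support passing a cursor variable across a database link at all, so the usual workaround is to keep the ref cursor local and reach through the link inside its query instead. A rough sketch, with made-up object and link names (orders is the remote table, aix_link the database link):

```sql
-- The link is crossed only by the underlying SELECT, never by the cursor itself.
CREATE OR REPLACE VIEW remote_orders_v AS
  SELECT * FROM orders@aix_link;   -- hypothetical remote table and link name

CREATE OR REPLACE PROCEDURE get_remote_orders
( p_cur OUT SYS_REFCURSOR
) IS
BEGIN
  -- The cursor is opened on the local database, so the caller gets an
  -- ordinary local ref cursor and no cursor variable crosses the link.
  OPEN p_cur FOR SELECT * FROM remote_orders_v;
END get_remote_orders;
/
```

The same idea works without the view by putting the @aix_link reference directly in the OPEN ... FOR query; the view just keeps the link name out of the procedure.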