Query Optimisation
I am facing a strange problem with query performance.
If I run the query through PL/SQL:
Select a.*,b.*
into ..........
from a,b
where a.key between m_min and m_max
and a.col_1 = b.key
This takes a long time to execute. But if I replace m_min and m_max with actual values, it is very fast.
For example:
Select a.*,b.*
into ..........
from a,b
where a.key between 100 and 100
and a.col_1 = b.key
Has anybody faced such a problem? Let me know what the cause could be.
Check whether your database is running in CHOOSE optimizer mode:
sqlplus> show parameter optimizer_mode
CHOOSE
If so, did you analyze one of the tables?
sqlplus> select num_rows from user_tables where table_name='A'
If this returns a value other than NULL, your table has been analyzed.
If so, please make sure that you periodically analyze your tables again (especially after a batch upload):
sqlplus> analyze table a compute statistics;
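(Editorial aside, not from the thread: on newer Oracle versions the documented way to gather optimizer statistics is DBMS_STATS rather than ANALYZE; a minimal sketch, taking the schema from the current user:)
BEGIN
  -- Gather fresh optimizer statistics for table A; rerun after each batch upload.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,    -- current schema
    tabname => 'A',
    cascade => TRUE);   -- also gather statistics on A's indexes
END;
/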
Similar Messages
-
Hi
I have a problem regarding query optimisation, and in particular one query. The query is for a view, and when I ran the cost on it, there were full table scans and some costs were quite high:
Select a.name, b.id, c.date
from table a,
table b,
table c
where a.id = b.id(+) and
decode(a.version, null, a.num, a.version) = b.version(+) and
a.id = c.id and
b.type is null
My question is whether this query can be made more efficient by removing the outer joins. I was thinking this could be carried out by some union or intersect query with an outer select. Is this possible? If so, what would be the best alternative query?
Thanks
Hi,
Is b.type a NOT NULL column? Is the reason you have "b.type is null" to exclude the records that are present in both table A and table B? In that case, try this and see if it gives you the same result:
select a.name,
a.id,
c.date
from table1 a, table3 c
where a.id = c.id
and not exists
(select 1
from table2 b
where a.id = b.id
and nvl(a.version, a.num) = b.version)
Make sure you have gathered statistics on the tables.
And, as sb92075 said above, always mark a thread as answered once you get the answer. You need to help the forum as well, rather than just taking help from it.
G. -
Can someone explain this crazy query optimisation?
A software company has me trialling a product that has a query optimiser. I can't for the life of me explain what is going on below and would like some help from someone with a bit more SQL experience. I have a query I've been struggling to bring down the time on:
CREATE OR REPLACE VIEW PLATE_STATS_DATA_VIEW AS
SELECT P.Folder_ID, P.expt_or_control_ID, P.Plate_Type, P.Dose_Weight, P.Volume, P.Strain_Code, P.S9_Plus,
P.type_Name as Contents, P.Replicate_ID,
P.Number_Of_Plates, round(avg(P.count)) as mean_count,
min(P.count) as min_count, max(P.count) as max_count, count(P.count) as Plates_Counted
FROM expt_folder_plates P, History_Control_Log L
WHERE P.expt_or_control_ID = L.Control_ID
AND P.Strain_Code = L.Strain_Code
AND P.Plate_Type = L.Type_Code
AND P.S9_Plus = L.S9_Plus
AND L.control_Included > 0
GROUP BY P.Folder_ID, P.expt_or_control_ID, P.Plate_Type, P.Dose_Weight, P.Volume, P.Strain_Code,
P.S9_Plus, P.type_Name, P.Replicate_ID, P.Number_Of_Plates
It took 20 seconds on my large test database, so I put it through the optimiser. It took it down to 0.1 seconds simply by changing 'WHERE P.expt_or_control_ID = L.Control_ID' to 'WHERE P.expt_or_control_ID = L.Control_ID + 0'.
I have no idea why this would make any difference - adding zero to a value?! Can anyone enlighten me?
Many thanks,
Gary
Message was edited by:
GaryKyle
Ahhh, thanks guys. I'm a bit of a beginner here. This is my first look at explain plans; I just had to work out how to see them! I think I understand what is happening now: it looks like, with the index, it does the GROUP BY first on all the data, and this takes a large amount of time. Am I right?
Before +0:
SELECT STATEMENT, GOAL = ALL_ROWS Cost=162787 Cardinality=1380965 Bytes=328669670
SORT GROUP BY Cost=162787 Cardinality=1380965 Bytes=328669670
HASH JOIN Cost=16773 Cardinality=1380965 Bytes=328669670
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_DETAILS Cost=29 Cardinality=4038 Bytes=387648
HASH JOIN Cost=16730 Cardinality=1380965 Bytes=196097030
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=AMES_PLATE_TYPES Cost=2 Cardinality=6 Bytes=192
HASH JOIN Cost=16715 Cardinality=1380965 Bytes=151906150
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=HISTORY_CONTROL_LOG Cost=2 Cardinality=40 Bytes=880
HASH JOIN Cost=16694 Cardinality=2002400 Bytes=176211200
HASH JOIN Cost=59 Cardinality=8076 Bytes=282660
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_SOLVENTS Cost=2 Cardinality=3 Bytes=51
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=CONTROLS Cost=56 Cardinality=8078 Bytes=145404
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_PLATES Cost=16584 Cardinality=5499657 Bytes=291481821
After +0:
SELECT STATEMENT, GOAL = ALL_ROWS Cost=1655 Cardinality=138 Bytes=45954
HASH JOIN Cost=1655 Cardinality=138 Bytes=45954
HASH JOIN Cost=1625 Cardinality=138 Bytes=33672
HASH JOIN Cost=1569 Cardinality=414 Bytes=96462
MERGE JOIN CARTESIAN Cost=4 Cardinality=18 Bytes=630
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_SOLVENTS Cost=2 Cardinality=3 Bytes=30
BUFFER SORT Cost=2 Cardinality=6 Bytes=150
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=AMES_PLATE_TYPES Cost=1 Cardinality=6 Bytes=150
VIEW Object owner=PI_AMES_BIG Object name=TEST_PLATE_STATS Cost=1564 Cardinality=138 Bytes=27324
SORT GROUP BY Cost=1564 Cardinality=138 Bytes=10350
TABLE ACCESS BY INDEX ROWID Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_PLATES Cost=39 Cardinality=3 Bytes=159
NESTED LOOPS Cost=1563 Cardinality=138 Bytes=10350
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=HISTORY_CONTROL_LOG Cost=2 Cardinality=40 Bytes=880
INDEX RANGE SCAN Object owner=PI_AMES_BIG Object name=EXPT_CONTROL_ID_INDEX Cost=5 Cardinality=248
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=CONTROLS Cost=56 Cardinality=8078 Bytes=88858
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_DETAILS Cost=29 Cardinality=4038 Bytes=359382
Thanks again,
Gary
P.S. looks like the explain plan's made the post horribly wide again ;) sorry. I'll keep it this way though, otherwise the plan is hard to read.
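(Editorial note, not part of the thread: a common explanation for the + 0 trick is that applying any expression to a column, such as adding 0 to a number or concatenating '' to a string, stops the optimizer from using an index on that column for the predicate, which pushes it towards a different join order, as the two plans above show. The same intent can usually be stated more explicitly with hints; the alias and index name below come from the plans, but the hint choice is illustrative:)
-- The rewrite the optimiser tool applied:
WHERE P.expt_or_control_ID = L.Control_ID + 0
-- A more explicit way of steering towards the same plan:
SELECT /*+ LEADING(L) USE_NL(P) INDEX(P EXPT_CONTROL_ID_INDEX) */ ...
FROM expt_folder_plates P, History_Control_Log L
WHERE P.expt_or_control_ID = L.Control_ID
-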
Hi.
Is there any way to optimise the code below? Or can I write a join query for this? If yes, then how?
SELECT land1
zone1
FROM tzone
INTO CORRESPONDING FIELDS OF TABLE t_land
FOR ALL ENTRIES IN int_delivery
WHERE zone1 = int_delivery-zone1.
IF sy-subrc = 0.
SELECT land1
landx
FROM t005t
INTO CORRESPONDING FIELDS OF TABLE t_landx
FOR ALL ENTRIES IN t_land
WHERE land1 = t_land-land1.
ENDIF.
Thanks
Any help will not go unappreciated.
Hi,
How to optimize?
1. Avoid INTO CORRESPONDING FIELDS; create an internal table with the required fields only.
2. Use a single join instead of the two selects, for example:
select tzone~land1 tzone~zone1 t005t~landx
from tzone
join t005t
on tzone~land1 = t005t~land1
into table t_landx
for all entries in int_delivery
where
tzone~zone1 = int_delivery-zone1
AND spras = sy-langu.
Regards
Anver -
Need help in query optimisation.
Hi,
A small doubt about increasing query performance.
I have a join query operating on multiple tables. The tables contain millions of records. Will it improve performance if the columns used in the joins are sorted? I am using these joins in DW mappings.
Thanks for any suggestions.
Ashish
Here are some things you can look at:
The best way to optimise a join is to ensure that each join involves only a few tables.
Make sure that the smallest result set is created first (either use hints or rearrange the order of the tables in the join clause).
Ensure that the join columns are indexed and that the query does not inadvertently prevent the use of indexes by using functions such as "upper".
You must also be aware that the creation of new indexes may affect the performance of unrelated queries and that de-normalising tables is a trade off between faster queries and slower updates (and reduced flexibility).
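For illustration (an editorial sketch; the table names and hint usage are hypothetical):
-- Put the smallest result set first and fix the join order with a hint:
SELECT /*+ ORDERED */ f.amount, d.dim_name
FROM small_dim d, big_fact f          -- d filters down to a few rows first
WHERE d.dim_code = 'X'
AND f.dim_key = d.dim_key;            -- indexed join column on the big table
-- And avoid wrapping indexed columns in functions; a predicate such as
--   WHERE upper(d.dim_code) = 'X'
-- cannot use a plain index on dim_code (a function-based index would be needed).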
Better to use EXPLAIN PLAN to analyse how your query behaves. -
Where-clause with the in-keyword: query optimisation.
Hi, I have the following question.
I have a query, e.g.
select x, y from tableA where y in (a_1, a_2, ..., a_n)
I have tested it with n = 1...5:
1. 5-6 ms
2. 5-6 ms
3. 150 - 200 ms
4. 150 - 200 ms
5. 180 - 250 ms
There is a gap between n = 2 and n = 3. According to the execution plan, Oracle does a full table access if n >= 3. If n < 3, an index is used. Is it possible to ensure that the index is always used?
Additionally I have test the equivalent query:
select x, y from tableA where y = a_1 or y = a_2 or ... or y = a_n
It shows the same behaviour.
Hi,
I wouldn't say that the optimizer is wrong here. I would look at some more inputs.
How many values of y do you have in the table?
please post the result of:
select count(*),y
from tablea
group by y;
to see how many values of each y-value exists in the table.
For example, if there are only the values 1..5 for y in the table, a full table scan is the fastest answer for your query, because all the values have to be read!
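(An editorial sketch: if, after checking the distribution, the index really is the better access path for a given query, it can be forced with a hint; the index name here is hypothetical.)
SELECT /*+ INDEX(t ix_tablea_y) */ x, y
FROM tableA t
WHERE y IN (a_1, a_2, a_3);
-- But if the IN-list matches a large fraction of the rows, the optimizer's
-- full table scan is usually the faster plan anyway.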
Regards
Udo -
Hi all,
our task is to develop a new application based on the data
model of an old application. As the old application must remain
available and running, no changes are possible to tune the
database! (currently running on 9.2.0.7)
We've put together this example code describing the problem:
-------------------- snip --------------------
-- clean up last run
drop table shares;
drop table files;
drop table families;
-- families table
create table families (
family_id integer,
primary key (family_id)
);
-- files table: each file belongs to one family
create table files (
file_id integer,
family_id integer not null,
primary key (file_id),
foreign key (family_id) references families (family_id)
);
-- shares table: shares can be defined on family basis or on file basis.
-- If defined on family and file, only the file shares are relevant!
create table shares (
file_id integer,
family_id integer not null,
person varchar2(100) not null,
percentage number(4,2) not null,
unique (file_id, family_id, person, percentage),
foreign key (file_id) references files (file_id),
foreign key (family_id) references families (family_id)
);
-- insert one family and two files with shares for demonstration purposes
insert into families values (1);
insert into files values (1,1);
insert into files values (2,1);
insert into shares values (null, 1, 'Thomas',33.4);
insert into shares values (null, 1, 'Marcus',66.6);
insert into shares values (2, 1, 'Thomas',50);
insert into shares values (2, 1, 'Marcus',50);
commit;
-- Goal: determine the shares for each file
-- good query:
select s.family_id, s.file_id, s.person, s.percentage
from shares s
where s.file_id is not null
union
select p.family_id, p.file_id, s.person, s.percentage
from shares s, files p
where p.family_id = p.family_id
and s.file_id is null
and not exists (select 1
from shares p2
where p2.file_id = p.file_id);
-- poor query:
select distinct p.family_id, p.file_id, s.person, s.percentage
from files p, shares s
where p.family_id = s.family_id
and (s.file_id is null or s.file_id = p.file_id)
order by p.family_id, p.file_id;
-------------------- /snip --------------------
Now here's the hook!
Amount of data (rows)
- families: 116.000
- files: 633.000
- shares: 423.000
- avg persons per family: 2.1
We were using the queries above in views to hide the complexity from our newly developed application.
A test case containing the view and several other tables performed
like this:
- execution time when view used the poor query: ~ 0.3 seconds
- execution time when view used the good query: ~ 37 seconds
We know the design has its flaws, but it evolved with time and proved to be useful for the old application.
So my question is: does anyone have a clue how we can combine
performance and correctness within one statement? Any comment
is appreciated!
I'm with shoblock. If the "good" query returns the wrong results, then it's unusable and not really good.
I'm afraid you're stuck with tuning the long-running query which returns the right results. -
I have two tables Table1 and Table 2 with exactly the same structure.
The Table 1 accumulates the data from 00:00 hours at midnight until 6:30 AM and as and when the data is inserted, a flag (Record Status =0) is set for the inserted records.
At 6:30 AM, I have a procedure which does the following:
1) Updates the flag to 1 (i.e. Record Status = 1)
2) Selects those which were updated and dumps them into table2 for processing
3) Deletes those updated records from table1
There are as many as 2,400,000 rows in the first run at 6:30, and the above process alone takes nearly 1:30 hours.
Is there any way the process can be made faster?
I have already tried locking table1 in exclusive mode and dumping the data into table2. Although this is faster than the previous procedure, the application which is inserting the data into table1 hangs while table1 is locked and times out frequently.
There are no primary keys in either table, as there can be duplicate rows.
Any help is appreciated.
Yes, because I should delete only those records which I have selected. By the time I insert into table 2, additional records would have been inserted with flag 0.
According to the initial post, Table 1 accumulates data from midnight to 6:30, and the processing starts at 6:30 - How can more data get into Table1? Or is it the case where data is continually inserted into Table 1 and the processing/move to Table 2 starts at 6:30?
Is there a time stamp on Table 1 which denotes when the data was inserted? If there is you could use that to determine the records inserted between midnight and 6:30.
You could bulk collect the rowids then use that to process and delete (but that may not be an option depending on the machine resources and the data volumes).
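(An editorial sketch of the bulk-collect idea above; it assumes the flag column is called record_status and that both tables have identical column lists, and the batch size and commit placement would need tuning for real volumes.)
DECLARE
  CURSOR c IS
    SELECT rowid FROM table1 WHERE record_status = 0;
  TYPE rid_tab IS TABLE OF ROWID;
  l_rids rid_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rids LIMIT 10000;  -- work in batches
    EXIT WHEN l_rids.COUNT = 0;
    -- Copy exactly the rows we captured...
    FORALL i IN 1 .. l_rids.COUNT
      INSERT INTO table2
      SELECT * FROM table1 WHERE rowid = l_rids(i);
    -- ...and delete only those same rows, even if new flag-0 rows
    -- have arrived in the meantime.
    FORALL i IN 1 .. l_rids.COUNT
      DELETE FROM table1 WHERE rowid = l_rids(i);
    COMMIT;
  END LOOP;
  CLOSE c;
END;
/
-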
Table variable query optimisation
I have a table variable which (sometimes) needs to contain a couple of thousand rows; this skews the execution plan. This is fixed by DBCC TRACEON(2453), which causes the rows in the table variable to be estimated properly, but this requires sysadmin rights, which I am reluctant to grant beyond this stored proc. The proc is executed by most users with SQL authentication. I've tried with EXECUTE AS 'NT AUTHORITY\SYSTEM', but that does not seem to be recognized as I'm using SQL authentication. I can't use a temp table, as this breaks the ASP.NET LINQ which is used to wrap the stored procedure.
Suggestions please? Is there another way to tell the proc to count the rows in a table variable? Is there a tightly scripted way to grant sysadmin rights to just this stored proc across a number of databases and servers?
Hi Erland, I'm feeling particularly dumb at the moment... I read your article and each step seems clear, so I then tried to put together a script to create a certificate, create a login, create a (SQL) user, create a proc to run as that user, and test. My dismal attempt is below.
USE master
go
-- Create a test login and test database
CREATE LOGIN testuser WITH PASSWORD = 'CeRT=0=TeST'
CREATE DATABASE certtest
go
-- Move to the test database.
USE certtest
go
-- Create the test user.
CREATE USER testuser
go
-- Create the test table and add some data.
CREATE TABLE testtbl (a int NOT NULL,
b int NOT NULL)
INSERT testtbl (a, b) VALUES (47, 11)
go
-- Create the certificate.
CREATE CERTIFICATE TraceOnCert
ENCRYPTION BY PASSWORD = 'All you need is love'
WITH SUBJECT = 'Certificate for TraceOnProc',
START_DATE = '20020101', EXPIRY_DATE = '20200101'
go
-- Create the login
CREATE LOGIN TraceOnLogin WITH PASSWORD = 'CeRT=0=TeST'
GO
--create user link to login
IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = N'TraceOnUser')
CREATE USER TraceOnUser FOR LOGIN TraceOnLogin
--give login sysadmin rights
EXEC master..sp_addsrvrolemember @loginame = N'TraceOnLogin', @rolename = N'sysadmin'
go
GRANT SELECT ON testtbl TO TraceOnUser
go
-- Create two test stored procedures, and grant permission.
CREATE PROCEDURE unsigned_sp AS
DBCC TRACEON(2453)
SELECT SYSTEM_USER, USER, name, type, usage FROM sys.user_token
EXEC ('SELECT a, b FROM testtbl')
DBCC TRACEOFF(2453)
go
CREATE PROCEDURE TraceOnProc WITH EXECUTE AS 'TraceOnUser' AS
DBCC TRACEON(2453)
SELECT SYSTEM_USER, USER, name, type, usage FROM sys.user_token
EXEC ('SELECT a, b FROM testtbl')
DBCC TRACEOFF(2453)
-- EXEC unsigned_sp
go
GRANT EXECUTE ON TraceOnProc TO public
GRANT EXECUTE ON unsigned_sp TO public
go
go
-- Run as the test user, to actually see that this works.
EXECUTE AS USER = 'testuser'
go
-- First run the unsigned procedure. This gives a permission error.
EXEC unsigned_sp
go
-- Then run the signed procedure. Now we get the data back.
EXEC TraceOnProc
go
-- Become ourselves again.
REVERT
go
-- Clean up
USE master
DROP DATABASE certtest
DROP LOGIN testuser
DROP LOGIN TraceOnLogin
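(Editorial note, not from the thread: the script above grants sysadmin to an ordinary password login but never signs the procedure, so the elevated rights do not travel with the proc. A minimal sketch of the certificate-signing approach, reusing the names from the script; the file path is a placeholder:)
USE certtest
go
-- Sign the procedure with the certificate's private key.
ADD SIGNATURE TO TraceOnProc BY CERTIFICATE TraceOnCert
    WITH PASSWORD = 'All you need is love'
go
-- Export the certificate (public part) and recreate it in master.
BACKUP CERTIFICATE TraceOnCert TO FILE = 'C:\temp\TraceOnCert.cer'
go
USE master
go
CREATE CERTIFICATE TraceOnCert FROM FILE = 'C:\temp\TraceOnCert.cer'
go
-- A certificate login cannot actually log in, so granting it sysadmin
-- only empowers code signed with the certificate.
CREATE LOGIN TraceOnCertLogin FROM CERTIFICATE TraceOnCert
EXEC sp_addsrvrolemember @loginame = N'TraceOnCertLogin', @rolename = N'sysadmin'
go
-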
How to determine a sql query size to display a progress bar
I would like to show the progress of an SQL query within a JSP page.
Background:
I have a reporting web application, where over 500 contacts can run reports based on different criteria such as date range....
I currently display a message stating 'executing query, please wait'; however, the users (hate users) do not seem to wait, therefore they decide to run the query all over again, which affects my reporting server query size (eventually this crashes, stopping all reports).
Problem:
The progress bar is not a problem; how would I determine the size of the query at runtime, therefore adding the time onto my progress bar?
Yes, it's doable (we do it) but it sure ain't easy.
We've got about 23,500,000 features (and counting) in a geodata database. Precise spatial selection algorithms are expensive. Really expensive.
We cannot impose arbitrary limits on search criteria. If the client requires the whole database we are contractually obligated to provide it...
For online searches we use statistics to approximate the number of features which a given query is likely to return... more or less the same way that the query optimiser behind any half-decent (not mysql (5 at least)) database management system does.
We have a batch job which records how many features are linked to each distinct value of each search criterion... we just do the calculations (presuming a normal (flat) distribution) and...
... if the answer is more than 100,000, we inform the user that the request must be "batched", and give them a form to fill out confirming their contact details. We run the extract overnight and send the user an email containing a link which allows them to download the result the next morning.
... if the answer is more than a million features, we inform the user that the request must be batched over the weekend... same deal as above, except we do it over the weekend to (a) discourage this; and (b) (the official version) ensure we have enough time to run the extract without impinging upon the maintenance window.
... if the answer is more than 5 million, we display our brilliant "subscribe to our DVD service to get the latest version of the whole shebang every three months (or so), so you can kill your own bloody server with these ridiculous searches" form.
Edited by: corlettk on Dec 5, 2007 11:12 AM
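(An editorial aside: if maintaining your own per-criterion counts is not an option, the optimiser's own estimate can serve as a rough size gauge without running the query; in Oracle it can be read from PLAN_TABLE. The query below is a placeholder.)
EXPLAIN PLAN FOR
  SELECT * FROM features WHERE feature_type = 'ROAD';

SELECT cardinality AS estimated_rows
FROM plan_table
WHERE id = 0;   -- id 0 is the root (SELECT STATEMENT) row of the plan
-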
Can PQO be used in DML statements to speed up multi-dimensional queries?
We have
limit dim1 to var1 ne na
where dim1 is a dimension.
var1 is a variable dimensioned by dim1.
This is pretty much like an SQL table scan to find the records. Is there a PQO (parallel query option)-like option in DML to speed up multi-dimensional queries?
This is one of the beauties of the OLAP Option: all the query optimisation is managed by the engine itself. It resolves the best way to get you the result set. If you have partitioned your cube, the query engine will perform the same way as the relational query engine and employ partition elimination.
Where things can slow down is where you use compression, since to answer a question such as "is this cell NA?" all rows in the cube need to be uncompressed before the query can be answered and the result set generated. Compression technology works best (i.e. is least intrusive in terms of affecting query performance) where you have a very fast CPU. However, the overall benefits of compression (smaller cubes) nearly always outweigh the impact of having to uncompress data to answer a specific question. Usually the impact is minimal if you partition your cube, since the uncompress function only needs to work on a specific partition.
In summary, the answer to your question is "No", because the OLAP engine automatically optimises the allocation of resources to return the result-set as fast as possible.
Is there a specific problem you are experiencing with this query?
Keith Laker
Oracle EMEA Consulting
OLAP Blog: http://oracleOLAP.blogspot.com/
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
DM Blog: http://oracledmt.blogspot.com/
OWB Blog : http://blogs.oracle.com/warehousebuilder/
OWB Wiki : http://wiki.oracle.com/page/Oracle+Warehouse+Builder
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html -
Approach for downloading bulk records
Hi,
We have a join query operating on Oracle tables along with Oracle views (there are around 3 tables and 4-5 views). The total number of records is around 10E+10. Our problem is performance: the query keeps executing for more than 2 hours.
On top of that, our application is required to work online and not as part of a batch job.
Could anyone help us with a suitable approach to tackle this? It could be administration changes or query optimising techniques. Any help would be appreciated.
Thanks a lot :)
I would also look at what is in the views you are accessing in the query. It is often much more efficient to write the query from scratch than it is to use existing views. In many cases, the existing views contain columns that are not required in your main query, or joins to other tables that are not really required for your main query. You can usually eliminate some extra overhead by rewriting the query using only base tables.
John
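An editorial illustration of the point above; the view and table names are hypothetical:
-- A general-purpose view drags in joins the query does not need:
--   CREATE VIEW order_details_v AS
--     SELECT o.order_id, o.order_date, c.customer_name,
--            p.product_name, r.region_name
--     FROM orders o, customers c, products p, regions r
--     WHERE o.customer_id = c.customer_id
--       AND o.product_id = p.product_id
--       AND c.region_id = r.region_id;
-- Rewritten against base tables, only the required join remains:
SELECT c.customer_name, COUNT(*)
FROM orders o, customers c
WHERE o.customer_id = c.customer_id
AND o.order_date > DATE '2007-01-01'
GROUP BY c.customer_name;
-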
What is exactly STATISTICS in SQL Server
hi all,
What exactly is STATISTICS in the SQL Server query optimiser?
Thanks
Selva
Some good content with a proper example can help you for sure.
Link:
http://blog.idera.com/sql-server/understanding-sql-server-statistics/
Some part of text may give you idea
If there’s an upcoming election and you are running for office and getting ready to go from town to town city to city with your flyers, you will want to know approximately how many flyers you’re going to bring.
If you’re the coach of a sports team, you will want to know your players’ stats before you decide who to play when, and against who. You will often play a matchup game, even if you have 20 players, you might be allowed to play just 5 at a time, and you will
want to know which of your players will best match up to the other team’s roster. And you don’t want to interview them one by one at game time (table scan), you want to know, based on their statistics, who your best bets are.
Just like the election candidate or the sports coach, SQL Server tries to use statistics to “react intelligently” in its query optimization. Knowing number of records, density of pages, histogram, or available indexes help the SQL Server optimizer “guess”
more accurately how it can best retrieve data. A common misnomer is that if you have indexes, SQL Server will use those indexes to retrieve records in your query.
Not necessarily. If you create, let’s say, an index to a column City and <90% of the values are ‘Vancouver’, SQL Server will most likely opt for a table scan instead of using the index if it knows these stats......
Santosh Singh
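(An editorial sketch of inspecting and refreshing SQL Server statistics; the table and index names are hypothetical:)
-- Show the histogram and density behind one index:
DBCC SHOW_STATISTICS ('dbo.Customers', 'IX_Customers_City');
-- Refresh statistics for one table with a full scan:
UPDATE STATISTICS dbo.Customers WITH FULLSCAN;
-- Or refresh all statistics in the database:
EXEC sp_updatestats;
-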
I need help in "HOW TO EXTEND THE DATA TYPE SYSTEM"......
Hi All,
I need to add a new data type, like SDO_GEOMETRY in Oracle Spatial.
I don't mean a user-defined datatype, but a new datatype embedded in the database.
I read the Data Cartridge Developer's Guide, but it only has info on how to use UDTs more efficiently through indexes.
Anybody used the “Type System” under Databases and Extensibility Services?
It might be written in C++ or Java but I do not know how.
Any hints and help will be appreciated.
Thanks,
> In Figure 1-1 Oracle Services, there is a "Type System" among the Oracle Services.
The "Type System" is merely a compartmentalisation of the data type handling and implementation code/modules in Oracle.
This covers all types. Oracle native types, user defined types (UDTs), etc.
Saying that you want to create a new type in the "Type System" is basically saying you want to use the CREATE TYPE SQL command in Oracle.
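For illustration (an editorial sketch, not from the reply): a minimal user-defined type with a member function, which is the starting point before any Data Cartridge extensions:
CREATE TYPE point_t AS OBJECT (
  x NUMBER,
  y NUMBER,
  MEMBER FUNCTION dist RETURN NUMBER  -- distance from the origin
);
/
CREATE TYPE BODY point_t AS
  MEMBER FUNCTION dist RETURN NUMBER IS
  BEGIN
    RETURN SQRT(x * x + y * y);
  END;
END;
/
-- The new type can then be used as a column type and queried in SQL:
--   CREATE TABLE places (name VARCHAR2(30), loc point_t);
--   SELECT p.name, p.loc.dist() FROM places p;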
> So, I want new_type1 and new_type2 behave like other built-in types, just the way
SDO_GEOMETRY works in Oracle Spatial Cartridge.
Not familiar with the SDO_GEOMETRY. Why do you mention this type specifically? What do you want to do similar?
> I have already done it with user-defined types. Now I need to do it this way so that I can
compare and contrast in terms of speed, space, etc. as part of my research.
Hmm.. I'm not sure what you are trying to compare in terms of a UDT versus Data Cartridge extensions. It is not one or the other. What research is this, if I may ask?
Simplistically, you extend the Type System with a new UDT/ADT. And there you have a new data type that can be used in the SQL engine and the PL/SQL engine. The OCI (Oracle Call Interface) supports such types via the OCI object API calls - which means this new type can also be used from Oracle clients written in Delphi, C++, Java, etc.
That new type can be a complex type. This type may need specific management code (memory management, context management, internationalisation, etc). To support this you can extend the UDT/ADT further by developing Cartridge Basic Service interfaces for the type - designing and writing management code to, for example, support specific internationalisation requirements when dealing with multibyte character sets.
You can also extend the UDT/ADT to be custom managed in terms of indexing, determining query optimisation, and computing the costs of custom methods/functions/operators on this custom data type. These extensions are done by developing Data Cartridge interfaces for the type.
Thus I cannot see what you are trying to compare. It is not one layer/interface versus another. These layers exist for all Oracle types (in the Type System). Oracle not only allows you to add to the top layer by adding/defining a new data type. It also allows you to customise (if needed) the layers below it. -
Performance issue with using out parameter sys_refcursor
Hello,
I'm using Oracle 10g, with ODP.Net in a C# application.
I'm using about 10 stored procedures, each having one OUT parameter of type sys_refcursor. When I use one function in C# and call these 10 SPs, it takes about 78 ms to execute the function.
Now, to improve performance, I created one SP with 10 output parameters of sys_refcursor type, and I just call this one SP. The time taken has increased: now it takes about 95 ms.
Is this the right approach, or how else can we improve performance when using sys_refcursor?
Please suggest; it is urgent, I'm stuck on this issue.
thanks
shruti
With 78 ms and 95 ms, are you talking about milliseconds or minutes? If it's milliseconds, then what's the problem? Does it really matter if there is a difference of 17 milliseconds, as that could just be caused by network traffic or something? If you're talking minutes, then we would need more information about what you are attempting to do, what tables and processing are going on, what indexes you have, etc.
Query optimisation tips can be found in this thread: When your query takes too long ....
Without more information we can't really tell what's happening.