Oracle Query Performance
Hello,
I want to optimize my SQL queries. I want to know which statements impact query retrieval / processing. Can anyone guide me on what to avoid while writing SQL queries?
Hi,
I want to optimize my sql queries.
OK, then work with AUTOTRACE, or SQL trace and TKPROF.
I want to know what statement impact query retrieval / processing.
Do you want to know which are the "slow" queries? Then you can use STATSPACK and/or AWR.
Cany any one help me to guide what to avoid while writing sql queries?
I'm not sure I understand you here. Please explain a little bit more.
Nicolas.
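As a rough sketch of the tools Nicolas mentions (object and file names here are illustrative, not from the original thread):

```sql
-- In SQL*Plus, show the execution plan and run-time statistics
-- for each statement you issue:
SET AUTOTRACE ON EXPLAIN STATISTICS

-- Or switch on SQL trace for the current session, run the workload,
-- then switch it off again:
ALTER SESSION SET sql_trace = TRUE;
-- ... run the queries you want to analyse ...
ALTER SESSION SET sql_trace = FALSE;

-- Afterwards, format the raw trace file with TKPROF from the OS shell,
-- e.g.:  tkprof orclsid_ora_1234.trc report.txt sys=no
```

TKPROF sorts and summarises the trace so the most expensive statements stand out.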
Similar Messages
-
Oracle Query Performance While calling a function in a View
Hi,
We have a performance issue in one of our Oracle queries.
Here is the scenario
We use a hard coded value (which is the maximum value from a table) in couple of DECODE statements in our query. We would like to remove this hard coded value from the query. So we wrote a function which will return a maximum value from the table. Now when we execute the query after replacing the hard coded value with the function, this function is called four times which hampers the query performance.
Please find below the DECODE statements in the query. This query is part of the main VIEW.
Using Hardcoded values
=================
DECODE(pro_risk_weighted_ctrl_scr, 10, 9.9, pro_risk_weighted_ctrl_scr)
DECODE(pro_risk_score, 46619750, 46619749, pro_risk_score)
Using Functions
============
DECODE (pro_risk_weighted_ctrl_scr, rprowbproc.fn_max_rcsa_range_values ('CSR'), rprowbproc.fn_max_rcsa_range_values('CSR')- 0.1, pro_risk_weighted_ctrl_scr)
DECODE (pro_risk_score, rprowbproc.fn_max_rcsa_range_values ('RSR'), rprowbproc.fn_max_rcsa_range_values ('RSR') - 1, pro_risk_score)
Can anyone suggest a way to improve the performance of the query?
Thanks & Regards,
Raji
drop table max_demo;
create table max_demo
(rcsa varchar2(10)
,value number);
insert into max_demo
select case when mod(rownum,2) = 0
then 'CSR'
else 'RSR'
end
, rownum
from dual
connect by rownum <= 10000;
create or replace function f_max (
i_rcsa in max_demo.rcsa%TYPE)
return number
as
l_max number;
begin
select max(value)
into l_max
from max_demo
where rcsa = i_rcsa;
return l_max;
end;
/
-- slooooooooooooowwwwww
select m.*
, f_max(rcsa)
, decode(rcsa,'CSR',decode(value,f_max('CSR'),'Y - max is '||f_max('CSR'),'N - max is '||f_max('CSR'))) is_max_csr
, decode(rcsa,'RSR',decode(value,f_max('RSR'),'Y - max is '||f_max('RSR'),'N - max is '||f_max('RSR'))) is_max_rsr
from max_demo m
order by value desc;
-- ssllooooowwwww
with subq_max as
(select f_max('CSR') max_csr,
f_max('RSR') max_rsr
from dual)
select m.*
, decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
, decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
, decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
from max_demo m
, subq_max s
order by value desc;
-- faster
with subq_max as
(select /*+materialize */
f_max('CSR') max_csr,
f_max('RSR') max_rsr
from dual)
select m.*
, decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
, decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
, decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
from max_demo m
, subq_max s
order by value desc;
-- faster
with subq_max as
(select f_max('CSR') max_csr,
f_max('RSR') max_rsr,
rownum
from dual)
select m.*
, decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
, decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
, decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
from max_demo m
, subq_max s
order by value desc;
-- sloooooowwwwww
select m.*
, decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
, decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
, decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
from max_demo m
, (select /*+ materialize */
f_max('CSR') max_csr,
f_max('RSR') max_rsr
from dual) s
order by value desc;
-- faster
select m.*
, decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
, decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
, decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
from max_demo m
, (select f_max('CSR') max_csr,
f_max('RSR') max_rsr,
rownum
from dual) s
order by value desc;
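Two further workarounds worth trying against the same demo objects (sketches only; verify the timings yourself): declaring the function DETERMINISTIC, and wrapping each call in a scalar subquery so Oracle can cache the result per distinct argument:

```sql
-- DETERMINISTIC lets Oracle reuse results for repeated calls with the
-- same argument (only safe if max_demo does not change mid-query):
create or replace function f_max (i_rcsa in max_demo.rcsa%TYPE)
return number deterministic
as
  l_max number;
begin
  select max(value) into l_max from max_demo where rcsa = i_rcsa;
  return l_max;
end;
/

-- Scalar subquery caching: each (select f_max(...) from dual) is
-- evaluated once per distinct argument rather than once per row:
select m.*
     , decode(rcsa, 'CSR', (select f_max('CSR') from dual)
                  , 'RSR', (select f_max('RSR') from dual)) max_val
  from max_demo m
 order by value desc;
```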
Oracle query performance tuning
Hi
I am doing Oracle programming. I would like to learn query performance tuning.
Could you guide me on how I could learn this online, and which books to refer to?
Thank you
I would recommend purchasing a copy of Cary Millsap's book now:
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X/ref=sr_1_1?ie=UTF8&qid=1248985270&sr=8-1
And Jonathan Lewis' when you feel you are at a slightly more advanced level.
http://www.amazon.com/Cost-Based-Oracle-Fundamentals-Experts-Voice/dp/1590596366/ref=pd_sim_b_2
Both belong in everyone's bookcase. -
Query Performance issue in Oracle Forms
Hi All,
I am using oracle 9i DB and forms 6i.
In the query form, the query takes a long time to load the data into the form.
There are two tables used here.
Table A contains 5 crore records; table B has 2 crore records.
The records fetched range from 1 to 500.
Table A had no index on the main columns; after creating an index on the main columns in table A, the query fetched the data quickly.
But the DBA team doesn't want to create an index on table A because of a tablespace problem.
If we create the index on the main table A, there is performance overhead in production.
Concurrent user capacity is 1500.
Is there any alternative method to handle this problem?
Regards,
RS
1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
http://en.wikipedia.org/wiki/Crore
I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
3) I don't understand the comment "If create the index on main table (A) ,then performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
Justin -
Weblogic 8.1.6 and Oracle 9.2.0.8 - query performance
Folks,
We are upgrading WebLogic from 8.1.5 to 8.1.6 and Oracle from 9.2.0.6 to 9.2.0.8. We use the Oracle thin client driver for 9.2.0.8 to connect from the application to Oracle.
When we use the following combination of the stack we see SQL query performance degradation: -
Oracle 9.2.0.8 database, Oracle 9.2.0.8 driver, WL 8.1.6
Oracle 9.2.0.8 database, Oracle 9.2.0.1 driver, WL 8.1.6
We do not see the degradation in case of the following: -
Oracle 9.2.0.8 database, Oracle 9.2.0.1 driver, WL 8.1.5
Oracle 9.2.0.6 database, Oracle 9.2.0.1 driver, WL 8.1.5
This shows that the problem could be with the WL 8.1.6 version and I was wondering if any of you have faced this before? The query retrieves a set of data from Oracle none of which contain the AsciiStream data type, which is noted as a problem in WL 8.1.6, but that too, only for WL JDBC drivers.
Any ideas appreciated. -
OAF page : How to get its query performance from Oracle Apps Screen?
Hi Team,
How to get the query performance of an OAF page using Oracle Apps Screen ??
regards
sridhar
Go through this link:
Any tools to validate performance of an OAF Page?
However, do let us know, as these queries' performance can also be checked through the backend.
Thanks
--Anil
http://oracleanil.blogspot.com/ -
Hi
I have created a procedure that accepts two bind variables from a report. The user will select one or the other, both, or neither of the variables. To return the appropriate results, I have created a view with the entire result set, and the procedure contains a number of IF statements that determine what to place in the WHERE clause when selecting from the view, depending on which variables are populated.
My concern is that the query that generates the view includes several joins, outputs around 150,000 records in total, and seems rather slow to run.
Would you recommend another solution, such as placing the query in the procedure itself, repeated for every IF statement?
Or should I work on the query performance?
What would be the most efficient solution for my problem?
Any advice would be greatly appreciated.
Thanks
[url http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]When your query takes too long
-
Oracle query taking more time in SSRS
Hi All
we have a report which connects to an Oracle DB. The query for the dataset runs in 9 secs in PL/SQL Developer, but the same query takes more than 150 secs in SSRS. We are using the OracleClient type provider.
Surendra Thota
Hi Surendra,
Based on the current description, I understand that you may be using the Oracle Provider for OLE DB to connect to the Oracle database. If so, I suggest using the Microsoft OLE DB Provider for Oracle to compare the query time between SQL*Plus and SSRS.
Here is a relative thread (SSRS Report with Oracle database performance problems) for your reference.
Hope this helps.
Regards,
Heidi Duan
If you have any feedback on our support, please click
here.
TechNet Community Support
Poor query performance when using date range
Hello,
We have the following ABAP code:
select sptag werks vkorg vtweg spart kunnr matnr periv volum_01 voleh
into table tab_aux
from s911
where vkorg in c_vkorg
and werks in c_werks
and sptag in c_sptag
and matnr in c_matnr
that is translated to the following Oracle query:
SELECT
"SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN 000000000100000000 AND 000000000999999999;
Because the field SPTAG is not enclosed in apostrophes, the Oracle query performs very badly. Below are the execution plans and their costs, with and without the apostrophes. Please help me understand why I am getting this behaviour.
##WITH APOSTROPHES
SQL> EXPLAIN PLAN FOR
2 SELECT
3 "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN '20101201' AND '20101231' AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
Explained.
SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
PLAN_TABLE_OUTPUT
| Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU)|
|  0 | SELECT STATEMENT            |          |  932 | 62444 |    150 (1)|
|  1 | TABLE ACCESS BY INDEX ROWID | S911     |  932 | 62444 |    149 (0)|
|  2 | INDEX RANGE SCAN            | S911~VAC |  55M |       |      5 (0)|
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - filter("VKORG"='D004' AND "SPTAG">='20101201' AND
"SPTAG"<='20101231')
2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
"MATNR"<='000000000999999999')
##WITHOUT APOSTROPHES
SQL> EXPLAIN PLAN FOR
2 SELECT
3 "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
Explained.
SQL>
PLAN_TABLE_OUTPUT
| Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU)|
|  0 | SELECT STATEMENT            |          | 2334 |  152K |    150 (1)|
|  1 | TABLE ACCESS BY INDEX ROWID | S911     | 2334 |  152K |    149 (0)|
|  2 | INDEX RANGE SCAN            | S911~VAC |  55M |       |      5 (0)|
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - filter("VKORG"='D004' AND TO_NUMBER("SPTAG")>=20101201 AND
TO_NUMBER("SPTAG")<=20101231)
2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
"MATNR"<='000000000999999999')
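The decisive difference between the two plans is in the filter predicates: with numeric literals, Oracle wraps the VARCHAR2 column in TO_NUMBER, which must be applied to every candidate row and changes the estimated cardinality. A minimal sketch of the contrast (using the column from the plans above):

```sql
-- Numeric literals: implicit conversion is applied to the column,
-- as the plan shows: TO_NUMBER("SPTAG") >= 20101201 ...
WHERE "SPTAG" BETWEEN 20101201 AND 20101231

-- String literals: the predicate compares SPTAG as-is, so no
-- per-row conversion is needed:
WHERE "SPTAG" BETWEEN '20101201' AND '20101231'
```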
Best Regards,
Daniel G.
Volker,
Answering your question regarding the explain plan from ST05: as a quick workaround I created an index (S911~Z9), but I'd still like to solve this issue without the extra index, as the primary index would work OK as long as the date was correctly sent to Oracle as a string and not as a number.
SELECT
"SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" ,
"PERIV" , "VOLUM_01" , "VOLEH"
FROM
"S911"
WHERE
"MANDT" = :A0 AND "VKORG" = :A1 AND "SPTAG" BETWEEN :A2 AND :A3 AND "MATNR"
BETWEEN :A4 AND :A5
A0(CH,3) = 003
A1(CH,4) = D004
A2(NU,8) = 20101201 (NU means number correct?)
A3(NU,8) = 20101231
A4(CH,18) = 000000000100000000
A5(CH,18) = 000000000999999999
SELECT STATEMENT ( Estimated Costs = 10 , Estimated #Rows = 6 )
5 3 FILTER
Filter Predicates
5 2 TABLE ACCESS BY INDEX ROWID S911
( Estim. Costs = 10 , Estim. #Rows = 6 )
Estim. CPU-Costs = 247.566 Estim. IO-Costs = 10
1 INDEX RANGE SCAN S911~Z9
( Estim. Costs = 7 , Estim. #Rows = 20 )
Search Columns: 4
Estim. CPU-Costs = 223.202 Estim. IO-Costs = 7
Access Predicates Filter Predicates
The table originally includes the following indexes:
###S911~0
MANDT
SSOUR
VRSIO
SPMON
SPTAG
SPWOC
SPBUP
VKORG
VTWEG
SPART
VKBUR
VKGRP
KONDA
KUNNR
WERKS
MATNR
###S911~VAC
MANDT
MATNR
Number of entries: 61.303.517
DISTINCT VKORG: 65
DISTINCT SPTAG: 3107
DISTINCT MATNR: 2939 -
Poor query performance when joining CONTAINS to another table
We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20+ million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi column datastore) and provide a score for each search result so that we can sort the search results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH we find that the query performance degrades significantly.
For example, we can find all the records a user has access to from our base table by the following query:
SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID;
This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
Our search query looks like this:
SELECT score(1), d.*
FROM duns d
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
WITH subset
AS
(SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID)
SELECT score(1), d.*
FROM duns d
JOIN subset s
ON d.duns_loc = s.duns_loc
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of the contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user doesn't have access to view.
Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, let me know and I'll be happy to provide it here.
Thanks!!
Sometimes it can be good to separate the tables into separate sub-query factoring (WITH) clauses, inline views in the FROM clause, or an IN clause as a WHERE condition. Although there are some differences, using a sub-query factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query.
You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and be periodically rebuilt or dropped and recreated to keep it performing with maximum efficiency.
The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various others. Your original query has nested loops. All of the others have the same plan without the nested loops. You could also add index hints.
SCOTT@orcl_11gR2> -- tables:
SCOTT@orcl_11gR2> CREATE TABLE duns
2 (duns_loc NUMBER,
3 text_key VARCHAR2 (30))
4 /
Table created.
SCOTT@orcl_11gR2> CREATE TABLE primary_contact
2 (duns_loc NUMBER,
3 emp_id NUMBER)
4 /
Table created.
SCOTT@orcl_11gR2> -- data:
SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO duns
2 SELECT object_id, object_name
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact
2 SELECT object_id, namespace
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> -- indexes:
SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
2 ON duns (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
2 ON primary_contact (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
SCOTT@orcl_11gR2> -- as suggested by Roger:
SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
2 ON duns (text_key)
3 INDEXTYPE IS CTXSYS.CONTEXT
4 FILTER BY duns_loc
5 /
Index created.
SCOTT@orcl_11gR2> -- gather statistics:
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- variables:
SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
SCOTT@orcl_11gR2> EXEC :employeeid := 1
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
SCOTT@orcl_11gR2> EXEC :search := 'highway'
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- original query:
SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
SCOTT@orcl_11gR2> WITH
2 subset AS
3 (SELECT d.duns_loc
4 FROM duns d
5 JOIN primary_contact pc
6 ON d.duns_loc = pc.duns_loc
7 AND pc.emp_id = :employeeID)
8 SELECT score(1), d.*
9 FROM duns d
10 JOIN subset s
11 ON d.duns_loc = s.duns_loc
12 WHERE CONTAINS (TEXT_KEY, :search,1) > 0
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 4228563783
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 84 | 121 (4)| 00:00:02 |
| 1 | SORT ORDER BY | | 2 | 84 | 121 (4)| 00:00:02 |
|* 2 | HASH JOIN | | 2 | 84 | 120 (3)| 00:00:02 |
| 3 | NESTED LOOPS | | 38 | 1292 | 50 (2)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 5 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | DUNS_DUNS_LOC_IDX | 1 | 5 | 1 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
SCOTT@orcl_11gR2> WITH
2 subset1 AS
3 (SELECT pc.duns_loc
4 FROM primary_contact pc
5 WHERE pc.emp_id = :employeeID),
6 subset2 AS
7 (SELECT score(1), d.*
8 FROM duns d
9 WHERE CONTAINS (TEXT_KEY, :search,1) > 0)
10 SELECT subset2.*
11 FROM subset1, subset2
12 WHERE subset1.duns_loc = subset2.duns_loc
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
SCOTT@orcl_11gR2> SELECT subset2.*
2 FROM (SELECT pc.duns_loc
3 FROM primary_contact pc
4 WHERE pc.emp_id = :employeeID) subset1,
5 (SELECT score(1), d.*
6 FROM duns d
7 WHERE CONTAINS (TEXT_KEY, :search,1) > 0) subset2
8 WHERE subset1.duns_loc = subset2.duns_loc
9 ORDER BY score(1) DESC
10 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- ansi join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 JOIN primary_contact
4 ON duns.duns_loc = primary_contact.duns_loc
5 WHERE CONTAINS (duns.text_key, :search, 1) > 0
6 AND primary_contact.emp_id = :employeeid
7 ORDER BY SCORE(1) DESC
8 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- old join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns, primary_contact
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc = primary_contact.duns_loc
5 AND primary_contact.emp_id = :employeeid
6 ORDER BY SCORE(1) DESC
7 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- in clause:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc IN
5 (SELECT primary_contact.duns_loc
6 FROM primary_contact
7 WHERE primary_contact.emp_id = :employeeid)
8 ORDER BY SCORE(1) DESC
9 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 3825821668
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN SEMI | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -
SELECT query performance : One big table Vs many small tables
Hello,
We are using BDB 11g with SQLite support. I have a question about SELECT query performance when we have one huge table vs. multiple small tables.
Basically, in our application we need to run the SELECT query multiple times, and today we have one huge table. Do you think breaking it into multiple small tables will help?
For test purposes we tried creating multiple tables, but the performance of the SELECT query was more or less the same. Would that be because all tables map to only one database in the backend with key/value pairs, so a lookup (SELECT query) on a small table or a big table makes no difference?
Thanks.
Hello,
There is some information on this topic in the FAQ at:
http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
If this does not address your question, please just let me know.
Thanks,
Sandra -
Hi,
We are using Oracle Express 6.3.2 & Oracle 8.1.5 for a decision
support system that we are building.
We are using a server with 2GB RAM, Dual CPU( 550MHz) along with
more than 100 GB HDD.The OS being used is Windows NT enterprise
edition server 4.0.On which both Oracle Relational database and
Express express server resides.
The building of the multi-dimensional database (MDDB) doesn't take much time: we are using around 9 dimensions, with total data in the MDDB in the range of 5-10 lakh records, and the build time is about 3 minutes.
However, running an application (XPJ) based on the MDDB takes a lot of time. The report takes about 2 minutes to appear, and the situation is even worse in what-if-analysis kinds of reports. The MDDB is being accessed from a Windows NT 4.0 client workstation with 128 MB RAM.
We have read Oracle Express Performance Tuning and Database
Design Guide which is available from both Oracle Technology
Network (technet.oracle.com) and Metalink, however it doesn't
seem to be of much help.The sparsity etc. have been taken care
of by us.
CAN ANYBODY PROVIDE ASSISTANCE ON THIS ISSUE, OR IS IT A NORMAL THING TO EXPECT FROM ORACLE EXPRESS? Request someone to respond to this issue, as it is very urgent and bothersome to us.
Thanks
Dear Friend,
I am trying to configure reports for express data and in this regard I have done the following :
1.Machine A having reports 9i Developer suite Release 2 and Pluggable Express Connection Editor installed on Windows 2000 professional.
2.Machine B having which Express Server 6.3.4 installed on Windows 2000 professional.
3.In Machine A created user OESDBA with Administrator privilige.
In the reports I am able to connect to the Express Server, attach the database, and choose the measure. Once all the other steps are done I am able to print the data in the reports.
But if I use a parameter form in which I give the HOST, USER, and PASSWORD, and then substitute the same in the express_server parameter value in the AFTER PARAMETER FORM trigger block, then the report gives an error saying:
Express query contains incorrect, missing or damaged information.
How do I solve this error?
Can you please help.
Regds,
Ramakrishnan
[email protected] -
Hi all,
I am using Oracle Text for indexing purposes.
The query below does a wildcard search, and its performance is very poor.
/* Formatted on 2009/08/11 16:06 (Formatter Plus v4.8.5) */
SELECT *
FROM (SELECT z.*, ROWNUM r
FROM (SELECT *
FROM (SELECT *
FROM (SELECT score (1) score,
SUBSTR (note, 1, 200) note, tmplt_id,
appl_name, appl_use, appl_reference,
appl_entity, effctv_start_date,
effctv_end_date, eq_name, note_id,
Bfcommon.getvaluebasedontemplate
(tmplt_id,
note_id
) VALUE,
(SELECT em_name
FROM ip_eq
WHERE eq_name = a.eq_name) em_name,
Bfcommon.is_edit_allowed_note1
(a.note_id,
'PACRIM1\E317329',
SYSDATE,
SYSDATE
) LOCKED
FROM bf_note a
WHERE tmplt_id NOT IN (19, 14, 16)
AND effctv_start_date >=
TO_DATE ('08/12/2008 00:00:00',
'MM/DD/YYYY HH24:MI:SS')
AND ( effctv_end_date <=
TO_DATE
('08/12/2009 00:00:00',
'MM/DD/YYYY HH24:MI:SS'
OR effctv_end_date IS NULL
AND contains (note, '%test%', 1) > 0) r
UNION
SELECT score (1) score, SUBSTR (note, 1, 200) note,
tmplt_id, appl_name, appl_use,
appl_reference, appl_entity,
effctv_start_date, effctv_end_date, eq_name,
a.note_id, b.trgt_name VALUE,
(SELECT em_name
FROM ip_eq
WHERE eq_name = a.eq_name) em_name,
Bfcommon.is_edit_allowed_note1
(a.note_id,
'PACRIM1\E317329',
SYSDATE,
SYSDATE
) LOCKED
FROM bf_note a, om_limit_hist b
WHERE ( date_changed =
(SELECT MAX (date_changed)
FROM om_limit_hist h1
WHERE date_changed <
(SELECT MAX (date_changed)
FROM om_limit_hist h2
WHERE h2.trgt_name =
b.trgt_name)
AND h1.trgt_name = b.trgt_name)
OR date_changed =
(SELECT MAX (date_changed)
FROM om_limit_hist h1
WHERE h1.trgt_name = b.trgt_name)
AND b.note_id = a.note_id(+)
AND tmplt_id = 19
AND effctv_start_date >=
TO_DATE ('08/12/2008 00:00:00',
'MM/DD/YYYY HH24:MI:SS')
AND ( effctv_end_date <=
TO_DATE ('08/12/2009 00:00:00',
'MM/DD/YYYY HH24:MI:SS')
OR effctv_end_date IS NULL)
AND contains (note, '%test%', 1) > 0)
ORDER BY 1 DESC) z
WHERE ROWNUM <= 50)
WHERE r >= 1
Here it goes for the wildcard search for the string 'test'. Please tell me how to index it in this case.
and its plan is
Execution Plan
Plan hash value: 3535478881
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 50 | 128K| | 632 (1)| 00:00:08 |
|* 1 | VIEW | | 50 | 128K| | 632 (1)| 00:00:08 |
|* 2 | COUNT STOPKEY | | | | | | |
| 3 | VIEW | | 69 | 177K| | 632 (1)| 00:00:08 |
|* 4 | SORT ORDER BY STOPKEY | | 69 | 177K| 376K| 632 (1)| 00:00:08 |
| 5 | VIEW | | 69 | 177K| | 590 (1)| 00:00:08 |
| 6 | SORT UNIQUE | | 69 | 7281 | | 590 (2)| 00:00:08 |
| 7 | UNION-ALL | | | | | | |
|* 8 | TABLE ACCESS BY INDEX ROWID | BF_NOTE | 68 | 7140 | | 585 (1)| 00:00:08 |
|* 9 | DOMAIN INDEX | BF_NOTE_TEXT_SEARCH | | | | 221 (0)| 00:00:03 |
|* 10 | FILTER | | | | | | |
| 11 | NESTED LOOPS | | 1 | 141 | | 3 (0)| 00:00:01 |
| 12 | TABLE ACCESS FULL | OM_LIMIT_HIST | 1 | 36 | | 2 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID| BF_NOTE | 1 | 105 | | 1 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | BF_NOTE_PK | 1 | | | 1 (0)| 00:00:01 |
| 15 | SORT AGGREGATE | | 1 | 23 | | | |
|* 16 | INDEX RANGE SCAN | OM_LIMIT_HIST_IDX1 | 1 | 23 | | 0 (0)| 00:00:01 |
| 17 | SORT AGGREGATE | | 1 | 23 | | | |
|* 18 | INDEX RANGE SCAN | OM_LIMIT_HIST_IDX1 | 1 | 23 | | 0 (0)| 00:00:01 |
| 19 | SORT AGGREGATE | | 1 | 23 | | | |
|* 20 | INDEX RANGE SCAN | OM_LIMIT_HIST_IDX1 | 1 | 23 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("R">=1)
2 - filter(ROWNUM<=50)
4 - filter(ROWNUM<=50)
8 - filter("EFFCTV_START_DATE">=TO_DATE(' 2008-08-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"TMPLT_ID"<>19 AND "TMPLT_ID"<>14 AND "TMPLT_ID"<>16 AND ("EFFCTV_END_DATE" IS NULL OR
"EFFCTV_END_DATE"<=TO_DATE(' 2009-08-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
9 - access("CTXSYS"."CONTAINS"("NOTE",'%test%',1)>0)
10 - filter("DATE_CHANGED"= (SELECT /*+ */ MAX("DATE_CHANGED") FROM "OM_LIMIT_HIST" "H1" WHERE
"H1"."TRGT_NAME"=:B1 AND "DATE_CHANGED"< (SELECT /*+ */ MAX("DATE_CHANGED") FROM "OM_LIMIT_HIST" "H2" WHERE
"H2"."TRGT_NAME"=:B2)) OR "DATE_CHANGED"= (SELECT /*+ */ MAX("DATE_CHANGED") FROM "OM_LIMIT_HIST" "H1"
WHERE "H1"."TRGT_NAME"=:B3))
13 - filter("TMPLT_ID"=19 AND "EFFCTV_START_DATE">=TO_DATE(' 2008-08-12 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "CTXSYS"."CONTAINS"("NOTE",'%test%',1)>0 AND ("EFFCTV_END_DATE" IS NULL OR
"EFFCTV_END_DATE"<=TO_DATE(' 2009-08-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
14 - access("B"."NOTE_ID"="A"."NOTE_ID")
16 - access("H1"."TRGT_NAME"=:B1 AND "DATE_CHANGED"< (SELECT /*+ */ MAX("DATE_CHANGED") FROM
"OM_LIMIT_HIST" "H2" WHERE "H2"."TRGT_NAME"=:B2))
filter("DATE_CHANGED"< (SELECT /*+ */ MAX("DATE_CHANGED") FROM "OM_LIMIT_HIST" "H2" WHERE
"H2"."TRGT_NAME"=:B1))
18 - access("H2"."TRGT_NAME"=:B1)
20 - access("H1"."TRGT_NAME"=:B1)

What version of Oracle are you using?
Your sample query is a good example of the benefits and costs of 'mixed' queries, and the challenges of mixed query performance. Oracle 11g has some very helpful new features (search for SDATA) that can really improve performance. (Specifically, it looks like your query does some useful date-range bounding. You need to get that into the FT index).
In the end, it's not going to be easy to look at your reasonably complex query, understand the data and relationships, and wave the magic wand to make the thing go fast. -
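To illustrate the SDATA idea mentioned above (a sketch only, for 11g and later; column names are taken from the posted query, and the exact SDATA literal syntax should be checked against the Oracle Text documentation for your release):

```sql
-- Sketch: FILTER BY stores the date columns as SDATA inside the Text
-- index, so the date-range check happens during the text scan instead
-- of as a table filter after the domain-index rowids are fetched.
CREATE INDEX bf_note_text_search ON bf_note (note)
   INDEXTYPE IS ctxsys.context
   FILTER BY effctv_start_date, effctv_end_date;

-- The date bound then moves into the CONTAINS string
-- (SDATA dates use ISO 8601 format):
SELECT note_id
  FROM bf_note
 WHERE contains (note,
          'test AND SDATA(effctv_start_date >= 2008-08-12)', 1) > 0;
```

The point is that the combined text-plus-range predicate is evaluated inside one domain-index scan, rather than fetching every '%test%' hit and discarding most of them afterwards.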
I have a table with about 500 million records in it stored within both Oracle and MySQL (MyISAM). I have a column (say phone_number) with a standard b-tree index on it in both places. Without data or indexes cached (not enough memory to fully cache the index), I run about 10,000 queries using randomly generated phone numbers as criteria in 10 concurrent threads. It seems that the average time to retrieve a record in MySQL is about 200 milliseconds whereas in Oracle it is about 400 milliseconds. I'm just wondering if MyISAM/MySQL is inherently faster for a basic index search than Oracle is, or should I be able to tune Oracle to get comparable performance.
Of course the hardware configurations and storage configurations are the same. It's not the absolute time I'm concerned about here but the relative time. Twice as long to perform basically the same query seems concerning. I enabled tracing and it seems like some of the problem may be the recursive calls Oracle is making. Is there some way to optimize this a bit further?
Realize, I just want to look at query performance right now...ignoring all of the issues (locking, transactional integrity, etc.)
Thanks,
Greg

In Oracle, a standard table is heap-organized. A b-tree index then contains index keys and ROWIDs, so if you need to read a particular row in the table, you first do a few I/Os on the index to get the ROWIDs and then look up the ROWIDs in the table. For any given key, ROWIDs are likely to be scattered throughout the table, so this latter step generally involves multiple scattered I/Os.
You can create an index-organized table or a hash cluster in Oracle in order to minimize the cost of this particular sort of lookup by clustering data with the same key physically near each other and, in the case of IOTs, potentially eliminating the need to store the index and table separately. Of course, there are costs to doing this in that inserts are now more expensive and secondary indexes are likely to be less useful. That probably gets you closer to what MySQL is doing if, as ajallen indicates, a MySQL table is generally stored more like an IOT than a heap-organized table.
If you get really creative, you could even partition this single table to potentially improve performance further.
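As a sketch of the index-organized table approach (the table and column names here are assumptions based on the poster's description, not the actual schema):

```sql
-- Sketch: an index-organized table stores the rows inside the primary
-- key B-tree itself, so a lookup by phone_number is just the B-tree
-- walk, with no separate table I/O and no ROWID scatter.
CREATE TABLE phone_lookup (
   phone_number  VARCHAR2(15),
   line_id       NUMBER,
   details       VARCHAR2(100),
   CONSTRAINT phone_lookup_pk PRIMARY KEY (phone_number, line_id)
)
ORGANIZATION INDEX;
```

Note that an IOT requires a primary key; since a phone number alone may not be unique, a composite key like the one above is typical. Secondary indexes on an IOT use logical rowids and can be less efficient than on a heap table.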
Of course, if you're only storing one table, I'm not sure that you could really justify the cost of an Oracle license. This may well be the case where MySQL is more than sufficient for what this particular customer needs (knowing nothing, of course, about the other requirements for the system).
Justin -
Oracle 10g performance is slow
Dear Exports
How can we improve Oracle 10g performance? We are upgrading from Oracle 8 to Oracle 10g on the Windows platform, using Oracle Developer 6 as the front end.
Thanks in advance.

Do you have statistics gathered on the tables in the 8i database? Can you post the explain plan for the query in both databases?
Since you know what SQL is having poor performance you can use TKPROF and SQL TRACE to see where your query is spending its time.
Try the following:
alter session set timed_statistics=true;
alter session set max_dump_file_size=unlimited;
alter session set tracefile_identifier='BAD_SQL';
alter session set events '10046 trace name context forever, level 12';
<insert sql with poor response time>
disconnect
Use the TKPROF utility on the file found in USER_DUMP_DEST that contains the string BAD_SQL.
For information on how to interpret the TKPROF output, see the following link.
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm
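Once the trace is written, you can find where it went with a query like this (a sketch; on 10g the trace files land under USER_DUMP_DEST, and the tracefile_identifier string 'BAD_SQL' set above appears in the file name):

```sql
-- Sketch: locate the trace directory before running TKPROF.
SELECT value
  FROM v$parameter
 WHERE name = 'user_dump_dest';
```

A typical TKPROF invocation would then look something like `tkprof <tracefile>.trc bad_sql.out sys=no sort=exeela`, which suppresses recursive SYS statements and sorts statements by elapsed execution time.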