What to check when comparing query performance in two environments
I have a query that executes in 1 second in Env A. Env A was initially on 9.2.0.5 and was later upgraded to 9.2.0.6.
The same query takes 1 minute 30 seconds in Env B. Env B has always been on 9.2.0.6.
The query plans and statistics are the same in both environments.
What else do I need to look at here?
Thanks
Sorry for giving you incorrect info. The plans are exactly the same in both environments, and the stats are also the same. The table BIG_TBL has a different number of rows in each environment; even though it has more rows in the other environment, the query comes back fast there.
The stats are almost identical on both tables.
I executed the query from the slow environment, pointing it at BIG_TBL@fast_env over a database link (fast_env), and it came back in 3 seconds.
What is wrong in the slow environment? What parameters do I need to look at or check? I am very confused now.
[ code ]
--+ SLOW ENVIRONMENT
no rows selected
Elapsed: 00:00:49.08
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=10 Card=2 Bytes=232)
1 0 SORT (ORDER BY) (Cost=10 Card=2 Bytes=232)
2 1 CONCATENATION
3 2 NESTED LOOPS (OUTER) (Cost=4 Card=1 Bytes=116)
4 3 NESTED LOOPS (OUTER) (Cost=3 Card=1 Bytes=84)
5 4 NESTED LOOPS (Cost=2 Card=1 Bytes=75)
6 5 TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TBL' (Cost=1 Card=1 Bytes=44)
7 6 INDEX (RANGE SCAN) OF 'IDX_BIG_TBL' (NON-UNIQUE) (Cost=3 Card=1)
8 5 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_A' (Cost=1 Card=1 Bytes=31)
9 8 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_A' (UNIQUE)
10 4 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_B' (Cost=1 Card=1 Bytes=9)
11 10 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_B' (UNIQUE)
12 3 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_C' (Cost=1 Card=1 Bytes=32)
13 12 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_C' (UNIQUE)
14 2 NESTED LOOPS (OUTER) (Cost=4 Card=1 Bytes=116)
15 14 NESTED LOOPS (OUTER) (Cost=3 Card=1 Bytes=84)
16 15 NESTED LOOPS (Cost=2 Card=1 Bytes=75)
17 16 TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TBL' (Cost=1 Card=1 Bytes=44)
18 17 INDEX (RANGE SCAN) OF 'IDX1_BIG_TBL' (NON-UNIQUE) (Cost=3 Card=1)
19 16 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_A' (Cost=1 Card=1 Bytes=31)
20 19 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_A' (UNIQUE)
21 15 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_B' (Cost=1 Card=1 Bytes=9)
22 21 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_B' (UNIQUE)
23 14 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_C' (Cost=1 Card=1 Bytes=32)
24 23 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_C' (UNIQUE)
Statistics
238 recursive calls
0 db block gets
47798 consistent gets
46256 physical reads
0 redo size
408 bytes sent via SQL*Net to client
267 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
19:16:51 SQL> connect user@fastenv
Enter password: ********
Connected.
19:17:08 SQL> set time on
19:17:10 SQL> set timing on
19:17:12 SQL> /
no rows selected
Elapsed: 00:00:00.03
19:17:14 SQL> set autotrace on
19:17:24 SQL> /
--+ FAST ENVIRONMENT
no rows selected
Elapsed: 00:00:02.00
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=10 Card=2 Bytes=232)
1 0 SORT (ORDER BY) (Cost=10 Card=2 Bytes=232)
2 1 CONCATENATION
3 2 NESTED LOOPS (OUTER) (Cost=4 Card=1 Bytes=116)
4 3 NESTED LOOPS (OUTER) (Cost=3 Card=1 Bytes=84)
5 4 NESTED LOOPS (Cost=2 Card=1 Bytes=75)
6 5 TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TBL' (Cost=1 Card=1 Bytes=44)
7 6 INDEX (RANGE SCAN) OF 'IDX_BIG_TBL' (NON-UNIQUE) (Cost=3 Card=1)
8 5 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_A' (Cost=1 Card=1 Bytes=31)
9 8 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_A' (UNIQUE)
10 4 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_B' (Cost=1 Card=1 Bytes=9)
11 10 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_B' (UNIQUE)
12 3 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_C' (Cost=1 Card=1 Bytes=32)
13 12 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_C' (UNIQUE)
14 2 NESTED LOOPS (OUTER) (Cost=4 Card=1 Bytes=116)
15 14 NESTED LOOPS (OUTER) (Cost=3 Card=1 Bytes=84)
16 15 NESTED LOOPS (Cost=2 Card=1 Bytes=75)
17 16 TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TBL' (Cost=1 Card=1 Bytes=44)
18 17 INDEX (RANGE SCAN) OF 'IDX1_BIG_TBL' (NON-UNIQUE) (Cost=3 Card=1)
19 16 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_A' (Cost=1 Card=1 Bytes=31)
20 19 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_A' (UNIQUE)
21 15 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_B' (Cost=1 Card=1 Bytes=9)
22 21 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_B' (UNIQUE)
23 14 TABLE ACCESS (BY INDEX ROWID) OF 'TABLE_C' (Cost=1 Card=1 Bytes=32)
24 23 INDEX (UNIQUE SCAN) OF 'IDX_TABLE_C' (UNIQUE)
Statistics
0 recursive calls
0 db block gets
5 consistent gets
0 physical reads
0 redo size
410 bytes sent via SQL*Net to client
266 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
[ code / ]
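The traces above show identical plans but 46,256 physical reads in the slow environment against 5 consistent gets in the fast one, which points at instance-level or physical-storage differences rather than the optimizer. As a starting point, here is a hedged sketch for diffing the init parameters of the two instances over the existing db link (the link name fast_env is taken from the post; it assumes the link's user can query V$PARAMETER on both sides):

```sql
-- Compare parameter values between the local (slow) instance and the
-- remote (fast) instance; only parameters that differ are returned.
SELECT NVL(s.name, f.name) AS parameter,
       s.value             AS slow_value,
       f.value             AS fast_value
FROM   v$parameter s
       FULL OUTER JOIN v$parameter@fast_env f ON f.name = s.name
WHERE  NVL(s.value, '#') <> NVL(f.value, '#')
ORDER  BY 1;
```

Beyond parameters, comparing the segment size of BIG_TBL (DBA_SEGMENTS) against its row count in each environment would show whether the slow copy is reading mostly empty blocks below a stale high-water mark.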
Similar Messages
-
What to check in my query running slow
Hi,
Our 11g database was migrated from one platform to another. A query which ran very quickly before on the source platform now takes ages to run on the target platform. Please could you point me in the right direction as to what things I should check to fix this problem?
thank you
Hi,
Following are the areas which can be explored. I hope this will help you.
1) Check the explain plan on both servers. I hope you know how to gather an explain plan for queries; it's better to use DBMS_XPLAN.
- Most probably your explain plans will be different.
2) Check the init parameters of the source and target databases.
3) Check the optimizer parameters of the source and target databases: parameters like OPTIMIZER_INDEX_COST_ADJ, SORT_AREA_SIZE, HASH_AREA_SIZE, OPTIMIZER_MODE, CURSOR_SHARING, etc.
4) Check the data volume of the source and target tables.
5) Check the statistics of the source and target tables. It is cumbersome to compare the statistics of source and target directly, so what you can do is:
a) Take an export of the statistics of the target tables.
b) Import those statistics into the source tables (before importing, take a backup of the source table statistics).
c) Once done, check the explain plan; it should now be the same as on the target database.
- If this confirms that statistics are the problem, then you should export the source table statistics and import them into the target to get the same explain plan.
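Steps a) through c) above can be sketched with the DBMS_STATS package; the schema, table and statistics-table names below (MYUSER, MYTAB, STATTAB, STATTAB_BAK) are placeholders:

```sql
-- a) On the target database: stage and export the table's statistics.
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'MYUSER', stattab => 'STATTAB');
EXEC DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'MYUSER', tabname => 'MYTAB', stattab => 'STATTAB');
-- ...transport the STATTAB table to the source database (e.g. via exp/imp)...

-- b) On the source database: back up the current stats first, then import.
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'MYUSER', stattab => 'STATTAB_BAK');
EXEC DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'MYUSER', tabname => 'MYTAB', stattab => 'STATTAB_BAK');
EXEC DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'MYUSER', tabname => 'MYTAB', stattab => 'STATTAB');
-- c) Re-check the explain plan; it should now match the target database.
```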
HTH -
Where to check when a query was last used
Hi all,
Where can we check when a particular query was last used (the date)? Is there a table for this?
Also, how can we check the same from the statistics queries?
Hi..
Run a query and you will immediately see a new record in table RSDDSTAT (SE16) with your user ID.
Also check the query statistics in transaction ST03N.
Cheers. -
Hi, I need help improving query performance.
I have a query in which I am joining 2 tables, with the join done after some aggregation. Both tables have more than 50 million records.
There is no index created on these tables; both tables are loaded after truncation. So is it required to create an index on these tables before joining? The query status was showing 'suspended' since it had been running for a long time. As a temporary measure, I just executed the query multiple times, changing the month filter each time.
How can I improve this instead of adding a month filter and running it multiple times?
Hi Nikkred,
According to your description, you are joining 2 tables which contain more than 50 million records, and what you want is to improve query performance, right?
Query tuning is not an easy task. Basically it depends on three factors: your degree of knowledge, the query itself, and the amount of optimization required. So in your scenario, please post your detailed query so that you can get more help. Besides, you can create an index on your tables, which can improve performance. Here are some links with performance tuning tips for your reference.
http://www.mssqltips.com/sql-server-tip-category/9/performance-tuning/
http://www.infoworld.com/d/data-management/7-performance-tips-faster-sql-queries-262
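Since the thread mentions a 'suspended' status (which suggests SQL Server), here is a hedged T-SQL sketch of the kind of index that can help a filter-then-aggregate-then-join pattern; the table and column names are invented for illustration:

```sql
-- Hypothetical covering index: the month filter becomes an index seek, and
-- the columns needed by the aggregation ride along in INCLUDE so the base
-- table does not have to be touched at all.
CREATE NONCLUSTERED INDEX IX_Sales_MonthKey
    ON dbo.Sales (MonthKey)
    INCLUDE (CustomerID, Amount);
```

Because the tables are truncated and reloaded each time, it is often cheaper to drop the index before the load and recreate it afterwards than to maintain it row by row during the load.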
Regards,
Charlie Liao
TechNet Community Support -
Advantages of BW Query in SAP Portal when compared to WEB BEx Analyzer
Hi
What are the advantages of having a BW Query in SAP Portal when compared to WEB BEx Analyzer
I have to present to senior managers the advantages of having a SAP Portal for BW reports over the Web BEx Analyzer.
Can anyone please update me with a few points (10-20) that will be useful for my presentation?
screen to geethikakrishna->gmail
Thanks in advance
Hi Krishna,
I think you should highlight the portal's role in your presentation.
Nowadays all organizations use the SAP Portal as their user interface. We can log in to the portal once and get all the information from BW, CRM and other SAP systems as well as non-SAP systems,
so there is no need to log in to each and every system individually.
The portal offers a single point of access to SAP and non-SAP information sources, enterprise applications, information repositories, databases and services in and outside your organization, all integrated into a single user experience. It provides you the tools to manage this knowledge, to analyze and interrelate it, and to share and collaborate on the basis of it.
With its role-based content and personalization features, the portal enables users, from employees and customers to partners and suppliers, to focus exclusively on data relevant to daily decision-making processes.
For all information on portals, refer to help.sap.com -> Documentation -> NetWeaver -> NetWeaver 2004 -> English.
In People Integration you will find information on portals.
http://help.sap.com/saphelp_nw04/helpdata/en/19/4554426dd13555e10000000a1550b0/frameset.htm
Check the threads below for some information:
easiest way to publish
Easiest way to post existing BW queries onto portal
http://help.sap.com/saphelp_nw04s/helpdata/en/43/92dceb49fd25e5e10000000a1553f7/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/9d/24ff4009b8f223e10000000a155106/frameset.htm
Execution of BW Reports Through Enterprise Portal
Best Practices for BI Report publishing on Portal - BI 7
Use of BI's Enterprise Portal
The Web Application Designer offers rich functionality for creating dashboards. In SAP NetWeaver BI 2004s (aka 7.0), when you publish a template it is automatically created as an iView for the portal.
SAP NetWeaver 2004s BI - Define your Publishing Strategy Part 1
Koti Reddy -
System/Query Performance: What to look for in these tcodes
Hi
I have been researching on system/query performance in general in the BW environment.
I have seen tcodes such as
ST02 :Buffer/Table analysis
ST03 :System workload
ST03N:
ST04 : Database monitor
ST05 : SQL trace
ST06 :
ST66:
ST21:
ST22:
SE30: ABAP runtime analysis
RSRT:Query performance
RSRV: Analysis and repair of BW objects
For example, Note 948066 provides descriptions of these tcodes, but what I am not getting is their thresholds and implications. E.g. ST02 gives a tune summary screen with several rows and columns (I am not sure what they are called) holding several numerical values.
Is there some information on these rows/columns such as the typical range for each of these columns and rows; and the acceptable figures, and which numbers under which columns suggest what problems?
Basically some type of a metric for each of these indicators provided by these performance tcodes.
Something similar to when you are using an operating system: CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
I will appreciate some guidelines on the use of these tcodes and from your personal experience, which indicators you pay attention to under each tcode and why?
Thanks
Hi Amanda,
I forgot something: SAP provides the EarlyWatch report. If you have Solution Manager, you can generate it yourself; in the EarlyWatch report there are red, yellow and green lights for the parameters.
http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
EarlyWatch focuses on the following aspects:
· Server analysis
· Database analysis
· Configuration analysis
· Application analysis
· Workload analysis
EarlyWatch Alert, a free part of your standard maintenance contract with SAP, is a preventive service designed to help you take rapid action before potential problems can lead to actual downtime. In addition to EarlyWatch Alert, you can also decide to have an EarlyWatch session for a more detailed analysis of your system.
Ask your Basis team for an EarlyWatch sample report; the parameters in EarlyWatch should cover what you are looking for, with red, yellow and green indicators.
Understanding Your EarlyWatch Alert Reports
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
hope this helps. -
Poor query performance when joining CONTAINS to another table
We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20+ million rows. Each user may only have visibility to a tiny fraction of those rows.
The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi-column datastore) and provides a score for each search result so that we can sort the search results in descending order by score.
What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH clause, we find that the query performance degrades significantly.
For example, we can find all the records a user has access to from our base table by the following query:
SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID;
This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
Our search query looks like this:
SELECT score(1), d.*
FROM duns d
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
WITH subset
AS
(SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID)
SELECT score(1), d.*
FROM duns d
JOIN subset s
ON d.duns_loc = s.duns_loc
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of its contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user does not have access to view.
Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, let me know and I'll be happy to provide it here.
Thanks!!
Sometimes it can be good to separate the tables into separate subquery factoring (WITH) clauses, or inline views in the FROM clause, or an IN clause as a WHERE condition. Although there are some differences, using a subquery factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query.
You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and periodically be rebuilt or dropped and recreated to keep it performing with maximum efficiency.
The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various alternatives. Your original query has nested loops; all of the others share the same plan without the nested loops. You could also add index hints.
SCOTT@orcl_11gR2> -- tables:
SCOTT@orcl_11gR2> CREATE TABLE duns
2 (duns_loc NUMBER,
3 text_key VARCHAR2 (30))
4 /
Table created.
SCOTT@orcl_11gR2> CREATE TABLE primary_contact
2 (duns_loc NUMBER,
3 emp_id NUMBER)
4 /
Table created.
SCOTT@orcl_11gR2> -- data:
SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO duns
2 SELECT object_id, object_name
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact
2 SELECT object_id, namespace
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> -- indexes:
SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
2 ON duns (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
2 ON primary_contact (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
SCOTT@orcl_11gR2> -- as suggested by Roger:
SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
2 ON duns (text_key)
3 INDEXTYPE IS CTXSYS.CONTEXT
4 FILTER BY duns_loc
5 /
Index created.
SCOTT@orcl_11gR2> -- gather statistics:
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- variables:
SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
SCOTT@orcl_11gR2> EXEC :employeeid := 1
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
SCOTT@orcl_11gR2> EXEC :search := 'highway'
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- original query:
SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
SCOTT@orcl_11gR2> WITH
2 subset AS
3 (SELECT d.duns_loc
4 FROM duns d
5 JOIN primary_contact pc
6 ON d.duns_loc = pc.duns_loc
7 AND pc.emp_id = :employeeID)
8 SELECT score(1), d.*
9 FROM duns d
10 JOIN subset s
11 ON d.duns_loc = s.duns_loc
12 WHERE CONTAINS (TEXT_KEY, :search,1) > 0
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 4228563783
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 84 | 121 (4)| 00:00:02 |
| 1 | SORT ORDER BY | | 2 | 84 | 121 (4)| 00:00:02 |
|* 2 | HASH JOIN | | 2 | 84 | 120 (3)| 00:00:02 |
| 3 | NESTED LOOPS | | 38 | 1292 | 50 (2)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 5 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | DUNS_DUNS_LOC_IDX | 1 | 5 | 1 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
SCOTT@orcl_11gR2> WITH
2 subset1 AS
3 (SELECT pc.duns_loc
4 FROM primary_contact pc
5 WHERE pc.emp_id = :employeeID),
6 subset2 AS
7 (SELECT score(1), d.*
8 FROM duns d
9 WHERE CONTAINS (TEXT_KEY, :search,1) > 0)
10 SELECT subset2.*
11 FROM subset1, subset2
12 WHERE subset1.duns_loc = subset2.duns_loc
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
SCOTT@orcl_11gR2> SELECT subset2.*
2 FROM (SELECT pc.duns_loc
3 FROM primary_contact pc
4 WHERE pc.emp_id = :employeeID) subset1,
5 (SELECT score(1), d.*
6 FROM duns d
7 WHERE CONTAINS (TEXT_KEY, :search,1) > 0) subset2
8 WHERE subset1.duns_loc = subset2.duns_loc
9 ORDER BY score(1) DESC
10 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- ansi join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 JOIN primary_contact
4 ON duns.duns_loc = primary_contact.duns_loc
5 WHERE CONTAINS (duns.text_key, :search, 1) > 0
6 AND primary_contact.emp_id = :employeeid
7 ORDER BY SCORE(1) DESC
8 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- old join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns, primary_contact
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc = primary_contact.duns_loc
5 AND primary_contact.emp_id = :employeeid
6 ORDER BY SCORE(1) DESC
7 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- in clause:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc IN
5 (SELECT primary_contact.duns_loc
6 FROM primary_contact
7 WHERE primary_contact.emp_id = :employeeid)
8 ORDER BY SCORE(1) DESC
9 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 3825821668
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN SEMI | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -
Query performance when multiple single variable values selected
Hi Gurus,
I have a sales analysis report that I run with some complex selection criteria. One of the variables is the Sales Organisation.
If I run the report for individual sales organisations, the performance is excellent; results are displayed in a matter of seconds. However, if I specify more than one sales organisation, the query will run and run and run... until it eventually times out.
For example;
I run the report for SALEORG1 and the results are displayed in less than 1 minute.
I then run the report for SALEORG2 and again the results are displayed in less than 1 minute.
If I try to run the query for both SALEORG1 and SALEORG2, the report does not display any results but continues until it times out.
Anybody got any ideas on why this would be happening?
Any advise, gratefully accepted.
Regards,
David
While compression is generally something that you should be doing, I don't think it is a factor here, since query performance is OK when using just a single value.
I would do two things. First, make sure the DB stats for the cube are current. You might even consider increasing the sample rate for the stats collection if you are using one. You don't mention the DB you use, or what type of index is on Salesorg, which could play a role. Does the query run against a MultiProvider on top of multiple cubes, or is there just one cube involved?
If you still have problems after refreshing the stats, then I think the next step is to get a DB execution plan for the query by running it from RSRT in debugging mode, once with just one value and again with multiple values, to see if the DB is doing something different. Seek out your DBA if you are not familiar with execution plans. -
How to improve query performance when reporting on ods object?
Hi,
Can anybody tell me how to improve query performance when reporting on an ODS object?
Thanks in advance,
Ravi Alakuntla.
Hi Ravi,
Check these links, which may cater to your requirement:
Re: performance issues of ODS
Which criteria to follow to pick InfoObj. as secondary index of ODS?
PDF on BW performance tuning,
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Regards,
Mani. -
Which tables are hit when a query runs - time dependent objects performance
Hello all,
We are trying to see what are the effects of time dependent master data objects in query. We will have a key date as variable so user can see the data as a particular point.
I am trying to see all the tables that are hit when a query is executed, and how time-dependent InfoObjects affect performance. Basically we are trying to see whether the query hits the P, Q or Y tables of the InfoObject. Is there any tcode or program that I can use to see which tables were hit and how much time the query took to execute?
Also, if the time-dependent attribute is in a free characteristic, does it directly affect the query?
If you can share some more experience with time dependent master data objects in query and its effects on performance that will be great.
Thanks all in advance
Hello Siggi, thanks for the input.
That is what I actually did before posting the message here; the tables that are hit are /BI0/S... or /BI0/R... and /BI0/T.... I never see the /BI0/P or /BI0/Q tables hit. I have a key date on the variable screen, so when I put in a future or past date the /BI0/SDATE table is hit; does that sound right to you?
Is the /S table hit the most because the data is being read via the SIDs that are generated?
Can you share your thoughts.
Thanks again,
Have assigned you points. -
What are the main things to do when optimizing the performance of a Java app
What are the main things to do when optimizing the performance of a Java app?
-
When trying to perform updates I get this message: This update is not available for this Apple ID either because it was bought by a different user or the item was refunded or cancelled. What do I do? I just bought the MacBook Pro new and migrated my apps from my old MacBook Pro.
Perform the following, stopping with the first one which works:
1. Open the Mac App Store's Purchases tab. If you're prompted to accept them into your Apple ID, do so.
2. Move them out of the Applications folder (they may need to be put into the Trash temporarily), and then see if you can download them for free from their individual product pages. If you're asked to buy them, see #3.
3. Click here, contact Apple, and wait for a response.
4. Buy them.
What is the use of JSP when compared with Struts
What is the use of JSP when compared with Struts?
JSP tag libraries are great for reusable content formatting and logic.
For example, let's say you have a shopping site. Each item you sell is stored in a database, and you get them out by category, creating a List of ItemBeans. You always want to display the items with a category header, then a <table> with the item number, the description and the price.
Instead of creating a bunch of logic in the JSP that does this, you can pass it on to a tag that might look like this in your JSP:
<shopping:itemTable catagory="${selectedCatagory}" items="${itemsForCatagory}" />
This would make the JSP easier to read and work with.
The actual uses are incredible. Have you used the <jsp:useBean ...> tag? That is an example of a custom tag library in use.
Furthermore, look into JSTL (the JSP Standard Tag Library). It is a collection of tags (API by Sun, implementation by Apache) for many of the standard actions you might want or need to do in JSPs: a conditional tag (c:if, only do something if the test is true), multiple-conditional tags (c:choose, c:when, c:otherwise) like an if/else-if/else construct, looping through an array or Collection (c:forEach), storing values in scopes (c:set), formatting numbers and dates (the fmt library), XML transformations (the xml library), and lots of other things you could use to replace scriptlet code.
How to specify when the authorization check is to be performed
How do I declaratively specify, for any web application, when the authorization check is to be performed?
There is a Direction object argument in the isAccessAllowed method of the AccessDecision class which can have one of three values, i.e. POST, PRE, and ONCE. I just want to know how we can declaratively set this direction value for any web resource in WebLogic Server 8.1 security.
-Subhash
"Subhash Chopra" <[email protected]> wrote in message
news:3e9c5805$[email protected]..
> How to specify when the authorization check is to be performed declaratively for
> any web application.
> There is a Direction object argument as part of the isAccessAllowed method of the AccessDecision
> class which can have one of three values, i.e. POST, PRE, & ONCE. I just
> want to know how we can declaratively set this direction value for any web resource
> in WebLogic Server 8.1 security.
The containers always call the isAccessAllowed method with no direction, so ONCE is passed to the providers. There is no way to set the direction that the containers use.
-Subhash -
What are the advantage of using Oracle Database when compare to SQL SERVER
Hi all
Please tell anyone about
what are the advantage of using Oracle Database when compare to SQL SERVER
Thanks in advance
Balamurugan S
user12842738 wrote:
Hi,
There are various differences between the two.
1. SQL Server is only Windows, but Oracle runs on almost all Platforms.
2. You can have multiple databases in SQL Server, but Oracle provides you only one database per instance.
Given that the very term 'database' has a different meaning in the two products, this "difference" is absolutely meaningless.
3. SQL Server provides T-SQL for writing programs, whereas Oracle provides PL/SQL.
Which means what? Both products have a procedural programming language. They named them differently, and the languages are not interchangeable. It means nothing in comparing the features/strengths/weaknesses/suitability to purpose.
4. Backup types in both are the same. (Except Oracle provides an additional backup called Logical Backup.)
You make that sound like "Logical Backup" is something more than it is. It is nothing more than an export of the data and metadata. Many experts don't even consider it a backup. I'm sure SQL Server provides the same functionality, though they probably call it by some other name.
5. Both provide High Availability.
Well, I guess they both have a suite of features they refer to as "High Availability". But what does that really mean? The devil is in the details. Remember, the two products don't even agree on what constitutes a "database".
6. Both come in various distributions.
???
>
If you are going for an Implementation, you can try SQL Server Express Edition and Oracle XE which are free to use.
Then you can choose whichever is comfortable for your needs.
Thanks.