Query - Performance - in operator
Hi All,
I apologize for a lengthy posting.
I have a performance problem with a query.
When I use static values in an IN list, as given in Query 1, it is fast and the logical I/O is very low.
But the values in the list are dynamic, so I tried two methods.
Method 1
CREATE OR REPLACE TYPE PF_TY_TBL_NUM AS TABLE OF NUMBER;
and created a function as follows...
CREATE OR REPLACE FUNCTION PF_FN_GET_NUM_LIST (PARAM_STR VARCHAR2) RETURN PF_TY_TBL_NUM
AS
  V_STRINGS LONG DEFAULT PARAM_STR || ',';
  V_INDEX   NUMBER;
  V_DATA    PF_TY_TBL_NUM := PF_TY_TBL_NUM();
BEGIN
  LOOP
    -- find the next comma; exit when no delimiter remains
    V_INDEX := INSTR(V_STRINGS, ',');
    EXIT WHEN NVL(V_INDEX, 0) = 0;
    -- append the trimmed token before the comma to the collection
    V_DATA.EXTEND;
    V_DATA(V_DATA.COUNT) := LTRIM(RTRIM(SUBSTR(V_STRINGS, 1, V_INDEX - 1)));
    -- drop the consumed token and its comma
    V_STRINGS := SUBSTR(V_STRINGS, V_INDEX + 1);
  END LOOP;
  RETURN V_DATA;
END PF_FN_GET_NUM_LIST;
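For what it's worth, the splitter can be sanity-checked on its own before wiring it into the larger query (the three-value list here is just an example):

```sql
-- each element of the collection comes back as COLUMN_VALUE;
-- this should return three rows: 7, 2 and 17
SELECT COLUMN_VALUE
FROM   TABLE(CAST(PF_FN_GET_NUM_LIST('7,2,17') AS PF_TY_TBL_NUM));
```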
The query and results for this are given in Query 2 -- you can notice that the index on the second table is not used and the consistent gets are higher than in Query 1.
Method 2
I created a temporary table, inserted the values into it, and used that table in an EXISTS clause.
The query and results for this are given in Query 3 -- again, the index on the second table is not used and the consistent gets are higher than in Query 1.
Can anyone advise me on this?
Query 1:
SELECT PR.pr_id,
sp.sp_class_destinatario,
SP.sp_sospesa,
sp.sp_id_destinatario,
IV.in_id_mittente,
IV.in_oggetto,
IV.in_data_spedizione,
PR.pr_id_soggetto,
PR.PR_NUMERO_PRATICA,
PR.pr_id_utente_presentatore
FROM PF_TR_INVIO IV, PF_TR_SITUAZIONE_PRATICA SP, PF_TR_PRATICA PR
WHERE (((SP.SP_CLASS_DESTINATARIO = 10) AND (SP.SP_ID_DESTINATARIO = 1260)) OR
((SP.SP_CLASS_DESTINATARIO = 20) AND
(SP.SP_ID_DESTINATARIO IN
('7', '2', '17', '333', '349', '501', '1', '320', '414', '406',
'1889', '3018', '1364', '1140', '10', '3052', '71'))) OR
((SP.SP_CLASS_DESTINATARIO = 30) AND (SP.SP_ID_DESTINATARIO = 420)))
AND (SP.SP_ID_ULTIMO_INVIO = IV.IN_ID)
AND (SP.SP_ID_PRATICA = PR.PR_ID)
ORDER BY IV.IN_ID,
PR.PR_NUMERO_PRATICA
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=63 Card=4 Bytes=276)
1 0 SORT (ORDER BY) (Cost=63 Card=4 Bytes=276)
2 1 CONCATENATION
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_INVIO' (Cost=1
Card=1 Bytes=22)
4 3 NESTED LOOPS (Cost=5 Card=1 Bytes=69)
5 4 NESTED LOOPS (Cost=4 Card=1 Bytes=47)
6 5 INLIST ITERATOR
7 6 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_SITUAZ
IONE_PRATICA' (Cost=3 Card=1 Bytes=15)
8 7 INDEX (RANGE SCAN) OF 'PF_SP_IN_DESTINATARIO
' (NON-UNIQUE) (Cost=2 Card=1)
9 5 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_PRATICA'
(Cost=1 Card=1 Bytes=32)
10 9 INDEX (RANGE SCAN) OF 'PF_PR_IN_ID' (NON-UNIQU
E)
11 4 INDEX (RANGE SCAN) OF 'PF_IN_IN_ID' (NON-UNIQUE)
12 2 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_INVIO' (Cost=1
Card=1 Bytes=22)
13 12 NESTED LOOPS (Cost=5 Card=1 Bytes=69)
14 13 NESTED LOOPS (Cost=4 Card=1 Bytes=47)
15 14 INLIST ITERATOR
16 15 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_SITUAZ
IONE_PRATICA' (Cost=3 Card=1 Bytes=15)
17 16 INDEX (RANGE SCAN) OF 'PF_SP_IN_DESTINATARIO
' (NON-UNIQUE) (Cost=2 Card=1)
18 14 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_PRATICA'
(Cost=1 Card=1 Bytes=32)
19 18 INDEX (RANGE SCAN) OF 'PF_PR_IN_ID' (NON-UNIQU
E)
20 13 INDEX (RANGE SCAN) OF 'PF_IN_IN_ID' (NON-UNIQUE)
Statistics
115 recursive calls
0 db block gets
121 consistent gets
42 physical reads
0 redo size
1176 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
13 rows processed
Query 2:
SELECT PR.pr_id,
sp.sp_class_destinatario,
SP.sp_sospesa,
sp.sp_id_destinatario,
IV.in_id_mittente,
IV.in_oggetto,
IV.in_data_spedizione,
PR.pr_id_soggetto,
PR.PR_NUMERO_PRATICA,
PR.pr_id_utente_presentatore
FROM PF_TR_INVIO IV,
PF_TR_SITUAZIONE_PRATICA SP,
PF_TR_PRATICA PR
WHERE (((SP.SP_CLASS_DESTINATARIO = 10) AND
(SP.SP_ID_DESTINATARIO = 1260)) OR
((SP.SP_CLASS_DESTINATARIO = 20) AND
(SP.SP_ID_DESTINATARIO IN
(SELECT column_value
FROM TABLE(CAST(PF_FN_GET_NUM_LIST('7,2,17,333,349,501,1,320,414,406,1889,3018,1364,1140,10,3052,71') AS
PF_TY_TBL_NUM))))) OR
((SP.SP_CLASS_DESTINATARIO = 30) AND
(SP.SP_ID_DESTINATARIO = 420)))
AND (SP.SP_ID_ULTIMO_INVIO = IV.IN_ID)
AND (SP.SP_ID_PRATICA = PR.PR_ID)
ORDER BY IV.IN_ID,
PR.PR_NUMERO_PRATICA
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=80 Card=11 Bytes=759
1 0 SORT (ORDER BY) (Cost=80 Card=11 Bytes=759)
2 1 FILTER
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_INVIO' (Cost=1
Card=1 Bytes=22)
4 3 NESTED LOOPS (Cost=33 Card=11 Bytes=759)
5 4 NESTED LOOPS (Cost=22 Card=11 Bytes=517)
6 5 TABLE ACCESS (FULL) OF 'PF_TR_SITUAZIONE_PRATICA
' (Cost=11 Card=11 Bytes=165)
7 5 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_PRATICA'
(Cost=1 Card=1 Bytes=32)
8 7 INDEX (RANGE SCAN) OF 'PF_PR_IN_ID' (NON-UNIQU
E)
9 4 INDEX (RANGE SCAN) OF 'PF_IN_IN_ID' (NON-UNIQUE)
10 2 COLLECTION ITERATOR (PICKLER FETCH) OF 'PF_FN_GET_NUM_
LIST'
Statistics
205 recursive calls
0 db block gets
7285 consistent gets
203 physical reads
0 redo size
1193 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
7 sorts (memory)
0 sorts (disk)
13 rows processed
Query 3:
SELECT PR.pr_id,
sp.sp_class_destinatario,
SP.sp_sospesa,
sp.sp_id_destinatario,
IV.in_id_mittente,
IV.in_oggetto,
IV.in_data_spedizione,
PR.pr_id_soggetto,
PR.PR_NUMERO_PRATICA,
PR.pr_id_utente_presentatore
FROM PF_TR_INVIO IV, PF_TR_SITUAZIONE_PRATICA SP, PF_TR_PRATICA PR
WHERE (((SP.SP_CLASS_DESTINATARIO = 10) AND (SP.SP_ID_DESTINATARIO = 1260)) OR
((SP.SP_CLASS_DESTINATARIO = 20) AND
(exists
(select 1 from a where SP.SP_ID_DESTINATARIO=a))) OR
((SP.SP_CLASS_DESTINATARIO = 30) AND (SP.SP_ID_DESTINATARIO = 420)))
AND (SP.SP_ID_ULTIMO_INVIO = IV.IN_ID)
AND (SP.SP_ID_PRATICA = PR.PR_ID)
ORDER BY IV.IN_ID,
PR.PR_NUMERO_PRATICA
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=80 Card=11 Bytes=759
1 0 SORT (ORDER BY) (Cost=80 Card=11 Bytes=759)
2 1 FILTER
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_INVIO' (Cost=1
Card=1 Bytes=22)
4 3 NESTED LOOPS (Cost=33 Card=11 Bytes=759)
5 4 NESTED LOOPS (Cost=22 Card=11 Bytes=517)
6 5 TABLE ACCESS (FULL) OF 'PF_TR_SITUAZIONE_PRATICA
' (Cost=11 Card=11 Bytes=165)
7 5 TABLE ACCESS (BY INDEX ROWID) OF 'PF_TR_PRATICA'
(Cost=1 Card=1 Bytes=32)
8 7 INDEX (RANGE SCAN) OF 'PF_PR_IN_ID' (NON-UNIQU
E)
9 4 INDEX (RANGE SCAN) OF 'PF_IN_IN_ID' (NON-UNIQUE)
10 2 TABLE ACCESS (FULL) OF 'A' (Cost=2 Card=1 Bytes=3)
Statistics
0 recursive calls
0 db block gets
7179 consistent gets
0 physical reads
0 redo size
1193 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
13 rows processed
Thank You in Advance,
Jaggy.
Hey,
Although it is not visible in the Query 2 execution plan, the root of the problem (and I have seen it in other selects from a collection) may be that the default cardinality assumed for the "COLLECTION ITERATOR PICKLER FETCH" operation is the block size of your database.
The workaround is:
1. Give an alias to the TABLE(yada, yada, yada) clause.
2. Inform Oracle of the assumed number of rows that this TABLE() returns, via a hint, e.g.
SELECT /*+ cardinality(table_alias 20) */ -- 20 means that 20 rows will be returned
       column_1, column_2
FROM   TABLE(yada, yada, yada) table_alias, real_table
WHERE  table_alias.id = real_table.id
Anyway, try it.
Amiel.
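Applied to Query 2 above, the hinted rewrite would look roughly like this. The alias nums and the cardinality value 17 (the number of values in the original list) are my assumptions, only the SP_CLASS_DESTINATARIO = 20 branch is shown, and the value list is abbreviated for readability:

```sql
-- sketch: join the aliased collection instead of using IN (SELECT ...),
-- so the /*+ cardinality */ hint has something to point at
SELECT /*+ cardinality(nums 17) */
       SP.SP_ID_DESTINATARIO, SP.SP_SOSPESA
FROM   PF_TR_SITUAZIONE_PRATICA SP,
       TABLE(CAST(PF_FN_GET_NUM_LIST('7,2,17,333,349,501,1') AS PF_TY_TBL_NUM)) nums
WHERE  SP.SP_CLASS_DESTINATARIO = 20
AND    SP.SP_ID_DESTINATARIO = nums.COLUMN_VALUE;
```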
Similar Messages
-
Can we perform Join operation using SQLCall with Database Query
Hi,
I am working on a TopLink SQLCall query. I am performing a join operation, but it is giving an error.
So please, can anyone tell me whether we can perform a join operation using SQLCall with a database query?
Thanking you.
You can use joins with SQLCall queries in TopLink, provided your SQL returns all of the required fields.
What is the query you are executing, and what error are you getting? -
Cannot perform dml operation inside a query
I have created a function which does some DML operations.
When I use it through a transformation operator and execute the map,
it throws the following error:
cannot perform dml operation inside a query
How do I handle this?
Hi,
if you want to execute the DML within a mapping, use the pre- or post-mapping process operator, or use a SQL*Plus activity in the process flow.
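A different workaround, not mentioned in the reply above but worth knowing: if the DML really has to live inside a function that a query calls, declaring the function as an autonomous transaction avoids the ORA-14551 error, at the price of the DML committing independently of the calling transaction. The function and table names below are made up for illustration:

```sql
-- hypothetical example: a logging function callable from SQL
CREATE OR REPLACE FUNCTION log_and_return (p_msg VARCHAR2) RETURN NUMBER
AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- the INSERT runs in its own transaction
BEGIN
  INSERT INTO log_table (msg) VALUES (p_msg);
  COMMIT;                         -- mandatory before the autonomous block ends
  RETURN 1;
END log_and_return;
```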
Regards,
Carsten. -
Problem in Adhoc Query's set operation functionality.
Hi Experts,
I am facing a problem executing the Adhoc Query set operation functionality.
In the Selection tab, the following operations are performed:
Execute a query and mark it as 'Set A' (say hit list = X).
Execute another query and mark it as 'Set B' (say hit list = Y).
In the Set Operation tab, the following operations are performed:
Carry out the operation 'Set A minus Set B', which results in resulting set = Z.
Transfer the resulting set 'in hit list' and press the copy resulting set button.
In the Selection tab, the hit list is populated with Z.
But when the output button is pressed, I get to see the 'Y' list and not the 'Z' list.
Kindly help.
Thanks.
Yogesh -
Query performance and data-loading performance issues
What are the query performance issues we need to take care of? Please explain and let me know the T-codes.
What are the data-loading performance issues we need to take care of? Please explain and let me know the T-codes.
Will reward full points.
Regards,
Guru
BW back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using the PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions of records, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it is faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables; for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
Hope it Helps
Chetan
@CP.. -
System/Query Performance: What to look for in these tcodes
Hi
I have been researching on system/query performance in general in the BW environment.
I have seen tcodes such as
ST02 :Buffer/Table analysis
ST03 :System workload
ST03N: Workload monitor (newer version of ST03)
ST04 : Database monitor
ST05 : SQL trace
ST06 : Operating system monitor
ST66:
ST21:
ST22: ABAP dump analysis
SE30: ABAP runtime analysis
RSRT:Query performance
RSRV: Analysis and repair of BW objects
For example, Note 948066 provides descriptions of these tcodes, but what I am not getting are thresholds and their implications. E.g., ST02 gives a tune summary screen with several rows and columns (not sure what they are called) containing several numerical values.
Is there some information on these rows/columns such as the typical range for each of these columns and rows; and the acceptable figures, and which numbers under which columns suggest what problems?
Basically some type of a metric for each of these indicators provided by these performance tcodes.
Something similar to when you are using an operating system: CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
I will appreciate some guidelines on the use of these tcodes and from your personal experience, which indicators you pay attention to under each tcode and why?
Thanks
Hi Amanda,
I forgot something: SAP provides the EarlyWatch report. If you have Solution Manager, you can generate it yourself. In the EarlyWatch report there will be red, yellow and green lights for the parameters.
http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
EarlyWatch focuses on the following aspects:
· Server analysis
· Database analysis
· Configuration analysis
· Application analysis
· Workload analysis
EarlyWatch Alert, a free part of your standard maintenance contract with SAP, is a preventive service designed to help you take rapid action before potential problems can lead to actual downtime. In addition to EarlyWatch Alert, you can also decide to have an EarlyWatch session for a more detailed analysis of your system.
Ask your Basis team for an EarlyWatch sample report; the parameters in EarlyWatch, with their red, yellow and green indicators, should cover what you are looking for.
Understanding Your EarlyWatch Alert Reports
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
hope this helps. -
Poor query performance when joining CONTAINS to another table
We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20 million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi-column datastore) and provides a score for each search result so that we can sort the results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH clause, we find that the query performance degrades significantly.
For example, we can find all the records a user has access to from our base table by the following query:
SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID;
This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
Our search query looks like this:
SELECT score(1), d.*
FROM duns d
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
WITH subset
AS
(SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID)
SELECT score(1), d.*
FROM duns d
JOIN subset s
ON d.duns_loc = s.duns_loc
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of its contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user doesn't have access to view.
Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, let me know and I'll be happy to provide it here.
Thanks!!
Sometimes it can be good to separate the tables into separate subquery factoring (WITH) clauses, or inline views in the FROM clause, or an IN clause as a WHERE condition. Although there are some differences, using a subquery factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query.
You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and be periodically rebuilt (or dropped and recreated) to keep it performing with maximum efficiency.
The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various others. Your original query has nested loops; all of the others have the same plan without the nested loops. You could also add index hints.
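For the "regular synchronization and optimization" mentioned above, a minimal sketch using the index name from the demonstration below (how often you schedule these is up to you):

```sql
-- push pending DML into the text index
EXEC CTX_DDL.SYNC_INDEX('DUNS_TEXT_KEY_IDX');
-- defragment the index; FULL mode also prunes deleted rows
EXEC CTX_DDL.OPTIMIZE_INDEX('DUNS_TEXT_KEY_IDX', 'FULL');
```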
SCOTT@orcl_11gR2> -- tables:
SCOTT@orcl_11gR2> CREATE TABLE duns
2 (duns_loc NUMBER,
3 text_key VARCHAR2 (30))
4 /
Table created.
SCOTT@orcl_11gR2> CREATE TABLE primary_contact
2 (duns_loc NUMBER,
3 emp_id NUMBER)
4 /
Table created.
SCOTT@orcl_11gR2> -- data:
SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO duns
2 SELECT object_id, object_name
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact
2 SELECT object_id, namespace
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> -- indexes:
SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
2 ON duns (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
2 ON primary_contact (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
SCOTT@orcl_11gR2> -- as suggested by Roger:
SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
2 ON duns (text_key)
3 INDEXTYPE IS CTXSYS.CONTEXT
4 FILTER BY duns_loc
5 /
Index created.
SCOTT@orcl_11gR2> -- gather statistics:
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- variables:
SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
SCOTT@orcl_11gR2> EXEC :employeeid := 1
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
SCOTT@orcl_11gR2> EXEC :search := 'highway'
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- original query:
SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
SCOTT@orcl_11gR2> WITH
2 subset AS
3 (SELECT d.duns_loc
4 FROM duns d
5 JOIN primary_contact pc
6 ON d.duns_loc = pc.duns_loc
7 AND pc.emp_id = :employeeID)
8 SELECT score(1), d.*
9 FROM duns d
10 JOIN subset s
11 ON d.duns_loc = s.duns_loc
12 WHERE CONTAINS (TEXT_KEY, :search,1) > 0
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 4228563783
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 84 | 121 (4)| 00:00:02 |
| 1 | SORT ORDER BY | | 2 | 84 | 121 (4)| 00:00:02 |
|* 2 | HASH JOIN | | 2 | 84 | 120 (3)| 00:00:02 |
| 3 | NESTED LOOPS | | 38 | 1292 | 50 (2)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 5 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | DUNS_DUNS_LOC_IDX | 1 | 5 | 1 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
SCOTT@orcl_11gR2> WITH
2 subset1 AS
3 (SELECT pc.duns_loc
4 FROM primary_contact pc
5 WHERE pc.emp_id = :employeeID),
6 subset2 AS
7 (SELECT score(1), d.*
8 FROM duns d
9 WHERE CONTAINS (TEXT_KEY, :search,1) > 0)
10 SELECT subset2.*
11 FROM subset1, subset2
12 WHERE subset1.duns_loc = subset2.duns_loc
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
SCOTT@orcl_11gR2> SELECT subset2.*
2 FROM (SELECT pc.duns_loc
3 FROM primary_contact pc
4 WHERE pc.emp_id = :employeeID) subset1,
5 (SELECT score(1), d.*
6 FROM duns d
7 WHERE CONTAINS (TEXT_KEY, :search,1) > 0) subset2
8 WHERE subset1.duns_loc = subset2.duns_loc
9 ORDER BY score(1) DESC
10 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- ansi join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 JOIN primary_contact
4 ON duns.duns_loc = primary_contact.duns_loc
5 WHERE CONTAINS (duns.text_key, :search, 1) > 0
6 AND primary_contact.emp_id = :employeeid
7 ORDER BY SCORE(1) DESC
8 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- old join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns, primary_contact
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc = primary_contact.duns_loc
5 AND primary_contact.emp_id = :employeeid
6 ORDER BY SCORE(1) DESC
7 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- in clause:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc IN
5 (SELECT primary_contact.duns_loc
6 FROM primary_contact
7 WHERE primary_contact.emp_id = :employeeid)
8 ORDER BY SCORE(1) DESC
9 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 3825821668
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN SEMI | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -
Select Query using AND operation
Hello,
I am developing an RFC which will retrieve some records from an SAP HR module table.
The input will be one structure (INPUT_TABLE) containing 4 fields. The user may enter values for any of the fields, and my requirement is to perform an AND operation on the entered fields only. Currently I am writing the query as follows:
SELECT DISTINCT a~pernr a~VORNA a~NACHN a~cname a~gbdat FROM
PA0002 as a INTO CORRESPONDING FIELDS OF TABLE Z_PD_TABLE
WHERE a~pernr = INPUT_TABLE-pernr AND a~VNAMC = INPUT_TABLE-VORNA AND a~NCHMC = INPUT_TABLE-NACHN AND a~gbdat = INPUT_TABLE-GBDAT.
If only 2 of the 4 values are entered by the user, the query still applies all the AND conditions, taking the other values as blank or 000000. I want to skip the AND conditions on the fields that the user did not enter.
Please help for writing query.
Thanks in advance,
Prashant
Hi,
Create dynamic where condition based upon user input.
Try like this....
IF NOT INPUT_TABLE-pernr IS INITIAL.
CLEAR : lv_pernr_condition.
CONCATENATE 'PERNR' ' = ' '''' INPUT_TABLE-pernr '''' INTO
lv_pernr_condition.
ENDIF.
IF NOT INPUT_TABLE-vnamc IS INITIAL.
CLEAR : lv_vnamc_condition.
CONCATENATE 'VNAMC' ' = ' '''' INPUT_TABLE-vnamc '''' INTO
lv_vnamc_condition.
ENDIF.
IF NOT INPUT_TABLE-vorna IS INITIAL.
CLEAR : lv_vorna_condition.
CONCATENATE 'VORNA' ' = ' '''' INPUT_TABLE-vorna '''' INTO
lv_vorna_condition.
ENDIF.
IF NOT INPUT_TABLE-nchmc IS INITIAL.
CLEAR : lv_nchmc_condition.
CONCATENATE 'NCHMC' ' = ' '''' INPUT_TABLE-nchmc '''' INTO
lv_nchmc_condition.
ENDIF.
IF NOT INPUT_TABLE-gbdat IS INITIAL.
CLEAR : lv_gbdat_condition.
CONCATENATE 'GBDAT' ' = ' '''' INPUT_TABLE-gbdat '''' INTO
lv_gbdat_condition.
ENDIF.
IF NOT lv_pernr_condition IS INITIAL.
CONCATENATE lv_pernr_condition lv_condition
INTO lv_condition SEPARATED BY space.
ENDIF.
IF NOT lv_vnamc_condition IS INITIAL.
IF lv_condition IS INITIAL.
CONCATENATE lv_vnamc_condition lv_condition
INTO lv_condition SEPARATED BY space.
ELSE.
CONCATENATE lv_condition 'AND' lv_vnamc_condition
INTO lv_condition SEPARATED BY space.
ENDIF.
ENDIF.
IF NOT lv_vorna_condition IS INITIAL.
IF lv_condition IS INITIAL.
CONCATENATE lv_vorna_condition lv_condition
INTO lv_condition SEPARATED BY space.
ELSE.
CONCATENATE lv_condition 'AND' lv_vorna_condition
INTO lv_condition SEPARATED BY space.
ENDIF.
ENDIF.
IF NOT lv_nchmc_condition IS INITIAL.
IF lv_condition IS INITIAL.
CONCATENATE lv_nchmc_condition lv_condition
INTO lv_condition SEPARATED BY space.
ELSE.
CONCATENATE lv_condition 'AND' lv_nchmc_condition
INTO lv_condition SEPARATED BY space.
ENDIF.
ENDIF.
IF NOT lv_gbdat_condition IS INITIAL.
IF lv_condition IS INITIAL.
CONCATENATE lv_gbdat_condition lv_condition
INTO lv_condition SEPARATED BY space.
ELSE.
CONCATENATE lv_condition 'AND' lv_gbdat_condition
INTO lv_condition SEPARATED BY space.
ENDIF.
ENDIF.
SELECT DISTINCT pernr VORNA NACHN cname gbdat FROM
PA0002 INTO CORRESPONDING FIELDS OF TABLE Z_PD_TABLE
WHERE (lv_condition). " Dynamic WHERE condition (parentheses required in ABAP)
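Outside ABAP, the assemble-only-the-filled-conditions idea above can be sketched like this (field names and values hypothetical; real code must also escape quotes in the values to avoid injection):

```python
def build_where(criteria):
    """Join only the non-empty fields into an AND-ed condition string,
    mirroring the IF ... IS INITIAL checks in the ABAP above."""
    parts = ["%s = '%s'" % (field, value)
             for field, value in criteria.items() if value]
    return " AND ".join(parts)

# Only PERNR and GBDAT were entered; VORNA/NACHN stay initial (empty).
cond = build_where({"PERNR": "00001234", "VORNA": "",
                    "NACHN": "", "GBDAT": "19800101"})
print(cond)  # PERNR = '00001234' AND GBDAT = '19800101'
```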
Hope it helps. -
Error: 'The Connection cannot be used to perform this operation'
Dear All,
I am getting an error like 'The Connection cannot be used to perform this operation; it is either closed or invalid in this context.'
Below is the code what am using:
Sub Select_Query()
Dim SQL_String As String
Dim rs As ADODB.Recordset 'This holds the data
Dim cn As ADODB.Connection 'Declaring Connection
Dim cmdobj As ADODB.Command 'Declare command Object
'Sheets("Payment").Select
Set cn = New ADODB.Connection
cn.Open ("Provider=MSDAORA;Data Source=MyDatabase;User ID=Prasad; Password=Prasad12;")
Dim count_value As Long
count_value = WorksheetFunction.CountA(Worksheets("Payments").Range("A:A"))
For i = 4 To count_value
SQL_String = Worksheets("Payments").Range("AA" & i).Value
Set rs = New ADODB.Recordset
rs.CursorLocation = adUseClient
rs.Open SQL_String, cn, adOpenForwardOnly, adLockOptimistic, adCmdText
Dim val As Integer
val = rs.recordCount
If val = 0 Then
Worksheets("Payments").Range("AZ" & i).Value = "No"
'Else
'Worksheets("Uploaded").Range("O" & i).Value = "Yes"
End If
cn.Close
Set con = Nothing
Next
'cn.Execute SQL_String
'con.Close
Set con = Nothing
End Sub
And the SELECT statement is picked up from one of the sheets; the SQL query is:
="SELECT sa.acct_id
FROM ci_sa sa, ci_sp sp, ci_sa_sp ss, ci_sp_geo spg,ci_acct_char cac
WHERE sa.sa_id = ss.sa_id
AND ss.sp_id = sp.sp_id
AND sp.sp_id = spg.sp_id
AND spg.geo_type_cd LIKE '%RR%'
AND spg.geo_val ='"&B4&"'
and sa.ACCT_ID=cac.ACCT_ID
AND cac.char_type_cd LIKE '%SDO%'
AND TRIM (char_val) IN ('"&Master!G9&"')"
Please help me in this..
Regards,
Prasad
Please, this is an Oracle forum, not a VBA forum. Mark this thread as answered and post it in a VBA forum.
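One likely cause of the error, for reference: cn.Close is executed inside the For loop, so from the second iteration onward rs.Open runs against a closed connection (and the final cleanup sets con instead of cn to Nothing). The usual pattern is to open the connection once before the loop and close it once after; a rough sketch of that pattern in Python, with sqlite3 standing in for ADO/Oracle and a hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # open ONCE, before the loop
conn.execute("CREATE TABLE ci_sa (acct_id INTEGER)")
conn.execute("INSERT INTO ci_sa VALUES (1)")

results = []
for acct in (1, 2, 3):
    # Reuse the same open connection on every iteration; closing it
    # here would break the next iteration, which is what the VBA above
    # does with cn.Close inside the loop.
    cur = conn.execute("SELECT acct_id FROM ci_sa WHERE acct_id = ?",
                       (acct,))
    results.append("Yes" if cur.fetchall() else "No")

conn.close()                         # close ONCE, after the loop
print(results)  # ['Yes', 'No', 'No']
```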
-
SQL query performance issues.
Hi All,
I worked on this query a month ago and the fix worked for me in the test instance but failed in production. Following is the URL for the previous thread.
SQL query performance issues.
Following is the tkprof file.
CURSOR_ID:76 LENGTH:2383 ADDRESS:f6b40ab0 HASH_VALUE:2459471753 OPTIMIZER_GOAL:ALL_ROWS USER_ID:443 (APPS)
insert into cos_temp(
TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
INVOICE_NUMBER, EXT_SALES, EXT_COS,
GROSS_PROFIT, ACCT_DATE,
SHIPMENT_TYPE,
FROM_ORGANIZATION_ID,
FROM_ORGANIZATION_CODE)
select a.trx_date,
g.segment5 dept,
g.segment4 prd,
m.segment1 part,
d.customer_number customer,
b.quantity_invoiced units,
-- substr(a.sales_order,1,6) order#,
substr(ltrim(b.interface_line_attribute1),1,10) order#,
a.trx_number invoice,
(b.quantity_invoiced * b.unit_selling_price) sales,
(b.quantity_invoiced * nvl(price.operand,0)) cos,
(b.quantity_invoiced * b.unit_selling_price) -
(b.quantity_invoiced * nvl(price.operand,0)) profit,
to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
'DRP',
l.ship_from_org_id,
p.organization_code
from ra_customers d,
gl_code_combinations g,
mtl_system_items m,
ra_cust_trx_line_gl_dist c,
ra_customer_trx_lines b,
ra_customer_trx_all a,
apps.oe_order_lines l,
apps.HR_ORGANIZATION_INFORMATION i,
apps.MTL_INTERCOMPANY_PARAMETERS inter,
apps.HZ_CUST_SITE_USES_ALL site,
apps.qp_list_lines_v price,
apps.mtl_parameters p
where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
and a.batch_source_id = 1001 -- Sales order shipped other OU
and a.complete_flag = 'Y'
and a.customer_trx_id = b.customer_trx_id
and b.customer_trx_line_id = c.customer_trx_line_id
and a.sold_to_customer_id = d.customer_id
and b.inventory_item_id = m.inventory_item_id
and m.organization_id
= decode(substr(g.segment4,1,2),'01',5004,'03',5004,
'02',5003,'00',5001,5002)
and nvl(m.item_type,'0') <> '111'
and c.code_combination_id = g.code_combination_id+0
and l.line_id = b.interface_line_attribute6
and i.organization_id = l.ship_from_org_id
and p.organization_id = l.ship_from_org_id
and i.org_information3 <> '5108'
and inter.ship_organization_id = i.org_information3
and inter.sell_organization_id = '5108'
and inter.customer_site_id = site.site_use_id
and site.price_list_id = price.list_header_id
and product_attr_value = to_char(m.inventory_item_id)
call count cpu elapsed disk query current rows misses
Parse 1 0.47 0.56 11 197 0 0 1
Execute 1 3733.40 3739.40 34893 519962154 11 188 0
total 2 3733.87 3739.97 34904 519962351 11 188 1
| Rows Row Source Operation
| ------------ ---------------------------------------------------
| 188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
| 741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
| 741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
| 741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
| 741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
| 3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
| 3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
| 3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
| 1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
| 1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
| 486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
| 486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
| 486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
| 75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
| 486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
| 486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
| 486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
| 1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
| 2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
| 1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
| 1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
| 3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
| 3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
| 3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
| 3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
| 3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
| 3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
| 741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
| 6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
Please help.
Regards
Ashish
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
There is no way the optimizer should choose to process that many rows using nested loops.
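A back-of-the-envelope check of the trace numbers shows where the 519 million consistent gets go: almost all of them are the per-row index probes driven by the 254 million-row nested loop. A small sketch of that arithmetic (attributing the gets this way is an approximation):

```python
# Figures taken from the tkprof row-source statistics above.
nl_rows = 254_644_500        # rows produced by the outer nested loop
outer_nl_gets = 519_939_265  # cr= on the outer NESTED LOOPS line
inner_nl_gets = 8_921_833    # cr= on the inner NESTED LOOPS line

# Consistent gets spent probing QP_PRICING_ATTRIBUTES once per row.
probe_gets = outer_nl_gets - inner_nl_gets
gets_per_row = probe_gets / nl_rows
print(round(gets_per_row, 1))  # roughly 2 gets per probed row
```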
Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value to force index access.
Please post the explain plan and your optimizer* parameter settings. -
Disappointing query performance with object-relational storage
Hello,
after some frustrating days trying to improve query performance on an XMLType table, I'm at my wits' end. I have tried all possible combinations of indexes, added scopes, tried out-of-line and inline storage, removed the recursive type definition from the schema, and tried the examples from the forum thread Setting Attribute SQLInline to false for Out-of-Line Storage (which has the same problems), and I still have no clue. I have prepared a stripped-down example of my schema which shows the same problems as the real one. I'm using 10.2.0.4.0:
SQL> select * from v$version;
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
You can find the script on http://www.grmblfrz.de/xmldbproblem.sql (I tried including it here but got an internal server error) The results are on http://www.grmblfrz.de/xmldbtest.lst . I have no idea how to improve the performance (and if even with this simple schema query rewrite does not work, how can oracle xmldb be feasible for more complex structures?). I must have made a mistake somewhere, hopefully someone can spot it.
Thanks in advance.
--Swen
Edited by: user636644 on Aug 30, 2008 3:55 PM
Edited by: user636644 on Aug 30, 2008 4:12 PM
Marc,
thanks, I did not know that it is possible to use "varray store as table" for the reference tables. I have tried your example. I can create the nested table, the scope and the indexes, but I get a different result - full table scan on t_element. With the original table I get an index scan. On the original table there is a trigger (t_element$xd) which is missing on the new table. I have tried the same with an xmltype table (drop table t_element; create table t_element of xmltype ...) with the same result. My script ... is on [google groups|http://groups.google.com/group/oracle-xmldb-temporary-group/browse_thread/thread/f30c3cf0f3dbcafc] (internal server error while trying to include it here). Here is the plan of the query
select rt.object_value
from t_element rt
where existsnode(rt.object_value,'/mddbelement/group[attribute[@name="an27"]="99"]') = 1;
Execution Plan
Plan hash value: 4104484998
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 40 | 2505 (1)| 00:00:38 |
| 1 | TABLE ACCESS BY INDEX ROWID | NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
|* 3 | FILTER | | | | | |
| 4 | TABLE ACCESS FULL | T_ELEMENT | 1000 | 40000 | 4 (0)| 00:00:01 |
| 5 | NESTED LOOPS SEMI | | 1 | 88 | 5 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 59 | 4 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID| NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
|* 9 | TABLE ACCESS BY INDEX ROWID| T_GROUP | 1 | 39 | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | SYS_C0082878 | 1 | | 0 (0)| 00:00:01 |
|* 11 | INDEX RANGE SCAN | SYS_IOT_TOP_184789 | 1 | 29 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("NESTED_TABLE_ID"=:B1)
3 - filter( EXISTS (SELECT /*+ ???)
8 - access("NESTED_TABLE_ID"=:B1)
9 - filter("T_GROUP"."SYS_NC0001300014$" IS NOT NULL AND
SYS_CHECKACL("ACLOID","OWNERID",xmltype('<privilege
xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-insta
nce" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-properties
/><read-contents/></privilege>'))=1)
10 - access("SYS_ALIAS_3"."COLUMN_VALUE"="T_GROUP"."SYS_NC_OID$")
11 - access("NESTED_TABLE_ID"="T_GROUP"."SYS_NC0001300014$")
filter("SYS_XDBBODY$"='99' AND "NAME"='an27')
Edited by: user636644 on Sep 1, 2008 9:56 PM -
How to write the given query using 'ANY ' operator
Hi,
How can I write the given query using the ANY operator? I don't want to fetch from the grade_master table twice in the database; I just need to filter within the result set.
SELECT dsg_code,dsg_name,dsg_grade FROM designation_master WHERE dsg_orgn='&&Orgn' and dsg_ctry='&&ctry'
And dsg_loc ='&&loc' And dsg_oru = '&&oru' and dsg_grade in decode('&&radio_group',
1, SELECT grd_code FROM grade_master WHERE grd_osm_code in (Select grd_osm_code FROM grade_master WHERE grd_orgn='&&Orgn' and grd_ctry='&&ctry' And grd_loc ='&&loc' And grd_oru = '&&oru' and grd_code ='&&emp_grade'),
2, SELECT grd_code FROM grade_master WHERE grd_osm_code > (Select grd_osm_code FROM grade_master WHERE grd_orgn='&&orgn' and grd_ctry='&&ctry' and grd_loc ='&&loc' And grd_oru = '&&oru' and grd_code),
3, SELECT grd_code FROM grade_master WHERE grd_osm_code < (Select grd_osm_code FROM grade_master WHERE grd_orgn='&&orgn' and grd_ctry='&&ctry' And grd_loc ='&&loc' And grd_oru = '&&oru' and grd_code ='&&emp_grade'))
thanks
rincy
Hi,
From what I understand, you want to execute the query only once, or fetch the result sets while minimizing the number of query executions. It is hard for us to check it this way; at least provide some sample data and the business rules. All I can see are the IN, >, < logical conditions on the inner queries.
- Pavan Kumar N
- ORACLE OCP - 9i/10g
https://www.oracleinternals.blogspot.com -
It is an 11g R2 database, and the query is performing very slowly
SELECT OBJSTATE
FROM
SUB_CON_CALL_OFF WHERE SUB_CON_NO = :B2 AND CALL_OFF_SEQ = :B1
call count cpu elapsed disk query current rows
Parse 140 0.00 0.00 0 0 0 0
Execute 798747 8.34 14.01 0 4 0 0
Fetch 798747 22.22 35.54 0 7987470 0 798747
total 1597634 30.56 49.56 0 7987474 0 798747
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51 (recursive depth: 1)
Rows Row Source Operation
5 FILTER (cr=50 pr=0 pw=0 time=239 us)
5 NESTED LOOPS (cr=40 pr=0 pw=0 time=164 us)
5 NESTED LOOPS (cr=30 pr=0 pw=0 time=117 us)
5 TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
5 INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
5 TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
5 INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
5 INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
5 INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
5 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 1 0.00 0.00
gc cr block 2-way 3 0.00 0.00
gc current block 2-way 1 0.00 0.00
gc cr multi block request 4 0.00 0.00
Edited by: 842638 on Feb 2, 2013 5:52 AM
Hi Mark,
I just have a few basic doubts regarding the performance of the query below:
call count cpu elapsed disk query current rows
Parse 140 0.00 0.00 0 0 0 0
Execute 798747 8.34 14.01 0 4 0 0
Fetch 798747 22.22 35.54 0 7987470 0 798747
total 1597634 30.56 49.56 0 7987474 0 798747
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51 (recursive depth: 1)
Rows Row Source Operation
5 FILTER (cr=50 pr=0 pw=0 time=239 us)
5 NESTED LOOPS (cr=40 pr=0 pw=0 time=164 us)
5 NESTED LOOPS (cr=30 pr=0 pw=0 time=117 us)
5 TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
5 INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
5 TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
5 INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
5 INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
5 INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
5 FAST DUAL (cr=0 pr=0 pw=0 time=4 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 1 0.00 0.00
gc cr block 2-way 3 0.00 0.00
gc current block 2-way 1 0.00 0.00
gc cr multi block request 4 0.00 0.00
1] How do you determine that this query performance is ok?
2] What is the actual need of checking the query performance this way?
3] Is this the TKPROF output?
4] How do you know that the query was called 798747 times? The execute line shows 0.
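On question 4: the first column of the Execute line is the call count (798747); the trailing 0 is its rows column. Dividing the totals by the execution count gives the per-execution cost, which is how a trace like this is usually judged. A quick sketch of the arithmetic, using the numbers above:

```python
# Numbers from the TKPROF call-statistics table above.
executions = 798_747     # "count" column of the Execute line
fetch_gets = 7_987_470   # "query" (consistent gets) column of the Fetch line
rows_fetched = 798_747   # "rows" column of the Fetch line

print(fetch_gets // executions)    # 10 consistent gets per execution
print(rows_fetched // executions)  # 1 row per execution
```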
Could you please help me with this?
Thanks.
Ranit B. -
Version: Application Express 3.0.1.00.12
DB: 11.1.0.7.0
Running a query: select to_char(max(to_date(datetime,'DD/MON/YYYY:HH24:MI:SS')),'DD/MON/YYYY:HH24:MI:SS') "Latest upload time" from TBL;
gives me a result in .109 sec in SQL DEVELOPER
But the same query in APEX (this is the only report on the page!) takes 60+ seconds.
0.02: print column headings
0.02: rows loop: 15 row(s)
24/NOV/2009:12:01:29
62.47: Region: Hits per second
Why the difference???
Is APEX not utilizing the index in the same way as SQL Developer? The table has over 9 million rows, but if the index is used it should return as fast as in SQL Developer.
Thanks,
Thanks for the responses. I didn't get much farther though; here are the actions I took:
a) Disabled pagination - no changes
b) Recreated region - no changes
Sorting wasn't enabled in both the above cases.
c) Put trace by following http://download.oracle.com/docs/cd/E14373_01/appdev.32/e11838/debug.htm#BABGDGEH
The trace file DID NOT generate, no idea why!
When I placed the &p_trace=YES, the application asked me to re-login, and upon login it went to the page, but the trace file in user_dump_dest wasn't generated.
NEED help in understanding why the trace is not working!! I'm using the built-in XDB HTTP server, if that's any help.
d) The index is used automatically for MAX, if available; here's an explain plan of the query performing fast in SQL*Plus
SQL> /
Latest upload time
24/NOV/2009:12:01:29
Execution Plan
Plan hash value: 2141038945
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 8 | 80212 (1)| 00:16:03 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX FULL SCAN (MIN/MAX)| TBL1_UNIQ_IDX | 16M| 123M| | |
SQL> l
1* select to_char(max(to_date(datetime,'DD/MON/YYYY:HH24:MI:SS')),'DD/MON/YYYY:HH24:MI:SS') "Latest upload time" from TBL1 -
Hi,
I am executing one query and it takes 40-45 minutes; can anybody tell me where the issue is, because I have an index on the SUBSCRIPTION table.
The query is spending its time in a nested loop. Can anybody please help improve the query performance?
Select count(unique individual_id)
from SUBSCRIPTION S ,SOURCE D WHERE S.ORDER_DOCUMENT_KEY_CD=D.FULFILLMENT_KEY_CD AND prod_abbr='TOH'
and to_char(source_start_dt,'YYMM')>='1010' and mke_mag_source_type_cd='D';
select count(*) from source; ----------3,425,131
select count(*) from subscription;---------394,517,271
Below is exlain Plan
Plan
SELECT STATEMENT CHOOSE Cost: 219 Bytes: 38 Cardinality: 1
13 SORT GROUP BY Bytes: 38 Cardinality: 1
12 PX COORDINATOR
11 PX SEND QC (RANDOM) SYS.:TQ10001 Bytes: 38 Cardinality: 1
10 SORT GROUP BY Bytes: 38 Cardinality: 1
9 PX RECEIVE Bytes: 38 Cardinality: 1
8 PX SEND HASH SYS.:TQ10000 Bytes: 38 Cardinality: 1
7 SORT GROUP BY Bytes: 38 Cardinality: 1
6 TABLE ACCESS BY LOCAL INDEX ROWID TABLE SUBSCRIPTION Cost: 21 Bytes: 3,976 Cardinality: 284
5 NESTED LOOPS Cost: 219 Bytes: 604,276 Cardinality: 15,902
2 PX BLOCK ITERATOR
1 TABLE ACCESS FULL TABLE SOURCE Cost: 72 Bytes: 1,344 Cardinality: 56
4 PARTITION HASH ALL Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
3 INDEX RANGE SCAN INDEX XAK1SUBSCRIPTION Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
Please suggest.
Eliminate the hidden conversion from char to number. I don't know the indexes/partitions on the TC table, do you?
drop table test;
create table test as select level id, sysdate + level/24/60/60 datum from dual connect by level < 10000;
create index idx1 on test(datum);
analyze table test compute statistics;
explain plan for select count(*) from test where to_char(datum,'YYYYMMDD') > '20120516';
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3467505462
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TEST | 500 | 3500 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD')>'20120516')
explain plan for select count(*) from test where datum > trunc(sysdate);
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2330213601
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | INDEX FAST FULL SCAN| IDX1 | 9999 | 69993 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("DATUM">TRUNC(SYSDATE@!))
drop index idx1;
create index idx1 on test(to_number(to_char(datum,'YYYYMMDD')));
analyze table test compute statistics;
explain plan for select count(*) from test where to_number(to_char(datum,'YYYYMMDD')) > 20120516;
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 227046122
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 5 | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | INDEX RANGE SCAN| IDX1 | 1 | 5 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access(TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD'))>
20120516)
explain plan for select count(*) from test where datum > trunc(sysdate);
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3467505462
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TEST | 9999 | 69993 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("DATUM">TRUNC(SYSDATE@!))