Query Consuming too much time.
Hi,
I am using Oracle Release 10.2.0.4.0. I have a query that takes too much time (~7 minutes) for an indexed read. Please help me understand the reason and a workaround.
select *
  FROM a, b
 WHERE a.xdt_docownerpaypk = b.paypk
   AND a.xdt_doctype = 'PURCHASEORDER'
   AND b.companypk = 1202829117
   AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                          AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
 ORDER BY a.xdt_createdt DESC;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | SORT ORDER BY | | 1 | 1 | 907 |00:06:45.83 | 66716 | 60047 | 478K| 448K| 424K (0)|
|* 2 | TABLE ACCESS BY INDEX ROWID | a | 1 | 1 | 907 |00:06:45.82 | 66716 | 60047 | | | |
| 3 | NESTED LOOPS | | 1 | 1 | 6977 |00:06:45.64 | 60045 | 60030 | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| b | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | | | |
|* 5 | INDEX RANGE SCAN | IDX_PAYIDENTITYCOMPANY | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | | | |
|* 6 | INDEX RANGE SCAN | IDX_XDT_N7 | 1 | 3438 | 6975 |00:06:45.64 | 60041 | 60030 | | | |
Predicate Information (identified by operation id):
2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
5 - access("b"."COMPANYPK"=1202829117)
6 - access("XDT_DOCTYPE"='PURCHASEORDER' AND "a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
filter("a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
32 rows selected.
Index IDX_XDT_N7 is on (xdt_doctype, action_date, xdt_docownerpaypk). Its details are as below.
blevel distinct_keys avg_leaf_blocks_per_key avg_data_blocks_per_key clustering_factor num_rows
3 868840 1 47 24020933 69871000
But when I derive the exact value of paypk from table b and apply it to the query, it uses another index, IDX_XDT_N4, which is on (month, year, xdt_docownerpaypk, xdt_doctype, action_date), and completes within ~17 seconds. Below are the query and plan details.
select *
  FROM a
 WHERE a.xdt_docownerpaypk = 1202829132
   AND xdt_doctype = 'PURCHASEORDER'
   AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                          AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
 ORDER BY xdt_createdt DESC;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | SORT ORDER BY | | 1 | 3224 | 907 |00:00:02.19 | 7001 | 339 | 337K| 337K| 299K (0)|
|* 2 | TABLE ACCESS BY INDEX ROWID| a | 1 | 3224 | 907 |00:00:02.19 | 7001 | 339 | | | |
|* 3 | INDEX SKIP SCAN | IDX_XDT_N4 | 1 | 38329 | 6975 |00:00:02.08 | 330 | 321 | | | |
Predicate Information (identified by operation id):
2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
3 - access("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER')
filter(("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER'))
Index IDX_XDT_N4 details are as below.
blevel distinct_keys avg_leaf_blocks_per_key avg_data_blocks_per_key clustering_factor num_rows
3 868840 1 47 23942833 70224133

Edited by: 930254 on Apr 26, 2013 5:04 AM
the first query uses the predicate "XDT_DOCTYPE"='PURCHASEORDER' to determine the range of the index IDX_XDT_N7 that has to be scanned and uses the other predicates to filter out most of the index blocks. The second query uses an INDEX SKIP SCAN ignoring the first column of the index IDX_XDT_N4 and using the predicates for the following columns ("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER') to get a much more selective access (reading only 330 blocks instead of > 60K).
I think there are two possible options to improve the performance:
1. If creating a new index is an option you could define an index on table A(xdt_doctype, xdt_docownerpaypk, xdt_createdt)
2. If creating a new index is not an option you could use an INDEX SKIP SCAN hint (INDEX_SS(A IDX_XDT_N4)) to direct the CBO to use the second index (without a hint the CBO tends to ignore the option of using a SKIP SCAN in an NL join). But using hints in production is rarely a good idea... In 11g you could use SQL plan baselines to avoid such hints in the code.
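A rough sketch of both options (the index name in option 1 is made up; table, column, and literal values are taken from the original query — verify against your own schema before running anything):

```sql
-- Option 1: a new composite index, equality columns first, range column last,
-- so the range predicate on xdt_createdt is resolved inside the index.
CREATE INDEX idx_a_doctype_owner_createdt
    ON a (xdt_doctype, xdt_docownerpaypk, xdt_createdt);

-- Option 2: hint the skip scan on the existing index (generally a last resort;
-- prefer the new index, or an SQL plan baseline on 11g).
SELECT /*+ INDEX_SS(a IDX_XDT_N4) */ *
  FROM a, b
 WHERE a.xdt_docownerpaypk = b.paypk
   AND a.xdt_doctype = 'PURCHASEORDER'
   AND b.companypk = 1202829117
   AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                          AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
 ORDER BY a.xdt_createdt DESC;
```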
Regards
Martin
Similar Messages
-
Parsing the query takes too much time.
Hello.
I am hitting a bug in Oracle XE (parsing some queries takes too much time).
A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
Please, raise a bug for Oracle XE.
Steps to reproduce the issue:
1. Extract files from testcase_dump.zip and testcase_sql.zip
2. Under username SYSTEM execute script schema.sql
3. Import data from file TESTCASE14.DMP
4. Under username SYSTEM execute script testcase14.sql
SQL text can be downloaded from http://files.mail.ru/DJTTE3
Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
Regards,
Viacheslav.

Bug number? Version the fix applies to?
Relevant Note that describes the problem and points out bug/patch availability?
With a little luck some PSEs might be "backported", since 11g XE is not base release e.g. 11.2.0.1. -
Delete query taking too much time
Hi All,
My delete query is taking too much time: around 1 hr 30 min for 1.5 lakh (150,000) records.
Though I have dropped mv log on the table & disabled all the triggers on it.
Moreover, the deletion is based on the primary key.
delete from table_name where primary_key in (values)
above is dummy format of my query.
can anyone please tell me what could be other reason that query is performing that slow.
Is there anything to check in DB other than triggers,mv log,constraints in order to improve the performance?
Please reply ASAP.

DELETE is the most time-consuming operation, as the whole record has to be stored in the undo segments. On the other hand, the part of the query used to select the records to delete is probably adding extra overhead to the process: the IN (values) clause. It would be nice if you could post another dummy of this (values) clause. I figure it is a subquery, and that to obtain this list you have to run an inefficient query.
You can gather the execution plan so you can see where the heaviest part of the query is. That way a better tuning approach and a more accurate diagnosis can be made.
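For example, a minimal way to capture the plan from SQL*Plus (the table name and key values here are dummies, matching the dummy query above):

```sql
EXPLAIN PLAN FOR
  DELETE FROM table_name WHERE primary_key IN (1, 2, 3);

-- Show the plan just explained.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the slow delete has already run in a session, DBMS_XPLAN.DISPLAY_CURSOR can show the actual plan rather than just the estimated one.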
~ Madrid. -
SELECT query takes too much time! Why?
Please find my SELECT query below:
select w~mandt
       w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
       w~kwmeng w~vrkme w~matwa w~charg w~pstyv
       w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
       w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
       w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
       w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
       w~bedae w~cuobj w~mtvfp
       x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
       x~edatu
       x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
       x~ezeit
  into table t_vbap
  from vbap as w
  inner join vbep as x on x~vbeln = w~vbeln and
                          x~posnr = w~posnr and
                          x~mandt = w~mandt
  where
    ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
    ( ( ( erdat > pre_dat and erdat < p_syndt ) or
        ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
    w~matnr in s_matnr and
    w~pstyv in s_itmcat and
    w~lfrel in s_lfrel and
    w~abgru = ' ' and
    w~kwmeng > 0 and
    w~mtvfp in w_mtvfp and
    x~ettyp in w_ettyp and
    x~bdart in s_req_tp and
    x~plart in s_pln_tp and
    x~etart in s_etart and
    x~abart in s_abart and
    ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
The problem: it takes too much time to execute this statement.
Could anybody change this statement and help me out to reduce the DB access time?
Thx

Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one internal table
Selection Criteria
1. Restrict the data to the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
2. Select with selection list.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
      CONNID = '0400'.
Select Statements: Select Queries
1. Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
  FROM EKKO AS P INNER JOIN EKAN AS F
  ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than using APPEND statements.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
      CONNID = '0400'.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
4. For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
5. Use Select Single if all primary key fields are supplied in the Where condition .
If all primary key fields are supplied in the Where conditions you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
Select Statements: SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
2. For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
3. Using buffered tables improves the performance considerably.
Bypassing the buffer increases network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements: Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements: For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Points that must be considered for FOR ALL ENTRIES:
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
Select Statements: Select Over More than One Internal Table
1. It is better to use a view instead of nested Select statements.
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by extracting all the data from view DD01V
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT
2. To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
  FROM EKKO AS P INNER JOIN EKAN AS F
  ON P~EBELN = F~EBELN.
3. Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
Internal Tables
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime:
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is considerably faster than LOOP-APPEND-ENDLOOP.
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
SORT ITAB BY K. makes the program runs faster as compared to SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
Data Dictionary query takes too much time.
Hello,
I am using ORACLE DATABASE 11g.
The below query is taking too much time to execute and give output. I have tried a few Oracle SQL hints, but they did not work.
SELECT
distinct B.TABLE_NAME, 'Y'
FROM USER_IND_PARTITIONS A, USER_INDEXES B, USER_IND_SUBPARTITIONS C
WHERE A.INDEX_NAME = B.INDEX_NAME
AND A.PARTITION_NAME = C.PARTITION_NAME
AND C.STATUS = 'UNUSABLE'
OR A.STATUS = 'UNUSABLE'
OR B.STATUS = 'INVALID';
Please guide me on what to do to make this query run faster.
Thanks in advance.

Your query is incorrect. Because AND binds more tightly than OR, it will return ALL tables if A.STATUS = 'UNUSABLE' or B.STATUS = 'INVALID'. Most likely you meant:
SELECT
distinct B.TABLE_NAME, 'Y'
FROM USER_IND_PARTITIONS A, USER_INDEXES B, USER_IND_SUBPARTITIONS C
WHERE A.INDEX_NAME = B.INDEX_NAME
AND A.PARTITION_NAME = C.PARTITION_NAME
AND (C.STATUS = 'UNUSABLE'
OR A.STATUS = 'UNUSABLE'
OR B.STATUS = 'INVALID');But the above will return subpartitioned tables with invalid/unusable indexes. It will not return non-subpartitioned partitioned tables with invalid/unusable indexes/index partitions same as non-partitioned tables with invalid/unusable indexes. If you want to get any table with invalid/unusable indexes you need to outer join which will hurt performance even more. I suggest you use UNION:
SELECT DISTINCT TABLE_NAME,
'Y'
FROM (
SELECT INDEX_NAME,'Y' FROM USER_INDEXES WHERE STATUS = 'INVALID'
UNION ALL
SELECT INDEX_NAME,'Y' FROM USER_IND_PARTITIONS WHERE STATUS = 'UNUSABLE'
UNION ALL
SELECT INDEX_NAME,'Y' FROM USER_IND_SUBPARTITIONS WHERE STATUS = 'UNUSABLE'
) A,
USER_INDEXES B
WHERE A.INDEX_NAME = B.INDEX_NAME
/SY. -
Query taking too much time with dates??
hello folks,
I am trying to pull some data using a date condition, and for some reason it takes too much time to return the data.
and trunc(al.activity_date) = TRUNC (SYSDATE, 'DD') - 1 -- if I use this it takes too much time
and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- if I use this it returns the data in a second. Why is that?
How do I get the previous day without using the hardcoded to_date('20101123 000000', 'YYYYMMDD HH24MISS'), if I need to retrieve it faster?

Presumably you've got an index on activity_date.
If you apply a function like TRUNC to activity_date, you can no longer use the index.
Post execution plans to verify.
and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
and al.activity_date < TRUNC (SYSDATE, 'DD') -
Jnlp-launch consumes too much time to operate
Hello all!
I want to discuss the UX we have using jnlp launch in JavaFX.
You can find jira-discussion here:
http://javafx-jira.kenai.com/browse/RT-14930
As you can see, it was recognized as major bug.
What's wrong with jnlp-start?
1. When the user launches a jnlp file, a red, confusing "Java 7..." window appears immediately.
While it can be slightly useful for promotion purposes, many of today's users will be disturbed. There is no "Java means cool" association today. Users wanted to launch a sexy new app, but all they see is a strange poster with "Java........."
Imagine a pop-up window with "Adobe Flash XX.XXX" for several seconds every time you try to launch a Flash game. Is that the UX you would expect?
2. I don't know whether this is an independent issue or tightly coupled with the previous one, but jnlp start really consumes a large amount of time.
On my machine with SSD most JavaFX apps can be launched from jar in less than 1 second.
But launching same app via jnlp take about 4-5 seconds total.
I suppose it is self-evident that launching an app whose .jar files are already on disk should take little more time than a direct jar launch.

Hello,
1) Should there even be a jnlp-start pop-up? Maybe a setting with no web-start pop-up or maybe the developer can provide own image to pop-up?
2) It does seem a little slow. I see that in the jnlp file there are references to external URL's.... could it be that these URL's are being accessed every time the JNLP is launched causing this delay? Is the performance of the jnlp startup tied to the performance of the external site and its network?
thanks
jose -
Why is query taking too much time ?
Hi gurus,
I have a table name test which has 100000 records in it,now the question i like to ask is.
When I query select * from test; there is no problem with response time, but when I fire the same query the next day it takes too much time, roughly 3 times as long. I would also like to tell you that everything is OK with respect to tuning; the DB is properly tuned and the network is tuned properly. What could be the hurting factor here?
take care
All expertise.

Here is a small test on my Windows PC.
oracle 9i Rel1.
Table : emp_test
number of records : 42k
set autot trace exp stat
15:29:13 jaffar@PRIMEDB> select * from emp_test;
41665 rows selected.
Elapsed: 00:00:02.06 ==> response time.
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
Statistics
0 recursive calls
0 db block gets
2951 consistent gets
178 physical reads
0 redo size
1268062 bytes sent via SQL*Net to client
31050 bytes received via SQL*Net from client
2779 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
41665 rows processed
15:29:40 jaffar@PRIMEDB> delete from emp_test where deptno = 10;
24998 rows deleted.
Elapsed: 00:00:10.06
15:31:19 jaffar@PRIMEDB> select * from emp_test;
16667 rows selected.
Elapsed: 00:00:00.09 ==> response time
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
Statistics
0 recursive calls
0 db block gets
1289 consistent gets
0 physical reads
0 redo size
218615 bytes sent via SQL*Net to client
12724 bytes received via SQL*Net from client
1113 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
16667 rows processed -
Query takes too much time in fetching last records.
Hi,
I am using Oracle 8.1 and trying to execute a SQL statement; it takes a few minutes to display records.
When trying to fetch all the records, it is fast up to some point and then takes much longer to fetch the last records.
Ex: if total records = 16336, then it fetches records quickly up to 16300 and takes approx. 500 sec to fetch the last 36 records.
Could you kindly let me know the reason?
I have copied the explain plan below for your reference. Please let me know if anything else is required.
SELECT STATEMENT, GOAL = RULE 4046 8 4048
NESTED LOOPS OUTER 4046 8 4048
NESTED LOOPS OUTER 4030 8 2952
FILTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER 4014 8 1728
NESTED LOOPS 3998 8 936
TABLE ACCESS BY INDEX ROWID IFSAPP CUSTOMER_ORDER_TAB 3966 8 440
INDEX RANGE SCAN IFSAPP CUSTOMER_ORDER_1_IX 108 8
TABLE ACCESS BY INDEX ROWID IFSAPP CUSTOMER_ORDER_LINE_TAB 4 30667 1901354
INDEX RANGE SCAN IFSAPP CUSTOMER_ORDER_LINE_PK 3 30667
TABLE ACCESS BY INDEX ROWID IFSAPP PWR_CONS_PARCEL_CONTENT_TAB 2 2000 198000
INDEX RANGE SCAN IFSAPP PWR_CONS_PARCEL_CONTENT_1_IDX 1 2000
TABLE ACCESS BY INDEX ROWID IFSAPP PWR_CONS_PARCEL_TAB 1 2000 222000
INDEX UNIQUE SCAN IFSAPP PWR_CONS_PARCEL_PK 2000
TABLE ACCESS BY INDEX ROWID IFSAPP CONSIGNMENT_PARCEL_TAB 1 2000 84000
INDEX UNIQUE SCAN IFSAPP CONSIGNMENT_PARCEL_PK 2000
TABLE ACCESS BY INDEX ROWID IFSAPP PWR_OBJECT_CONNECTION_TAB 2 20 2740
INDEX RANGE SCAN IFSAPP PWR_OBJECT_CONNECTION_IX1 1 20
Thanks.

We are using the PL/SQL Developer tool. The times mentioned in the post are approximate. Apologies for not mentioning these details in the previous thread.

Let it be approximate time, but how did you arrive at it? When the query fetches records, how did you determine that one portion was fetched in time x and the remainder in time y?
I would suggest this could be an issue with PL/SQL Developer (I have never used this tool myself). For performance testing I would suggest you use SQL*Plus; that is the best tool to test performance.
Exists clause in query causes too much time to get back results
We have the following long query, which takes too much time to respond. Is there any way to optimize it? We see that there is an EXISTS clause in the query which takes a long time; if so, please suggest an appropriate solution for removing the EXISTS clause.
SELECT
DISTINCT t0.JDOID,
t0.JDOCLASS,
t0.JDOVERSION,
t0.ACTIVITY,
t0.ADMINSTATE,
t0.CREATEDDATE,
t0.CREATEDUSER,
t0.DESCRIPTION,
t0.ENDDATE,
t0.ID,
t0.LASTMODIFIEDDATE,
t0.LASTMODIFIEDUSER,
t0.NAME,
t0.NOSPEC,
t0.OBJECTSTATE,
t0.OWNER,
t0.PARTITION,
t0.PERMISSIONS,
t0.SPECIFICATION,
t0.STARTDATE
FROM
SERVINV.RESOURCESPECIFICATION t2,
SERVINV.SPECIFICATION t3,
SERVINV.TELEPHONENUMBER t0,
SERVINV.TELEPHONENUMBERSPECIFICATION t1,
SERVINV.TN_CHAR t4
WHERE
t3.NAME = 'usTelephoneNumber'
AND
t0.ID LIKE '0010210%'
OR t0.ID LIKE '0010370%'
OR t0.ID LIKE '0010690%'
OR t0.ID LIKE '0010090%'
OR t0.ID LIKE '0010610%'
OR t0.ID LIKE '0010570%'
OR t0.ID LIKE '0010330%'
OR t0.ID LIKE '0010130%'
OR t0.ID LIKE '0010410%'
OR t0.ID LIKE '0010650%'
OR t0.ID LIKE '0010730%'
OR t0.ID LIKE '0010050%'
OR t0.ID LIKE '0010450%'
OR t0.ID LIKE '0010490%'
OR t0.ID LIKE '0010530%'
OR t0.ID LIKE '0010170%'
OR t0.ID LIKE '0010290%'
OR t0.ID LIKE '0010030%'
OR t0.ID LIKE '0010250%'
OR t0.ID LIKE '0010770%'
AND t4.NAME = 'tnType'
AND t4.VALUE = 'OWNED'
AND NOT EXISTS (
SELECT
t5.JDOID
FROM
SERVINV.TNCONSUMER t5
WHERE
t5.TELEPHONENUMBER = t0.JDOID
AND
t5.ADMINSTATE IS
NULL
OR t5.ADMINSTATE <> 'UNASSIGNED'
AND
t0.OBJECTSTATE = 'ACTIVE'
OR t0.OBJECTSTATE = 'INACTIVE'
OR t0.OBJECTSTATE IS
NULL
AND t0.JDOCLASS = 'com.metasolv.impl.entity.TelephoneNumberDAO'
AND t0.SPECIFICATION = t1.JDOID
AND t0.JDOID = t4.TELEPHONENUMBER
AND t1.JDOID = t2.JDOID
AND t2.JDOID = t3.JDOID
ORDER BY
t0.ID ASC;
Unable to post the explain plan as it is huge, exceeding 30,000 characters.
Pandu

Hi Pandu,
try something like this and check if it works:
replace the OR conditions with substr function:
AND substr(t0.ID,1,7) in
('0010210',
'0010370',
'0010690',
'0010090',
'0010610',
'0010570',
'0010330',
'0010130',
'0010410',
'0010650',
'0010730',
'0010050',
'0010450',
'0010490',
'0010530',
'0010170',
'0010290',
'0010030',
'0010250',
'0010770')

Confirm if this reduces the time... will check further after your confirmation.
Regards
Imran -
Query taking too much time!!!!!
Sorry for posting without format. I will post another question using your format blog.
Edited by: San on 22 Feb, 2011 3:18 PM

When your query takes too long ...
HOW TO: Post a SQL statement tuning request - template posting -
Insert query takes too much time
I have two select clauses as follows:
"select * from employee" This returns me 6000 rows.
& I have next clause as
"select * from employee where userid in(1,2,3,....,3000)"
This returns me 3000 rows.
Now I have to insert the results of the above queries into the same extended list view in Visual Basic. The insert for the first query takes 11 seconds, while the second takes 34 seconds. We have verified that this time is spent in the insert, not the select.
I want to know why the first query, even though it returns 6000 rows, takes less time than the second, which inserts only 3000 rows.
We are using Oracle 8.1.7
Thanks in advance.

The first query can do a straight dump of the table. The second select has to compare every userid to a hardcoded list of 3000 numbers, which takes quite a bit longer. Try rewriting it to
select * from employee where userid between 1 and 3000
It will run much faster than the other query. -
Select query taking too much time to fetch data from pool table a005
Dear all,
I am using two pool tables, A005 and A006, in my program, and a SELECT query to fetch data from these tables; an example is mentioned below.
select * from a005 into table t_a005 for all entries in it_itab
where vkorg in s_vkorg
and matnr in s_matnr
and aplp in s_aplp
and kmunh = it_itab-kmunh.
here i can't create index also as tables are pool table...If there is any solutions , than please help me for same..
Thanks.
It would be helpful to know which other fields are in the internal table you are using for the FOR ALL ENTRIES.
In general, you should code the fields in your SELECT's WHERE clause in the same order as they appear in the table's key. If you do not supply the top key field, the entire table is read; if it's large, that is going to take a lot of time. The more key fields from the beginning of the key that you can supply, the faster the retrieval.
Regards,
Brent -
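Brent's point about supplying the leading key fields is the standard leftmost-prefix rule for composite indexes, and it can be demonstrated outside SAP. The sketch below uses SQLite with a made-up table (not the real A005 structure): an index on (vkorg, matnr) is searched only when the leading column is constrained; constraining only a trailing column forces a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a005_demo (vkorg TEXT, matnr TEXT, kmunh TEXT)")
conn.execute("CREATE INDEX idx_demo ON a005_demo (vkorg, matnr)")

def plan(sql):
    # EXPLAIN QUERY PLAN returns rows whose last column describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

# Leading index column supplied: SQLite can search the index.
print(plan("SELECT * FROM a005_demo WHERE vkorg = '1000'"))

# Only a trailing column supplied: the whole table is scanned.
print(plan("SELECT * FROM a005_demo WHERE matnr = 'M-01'"))
```

The first plan mentions idx_demo; the second is a plain table scan, which is the "entire database is read" case Brent describes.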
Crystal Report Query taking too much time
Hi,
We are developing a report based on SQL Server 2008 in Crystal Reports. There are around 50,000 valid combinations in the database. Based on dynamic filters, we need to bring only a few records into the report. Since these filters are applied at report level, and Crystal Reports uses a microcube, the report takes more than 15 minutes to execute.
Is there any option to fetch records based on the filters applied at report level?
Regards
Baby
Hi,
First of all, thank you very much.
Because we have a cascading prompt, we had never thought of it this way.
Details:-
For our report we have 4 prompts.
1. category -> family -> brand (cascading, mandatory)
2. season (mandatory)
3. collection (mandatory)
4. owner (not mandatory)
Previously we set all these filters at record level.
Now we set season and collection at query level, and brand and owner at report level. The report queries only the selected season and collection.
Thanks once again.
Regards
Baby -
Batch Query takes too much time
Hi All,
A query as simple as
select * from ibt1 where batchnum = 'AAA'
takes around 180 seconds to return results. If I replace the batchnum condition with itemcode or anything other than batchnum, the same query returns results in some 5 seconds. The IBT1 table has nearly 300,000 (3 lakh) rows. Consequently, a slightly more complex query involving batchnum conditions gives a 'query execution time out' error.
What could be the reason? Any resolution?
Thanks,
Binita
Hello Binita,
You need some database tuning.
The IBT1 table has a composite index on ItemCode, BatchNum, LineNum, and WhsCode, but no index leading on BatchNum. It does have statistics, and statistics are useful for running queries (see [ms technet here|http://technet.microsoft.com/hu-hu/library/cc966419(en-us).aspx]). There is also a note about performance tuning databases, 783183.
There are two ways to "speed up" your query: indexes and statistics.
Statistics
See the statistics of IBT1 table:
exec SP_HELPSTATS 'IBT1','ALL'
In the result set you will see the statistics names: [IBT1_PRIMARY] plus some system-created statistics with names like _WA_Sys_XXXX. For BatchNum, you can execute the following statement:
DBCC SHOW_STATISTICS ('IBT1',_WA_Sys_00000002_4EE969DE)
--where _WA_Sys_00000002_4EE969DE is the name of the statistics of batchnum
Check the result set, in particular the "Updated", "Rows", and "Rows Sampled" columns. If necessary, you can update the statistics by:
-- update statistics on whole database
EXEC SP_UPDATESTATS
-- update all statistics of IBT1 table
UPDATE STATISTICS IBT1 WITH FULLSCAN
-- update a specific statistics of IBT1 table
UPDATE STATISTICS IBT1(_WA_Sys_00000002_4EE969DE) WITH FULLSCAN
Index defragmentation/reindex
If the index is not contiguous, seeking across the fragmented pages takes more time.
Execute your query in the Management Studio query window and turn on execution plans (Ctrl+L for the estimated plan, Ctrl+M to include the actual plan). This shows the estimated cost of each operation in the query.
select * from ibt1 where batchnum = 'AAA'
The result will be an [IBT1_PRIMARY] index scan. You can check the fragmentation of the primary index of the IBT1 table:
DBCC SHOWCONTIG ('IBT1') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS
In the result set, the ScanDensity column shows the density as a percentage: it is 100 if everything is contiguous; if it is less than 100, some fragmentation exists.
A reindex can be done with the following statement:
DBCC DBREINDEX ('IBT1','IBT1_PRIMARY',100) WITH NO_INFOMSGS
Take care: none of these statements should be executed during business hours.
I hope this helps.
Regards,
J.