Index slows down query execution
Hello everybody,
I have reordered the join conditions for this query:

select (first_name||' '||middle_name||' '||last_name) name, regn_no, age, gender,
       (select loc_name from locations where loc_id = location_code and loc_h_id = 'L6') district,
       person_id
from persons p, musers u
where reg_center_id = u.center_id
  and p.ipop = 'RG'
  and u.user_id = '8832'
  and u.eff_end_dt is null
  and p.CID = '1'
order by p.crt_dt desc

to this:

select (first_name||' '||middle_name||' '||last_name) name, regn_no, age, gender,
       (select loc_name from locations where loc_id = location_code and loc_h_id = 'L6') district,
       person_id
from musers u, persons p
where reg_center_id = u.center_id
  and u.user_id = '8832'
  and p.ipop = 'RG'
  and u.eff_end_dt is null
  and p.CID = '1'

because

select count(*) from persons p, musers u where reg_center_id = u.center_id and p.ipop = 'RG'

returns 13002, while

select count(*) from persons p, musers u where reg_center_id = u.center_id and u.user_id = '8832'

returns 1007.
Two questions came out of this exercise:
1. The reordering did not show any difference in CPU time. Why not?
2. I then created an index on persons(ipop):
   create index idx_ipop_persons on persons(ipop);
   Now the query takes more time to execute than it did before creating the index.
Please help me.
Thanks,
Aswin.
Please post the execution plan for your query.
I also need some details:
select count(*) from persons where ipop='RG';  -- how many rows does this return?
select distinct ipop from persons;             -- how many rows does this return?
Regards
RajaBaskar
Execution plan:

Execution Plan
0     SELECT STATEMENT Optimizer=ALL_ROWS (Cost=921 Card=176 Bytes=11088)
1  0    TABLE ACCESS (BY INDEX ROWID) OF 'LOCATIONS' (TABLE) (Cost=2 Card=1 Bytes=38)
2  1      INDEX (RANGE SCAN) OF 'IDX_LOCID_LOCHDR_LOCATIONS' (INDEX) (Cost=1 Card=1)
3  0    TABLE ACCESS (BY INDEX ROWID) OF 'PERSONS' (TABLE) (Cost=918 Card=176 Bytes=9152)
4  3      NESTED LOOPS (Cost=921 Card=176 Bytes=11088)
5  4        TABLE ACCESS (BY INDEX ROWID) OF 'MUSERS' (TABLE) (Cost=3 Card=1 Bytes=11)
6  5          INDEX (RANGE SCAN) OF 'PK_MUSERS' (INDEX (UNIQUE)) (Cost=2 Card=1)
7  4        INDEX (RANGE SCAN) OF 'IDX2_PERSONS' (INDEX) (Cost=1 Card=1464)
select count(*) from persons where ipop='RG';

  COUNT(*)
     12135

select distinct ipop from persons;

IPOP
RG
OP
IP
RF
CR
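Those two results explain the slowdown: ipop has only five distinct values, and 'RG' alone matches 12135 rows, so idx_ipop_persons is very unselective and the extra index probes cost more than a scan. A rough selectivity check of this kind is worth running before creating such an index; this is a sketch against the persons table described above:

```sql
-- Rough selectivity check: an index on a column only pays off
-- when a predicate on it returns a small fraction of the table.
SELECT COUNT(*)                                     AS total_rows,
       COUNT(DISTINCT ipop)                         AS distinct_values,
       SUM(CASE WHEN ipop = 'RG' THEN 1 ELSE 0 END) AS rg_rows
FROM   persons;

-- If rg_rows is close to total_rows (here 12135 RG rows), the
-- index is a poor choice for this predicate and dropping it is
-- a reasonable option:
-- DROP INDEX idx_ipop_persons;
```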
Similar Messages
-
PID control loop slows down during execution.
Hi, I am attaching LV8.6 code that I am currently using to control my engine experiment. I use PID control where the input signal is an rpm value that I measure using a counter. The TTL signal from the sensor is not clean, so I use an analog trigger to generate pulses on the counter, from which I measure the frequency and hence the rpm. The output is generated as an analog voltage on an output channel. The problem I am facing is that the loop runs really well when I start off, but it gradually keeps slowing down, and this greatly affects my ability to control engine speed. I am not sure why this is happening. I tried increasing the sample size and rate (so as to increase the buffer size), but this didn't have any effect on the speed. I am guessing it's a problem with the way I have my loops set up; any suggestions would be greatly appreciated.
Thanks, Shyam.
Attachments:
PID control loop.vi 33 KB

Hi all. I realised my mistake soon after posting, as is usually the case: the Create Channel VI for the analog output to the output channel was inside the loop and was slowing things down. When I moved it out, it fixed the problem!
Shyam. -
Adding indexes to a table is slowing down query performance.
I am running a query against a table which contains approximately 4 million records. I created 4 indexes on the table and noticed that the performance of my query drastically decreased. I went back and began removing and creating the indexes in different combinations. It turns out that whenever two of the four indexes are created, the performance worsens. The strange thing about this problem is that when I do an explain plan on the query, the cost is greater when the performance is better, and the cost is less when the performance is worse. Also, Oracle only uses one of the four indexes on the table for this query.
I'd like to understand what is going on with the Oracle optimizer so I can fix this problem.

Mark,
Below is the information you requested.
DATABASE: 10.2.0.3.0
QUERY:
select distinct object, object_access from betweb_objects
where instr(object_access,'RES\') = 0
and object_access_type = 'ADM'
and object in (select distinct object
from betweb_objects
where instr(object_access,'RES\') = 0
and object_access_type = 'NTK'
and object not like '%.%'
and substr(object_access,instr(object_access,'\')+1) in (select distinct substr(object_access,instr(object_access,'\')+1)
from betweb_objects
where object_access_type = 'NTK'
and instr(object_access,'RES\') = 0
minus
select distinct upper(id)
from uamp.ad_users
where status = 'A'))
TABLE:
BETWEB_OBJECTS
OBJECT VARCHAR2
OBJECT_ACCESS VARCHAR2
OBJECT_ACCESS_TYPE VARCHAR2
INDEXES ON BETWEB_OBJECTS:
BETWEB_OBJECTS_IDX1
OBJECT
BETWEB_OBJECTS_IDX2
OBJECT_ACCESS
BETWEB_OBJECTS_IDX3
OBJECT_ACCESS_TYPE
BETWEB_OBJECTS_IDX4
OBJECT_ACCESS
OBJECT_ACCESS_TYPE
TABLE:
AD_USERS
ID VARCHAR2
DOMAIN VARCHAR2
FNAME VARCHAR2
LNAME VARCHAR2
INITIALS VARCHAR2
TITLE VARCHAR2
DN VARCHAR2
COMPANY VARCHAR2
DEPARTMENT VARCHAR2
PHONE VARCHAR2
MANAGER VARCHAR2
STATUS VARCHAR2
DISPLAY_NAME VARCHAR2
EXPLAIN PLAN when performance is better:
SELECT STATEMENT Rows=13,414 Time=643,641 Cost=53,636,676 Bytes=6,948,452
HASH UNIQUE Rows=13,414 Time=643,641 Cost=53,636,676 Bytes=6,948,452
HASH JOIN Rows=694,646,835 Time=428 Cost=35,620 Bytes=359,827,060,530
VIEW VW_NSO_1 Rows=542 Time=42 Cost=3,491 Bytes=163,684
MINUS
SORT UNIQUE Rows=542 Bytes=9,756
INDEX FAST FULL SCAN BETWEB_OBJECTS_IDX4 Rows=26,427 Time=40 Cost=3,302 Bytes=475,686
SORT UNIQUE Rows=16,228 Bytes=178,508
TABLE ACCESS FULL AD_USERS Rows=16,360 Time=2 Cost=113 Bytes=179,960
HASH JOIN Rows=128,163,623 Time=322 Cost=26,805 Bytes=27,683,342,568
TABLE ACCESS FULL BETWEB_OBJECTS Rows=9,161 Time=154 Cost=12,805 Bytes=989,388
TABLE ACCESS FULL BETWEB_OBJECTS Rows=25,106 Time=154 Cost=12,822 Bytes=2,711,448
EXPLAIN PLAN when performance is worse:
SELECT STATEMENT Rows=13,414 Time=22,614 Cost=1,884,484 Bytes=2,897,424
HASH UNIQUE Rows=13,414 Time=22,614 Cost=1,884,484 Bytes=2,897,424
HASH JOIN Rows=128,163,623 Time=322 Cost=26,805 Bytes=27,683,342,568
TABLE ACCESS FULL BETWEB_OBJECTS Rows=9,161 Time=154 Cost=12,805 Bytes=989,388
TABLE ACCESS FULL BETWEB_OBJECTS Rows=25,106 Time=154 Cost=12,822 Bytes=2,711,448
MINUS
SORT UNIQUE NOSORT Rows=209 Time=40 Cost=3,305 Bytes=3,762
INDEX FAST FULL SCAN BETWEB_OBJECTS_IDX4 Rows=264 Time=40 Cost=3,304 Bytes=4,752
SORT UNIQUE NOSORT Rows=164 Time=2 Cost=115 Bytes=1,804
TABLE ACCESS FULL AD_USERS Rows=164 Time=2 Cost=114 Bytes=1,804 -
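One thing visible in the listings above is that IDX2 and IDX4 overlap (both lead with OBJECT_ACCESS), giving the optimizer two weakly selective choices for the same column, while the instr() filters cannot use any of these indexes at all. A hedged sketch of consolidating them, reusing the column names above (the new index name is made up):

```sql
-- Drop the overlapping indexes and keep one composite whose
-- leading column matches the only plain equality predicate.
DROP INDEX betweb_objects_idx2;
DROP INDEX betweb_objects_idx4;

CREATE INDEX betweb_objects_idx_at
    ON betweb_objects (object_access_type, object_access, object);
-- object_access and object are carried in the index so the
-- instr()/substr() expressions and the IN-subquery join can be
-- evaluated without visiting the table.
```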
Index Used during Query Execution
I am facing a strange problem and I am unable to find the solution. I am trying to optimise queries using explain plan, and for that I had analyzed my tables.
A query on the table was using an index earlier on the database. There is no change in the db (no new rows inserted, nothing), but now the same query is not using the index. What could be the problem? I used hints as well, but to no avail.
Please let me know what I should do.

Are you saying that the same query on the same database with the same data and the same statistics has a different plan? Are you sure that nothing is different? Different version of the database? Different statistics?
Can you post the query?
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
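Stale or missing optimizer statistics are the most common reason for a plan to change with no data change, so refreshing them is a quick first check. A sketch using the standard DBMS_STATS package; the schema and table names are placeholders:

```sql
-- Refresh statistics so the optimizer can cost the index correctly.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MYSCHEMA',  -- placeholder schema name
    tabname          => 'MYTABLE',   -- placeholder table name
    cascade          => TRUE,        -- gather on its indexes too
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/

-- Then re-check what plan the optimizer now picks:
EXPLAIN PLAN FOR SELECT /* the problem query goes here */ * FROM dual;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```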
Query With BETWEEN Clause Slows Down
hi,
I am experiencing a query slowdown when using a BETWEEN clause. Is there any solution for it? Here is the timing difference when I use equality predicates instead of BETWEEN.
SQL> select to_char(sysdate,'MM-DD-YYYY HH24:MI:SS') from dual;
TO_CHAR(SYSDATE,'MM
11-14-2005 15:44:03
SQL> SELECT COUNT(*) /*+ USE_NL(al2), USE_NL(al3), USE_NL(al4),
                          USE_NL(al5), USE_NL(al6) */
     FROM acct.TRANSACTION al1,
          acct.account_balance_history al2,
          acct.ACCOUNT al3,
          acct.journal al4,
          acct.TIME al5,
          acct.object_code al6
     WHERE ( al1.reference_num = al4.reference_num(+)
             AND al1.timekey = al5.timekey
             AND al5.timekey = al2.timekey
             AND al3.surrogate_acct_key = al2.surrogate_acct_key
             AND al3.surrogate_acct_key = al1.surrogate_acct_key
             AND al1.report_fy = al3.rpt_fy
             AND al6.object_code = al1.object_adj
           )
       AND ((al1.timekey = 20040701
             or al1.timekey = 20040801
             or al1.timekey = 20040901
             or al1.timekey = 20041001
             or al1.timekey = 20041101
             or al1.timekey = 20041201
             or al1.timekey = 20050101
             or al1.timekey = 20050201
             or al1.timekey = 20050301
             or al1.timekey = 20050401
             or al1.timekey = 20050501
             or al1.timekey = 20050601
             or al1.timekey = 20050701
             or al1.timekey = 20050801
             or al1.timekey = 20050901)
            AND al3.dept = '480');
COUNT(*)/*+USE_NL(AL2),USE_NL(AL3),USE_NL(AL4),USE_NL(AL5),USE_NL(AL6)*/
34245
SQL> select to_char(sysdate,'MM-DD-YYYY HH24:MI:SS') from dual;
TO_CHAR(SYSDATE,'MM
11-14-2005 15:44:24 -
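Two observations on the query above, offered as a sketch rather than a fix: the /*+ USE_NL ... */ text is placed after COUNT(*) instead of immediately after the SELECT keyword, so Oracle treats it as an ordinary comment and column alias (the output heading above shows this), not as a hint; and if timekey only ever holds first-of-month values, the 15-way OR list collapses to a single range predicate:

```sql
-- Hint moved directly after SELECT so it is actually honoured.
-- BETWEEN is equivalent to the OR list only under the assumption
-- that timekey holds nothing but first-of-month values.
SELECT /*+ USE_NL(al2) USE_NL(al3) USE_NL(al4) USE_NL(al5) USE_NL(al6) */
       COUNT(*)
FROM   acct.TRANSACTION al1,
       acct.account_balance_history al2,
       acct.ACCOUNT al3,
       acct.journal al4,
       acct.TIME al5,
       acct.object_code al6
WHERE  al1.reference_num = al4.reference_num(+)
  AND  al1.timekey = al5.timekey
  AND  al5.timekey = al2.timekey
  AND  al3.surrogate_acct_key = al2.surrogate_acct_key
  AND  al3.surrogate_acct_key = al1.surrogate_acct_key
  AND  al1.report_fy = al3.rpt_fy
  AND  al6.object_code = al1.object_adj
  AND  al1.timekey BETWEEN 20040701 AND 20050901
  AND  al3.dept = '480';
```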
I have some problem of performance due to the report generation.
If I do not use "on the fly reporting", performance is OK; each item of my loop takes the same time. But I have nothing in the report.
If I use "on the fly reporting", performance is bad; each item of my loop takes more and more time, which becomes unacceptable. On the other hand, my report is OK.
Is it possible to have a report without disturbing and slowing down the execution of the sequence?
Thanks a lot for your support.

Hey sfl,
The short answer is NO.
Basically, the on-the-fly report is updated every time a step that records results gets executed, so the whole report has to be kept in memory. How many steps do you have? If you have a lot of steps and your report is really long, then you will see performance drop dramatically.
So you have to choose one or the other unfortunately.
Regards,
jigg
CTA, CLA
teststandhelp.com
~Will work for kudos and/or BBQ~ -
Query slows down after second run for Index Organised Tables
We are trying to optimise our application which supports MSSQL to run with Oracle 9i for one of our customers.
We have created one database with normal tables and PK constraints/indexes and turned caching on for the tables, this seems to work well but no way as fast as MSSQL on similar hardware. The first run of query was slower but as the caching became more effective the query times came down.
So we investigated turning the tables into Index Organised Tables. We ran analyze on the new indexed tables and the response time of one of our more complex queries became akin to MSSQL. We ran the same query 5 seconds later and it took about 3 times longer to return the same data. Subsequent runs produced the same result.
We have run the same query on both styles of tables and also run showplans on the two queries, the regular table returns a cost of 190 and the IOT 340. This would point to the fact that we should use the regular tables for our queries but why did the IOT set return much faster for the first run after the analyze then slow down as if the stats were missing, but the execution plan remain the same.
Any help would be appreciated.
Darren Fitzgibbon

Could be a lot of reasons:
1. Is Oracle the only process that runs on this server? Could another process (e.g. MSSQL) have taken the resources during the second run?
2. Is this the only query that was running during your tests? Could there be another query that put load on the database while you were running the second test?
3. The autotrace statistics and explain plan would be useful for first and second run
(how to use autotrace:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/autotrac.htm#1018
4. If you run the same process again several times - how does the response time change?
Mike -
Hi,
I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
The query executes within a second from RapidSQL. The problem I'm facing is it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions, it executes properly.
The query:
SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
FROM MyTable
WHERE SomeDate= date_entered_by_user AND SomeString IN ("aaa","bbb")
GROUP BY aaa, bbb

I have an existing clustered index on the SomeDate and SomeString fields.
To check, I replaced the where clause with
WHERE SomeDate = date_entered_by_user AND SomeString = "aaa"
No improvement.
What could be the problem?
Thank you,
Lobo

It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up the execution time inside the RDBMS is to streamline the internal operations inside the interpreter.
When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement. These things take time. First, it checks to make sure there are no syntax errors in the SQL statement. Second, it checks to make sure all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time out of the three. But, they all take time. The speed of these processes may vary from product to product.
When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created, it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT|INSERT|UPDATE|DELETE statements and create the plan over and over again.
The stored execution plan will enable the engine to execute the query faster.
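As a concrete illustration of that suggestion, the GROUP BY query could be wrapped in a procedure. This is a hypothetical sketch in a SQL Server/Sybase-style dialect (the thread mentions RapidSQL and a clustered index); the procedure name, parameter name, and column names are assumptions based on the pseudo-names above:

```sql
-- Hypothetical sketch: the plan is compiled once at creation
-- time and reused on each call from the application.
CREATE PROCEDURE get_daily_sums
    @entered_date datetime
AS
    SELECT aaa, bbb, SUM(ccc) AS sum_ccc, SUM(ddd) AS sum_ddd
    FROM   MyTable
    WHERE  SomeDate   = @entered_date
      AND  SomeString IN ('aaa', 'bbb')
    GROUP BY aaa, bbb
```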
-
Query slows down after upgrade
Hi,
After an upgrade from Oracle 10 to Oracle 11, a simple query is slowing down a lot.
Same amount of records, same table; only the execution cost/time has increased.
Can someone give me some feedback over this problem?
Can I check some things to look into the problem?
Thx in advance.
Greetings
query:
Select nvl(sum(a.BM_OPENSTAAND_DEBET- a.BM_OPENSTAAND_CREDIT),0)
from bh.bh123gh a
where
a.F123_AR_NR>='4400000000' and
a.F123_AR_NR<='4404000000' and
a.F123_KL_LEV_AR_NR='0631001000' and
a.SRT_REK=2 and
a.F123_BKJR>=0000 and
a.F123_BKJR<=2011 and
a.F123_FIRMA=2
explain plans
oracle 11
cost 1,792
Bytes: 38
Cardinality: 1
oracle 10
cost 1,594
Bytes: 38
Cardinality: 1

> After upgrade from oracle 10 to oracle 11 a simple query is slowing down a lot.
> Same amount of records, same table; only the execution cost/time is increased.
> Can someone give me some feedback on this problem?
> Can I check some things to look into the problem?
> Thx in advance.
Is it just one query or all queries are behaving strangely? If it's just one query, do a trace and see where it's slowing down.
In the meanwhile, you can also modify the optimizer_features_enable parameter at session level and check the explain plan.
Alter session set optimizer_features_enable='10.2.0.5';
Now test the query.
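The per-session trace suggested above can be enabled with the standard extended SQL trace event; a sketch (level 12 includes wait events and bind values):

```sql
-- Tag the trace file so it is easy to find, then enable tracing.
ALTER SESSION SET tracefile_identifier = 'slow_query';
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the slow query here ...

ALTER SESSION SET events '10046 trace name context off';

-- Format the raw trace from the server's trace directory:
--   tkprof <tracefile>.trc slow_query.txt sys=no sort=exeela
```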
By the way, if you are just worried about the COST factor, you can just ignore it. COST by itself has no real meaning. You will only have to look at the response time.
Regards -
Slow, then fast execution of same query
I have a test query:
dbxml> query 'count(collection()/Quots/QuotVerificationInstance/@lastModifiedDate[.="2009-09-07"])'
With this index:
Index: node-attribute-substring-string for node {}:lastModifiedDate
The first time I run the query in the shell, it takes a while:
Query - Finished query execution, time taken = 29656ms
The second time I run the query, it's a more acceptable speed:
Query - Finished query execution, time taken = 735ms
Can anyone explain the difference in search speed? The slow, fast pattern is pretty consistent (I've tried fresh shell sessions a few times and get the same initial slow search followed by the quicker one.)
What I'm trying to do is allow on-demand generation of a table showing the number of records modified per day and per month. The test date I'm searching for above is the date of insertion of records so it returns a comparatively large result set at the moment, but 29-odd seconds is way too slow to make the query comfortable. I've added a substring index because sometimes I want a partial date search (e.g. when counting records modified per month). I remember reading that a substring index doubles as an equality index. The date is stored simply as a string, not a date type.
What am I missing? I'd like to understand the difference in execution times, but basically I need it to run more quickly for simultaneous date queries. I'd be grateful for any pointers.
Tim

Hi Viyacheslav,
I think it's perhaps too much trouble for us both for me to try and share the container -- my managers would want assurances about copyright (the data is part of a previously published dictionary), and I'd have to build a smaller one as the current file is > 500MB. But thanks for your interest.
My container setup is pretty simple. It's around 45000 small XML files added to a node container for a read-only database (no environment). I have 16 indices supporting a range of different searches and a couple of reports, and the only index relevant to the @lastModifiedDate search is the node-attribute-substring-string index mentioned above (when I tried other edge or equality indexes I deleted the substring index).
The query script I intended would be instantiated purely to run the one query and then die each time, so I don't see there being any chance of getting past an initial slow query if that's the behaviour. Because that first query finds around 37000 matches among the 45000 dates, it's understandable that it would take a while, but I'm not sure why it speeds up in the shell from the second time round. An obvious suggestion would be that some form of optimisation is taking place in the shell, but I don't understand the workings of DBXML well enough to guess how. Even if this is the case, knowing that it happens wouldn't help me speed up my query due to the life cycle of the script.
Perhaps I should just generate these stats on a daily basis rather than generating them on demand, or else try a different approach to getting the counts. But thanks again for the comments.
Tim -
Invisible index getting accessed during query execution
Hello Guys,
There is a strange problem , I am encountering . I am working on tuning the performance of one of the concurrent request in our 11i ERP System having database 11.1.0.7
I had enabled an oradebug trace for the request and generated tkprof output from it. For the query below, which is taking time, I found that in the trace the wait event is "db file sequential read" on the PO_LINES_N10 index, but in the generated tkprof, for the same query, a full table scan of PO_LINES_ALL is happening, and that table is 600 MB in size.
Below is the query ,
===============
UPDATE PO_LINES_ALL A
SET A.VENDOR_PRODUCT_NUM = (SELECT SUPPLIER_ITEM FROM APPS.IRPO_IN_BPAUPDATE_TMP C WHERE BATCH_ID = :B1 AND PROCESSED_FLAG = 'P' AND ACTION = 'UPDATE' AND C.LINE_ID =A.PO_LINE_ID AND ROWNUM = 1 AND SUPPLIER_ITEM IS NOT NULL),
LAST_UPDATE_DATE = SYSDATE
===============
Index PO_LINES_N10 is on the column LAST_UPDATE_DATE; logically, for such a query the index should not be used, as that indexed column is not in the select/where clause.
Also, why is there a discrepancy between the tkprof and the trace generated for the same query?
So I decided to make the index PO_LINES_N10 invisible, but that index is still being accessed in the trace file.
I have also checked the parameter below; it is false, so the optimizer should not make use of invisible indexes during query execution.
SQL> show parameter invisible
NAME TYPE VALUE
optimizer_use_invisible_indexes boolean FALSE
Any clue regarding this .
Thanks and Regards,
Prasad
Edited by: Prasad on Jun 15, 2011 4:39 AM

Hi Dom,
Sorry for the late reply , but yes , an update statement is trying to update that index even if it's invisible.
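That behaviour is expected: making an index invisible only hides it from the optimizer, while every DML statement still maintains its entries, which is why it keeps showing up in the trace of an UPDATE. A minimal sketch of the relevant commands:

```sql
-- Hide the index from the optimizer; DML still maintains it.
ALTER INDEX po_lines_n10 INVISIBLE;

-- Confirm its state:
SELECT index_name, visibility
FROM   user_indexes
WHERE  index_name = 'PO_LINES_N10';

-- Only dropping it removes the maintenance cost entirely:
-- DROP INDEX po_lines_n10;
```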
Also, it seems performance issue started appearing when this index got created , so now I have dropped that index in test environment and ran the concurrent program again with oradebug level 12 trace enabled and found bit improvement in the results .
With index dropped -> 24 records/min got processed
With index -> 14 records/min got processed
so , I am looking forward without this index in the production too but before that, I have concerns regarding tkprof output. Can we further improve the performance of this query.
Please find the below tkprof with and without index .
====================
Sql statement
====================
UPDATE PO_LINES_ALL A SET A.VENDOR_PRODUCT_NUM = (SELECT SUPPLIER_ITEM FROM
APPS.IRPO_IN_BPAUPDATE_TMP C
WHERE
BATCH_ID = :B1 AND PROCESSED_FLAG = 'P' AND ACTION = 'UPDATE' AND C.LINE_ID =
A.PO_LINE_ID AND ROWNUM = 1 AND SUPPLIER_ITEM IS NOT NULL),
LAST_UPDATE_DATE = SYSDATE
=========================
TKPROF with Index for the above query ( processed 643 records )
=========================
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 2499.64 2511.99 98158 645561632 13105579 1812777
Fetch 0 0.00 0.00 0 0 0 0
total 2 2499.64 2511.99 98158 645561632 13105579 1812777
=============================
TKPROF without Index for the above query ( processed 4452 records )
=============================
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 10746.96 10544.13 84125 3079376156 1870058 1816289
Fetch 0 0.00 0.00 0 0 0 0
total 2 10746.96 10544.13 84125 3079376156 1870058 1816289
=============================
Explain plan which is same in both the cases
=============================
Rows Row Source Operation
0 UPDATE PO_LINES_ALL (cr=3079377095 pr=84127 pw=0 time=0 us)
1816289 TABLE ACCESS FULL PO_LINES_ALL (cr=83175 pr=83026 pw=0 time=117690 us cost=11151 size=29060624 card=1816289)
0 COUNT STOPKEY (cr=3079292918 pr=20 pw=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID IRPO_IN_BPAUPDATE_TMP (cr=3079292918 pr=20 pw=0 time=0 us cost=4 size=22 card=1)
180368800 INDEX RANGE SCAN IRPO_IN_BPAUPDATE_N1 (cr=51539155 pr=3 pw=0 time=16090005 us cost=3 size=0 card=1)(object id 372721)
There is a large increase in CPU, so I would like to tune this query further. I ran the SQL Tuning task but did not get any recommendations for it.
In the trace I got the "db file scattered read" wait event for the table PO_LINES_ALL, but the disk reads are not high, so I am not sure about the performance improvement even if I pin this table in memory (it is 620 MB in size; is it even feasible to pin, with the SGA at 5 GB and sga_target set?).
I have already gathered stats for the tables concerned and rebuilt the indexes.
Is there anything else that can be done to tune this query further and bring down the CPU and execution time?
Thanks a lot for your reply.
Thanks and Regards,
Prasad
Edited by: Prasad on Jun 28, 2011 3:52 AM
-
Query slow down when added a where clause
I have a procedure that has a performance issue, so I copied part of the query and ran it in SQL*Plus to try to spot which join causes the problem, but I got a result I cannot figure out. The query is like the one below:
Select Count(a.ID) From TableA a
-- INNER JOIN other tables
WHERE a.TypeID = 2;
TableA has 140000 records. When the where clause is not added, the count returns quite quickly, but if I add the where clause, the query slows down and seems never to return, so I have to kill my SQL*Plus session. TableA has an index on TypeID, and TypeID is a number type. When TableA had 3000 records the procedure returned very quickly, but it slows down and hangs now that TableA contains 140000 records. Any idea why this slows down the query?
Also, the TypeID is a foreign key to another table (TableAType), so the query above can written as :
Select Count(a.ID) From TableA a
-- INNER JOIN other tables
INNER JOIN TableAType atype ON a.TypeID = atype.ID
WHERE atype.Name = 'typename';
The TableAType table is a small table containing fewer than 100 records. In this case, would the second query be more efficient than the first?
Any suggestions are welcome, thanks in advance...
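For comparison, the two forms can be timed side by side, optionally forcing the TypeID index in the first; a sketch, where the index name is a made-up placeholder:

```sql
-- Force the (hypothetical) index on TypeID for comparison.
SELECT /*+ INDEX(a idx_tablea_typeid) */ COUNT(a.ID)
FROM   TableA a
WHERE  a.TypeID = 2;

-- The join form: TableAType has under 100 rows, so the extra
-- lookup is cheap, and the plans should be nearly identical
-- once TypeID is resolved from the name.
SELECT COUNT(a.ID)
FROM   TableA a
       INNER JOIN TableAType atype ON a.TypeID = atype.ID
WHERE  atype.Name = 'typename';
```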
Message was edited by: user500168

TableA now has 230000 records and 28000 of them have TypeID 2.
I haven't used the hint yet, but thank you for your reply, which led me to run a query to check how many records in TableA have TypeID 2. Doing this seems pretty fast. So I began with the select count for TableA only and gradually added tables to the join, and the query stays pretty fast as long as TableA is the first table in the from list.
Before, in my query, TableA was the second table in the join; there was another table (also large, but not as large as TableA) before it. I think this is why it ran slowly before. I was not at work yesterday, so the query given in my post is based on rough memory, and I forgot to mention that another table was joined before TableA; really sorry about that.
I think I learned a lesson here: the largest table needs to be at the beginning of the select statement...
Thank you very much everyone. -
Hi Experts,
I have a problem with query execution: the query below is taking a long time to run.
The query is like this:
SELECT gcc_po.segment1 bc,
gcc_po.segment2 rc,
gcc_po.segment3 dept,
gcc_po.segment4 ACCOUNT,
gcc_po.segment5 product,
gcc_po.segment6 project,
gcc_po.segment7 tbd,
SUBSTR (pv.vendor_name, 1, 50) vendor_name,
pv.vendor_id,
NVL (ph.closed_code, 'OPEN') status,
ph.cancel_flag,
ph.vendor_site_id,
ph.segment1 po_number,
ph.creation_date po_creation_date,
pv.segment1 supplier_number,
pvsa.vendor_site_code,
ph.currency_code po_curr_code,
ph.blanket_total_amount,
NVL (ph.rate, 1) po_rate,
SUM (DECODE (:p_currency,
             'FUNCTIONAL', DECODE (:p_func_curr_code,
                                   ph.currency_code, NVL (pd.amount_billed, 0),
                                   NVL (pd.amount_billed, 0) * NVL (ph.rate, 1)),
             NVL (pd.amount_billed, 0))) amt_vouchered,
ph.po_header_id poheaderid,
INITCAP (ph.attribute1) po_type,
DECODE (ph.attribute8,
        'ARIBA', DECODE (ph.attribute4,
                         NULL, ph.attribute4,
                         ppf.full_name),
        ph.attribute4) originator,
ph.attribute8 phv_attribute8,
UPPER (ph.attribute4) phv_attribute4
FROM po_headers ph,
po_vendors pv,
po_vendor_sites pvsa,
po_distributions pd,
gl_code_combinations gcc_po,
per_all_people_f ppf
WHERE ph.segment1 BETWEEN '001002' AND 'IND900714'
AND ph.vendor_id = pv.vendor_id(+)
AND ph.vendor_site_id = pvsa.vendor_site_id
AND ph.po_header_id = pd.po_header_id
AND gcc_po.code_combination_id = pd.code_combination_id
AND pv.vendor_id = pvsa.vendor_id
AND UPPER (ph.attribute4) = ppf.attribute2(+) -- no index on attributes
AND ph.creation_date BETWEEN ppf.effective_start_date(+) AND ppf.effective_end_date(+)
GROUP BY gcc_po.segment1,-- no index on segments
gcc_po.segment2,
gcc_po.segment3,
gcc_po.segment4,
gcc_po.segment5,
gcc_po.segment6,
gcc_po.segment7,
SUBSTR (pv.vendor_name, 1, 50),
pv.vendor_id,
NVL (ph.closed_code, 'OPEN'),
ph.cancel_flag,
ph.vendor_site_id,
ph.segment1,
ph.creation_date,
pvsa.attribute7,
pv.segment1,
pvsa.vendor_site_code,
ph.currency_code,
ph.blanket_total_amount,
NVL (ph.rate, 1),
ph.po_header_id,
INITCAP (ph.attribute1),
DECODE (ph.attribute8,
        'ARIBA', DECODE (ph.attribute4,
                         NULL, ph.attribute4,
                         ppf.full_name),
        ph.attribute4),
ph.attribute8,
ph.attribute4

Without the SUM function and GROUP BY, execution is fast; when I use the SUM function and GROUP BY, it takes nearly 45 minutes.
Explain plan for this:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=ALL_ROWS 1 6364
HASH GROUP BY 1 272 6364
NESTED LOOPS OUTER 1 272 6363
NESTED LOOPS 1 232 6360
NESTED LOOPS 1 192 6358
NESTED LOOPS 1 171 6341
HASH JOIN 1 K 100 K 2455
TABLE ACCESS FULL PO_VENDOR_SITES_ALL 1 K 36 K 1683
TABLE ACCESS FULL PO_VENDORS 56 K 3 M 770
TABLE ACCESS BY INDEX ROWID PO_HEADERS_ALL 1 82 53
INDEX RANGE SCAN PO_HEADERS_N1 69 2
TABLE ACCESS BY INDEX ROWID PO_DISTRIBUTIONS_ALL 1 21 17
INDEX RANGE SCAN PO_DISTRIBUTIONS_N3 76 2
TABLE ACCESS BY INDEX ROWID GL_CODE_COMBINATIONS 1 40 2
INDEX UNIQUE SCAN GL_CODE_COMBINATIONS_U1 1 1
TABLE ACCESS BY INDEX ROWID PER_ALL_PEOPLE_F 1 40 3
INDEX RANGE SCAN PER_PEOPLE_F_ATT2 2 1

Please give me a solution for this. Which hints should I use in this query?
Thanks in advance.

I have a feeling this will lead us nowhere, but let me try one last time.
Tuning a query is not about trying out all available index hints in the hope that one of them makes the query fly. It is about diagnosing the query: see what it does and see where the time is being spent. Only once you know where the time is spent can you effectively do something about it (if it is not tuned already).
So please read about explain plan, SQL*Trace and tkprof, and start diagnosing where your problem is.
Regards,
Rob. -
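The diagnosis Rob describes can start right in SQL*Plus; a sketch of the explain plan step (the statement shown is a placeholder for the slow query):

```sql
-- Capture and display the optimizer's plan for the statement.
EXPLAIN PLAN FOR
  SELECT /* paste the slow query here */ * FROM dual;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- For actual time spent, enable SQL trace before running the
-- statement, then format the resulting trace file with tkprof:
ALTER SESSION SET sql_trace = TRUE;
```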
Query Designer slows down after working some time with it
Hi all,
the new BEx Query Designer slows down after working with it for some time. The longer it remains open, the slower it gets; formula editing in particular slows down extremely.
Did any of you encounter the same problem? Do you have an idea how to fix this? To me it seems as if the Designer allocates more and more RAM and does not free it up.
My version: BI AddOn 7.X, Support Package 13, Revision 467
Kind regards,
Philipp

I have seen a similar problem on one of my devices, the Samsung A-920. Every time the system popped up the 'Will you allow Network Access' screen, the input from all keypresses from then on was strangely delayed. It looked like the problem was connected with the switching between my app and the system dialog form. I tried for many long hours/days to fix this, but just ended up hacking my phone to remove the security questions. After removing the security questions my problem went away.
I don't know if it's an option in your application, but is it possible to do everything using just one Canvas, and not switch between displayables? You may want to do an experiment using a single displayable Canvas, and just change how it draws. I know this will make user input much more complicated, but you may be able to avoid the input delays.
In my case, I think the device wasn't properly releasing / un-registering the input handling from the previous dialogs, so all keypresses still went through the non-current network-security dialog before reaching my app. -
File dialog box slow down execution
Dear all,
I am using LabVIEW 8.2.1 with Windows XP.
I have a program that allows the user to select any file or folder via a file dialog box.
For a reason I don't understand, when this dialog box is displayed, the execution of other parallel while loops slows down.
Even putting a File Path control on my LabVIEW front panel without any code and pressing the "Browse" file button, as shown in the attached picture_1, brings on the slow-down of the while loop execution.
Can anybody explain why this problem appears?
By switching off the LabVIEW Tools option "Use native file dialogs" (picture_2), the problem disappears. Unfortunately that kind of old dialog box is not practical...
If anybody has an idea, it would help me.
Thanks.
Solved!
Go to Solution.
Attachments:
picture_1.JPG 16 KB
picture_2.JPG 71 KB

__KB__ wrote:
Hello,
When the File dialog box is running, other while loop time execution goes down.
Thanks.
This you have already mentioned in your first post... now my question is how you've actually figured it out... or can you share your code here???
I am not allergic to Kudos, in fact I love Kudos.
Make your LabVIEW experience more CONVENIENT.