Adding a column makes a big difference with a simple query
Hello All,
The oracle environment I am working with is
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
and
PL/SQL Release 9.2.0.8.0 - Production
I have a question about the following two queries. The only difference is that the first one selects an extra column. That extra column makes a big difference in execution time: one runs in 4 seconds and the other in 5 minutes. Could anyone help me understand why? Thank you for your help! -Susan
4 Seconds
================================================================
select distinct B.A_CA.CA_ID, B.A_CA.CE_ID
from B.A_CA, B.CA_LIST_DTLS
where UPPER(deleted(B.A_CA.CA_ID)) = 'N'
and UPPER(B.CA_LIST_DTLS.CA_LIST_NAME) = 'SW'
and B.A_CA.CA_ID = B.CA_LIST_DTLS.CA_ID ;
5 minutes
==============================================================
select distinct B.A_CA.CA_ID
from B.A_CA, B.CA_LIST_DTLS
where UPPER(deleted(B.A_CA.CA_ID)) = 'N'
and UPPER(B.CA_LIST_DTLS.CA_LIST_NAME) = 'SW'
and B.A_CA.CA_ID = B.CA_LIST_DTLS.CA_ID ;
Below is the function
=============================================================
FUNCTION deleted ( in_ca_id IN B.A_CA.CA_ID%TYPE ) RETURN VARCHAR2 IS
  n_result VARCHAR2(20);
  cnt      NUMBER := 0;
  CURSOR cur IS
    SELECT COUNT(*)
    FROM B.A_CA
    WHERE B.A_CA.CA_ID = in_ca_id
      AND (B.A_CA.COMPLAINT_FLAG = 'Y'
           OR B.A_CA.DELETE_FLAG = 'Y');
BEGIN
  OPEN cur;
  FETCH cur INTO cnt;
  CLOSE cur;
  IF cnt > 0 THEN
    n_result := 'Y';
  ELSE
    n_result := 'N';
  END IF;
  RETURN n_result;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN 'N/A';
END;
Edited by: 849989 on Apr 5, 2011 10:25 AM
Explain plan for two queries:
The fast one:
Plan
SELECT STATEMENT CHOOSE Cost: 4,223 Bytes: 116,032 Cardinality: 2,072
5 SORT UNIQUE Cost: 4,223 Bytes: 116,032 Cardinality: 2,072
4 NESTED LOOPS Cost: 4,212 Bytes: 116,032 Cardinality: 2,072
1 TABLE ACCESS FULL B.CA_LIST_DTLS Cost: 68 Bytes: 87,024 Cardinality: 2,072
3 TABLE ACCESS BY INDEX ROWID B.A_CA Cost: 2 Bytes: 14 Cardinality: 1
2 INDEX UNIQUE SCAN UNIQUE B.A_CA_PK Cost: 1 Cardinality: 1
The slow one:
Plan
SELECT STATEMENT CHOOSE Cost: 305 Bytes: 111,888 Cardinality: 2,072
4 SORT UNIQUE Cost: 305 Bytes: 111,888 Cardinality: 2,072
3 HASH JOIN Cost: 295 Bytes: 111,888 Cardinality: 2,072
1 TABLE ACCESS FULL B.CA_LIST_DTLS Cost: 68 Bytes: 87,024 Cardinality: 2,072
2 INDEX FAST FULL SCAN UNIQUE B.A_CA_PK Cost: 225 Bytes: 131,928 Cardinality: 10,994
Edited by: 849989 on Apr 5, 2011 11:49 AM
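As an aside on the question above: the gap usually comes down to how many times the PL/SQL function is evaluated under each plan. One common workaround is to inline the function's predicate into the query so no PL/SQL call is made at all. A sketch, assuming the flags live on B.A_CA exactly as in the function body (NVL is used so NULL flags count as "not deleted", matching the function's COUNT logic):

```sql
-- Hypothetical rewrite: replace the row-by-row deleted() call with its
-- underlying predicate so the optimizer evaluates it as plain SQL.
SELECT DISTINCT a.CA_ID, a.CE_ID
FROM   B.A_CA         a,
       B.CA_LIST_DTLS d
WHERE  a.CA_ID = d.CA_ID
AND    UPPER(d.CA_LIST_NAME) = 'SW'
AND    NVL(a.COMPLAINT_FLAG, 'N') <> 'Y'   -- function returns 'N' only
AND    NVL(a.DELETE_FLAG, 'N')    <> 'Y';  -- when neither flag is 'Y'
```

With the predicate inlined, the cost of the query no longer depends on how many rows the chosen plan feeds into a function call.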
Similar Messages
-
Execution time of the same program makes a big difference
Hello all,
The execution time of the same program in the PRD system and the QAS system differs hugely.
The difference in data is not much (the system copy runs on a regular schedule), and the system environments are exactly the same. However, while the program takes only 2-3 seconds in QAS, it takes 7-8 minutes in PRD.
It only happens when trying to join some tables together.
I've checked the execution plans of the same query; they are different:
QAS:
SQL Statement
SELECT
T_00.RANL , T_00.XALLB , T_00.REPKE , T_00.REWHR , T_00.HKONT , T_00.ZTMNAIBRX , T_00.GSART ,
T_00.ZTMHOYMNX , T_00.ZTMSBKBNX , T_00.ZTMSHDAYZ , T_00.ZTMMBHZKP , T_01.BAL_SH_CUR ,
T_01.ZTMSIHONP , T_02.SECURITY_ID , T_02.SECURITY_ACCOUNT
FROM
ZTM0108 T_00, ZTM0135 T_01, TRACV_POSCONTEXT T_02
WHERE
T_00.MANDT = '350' AND T_00.BUKRS = 'MC51' AND T_00.ZTMMCSNGX = '200806' AND
T_02.SECURITY_ACCOUNT = '0001' AND T_01.MANDT = '350' AND T_01.BUKRS = T_00.BUKRS AND
T_01.ZTMMCSNGX = T_00.ZTMMCSNGX AND T_01.PARTNER = T_00.REPKE AND T_02.MANDT = '350' AND
T_02.SECURITY_ID = T_00.RANL
Execution Plan
SELECT STATEMENT ( Estimated Costs = 666 , Estimated #Rows = 72 )
--- 12 HASH JOIN
( Estim. Costs = 666 , Estim. #Rows = 72 )
Estim. CPU-Costs = 37,505,220 Estim. IO-Costs = 663
Access Predicates
-- 9 HASH JOIN
( Estim. Costs = 268 , Estim. #Rows = 51 )
Estim. CPU-Costs = 18,679,663 Estim. IO-Costs = 267
Access Predicates
-- 6 NESTED LOOPS
( Estim. Costs = 25 , Estim. #Rows = 38 )
Estim. CPU-Costs = 264,164 Estim. IO-Costs = 25
-- 4 NESTED LOOPS
( Estim. Costs = 25 , Estim. #Rows = 27 )
Estim. CPU-Costs = 258,494 Estim. IO-Costs = 25
-- 2 TABLE ACCESS BY INDEX ROWID DIFT_POS_IDENT
( Estim. Costs = 25 , Estim. #Rows = 24 )
Estim. CPU-Costs = 253,454 Estim. IO-Costs = 25
Filter Predicates
1 INDEX RANGE SCAN DIFT_POS_IDENT~SA
( Estim. Costs = 1 , Estim. #Rows = 554 )
Search Columns: 1
Estim. CPU-Costs = 29,801 Estim. IO-Costs = 1
Access Predicates
3 INDEX RANGE SCAN TRACT_POSCONTEXTID
Search Columns: 2
Estim. CPU-Costs = 210 Estim. IO-Costs = 0
Access Predicates
5 INDEX UNIQUE SCAN TZPA~0
Search Columns: 2
Estim. CPU-Costs = 210 Estim. IO-Costs = 0
Access Predicates
--- 8 TABLE ACCESS BY INDEX ROWID ZTM0108
( Estim. Costs = 242 , Estim. #Rows = 2,540 )
Estim. CPU-Costs = 10,811,361 Estim. IO-Costs = 241
7 INDEX RANGE SCAN ZTM0108~0
( Estim. Costs = 207 , Estim. #Rows = 2,540 )
Search Columns: 3
Estim. CPU-Costs = 9,790,330 Estim. IO-Costs = 207
Access Predicates Filter Predicates
--- 11 TABLE ACCESS BY INDEX ROWID ZTM0135
( Estim. Costs = 397 , Estim. #Rows = 2,380 )
Estim. CPU-Costs = 11,235,469 Estim. IO-Costs = 396
10 INDEX RANGE SCAN ZTM0135~0
( Estim. Costs = 323 , Estim. #Rows = 2,380 )
Search Columns: 3
Estim. CPU-Costs = 10,288,477 Estim. IO-Costs = 323
Access Predicates Filter Predicates
PRD:
Execution Plan
SELECT STATEMENT ( Estimated Costs = 209 , Estimated #Rows = 1 )
--- 12 NESTED LOOPS
( Estim. Costs = 208 , Estim. #Rows = 1 )
Estim. CPU-Costs = 18.996.864 Estim. IO-Costs = 207
-- 9 NESTED LOOPS
( Estim. Costs = 120 , Estim. #Rows = 1 )
Estim. CPU-Costs = 10.171.528 Estim. IO-Costs = 119
-- 6 NESTED LOOPS
Estim. CPU-Costs = 27.634 Estim. IO-Costs = 0
-- 4 NESTED LOOPS
Estim. CPU-Costs = 27.424 Estim. IO-Costs = 0
1 INDEX RANGE SCAN TZPA~0
Search Columns: 1
Estim. CPU-Costs = 5.584 Estim. IO-Costs = 0
Access Predicates
--- 3 TABLE ACCESS BY INDEX ROWID DIFT_POS_IDENT
Estim. CPU-Costs = 210 Estim. IO-Costs = 0
Filter Predicates
2 INDEX RANGE SCAN DIFT_POS_IDENT~PT
Search Columns: 1
Estim. CPU-Costs = 210 Estim. IO-Costs = 0
Access Predicates
5 INDEX RANGE SCAN TRACT_POSCONTEXTID
Search Columns: 2
Estim. CPU-Costs = 210 Estim. IO-Costs = 0
Access Predicates
--- 8 TABLE ACCESS BY INDEX ROWID ZTM0108
( Estim. Costs = 120 , Estim. #Rows = 1 )
Estim. CPU-Costs = 10.143.893 Estim. IO-Costs = 119
7 INDEX RANGE SCAN ZTM0108~0
( Estim. Costs = 119 , Estim. #Rows = 1 )
Search Columns: 4
Estim. CPU-Costs = 10.142.167 Estim. IO-Costs = 119
Access Predicates Filter Predicates
--- 11 TABLE ACCESS BY INDEX ROWID ZTM0135
( Estim. Costs = 89 , Estim. #Rows = 1 )
Estim. CPU-Costs = 8.825.337 Estim. IO-Costs = 88
10 INDEX RANGE SCAN ZTM0135~0
( Estim. Costs = 88 , Estim. #Rows = 1 )
Search Columns: 4
Estim. CPU-Costs = 8.823.742 Estim. IO-Costs = 88
Access Predicates Filter Predicates
Could anyone tell me the reason? I've found Note 724545 but I'm not sure it applies.
And how should I read the execution plan? (Step 1 first, or step 12?)
Best Regards,
Robin -
Hello Michael.
Thank you.
However, the SQL statement is the same:
QAS:
SQL Statement
SELECT
T_00.RANL , T_00.XALLB , T_00.REPKE , T_00.REWHR , T_00.HKONT , T_00.ZTMNAIBRX , T_00.GSART ,
T_00.ZTMHOYMNX , T_00.ZTMSBKBNX , T_00.ZTMSHDAYZ , T_00.ZTMMBHZKP , T_01.BAL_SH_CUR ,
T_01.ZTMSIHONP , T_02.SECURITY_ID , T_02.SECURITY_ACCOUNT
FROM
ZTM0108 T_00, ZTM0135 T_01, TRACV_POSCONTEXT T_02
WHERE
T_00.MANDT = '350' AND T_00.BUKRS = 'MC51' AND T_00.ZTMMCSNGX = '200806' AND
T_02.SECURITY_ACCOUNT = '0001' AND T_01.MANDT = '350' AND T_01.BUKRS = T_00.BUKRS AND
T_01.ZTMMCSNGX = T_00.ZTMMCSNGX AND T_01.PARTNER = T_00.REPKE AND T_02.MANDT = '350' AND
T_02.SECURITY_ID = T_00.RANL
Execution Plan
SELECT STATEMENT ( Estimated Costs = 666 , Estimated #Rows = 72 )
--- 12 HASH JOIN
( Estim. Costs = 666 , Estim. #Rows = 72 )
Estim. CPU-Costs = 37,505,220 Estim. IO-Costs = 663
Access Predicates
-- 9 HASH JOIN
( Estim. Costs = 268 , Estim. #Rows = 51 )
Estim. CPU-Costs = 18,679,663 Estim. IO-Costs = 267
Access Predicates
-- 6 NESTED LOOPS
( Estim. Costs = 25 , Estim. #Rows = 38 )
Estim. CPU-Costs = 264,164 Estim. IO-Costs = 25
-- 4 NESTED LOOPS
( Estim. Costs = 25 , Estim. #Rows = 27 )
Estim. CPU-Costs = 258,494 Estim. IO-Costs = 25
-- 2 TABLE ACCESS BY INDEX ROWID DIFT_POS_IDENT
( Estim. Costs = 25 , Estim. #Rows = 24 )
Estim. CPU-Costs = 253,454 Estim. IO-Costs = 25
Filter Predicates
1 INDEX RANGE SCAN DIFT_POS_IDENT~SA
( Estim. Costs = 1 , Estim. #Rows = 554 )
Search Columns: 1
Estim. CPU-Costs = 29,801 Estim. IO-Costs = 1
Access Predicates
3 INDEX RANGE SCAN TRACT_POSCONTEXTID
Search Columns: 2
Estim. CPU-Costs = 210 Estim. IO-Costs = 0
Access Predicates
5 INDEX UNIQUE SCAN TZPA~0
Search Columns: 2
Estim. CPU-Costs = 210 Estim. IO-Costs = 0
Access Predicates
--- 8 TABLE ACCESS BY INDEX ROWID ZTM0108
( Estim. Costs = 242 , Estim. #Rows = 2,540 )
Estim. CPU-Costs = 10,811,361 Estim. IO-Costs = 241
7 INDEX RANGE SCAN ZTM0108~0
( Estim. Costs = 207 , Estim. #Rows = 2,540 )
Search Columns: 3
Estim. CPU-Costs = 9,790,330 Estim. IO-Costs = 207
Access Predicates Filter Predicates
--- 11 TABLE ACCESS BY INDEX ROWID ZTM0135
( Estim. Costs = 397 , Estim. #Rows = 2,380 )
Estim. CPU-Costs = 11,235,469 Estim. IO-Costs = 396
10 INDEX RANGE SCAN ZTM0135~0
( Estim. Costs = 323 , Estim. #Rows = 2,380 )
Search Columns: 3
Estim. CPU-Costs = 10,288,477 Estim. IO-Costs = 323
Access Predicates Filter Predicates
PRD:
SQL Statement
SELECT
T_00.RANL , T_00.XALLB , T_00.REPKE , T_00.REWHR , T_00.HKONT , T_00.ZTMNAIBRX , T_00.GSART ,
T_00.ZTMHOYMNX , T_00.ZTMSBKBNX , T_00.ZTMSHDAYZ , T_00.ZTMMBHZKP , T_01.BAL_SH_CUR ,
T_01.ZTMSIHONP , T_02.SECURITY_ID , T_02.SECURITY_ACCOUNT
FROM
ZTM0108 T_00, ZTM0135 T_01, TRACV_POSCONTEXT T_02
WHERE
T_00.MANDT = '500' AND T_00.BUKRS = 'MC51' AND T_00.ZTMMCSNGX = '200806' AND
T_02.SECURITY_ACCOUNT = '0001' AND T_01.MANDT = '500' AND T_01.BUKRS = T_00.BUKRS AND
T_01.ZTMMCSNGX = T_00.ZTMMCSNGX AND T_01.PARTNER = T_00.REPKE AND T_02.MANDT = '500' AND
T_02.SECURITY_ID = T_00.RANL
Execution Plan
SELECT STATEMENT ( Estimated Costs = 209 , Estimated #Rows = 1 )
--- 12 NESTED LOOPS
| ( Estim. Costs = 208 , Estim. #Rows = 1 )
| Estim. CPU-Costs = 18.996.864 Estim. IO-Costs = 207
|-- 9 NESTED LOOPS
| | ( Estim. Costs = 120 , Estim. #Rows = 1 )
| | Estim. CPU-Costs = 10.171.528 Estim. IO-Costs = 119
| |-- 6 NESTED LOOPS
| | | Estim. CPU-Costs = 27.634 Estim. IO-Costs = 0
| | |-- 4 NESTED LOOPS
| | | | Estim. CPU-Costs = 27.424 Estim. IO-Costs = 0
| | | |-----1 INDEX RANGE SCAN TZPA~0
| | | | Search Columns: 1
| | | | Estim. CPU-Costs = 5.584 Estim. IO-Costs = 0
| | | | Access Predicates
| | | --- 3 TABLE ACCESS BY INDEX ROWID DIFT_POS_IDENT
| | | | Estim. CPU-Costs = 210 Estim. IO-Costs = 0
| | | | Filter Predicates
| | | ----- 2 INDEX RANGE SCAN DIFT_POS_IDENT~PT
| | | Search Columns: 1
| | | Estim. CPU-Costs = 210 Estim. IO-Costs = 0
| | | Access Predicates
| | ----- 5 INDEX RANGE SCAN TRACT_POSCONTEXTID
| | Search Columns: 2
| | Estim. CPU-Costs = 210 Estim. IO-Costs = 0
| | Access Predicates
| --- 8 TABLE ACCESS BY INDEX ROWID ZTM0108
| | ( Estim. Costs = 120 , Estim. #Rows = 1 )
| | Estim. CPU-Costs = 10.143.893 Estim. IO-Costs = 119
| ----- 7 INDEX RANGE SCAN ZTM0108~0
| ( Estim. Costs = 119 , Estim. #Rows = 1 )
| Search Columns: 4
| Estim. CPU-Costs = 10.142.167 Estim. IO-Costs = 119
| Access Predicates Filter Predicates
--- 11 TABLE ACCESS BY INDEX ROWID ZTM0135
| ( Estim. Costs = 89 , Estim. #Rows = 1 )
| Estim. CPU-Costs = 8.825.337 Estim. IO-Costs = 88
10 INDEX RANGE SCAN ZTM0135~0
( Estim. Costs = 88 , Estim. #Rows = 1 )
Search Columns: 4
Estim. CPU-Costs = 8.823.742 Estim. IO-Costs = 88
Access Predicates Filter Predicates
The only difference is the client.
I see that QAS uses index SA on table DIFT_POS_IDENT first, while PRD goes to table TZPA first... Is that the reason?
Best Regards,
Robin -
Performance problem with relatively simple query
Hi, I have a statement that takes over 20 seconds; it should take 3 seconds or less. The statistics are up to date, so I created an explain plan and a tkprof report. However, I don't see the problem. Maybe somebody can help me with this?
explain plan
SQL Statement which produced this data:
select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
| 0 | SELECT STATEMENT | | 16718 | 669K| | 22254 |
| 1 | SORT UNIQUE | | 16718 | 669K| 26M| 22254 |
| 2 | FILTER | | | | | |
|* 3 | HASH JOIN | | 507K| 19M| | 9139 |
| 4 | TABLE ACCESS FULL | PLATE | 16718 | 212K| | 19 |
|* 5 | HASH JOIN | | 507K| 13M| 6760K| 8683 |
|* 6 | HASH JOIN | | 216K| 4223K| | 1873 |
|* 7 | TABLE ACCESS FULL | SDG_USER | 1007 | 6042 | | 5 |
|* 8 | HASH JOIN | | 844K| 11M| | 1840 |
|* 9 | TABLE ACCESS FULL| SDG | 3931 | 23586 | | 8 |
| 10 | TABLE ACCESS FULL| SAMPLE | 864K| 6757K| | 1767 |
|* 11 | TABLE ACCESS FULL | ALIQUOT | 2031K| 15M| | 5645 |
| 12 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 5 | | |
| 13 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 5 | | |
| 14 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 5 | | |
| 15 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 5 | | |
Predicate Information (identified by operation id):
3 - access("SYS_ALIAS_2"."PLATE_ID"="SYS_ALIAS_1"."PLATE_ID")
5 - access("SYS_ALIAS_3"."SAMPLE_ID"="SYS_ALIAS_2"."SAMPLE_ID")
6 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
7 - filter("SDG_USER"."U_CLIENT_TYPE"='QC')
8 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
9 - filter("SYS_ALIAS_4"."STATUS"='C' OR "SYS_ALIAS_4"."STATUS"='P' OR "SYS_ALIA
S_4"."STATUS"='V')
11 - filter("SYS_ALIAS_2"."PLATE_ID" IS NOT NULL)
Note: cpu costing is off
tkprof
TKPROF: Release 9.2.0.1.0 - Production on Mon Sep 22 11:09:37 2008
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: d:\oracle\admin\nautp\udump\nautp_ora_5708.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
alter session set sql_trace true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 61
SELECT distinct p.name
FROM lims_sys.sdg sd, lims_sys.sdg_user sdu, lims_sys.sample sa, lims_sys.aliquot a, lims_sys.plate p
WHERE sd.sdg_id = sdu.sdg_id
AND sd.sdg_id = sa.sdg_id
AND sa.sample_id = a.sample_id
AND a.plate_id = p.plate_id
AND sd.status IN ('V','P','C')
AND sdu.u_client_type = 'QC'
call count cpu elapsed disk query current rows
Parse 1 0.09 0.09 0 3 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 7.67 24.63 66191 78732 0 500
total 3 7.76 24.72 66191 78735 0 500
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 61
Rows Row Source Operation
500 SORT UNIQUE
520358 FILTER
520358 HASH JOIN
16757 TABLE ACCESS FULL PLATE
520358 HASH JOIN
196632 HASH JOIN
2402 TABLE ACCESS FULL SDG_USER
834055 HASH JOIN
3931 TABLE ACCESS FULL SDG
864985 TABLE ACCESS FULL SAMPLE
2037373 TABLE ACCESS FULL ALIQUOT
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 33865)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 33865)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 33865)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 33865)
select 'x'
from
dual
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
total 3 0.00 0.00 0 3 0 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 61
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
begin :id := sys.dbms_transaction.local_transaction_id; end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 12 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 12 0 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 61
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 3 0.09 0.09 0 3 0 0
Execute 4 0.00 0.00 0 12 0 1
Fetch 2 7.67 24.63 66191 78735 0 501
total 9 7.76 24.72 66191 78750 0 502
Misses in library cache during parse: 3
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 48 0.00 0.00 0 0 0 0
Execute 54 0.00 0.01 0 0 0 0
Fetch 65 0.00 0.00 0 157 0 58
total 167 0.00 0.01 0 157 0 58
Misses in library cache during parse: 16
4 user SQL statements in session.
48 internal SQL statements in session.
52 SQL statements in session.
Trace file: d:\oracle\admin\nautp\udump\nautp_ora_5708.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
4 user SQL statements in trace file.
48 internal SQL statements in trace file.
52 SQL statements in trace file.
20 unique SQL statements in trace file.
500 lines in trace file.
Edited by: RZ on Sep 22, 2008 2:27 AM
A few notes:
1. You seem to have either a VPD policy active or you're using views that add some more predicates to the query, according to the plan posted (the access on the PK_OPERATOR_GROUP index). Could this make any difference?
2. The estimates of the optimizer are really very accurate - actually astonishing - compared to the tkprof output, so the optimizer seems to have a very good picture of the cardinalities and therefore the plan should be reasonable.
3. Did you gather index statistics as well (using COMPUTE STATISTICS when creating the index or "cascade=>true" option) when gathering the statistics? I assume you're on 9i, not 10g according to the plan and tkprof output.
4. Looking at the amount of data that needs to be processed, it is unlikely that this query completes in only 3 seconds; the 20 seconds seem to be OK.
If you are sure that for a similar amount of underlying data the query took only 3 seconds in the past it would be very useful if you - by any chance - have an execution plan at hand of that "3 seconds" execution.
One thing that I could imagine is that due to the monthly data growth that you've mentioned one or more of the tables have exceeded the "2% of the buffer cache" threshold and therefore are no longer treated as "small tables" in the buffer cache. This could explain that you now have more physical reads than in the past and therefore the query takes longer to execute than before.
I think that this query could only be executed in 3 seconds if it is somewhere using a predicate that is more selective and could benefit from an indexed access path.
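(As a side note on point 3, table and index statistics can be gathered in one call; a sketch using the schema and one of the table names from the query above:)

```sql
-- Gather table statistics and cascade to the table's indexes in one pass.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'LIMS_SYS',
    tabname => 'ALIQUOT',
    cascade => TRUE);   -- also gathers statistics on the table's indexes
END;
/
```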
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Hello, I'll start by saying that I am a noob. Anyway, I am trying to do what I thought would be a simple query to get records that are greater than or equal to the current date. This is my query...
<cfquery name="getUpcoming" datasource="events">
SELECT title, eventDate FROM event WHERE eventDate >= #Now()# ORDER BY eventDate ASC
</cfquery>
It works, sort of: I do get records that are greater than the current date, but any records that are equal to it do not show up.
I am assuming that it is looking at the time as well, or I am doing it completely wrong, I don't know. Any help would be greatly appreciated.
I didn't use the cfqueryparam as suggested; is there something dangerous about doing it this way?
Nothing dangerous, no. Just "less than ideal" (in a sloppy / lazy sort of way). As I suggested, one should not hard-code dynamic values into the SQL string; one should pass them as parameters. It's just "the way it should be done".
When the DB receives your SQL string (with the dynamic values hard-coded), the DB engine needs to compile the SQL into an execution plan before executing the query. Any change to the SQL string requires recompilation. However, if you pass your dynamic values as parameters, the SQL does not need to be recompiled.
It's the same sort of thing as not using global variables unless one has to, despite the fact they're "easier", or duplicating code instead of refactoring it. One should try to write decent code.
Adam -
Is there a big difference with the SR MBPs?
I have the option to buy (from Apple) a refurbished MacBook Pro 17-inch C2D at a great price.
I had almost decided to buy it for sure, when I realised it is the model before the Santa Rosa MBPs.
My question is:
Are the Santa Rosa MBPs a lot less buggy, a lot faster, or anything else significant, or should I go with the old one?
I have the original MBP and the new one with the Santa Rosa chipset, both 15" models---the only differences I see are a longer time between battery charges, the capability to add up to 4 GB of memory as opposed to the 2 GB limit on the older one (or 3 GB for newer "older" ones), and it runs a little cooler.
Overall, my two MBPs seem about the same in terms of performance (slight edge to the SR MBP), and the display looks the same to me, as well. Neither are or were ever "buggy"---the older one has been a pleasure to work on, and still is. The new one is just as nice.
You can read a longer comparison I made a few weeks ago here:
http://discussions.apple.com/message.jspa?messageID=4705037#4705037 -
Hi,
I registered three XSD schemas; they were created by "trang", as you can read at:
XML schema register, tables and types.
Afterwards I corrected the namespace and added the following options: storeVarrayAsTable="true", maintainDOM="false" and a defaultTable.
They were registered successfully, and all tables were created as well. I inserted three little XML files into the default table. Three rows were created (I checked with: select count(*)).
Then I try this query:
Query:
select extract(object_value,'/') from mitablename;
but I got this error:
Error:
ORA-00600: internal error code, arguments: [kokax_oct_adtcbk5], [], [], [], [], [], [], []
If I try the same query in SQLdeveloper i get this:
http://img484.imageshack.us/img484/7017/ora00600lu7.jpg
I'm trying with oracle XE here at home.
Does anyone know anything about this error?
thanks!
Message was edited by:
pollopolea -
It is a long shot, but because of "XE", the ORA-600, and maybe because you don't have a customer support id, you could re-install XDB. This will invalidate all your schema-based structures and repository stuff (if dependent on the XDB schema). The XDB schema will be dropped during this exercise... so make a backup if needed!!!
For *nix OS systems the following could be done to re-install XDB - MAKE A FULL (RMAN) BACKUP of your DATABASE
conn / as sysdba
spool /tmp/reinstall.log
@?/rdbms/admin/catnoqm.sql
shutdown immediate
startup
@?/rdbms/admin/catqm.sql
spool off
No "major" errors should be reported in the spooled log file. If there are, re-try running catqm.sql once.
-- to enable HTTP again
call dbms_xdb.setHTTPport(8080)
-- to enable FTP again
call dbms_xdb.setFTPport(2100)
Message was edited by:
mgralike -
What is wrong with this simple query
Hi,
I am writing some simple code just to get the maximum value from a database table.
The query is:
ResultSet rs = stm.executeQuery("SELECT MAX(column_name) FROM Database_table");
It seems simple, but I am getting the message:
column not found
Please answer soon.
Well, it depends on how your ResultSet is retrieving the results. If you retrieve by column name, then that's your problem. You need to do something like this:
ResultSet rs = stm.executeQuery("SELECT MAX(column_name) AS myColumnName FROM Database_table");
String myResult = rs.getString("myColumnName");
Using MAX, COUNT, etc. will return your result with a mangled or no actual column name to retrieve from. Optionally, you can solve your problem by retrieving the column by position:
rs.getString(1);
Michael Bishop -
Hey. I've got a table which contains only one entry:
Table "users":
id | date | user | pass | profile | last_logged
1 | ... | evo | ... | ... | ...
My query is: "SELECT * FROM users WHERE user=evo". The problem is that I'm always getting a null pointer exception, but if I get the data by referring to the id (WHERE id=1) then it works! Why is this?
I've got a login system which takes the username from the form and attempts to get the id corresponding to the user value. But it just won't work. Help appreciated. Thanks.
Is the user column of type string and evo a string value?
Which database are you using?
Don't you have to put single quotes around that?
String query = "SELECT * FROM users WHERE user = 'evo'";
MOD -
What is wrong with this simple query ???
I am using 10g XE.
Below is the query which is not working.
Whenever I execute it, a pop-up window comes up asking me to enter bind variables. What shall I do?
Here is a screenshot of the issue:
http://potupaul.webs.com/at.html
VARIABLE g_message VARCHAR2(30)
BEGIN
:g_message := 'My PL/SQL Block Works';
END;
PRINT g_message
Edited by: user4501184 on May 18, 2010 12:42 AM
sqlplus "system/sm@test"
SQL*Plus: Release 10.2.0.2.0 - Production on Tue May 18 12:45:05 2010
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> VARIABLE g_message VARCHAR2(30)
SQL> BEGIN
2 :g_message := 'My PL/SQL Block Works';
3 END;
4 /
PL/SQL procedure successfully completed.
SQL> PRINT g_message;
G_MESSAGE
My PL/SQL Block Works
SQL> -
Hi,
I'm running the following simple query in SQL*Plus on Oracle9i, but it stopped running after 30 minutes and SQL*Plus died at the same time. I have no idea why. Could somebody tell me how I can solve this problem? Thank you very much for your help.
Select Distinct PERSADDRUSE. ADDRUSECD as "Application", PERS.PERSNBR as "Account",
(PERS.FIRSTNAME || ' '|| PERS.MDLINIT ||' ' || PERS.LASTNAME ) as "Name1",' 'as "Name2",' 'as "Name3",
AL1.TEXT as "Address1",AL2.TEXT as "Address2",AL3.TEXT as "Address3",
(ADDR.CITYNAME ||' ' || ' '||ADDR.STATECD ||' '||ADDR.ZIPCD||' '|| ADDR.ZIPSUF) as "CityStateZip"
From PERSADDRUSE
Join PERS
ON PERS.PERSNBR = PERSADDRUSE.PERSNBR
--AND PERS.ADDDATE = '12-JAN-2005'
AND PERSADDRUSE.ADDRUSECD = 'PRI'
join ADDR
ON PERSADDRUSE.ADDRNBR = ADDR.ADDRNBR
left JOIN ADDRLINE AL1
ON ADDR.ADDRNBR = AL1.ADDRNBR
AND AL1.LINENBR = 1
left JOIN ADDRLINE AL2
ON ADDR.ADDRNBR = AL2.ADDRNBR
AND AL2.LINENBR = 2
left JOIN ADDRLINE AL3
ON ADDR.ADDRNBR = AL3.ADDRNBR
AND AL3.LINENBR = 3;
Thanks for the reply. I have some other queries running for 45 minutes that seem fine. Below is the explain plan I printed out. I'm new to PL/SQL. Could you give me some other ideas?
DBMS_XPLAN.DISPLAY() (PLAN_TABLE_OUTPUT):
| PERSADDRUSE | 5726 | 68712 | 183 |
| 8 | TABLE ACCESS FULL | PERS | 161K | 2839K | 431 |
| 9 | TABLE ACCESS FULL | ADDR | 239K | 5145K | 298 |
| 10 | TABLE ACCESS FULL | ADDRLINE | 82087 | 1683K | 240 |
| 11 | TABLE ACCESS FULL | ADDRLINE | 82087 | 1683K | 240 |
| 12 | TABLE ACCESS FULL | ADDRLINE | 82087 | 1683K | 240 |
Note: cpu costing is off, 'PLAN_TABLE' is old version
Big problem with a select using remote database
Hi guys,
I have a big problem with a simple query; here is the scenario.
I am currently in Alexandria, Egypt, with a server running a 32-bit Oracle 10.2.0.4 database on Linux Red Hat AS 4.8. The server can connect to another Oracle database, version 9.2.0.6, installed on Red Hat AS 4.5 and located in Milano, Italy. The two networks are connected via two ADSL Cisco routers with firewall and VPN functions. In Egypt the ADSL is not very good.
From Alexandria, I am trying to connect to the database in Italy with SQL*Plus. Once connected, I run select * from addetti and everything works fine.
My problem is that when I try to run the same kind of select on a table with many columns, Oracle kills my session. My table (ic_lav) has 174 columns, 1924 bytes per row. When I run select * from ic_lav, Oracle closes my session.
So I began to change my query:
select field1 from ic_lav... works
select field1,field2,field3,..........field50 from ic_lav works
select field1,field2,field3,..........field70 from ic_lav doesn't work
The select works with 68 columns.
Problem: a query fetching more than 1064 bytes per row doesn't work.
I've tried with another big table and hit the same problem, except there the select works with 65 columns...
It is obvious that some limit is being hit by the query.
The same query (select * from ic_lav) run locally (in Milano) works fine.
The same query over a VPN with a better ADSL line and OpenVPN software (no Cisco firewall) works fine.
The same query run against Milano over a VPN made with an analogue modem and Windows remote access works fine.
The query run from my laptop connected through the Cisco VPN doesn't work.
In the Cisco firewall we see no errors (so the Cisco engineer tells me).
on database 9 I found :
*** 2009-06-12 09:49:45.509
*** SESSION ID:(66.44406) 2009-06-12 09:49:45.497
FATAL ERROR IN TWO-TASK SERVER: error = 12152
*** 2009-06-12 09:49:45.509
ksedmp: internal or fatal error
Current SQL statement for this session:
select * from ic_lis where ditta
----- Call Stack Trace -----
but I don't understand why the lost-connection problem (bug 3816595: "A processstate dump is produced for a lost connection (12152)") would be caused by the length of the row.
Does anybody have any idea?
Thank you -
"My table (ic_lav) has 174 columns, 1924 bytes per row. When I write a query with select * from ic_lav, Oracle closes my session."
Do you get any error?
If the query length is a problem, you could create a view and query the view instead to see if this problem is resolved. -
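A sketch of that view idea (the column list and the database link name are placeholders, not taken from the original post):

```sql
-- Hypothetical: expose only the columns actually needed, keeping each
-- fetched row under the size that appears to trigger the disconnect.
CREATE OR REPLACE VIEW ic_lav_slim AS
SELECT field1, field2, field3   -- ... add further columns as required
FROM   ic_lav@milano_link;      -- 'milano_link' is a placeholder db link

SELECT * FROM ic_lav_slim;
```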
Having a column name as a variable within a query - Problem!!!!
I have a problem with a simple query, shown below:
SELECT * FROM Disney
WHERE UPPER(COLNAME) LIKE UPPER('%' || SEARCHSTRING || '%');
My problem: the COLNAME variable is not being recognised as a column name. For example:
A user can select to view a set of characters from the DB by username, movies, etc. (they select this from a combo box). They then enter a search string (text box):
colname = username
searchstring = pluto
SELECT * FROM Disney
WHERE UPPER(COLNAME) LIKE UPPER('%' || SEARCHSTRING || '%');
The problem is Oracle does not seem to pick up that COLNAME is a column name and seems to be doing a simple comparison. To make this clearer: it seems to be trying to match username = 'pluto' rather than finding pluto in the username column.
Has anyone got any ideas how I can get around this? I have a strange feeling it is something to do with dynamic PL/SQL, but I am new to Oracle, so I have no idea how to write dynamic queries. Any help would be much appreciated.
I am using Oracle 11g and Visual Studio .NET 2005.
user10372910 wrote:
If you can refer me to any material you think I would find helpful to read or experiment with, please let me know.
The online documentation is a good place to look...
http://www.oracle.com/pls/db102/homepage?remark=tahiti
Start with the concepts guide which gives a good background to the Oracle database.
Look up "bind variables" and "hard parsing"/"soft parsing" of queries.
In light of what boneist highlighted, you may also want to do a search for "function based indexes".
Those topics should give you some insight into the performance issues around queries and why it is better to write SQL that uses bind variables and gets soft parsed (re-usable SQL) rather than writing dynamic queries that become hard parsed (non-reusable SQL). -
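A minimal sketch of the dynamic-SQL approach for the Disney question above, combining a whitelisted column name with a bind variable for the search string (the function name and the allowed column list are hypothetical):

```sql
CREATE OR REPLACE FUNCTION search_disney (
  p_colname      IN VARCHAR2,   -- chosen from the combo box
  p_searchstring IN VARCHAR2)   -- typed by the user
  RETURN SYS_REFCURSOR
IS
  l_cur SYS_REFCURSOR;
BEGIN
  -- Identifiers cannot be bound, so the column name must be validated
  -- against a fixed list before being concatenated into the SQL text.
  IF UPPER(p_colname) NOT IN ('USERNAME', 'MOVIES') THEN
    RAISE_APPLICATION_ERROR(-20001, 'Invalid column name');
  END IF;
  OPEN l_cur FOR
    'SELECT * FROM Disney WHERE UPPER(' || p_colname ||
    ') LIKE UPPER(''%'' || :srch || ''%'')'
    USING p_searchstring;   -- the search value is passed as a bind variable
  RETURN l_cur;
END;
/
```

Only the column name is concatenated (after validation); the user-supplied search value travels as a bind, so the statement stays reusable and injection-safe.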
Run a Crystal Report ( with SAP BW Query ) on BO XI R2 SP2
Hi SAP Guru,
At one customer, we want to integrate SAP BI 7.0 with Business Objects XI R2 SP2 (Full Control Right) to deliver dedicated reports to target users (scheduling).
We started with a simple query that works fine in BEx.
The Integration Kit for SAP was installed on the BO server, and the transports on the SAP BW server (based on Ingo's thread).
For scheduling, I created a Business View based on my BW query and then created my Crystal report. Everything works fine when I launch the report locally on my desktop.
The problem arrives when I want to schedule my report in the CMC or in InfoView. I receive an error message:
Failed to open the connection. D:ApplBusiness ObjectsBusinessObjects Enterprise 11.5DataprocSched - system - ~tmp148860c59ff44c6.rpt
I tried to schedule a report without a Business View but I receive the same message.
Questions :
1) Is the Integration Kit for SAP enabled for this environment (SAP BI 7.0 and BO XI R2 SP2)?
2) Do you have any suggestions for resolving my problem?
If you need more info, I will update my thread..
Thanks in advance for your help...
Cédric
Thanks for your quick reply!!!
are you able to run your report on-demand in the InfoView?
--> I receive my error when I open/run the report.
Which version of CR Designer are you using?
--> CR 11.5.9
Any chance to upgrade? There is already SP6 available for XI R2.
--> We plan to migrate to R3.1 this year, but we want to show reports earlier (if it's possible on R2, of course).
Cédric -
Will increasing the memory from 4Gb to 8Gb make a big difference to the speed of my Intel Core Duo 2 Mac mini?
Well, technically no, the CPU will remain the same speed. But having 8 GB will give you some room for memory-hungry apps and run better with multiple apps open.
Adding an SSD drive will give a second life to your Mac mini. I did mine a few weeks ago (240 GB SSD) and also went from 2 to 4 GB of RAM. It's like having a new computer!
Good luck! -
Will adding more RAM to my Power Mac G5 make a difference?
I have a Power Mac G5 (Late 2005) with 1 GB of RAM, which is how I ordered it. Sometimes when I am viewing large (large viewing size) video files with Quicktime, the video files get a little choppy from time to time, especially when I have many other applications open. Will adding more RAM fix this or is it not really necessary? Also, does Leopard improve this problem? I am currently using Tiger.
Hey Tim
I'm definitely not an expert - but more RAM makes a huge difference. I'm not sure if more RAM will specifically solve your QuickTime issues or whether upgrading your graphics card is really the answer. Those more knowledgeable should jump in here. Buying RAM from 3rd parties, like OWC (making sure you do have the right compatible RAM), will improve your performance immediately. I noticed the difference when maximizing the RAM on my G4. It cannot hurt. 4 GB of RAM for my G5 cost less than $200. I think RAM is the biggest bang for the buck.
Do you really need to get Leopard? The performance of Tiger on the G5 for me is solid. Seems like Tiger is optimized at this point for the G5 processor chips. Why muck around with all the potential problems you read about on these forums. Check out the Adobe forum and hear some of their nightmare stories. Who has this amount of time to waste on fixing what shouldn't need fixing in the first place. Just a thought - hope I'm not out of line for chiming in on this.
Since your model can take 8 GB of RAM, indulge yourself and maybe buy another 4 GB of matched RAM, and if it doesn't solve your QuickTime issues, I'll bet it will make you happy with everything else.
Mike