Query Performance Please Help
Hi, can anybody tell me how to improve the performance of this query? It takes forever to execute.
PLEASE HELP
select substr(d.name,1,14) "dist",
sum(r.room_net_sq_foot) "nsf",
sum(r.student_station_count) "sta",
sum(distinct(r.cofte)) "fte"
from b_fish_report r,
g_efis_organization d
where substr(r.organization_code,-2,2) = substr(d.code,-2,2) and
d.organization_type = 'CNTY' and
r.room_satisfactory_flag = 'Y' and
substr(d.code,-2,2) between '01' and '72'
-- rownum < 50
group by d.name, r.organization_code
order by d.name
There are non-unique indexes on organization code.
Thanks
Asma.
Asma,
I tried your SQL on my tables T1 and T2. Indexes are on C1, C2, C3 and N1, N2, N3. The data in T1 and T2 is shown below with the explain plan (also called EP) listed. You really need to do an explain plan (free TOAD is the easiest way to do this) and reply showing your EP results.
By simply changing the optimizer mode to RULE I was able to get it to use indexes on both T1 and T2.
T1 data
C1 C2 C3 N1 N2
001 Y AAA 1 11
002 Y BBB 2 22
003 Y CCC 3 33
111 N DDD 4 44
222 N EEE 5 55
333 Y FFF 6 66
070 Y GGG 7 77
071 N HHH 8 88
072 Y III 9 99
TEST TEST TEST 10 100
T2 data
C1 C2 C3 N1 N2
001 CNTY AAA 1 11
002 CNTY BBB 2 22
003 CNTY CCC 3 33
111 XXX DDD 4 44
222 XXX EEE 5 55
333 CNTY FFF 6 66
070 CNTY GGG 7 77
071 XXX HHH 8 88
072 CNTY III 9 99
TEST TEST TEST 10 100
These are the results when I run the SQL based on this data ...
dist nsf sta fte
AAA 1 11 10
BBB 2 22 20
CCC 3 33 30
FFF 6 66 60
GGG 7 77 70
III 9 99 90
--[SQL 1] : with CHOOSE as the optimizer mode, which is normally the DEFAULT if no hint is specified
select /*+ CHOOSE */
substr(d.c3,1,14) "dist",
sum(r.n1) "nsf",
sum(r.n2) "sta",
sum(distinct(r.n3)) "fte"
from t1 r, t2 d
where substr(r.c1,-2,2) = substr(d.c1,-2,2) and
d.c2 = 'CNTY' and
r.c2 = 'Y' and
substr(d.c1,-2,2) between '01' and '72'
group by d.c3, r.c1
order by d.c3
This is what the EP shows for your SQL (which will probably be the same for you once you do an EP on your actual SQL) ...
SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=37)
SORT (GROUP BY) (Cost=4 Card=1 Bytes=37)
NESTED LOOPS (Cost=2 Card=1 Bytes=37)
TABLE ACCESS (FULL) OF T1 (Cost=1 Card=1 Bytes=12)
TABLE ACCESS (BY INDEX ROWID) OF T2 (Cost=1 Card=1 Bytes=25)
INDEX (RANGE SCAN) OF I_NU_T2_C2 (NON-UNIQUE)
Notice the FULL table scan of T1, which you don't want; also, neither C1 index is getting used (I've explained why below).
--[SQL 2] : only changed the hint to RULE ...
select /*+ RULE */
substr(d.c3,1,14) "dist",
sum(r.n1) "nsf",
sum(r.n2) "sta",
sum(distinct(r.n3)) "fte"
from t1 r, t2 d
where substr(r.c1,-2,2) = substr(d.c1,-2,2) and
d.c2 = 'CNTY' and
r.c2 = 'Y' and
substr(d.c1,-2,2) between '01' and '72'
group by d.c3, r.c1
order by d.c3
SELECT STATEMENT Optimizer=HINT: RULE
SORT (GROUP BY)
NESTED LOOPS
TABLE ACCESS (BY INDEX ROWID) OF T2
INDEX (RANGE SCAN) OF I_NU_T2_C2 (NON-UNIQUE)
TABLE ACCESS (BY INDEX ROWID) OF T1
INDEX (RANGE SCAN) OF I_NU_T1_C2 (NON-UNIQUE)
Though the C2 index is getting used (your r.c2 = 'Y' part of the where clause), the main problem you're having here is that the JOIN column (C1 in both tables) is not getting used. So the join you have ...
where substr(r.c1,-2,2) = substr(d.c1,-2,2)
isn't using an index, and you want it to. There are two solutions to correct this..
Solution #1
The first is to make a function-based index for the data. Since you're doing SUBSTR on C1, the existing C1 index does not contain that partial value, so the optimizer will not use it. Below is the syntax to make a function-based index for this partial data ...
CREATE INDEX I_NU_T1_C1_SUBSTR ON T1 (SUBSTR(C1,-2,2));
CREATE INDEX I_NU_T2_C1_SUBSTR ON T2 (SUBSTR(C1,-2,2));
or this way, if it's still not using the above indexes ...
CREATE INDEX I_NU_T1_C1_SUBSTR ON T1 (SUBSTR(C1,-2,2),C1);
CREATE INDEX I_NU_T2_C1_SUBSTR ON T2 (SUBSTR(C1,-2,2),C1);
Solution #2
The second solution is to add a new column to both tables, store this 2-digit value in it, and then index that new column. That way the join will look like ...
where r.c_new_column = d.c_new_column
and
r.c_new_column between '01' and '72'
With this new column you will not need the substring in the BETWEEN clause at the end either. Also remember that BETWEEN on character values behaves differently than on numbers.
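To illustrate that last point (a quick sketch in Python, not anything Oracle-specific): character comparison is lexicographic, so a character BETWEEN only matches a numeric BETWEEN when every code has the same fixed width with zero-padding, as the '01'..'72' codes here do.

```python
# Character BETWEEN is lexicographic; numeric BETWEEN compares by value.
# With fixed-width, zero-padded codes the two agree:
codes = ["01", "35", "72", "99"]
in_range_char = [c for c in codes if "01" <= c <= "72"]   # string comparison
in_range_num = [c for c in codes if 1 <= int(c) <= 72]    # numeric comparison
assert in_range_char == in_range_num == ["01", "35", "72"]

# Without padding they diverge: "9" sorts after "10" as a string.
assert "9" > "10"        # lexicographic: '9' > '1' on the first character
assert not (9 > 10)      # numeric
```

So as long as the new column always stores exactly two digits, the character BETWEEN '01' AND '72' is safe.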
Final Notes
I just tried creating the function-based index and I can't get it to be used for some reason (I might not have enough data), but I really think that is your best option here. As long as it uses the function-based index you won't have to change your code. You might want to try an INDEX() hint to force it, but hopefully it will be used right away. Try all 4 optimizer modes (CHOOSE, RULE, ALL_ROWS, FIRST_ROWS) as the primary hint to see whether it will use the new function-based index.
You really do need to get explain plan going. Even if you create these function-based indexes you won't know whether they are being used until you look at the EP results. You can do EP manually (the SQL to produce the results is on OTN, though I find free TOAD is by far the easiest), and you will still need to have run the utlxplan.sql script. Oracle also has some GUI tools, maybe in OEM, with explain plan built in.
I hope this helps ya,
Tyler D.
Similar Messages
-
Pass username and password ADFS without using query string, Please help.
pass username and password ADFS without using query string, Please help.
I used a query string, but it is insecure to pass credentials over the URL; with a simple tool like HttpWatch anyone can easily get the password and decrypt it.
Hi,
According to your post, my understanding is that you had an issue with ADFS.
As this issue is related to ADFS, I recommend you post your issue to the forum for ADFS.
http://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=Geneva
The reason we recommend posting in the appropriate forum is that you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us.
Thank you for your understanding and support.
Thanks,
Jason
Forum Support
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
[email protected]
Jason Guo
TechNet Community Support -
Query Performance Tuning - Help
Hello Experts,
Good Day to all...
TEST@ora10g>select * from v$version;
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
"CORE 10.2.0.4.0 Production"
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
NLSRTL Version 10.2.0.4.0 - Production
SELECT fa.user_id,
fa.notation_type,
MAX(fa.created_date) maxDate,
COUNT(*) bk_count
FROM book_notations fa
WHERE fa.user_id IN
( SELECT user_id
FROM
( SELECT /*+ INDEX(f2,FBK_AN_ID_IDX) */ f2.user_id,
MAX(f2.notatn_id) f2_annotation_id
FROM book_notations f2,
title_relation tdpr
WHERE f2.user_id IN ('100002616221644',
'100002616221645',
'100002616221646',
'100002616221647',
'100002616221648')
AND f2.pack_id=tdpr.pack_id
AND tdpr.title_id =93402
GROUP BY f2.user_id
ORDER BY 2 DESC)
WHERE ROWNUM <= 10)
GROUP BY fa.user_id,
fa.notation_type
ORDER BY 3 DESC;
The cost of the query is too much...
Below is the explain plan of the query
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 29 | 1305 | 52 (10)| 00:00:01 |
| 1 | SORT ORDER BY | | 29 | 1305 | 52 (10)| 00:00:01 |
| 2 | HASH GROUP BY | | 29 | 1305 | 52 (10)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID | book_notations | 11 | 319 | 4 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 53 | 2385 | 50 (6)| 00:00:01 |
| 5 | VIEW | VW_NSO_1 | 5 | 80 | 29 (7)| 00:00:01 |
| 6 | HASH UNIQUE | | 5 | 80 | | |
|* 7 | COUNT STOPKEY | | | | | |
| 8 | VIEW | | 5 | 80 | 29 (7)| 00:00:01 |
|* 9 | SORT ORDER BY STOPKEY | | 5 | 180 | 29 (7)| 00:00:01 |
| 10 | HASH GROUP BY | | 5 | 180 | 29 (7)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | book_notations | 5356 | 135K| 26 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 6917 | 243K| 27 (0)| 00:00:01 |
| 13 | MAT_VIEW ACCESS BY INDEX ROWID| title_relation | 1 | 10 | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | IDX_TITLE_ID | 1 | | 1 (0)| 00:00:01 |
| 15 | INLIST ITERATOR | | | | | |
|* 16 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 5356 | | 4 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 746 | | 1 (0)| 00:00:01 |
Table Details
SELECT COUNT(*) FROM book_notations; --111367
Columns
user_id -- nullable field - VARCHAR2(50 BYTE)
pack_id -- NOT NULL --NUMBER
notation_type-- VARCHAR2(50 BYTE) -- nullable field
CREATED_DATE - DATE -- nullable field
notatn_id - VARCHAR2(50 BYTE) -- nullable field
Index
FBK_AN_ID_IDX - Non unique - Composite columns --> (user_id and pack_id)
SELECT COUNT(*) FROM title_relation; --12678
Columns
pack_id - not null - number(38) - PK
title_id - not null - number(38)
Index
IDX_TITLE_ID - Non Unique - TITLE_ID
Please help...
Thanks...
Linus wrote:
Thanks Bravid for your reply; highly appreciate that.
So as you say, index creation on the NULL column doesn't have any impact. OK, fine.
What happens to the execution plan, performance and the stats when you remove the index hint?
Find below the Execution Plan and Predicate information
"PLAN_TABLE_OUTPUT"
"Plan hash value: 126058086"
"| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |"
"| 0 | SELECT STATEMENT | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 1 | SORT ORDER BY | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 2 | HASH GROUP BY | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 3 | TABLE ACCESS BY INDEX ROWID | book_notations | 10 | 290 | 4 (0)| 00:00:01 |"
"| 4 | NESTED LOOPS | | 50 | 2250 | 53 (8)| 00:00:01 |"
"| 5 | VIEW | VW_NSO_1 | 5 | 80 | 32 (10)| 00:00:01 |"
"| 6 | HASH UNIQUE | | 5 | 80 | | |"
"|* 7 | COUNT STOPKEY | | | | | |"
"| 8 | VIEW | | 5 | 80 | 32 (10)| 00:00:01 |"
"|* 9 | SORT ORDER BY STOPKEY | | 5 | 180 | 32 (10)| 00:00:01 |"
"| 10 | HASH GROUP BY | | 5 | 180 | 32 (10)| 00:00:01 |"
"| 11 | TABLE ACCESS BY INDEX ROWID | book_notations | 5875 | 149K| 28 (0)| 00:00:01 |"
"| 12 | NESTED LOOPS | | 7587 | 266K| 29 (0)| 00:00:01 |"
"| 13 | MAT_VIEW ACCESS BY INDEX ROWID| title_relation | 1 | 10 | 1 (0)| 00:00:01 |"
"|* 14 | INDEX RANGE SCAN | IDX_TITLE_ID | 1 | | 1 (0)| 00:00:01 |"
"| 15 | INLIST ITERATOR | | | | | |"
"|* 16 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 5875 | | 4 (0)| 00:00:01 |"
"|* 17 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 775 | | 1 (0)| 00:00:01 |"
"Predicate Information (identified by operation id):"
" 7 - filter(ROWNUM<=10)"
" 9 - filter(ROWNUM<=10)"
" 14 - access(""TDPR"".""TITLE_ID""=93402)"
" 16 - access((""F2"".""USER_ID""='100002616221644' OR ""F2"".""USER_ID""='100002616221645' OR "
" ""F2"".""USER_ID""='100002616221646' OR ""F2"".""USER_ID""='100002616221647' OR "
" ""F2"".""USER_ID""='100002616221648') AND ""F2"".""PACK_ID""=""TDPR"".""PACK_ID"")"
" 17 - access(""FA"".""USER_ID""=""$nso_col_1"")"
The cost is the same because the plan is the same. The optimiser chose to use that index anyway. The point is, now that you have removed it, the optimiser is free to choose other indexes or a full table scan if it wants to.
>
Statistics
BEGIN
DBMS_STATS.GATHER_TABLE_STATS ('TEST', 'BOOK_NOTATIONS');
END;
"COLUMN_NAME" "NUM_DISTINCT" "NUM_BUCKETS" "HISTOGRAM"
"NOTATION_ID" 110269 1 "NONE"
"USER_ID" 213 212 "FREQUENCY"
"PACK_ID" 20 20 "FREQUENCY"
"NOTATION_TYPE" 8 8 "FREQUENCY"
"CREATED_DATE" 87 87 "FREQUENCY"
"CREATED_BY" 1 1 "NONE"
"UPDATED_DATE" 2 1 "NONE"
"UPDATED_BY" 2 1 "NONE"
After removing the hint ; the query still shows the same "COST"
Autotrace
recursive calls 1
db block gets 0
consistent gets 34706
physical reads 0
redo size 0
bytes sent via SQL*Net to client 964
bytes received via SQL*Net from client 1638
SQL*Net roundtrips to/from client 2
sorts (memory) 3
sorts (disk) 0
Output of query
"USER_ID" "NOTATION_TYPE" "MAXDATE" "COUNT"
"100002616221647" "WTF" 08-SEP-11 20000
"100002616221645" "LOL" 08-SEP-11 20000
"100002616221644" "OMG" 08-SEP-11 20000
"100002616221648" "ABC" 08-SEP-11 20000
"100002616221646" "MEH" 08-SEP-11 20000Thanks...I still don't know what we're working towards at the moment. WHat is the current run time? What is the expected run time?
I can't tell you if there's a better way to write this query or if indeed there is another way to write this query because I don't know what it is attempting to achieve.
I can see that you're accessing 100k rows from a 110k row table and it's using an index to look those rows up. That seems like a job for a full table scan rather than index lookups.
David -
Hello Experts,
Please help me: how can the table "digital_compatibility" be modified for faster performance?
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.1.0 - Productio
NLSRTL Version 10.2.0.1.0 - Production
Count of records for the tables:-
SELECT count(*) FROM DEVICE_TYPE; --421
SELECT count(*) FROM DIGITAL_COMPATIBILITY; --227757
CREATE TABLE DEVICE_TYPE
DEVICE_TYPE_ID NUMBER(38,0),
DEVICE_TYPE_MAKE VARCHAR2(256 BYTE),
DEVICE_TYPE_MODEL VARCHAR2(256 BYTE),
DEVICE_DISPLAY_NAME VARCHAR2(256 BYTE),
PARTNER_DEVICE_TYPE VARCHAR2(256 BYTE),
DEVICE_IMAGE_URL VARCHAR2(256 BYTE),
FOH_BUTTON_NAME VARCHAR2(256 BYTE),
FOH_ACTIVE_FLAG CHAR(1 BYTE),
BB_RETAIL_FLAG CHAR(1 BYTE),
DISPLAY_DESCRIPTION VARCHAR2(256 BYTE),
DEVICE_CATEGORY_ID NUMBER(38,0),
DEVICE_SUB_CATEGORY_ID NUMBER(38,0),
DEVICE_BRAND_ID NUMBER(38,0),
PARENT_ID NUMBER(38,0),
POWERED_BY VARCHAR2(256 BYTE),
CARRIER VARCHAR2(256 BYTE),
CAPABILITY_SET_ID NUMBER(38,0),
CREATED_BY VARCHAR2(32 BYTE),
CREATED_DATE DATE,
UPDATED_BY VARCHAR2(32 BYTE),
UPDATED_DATE DATE,
POWERED_BY_DEVICE_TYPE VARCHAR2(64 BYTE),
OPERATING_SYSTEM VARCHAR2(32 BYTE),
OPERATING_SYSTEM_VERSION VARCHAR2(32 BYTE),
BROWSER VARCHAR2(32 BYTE),
BROWSER_VERSION VARCHAR2(32 BYTE),
CLASSIFICATION VARCHAR2(32 BYTE),
CONSTRAINT PK_DEVICE_TYPE PRIMARY KEY ( DEVICE_TYPE_ID));
CREATE INDEX DEVICE_TYPE_IDX ON DEVICE_TYPE
CAPABILITY_SET_ID ,
UPPER( PARTNER_DEVICE_TYPE )
CREATE TABLE DIGITAL_COMPATIBILITY
DIGITAL_COMPATIBILITY_ID NUMBER NOT NULL ENABLE,
CAPABILITY_SET_ID NUMBER,
OBJECT_TYPE VARCHAR2(38 BYTE) NOT NULL ENABLE,
CREATED_DATE DATE NOT NULL ENABLE,
CREATED_BY VARCHAR2(38 BYTE) NOT NULL ENABLE,
UPDATED_DATE DATE NOT NULL ENABLE,
UPDATED_BY VARCHAR2(38 BYTE) NOT NULL ENABLE,
OBJECT_ID VARCHAR2(114 BYTE),
ENCODE_PROFILE_ID NUMBER
CREATE INDEX ENCODE_PROFILE_ID_IDX ON DIGITAL_COMPATIBILITY
ENCODE_PROFILE_ID,
OBJECT_ID,
OBJECT_TYPE
Query
=====
EXPLAIN PLAN FOR
SELECT /*+ INDEX(dc, ENCODE_PROFILE_ID_IDX) */
DISTINCT dc.object_id AS title_id
FROM digital_compatibility dc,
device_type dt
WHERE dc.capability_set_id = dt.capability_set_id
AND upper(dt.partner_device_type) = :1
AND dc.object_id IN (:2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32, :33)
AND dc.object_type =:"SYS_B_0";
Explain plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 2 | 472 | 274 (4)|
| 1 | HASH UNIQUE | | 2 | 472 | 274 (4)|
|* 2 | MAT_VIEW ACCESS BY INDEX ROWID| DIGITAL_COMPATIBILITY | 1 | 93 | 68 (3)|
| 3 | NESTED LOOPS | | 2 | 472 | 273 (4)|
|* 4 | INDEX FULL SCAN | DEVICE_TYPE_IDX | 4 | 572 | 1 (0)|
|* 5 | INDEX FULL SCAN | ENCODE_PROFILE_ID_IDX | 8 | | 67 (3)|
Predicate Information (identified by operation id):
2 - filter("DC"."CAPABILITY_SET_ID"="DT"."CAPABILITY_SET_ID")
4 - access(UPPER("PARTNER_DEVICE_TYPE")=:1)
filter(UPPER("PARTNER_DEVICE_TYPE")=:1)
5 - access("DC"."OBJECT_TYPE"=:SYS_B_0)
filter(("DC"."OBJECT_ID"=:2 OR "DC"."OBJECT_ID"=:3 OR "DC"."OBJECT_ID"=:4 OR
"DC"."OBJECT_ID"=:5 OR "DC"."OBJECT_ID"=:6 OR "DC"."OBJECT_ID"=:7 OR
"DC"."OBJECT_ID"=:8 OR "DC"."OBJECT_ID"=:9 OR "DC"."OBJECT_ID"=:10 OR
"DC"."OBJECT_ID"=:11 OR "DC"."OBJECT_ID"=:12 OR "DC"."OBJECT_ID"=:13 OR
"DC"."OBJECT_ID"=:14 OR "DC"."OBJECT_ID"=:15 OR "DC"."OBJECT_ID"=:16 OR
"DC"."OBJECT_ID"=:17 OR "DC"."OBJECT_ID"=:18 OR "DC"."OBJECT_ID"=:19 OR
"DC"."OBJECT_ID"=:20 OR "DC"."OBJECT_ID"=:21 OR "DC"."OBJECT_ID"=:22 OR
"DC"."OBJECT_ID"=:23 OR "DC"."OBJECT_ID"=:24 OR "DC"."OBJECT_ID"=:25 OR
"DC"."OBJECT_ID"=:26 OR "DC"."OBJECT_ID"=:27 OR "DC"."OBJECT_ID"=:28 OR
"DC"."OBJECT_ID"=:29 OR "DC"."OBJECT_ID"=:30 OR "DC"."OBJECT_ID"=:31 OR
"DC"."OBJECT_ID"=:32 OR "DC"."OBJECT_ID"=:33) AND "DC"."OBJECT_TYPE"=:SYS_B_0)
Note
- 'PLAN_TABLE' is old version
Trace
recursive calls 280
db block gets 16
consistent gets 97
physical reads 0
redo size 3224
bytes sent via SQL*Net to client 589
bytes received via SQL*Net from client 1598
SQL*Net roundtrips to/from client 2
sorts (memory) 4
sorts (disk) 0
Thanks ....
Your index on DIGITAL_COMPATIBILITY is on ENCODE_PROFILE_ID, OBJECT_ID, OBJECT_TYPE
But you query for object_id and object_type.
How many rows do you identify with this? What is the PK?
The way it's set up now, it needs to read the full index, then the table, and as DEVICE_TYPE is small it does a nested loop (NL) to it.
Makes sense.
If you added an index on OBJECT_ID, OBJECT_TYPE and CAPABILITY_SET_ID, Oracle would only need to read the index. -
ABAP QUERY REPORT : please help me ASAP.
hi friends,
I need to change the existing custom abap query.
1) I need to add two more fields on selection screen. I have added using INFOSET and checked the input and output check boxes.
But the text of the one field should be different from standard tetx.
"Reference date" to be changed to "Dairy date".
I have changed it in the Infoset; the updated text is displaying as "Dairy date" in the output list, but the standard text (Reference date) still appears on the selection screen.
I need it to be displayed as "Dairy date".
2) The "Dairy date" column is added as the last column in the list after executing the query, but I need it displayed as the 8th column.
Please help me ASAP.
hi Eric,
Could you please explain in detail.
1) I have changed the name in the Infoset field group.
It is not reflecting in the selection screen, only reflecting in the output.
2) I have tried changing the sort sequence number, but it is still displayed at the end of the columns.
please explain in detail. -
Aggregates Question (Performance) Please Help
I have 2 Questions first one is
<b><i>1. It's been mentioned in the forum that we can analyse in the Workload Monitor (ST03N). I went through that and did not find any data for analysis; rather it's showing information for Load Analysis.</i></b>
<i><b>2. It's been mentioned that we should also check the RSDDSTAT table contents. I checked this table but could not find data (how is this table populated?)</b></i>
When i checked in
<b>InfoCube Manage Screen --> Performance --> Check Statistics (Refresh Aggregate Statistics)</b>
What are these for?
Can we analyse to create aggregates or not without BW Statistics data, just checking ST03N and the RSDDSTAT and RSDDAGGRDIR tables?
Please help me.
I am using BI 7. Points will be assigned (Thanks)
Message was edited by:
SV S
Message was edited by:
SV S
Hi,
For ST03N
From Document
BI Administration Cockpit and New BI Statistics Content in SAP NetWeaver 7.0
As of SAP NetWeaver 7.0 BI, transaction ST03 is based on the Technical Content InfoProviders (unlike prior releases). Therefore, using transaction ST03 for BI Monitoring requires the Technical Content to be activated and to be populated periodically with statistics data.
So, looks like you have to install the new statistics technical content.
From thread /message/3461465#3461465 [original link is broken]
Rajani Saralaya K
IN BI 7.0, ST03n is based on BI Statistics cubes, so unless you install these cubes and schedule the dataflow you cant see any result in there. Even the same thing is mentioned in the note 934848.
For information about RSDDSTAT,
see /message/3627627#3627627 [original link is broken]
Raj. -
Low performance (please help)
Hi!
My DB objects are:
create or replace TYPE T_G5RPP AS VARRAY(1000) OF NUMBER(2);
CREATE TABLE "ROGADM"."ROG_TEMP_G5LKL" (
"ID" VARCHAR2(100 BYTE),
"STATUS" NUMBER(2,0) DEFAULT 1,
"G5IDL" VARCHAR2(200 BYTE),
"G5TLOK" NUMBER(1,0),
"G5PEW" NUMBER,
"G5PPP" NUMBER,
"G5LIZ" NUMBER(32,0),
"G5LPP" NUMBER(6,0),
"G5RPP" "ROGADM"."T_G5RPP",
"G5WRT" NUMBER(32,0),
"G5DWR" DATE,
"G5DTW" DATE,
"G5DTU" DATE,
"ID_G5ADR_RADR" VARCHAR2(100 BYTE),
"ID_G5JDR_RJDR" VARCHAR2(100 BYTE),
"ID_G5BUD_RBUD" VARCHAR2(100 BYTE),
"IDR" VARCHAR2(100 BYTE),
"PLS_ID" NUMBER
CREATE INDEX "T_G5LKL_ADR_FK" ON "ROG_TEMP_G5LKL" (PLS_ID, "ID_G5ADR_RADR");
CREATE INDEX "T_G5LKL_BUD_FK" ON "ROG_TEMP_G5LKL" (PLS_ID, "ID_G5BUD_RBUD");
CREATE INDEX "T_G5LKL_JDR_FK" ON "ROG_TEMP_G5LKL" (PLS_ID, "ID_G5JDR_RJDR");
create unique index T_G5LKL_PK on ROG_TEMP_G5LKL(PLS_ID, ID);
function get_obr_ark_dzi(p_obk_typ varchar2, p_obk_id varchar2, p_co varchar2, p_pls_id number)
return varchar2 deterministic
AS
v_obk_typ varchar2(10);
v_obk_id G5ADR.ID%type;
v_out varchar2(400);
v_sep varchar2(2) := '';
v_old varchar2(10) := '';
v_pls_id number;
begin
if p_obk_typ not in ('G5DZE','G5BUD','G5LKL','G5OBR','G5KLU','G5UDZ','G5UDW','G5JDR', 'G5ADR') then
return null;
end if;
v_obk_typ := p_obk_typ;
v_obk_id := p_obk_id;
v_pls_id := p_pls_id;
if v_obk_typ = 'G5ADR' then
-- sprawdzenie, adresem czego jest ten adres
-- (odczytane będą dane dla adresów działek, budynków lub lokali, które są podpięte do tylko 1 obiektu)
declare
v_id G5LKL.ID%type;
v_co varchar2(3); -- ...jest adresem tego
begin
/** START **/
begin
v_co := 'LKL';
execute immediate 'SELECT ID FROM ROG_TEMP_G5LKL WHERE ID_G5ADR_RADR = :obk AND pls_id = :pls'
into v_id
using v_obk_id, v_pls_id;
exception when no_data_found then
v_co := null;
end;
/** END **/
if v_co is not null then
v_obk_typ := 'G5'||v_co;
v_obk_id := v_id;
else
return null;
end if;
end;
end if;
if v_obk_typ = 'G5LKL' then
execute immediate
' SELECT case when :p_co=''obreb'' then SUBSTR(G5IDL,10,4)
when :p_co=''arkusz'' then SUBSTR(G5IDL,18,INSTR(G5IDL,''.'',-1,3)-18)
when :p_co=''dzialka'' then SUBSTR(G5IDL,
instr(G5IDL,''.'',1,2 + case when SUBSTR(G5IDL,15, 3)=''AR_'' then 1 else 0 end) + 1,
instr(G5IDL,''.'',-1,2)-1 - instr(G5IDL,''.'',1,2 + case when SUBSTR(G5IDL,15, 3)=''AR_'' then 1 else 0 end))
end
FROM ROG_TEMP_G5LKL
WHERE ID = :p_obk_id AND pls_id = :pls' into v_out using p_co,p_co,p_co, v_obk_id, v_pls_id;
else
return null;
end if;
return v_out;
exception
when no_data_found then
return null;
end get_obr_ark_dzi;
I have a query:
SELECT log_pls_id,log_obk_id,
log_obk_typ
,rog_pck_utl.get_obr_ark_dzi(log_obk_typ, log_obk_id, 'obreb', log_pls_id) AS log_obk_obreb
FROM (SELECT distinct log_obk_id, log_obk_typ, log_pls_id
FROM rog_log_ksat_zdarzenia
JOIN rog_log on logkz_log_id = log_id
WHERE logkz_f_obsluzony <> 'T')
where log_obk_typ = 'G5ADR' and log_pls_id = 635 and rownum <= 200;
which runs in 7.2 seconds. When I comment out the block bounded by /** START **/ and /** END **/ the query runs in only 0.043 seconds.
Here is part of the trace file:
SELECT log_pls_id,log_obk_id,
log_obk_typ
,rog_pck_utl.get_obr_ark_dzi(log_obk_typ, log_obk_id, 'obreb', log_pls_id) AS log_obk_obreb
FROM (SELECT distinct log_obk_id, log_obk_typ, log_pls_id
FROM rog_log_ksat_zdarzenia
JOIN rog_log on logkz_log_id = log_id
WHERE logkz_f_obsluzony <> 'T')
where log_obk_typ = 'G5ADR' and log_pls_id = 635 and rownum <= 200
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.06 0.10 0 2480 0 200
total 3 0.06 0.11 0 2480 0 200
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 379 (ROGADM)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 COUNT (STOPKEY)
0 VIEW
0 SORT (UNIQUE NOSORT)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'ROG_LOG_KSAT_ZDARZENIA' (TABLE)
0 NESTED LOOPS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'ROG_LOG' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'LOG_I'
(INDEX (UNIQUE))
0 INDEX MODE: ANALYZED (RANGE SCAN) OF
'LOGKZ_LOG_FK_I' (INDEX)
SELECT ID
FROM
ROG_TEMP_G5LKL WHERE ID_G5ADR_RADR = :obk AND pls_id = :pls
call count cpu elapsed disk query current rows
Parse 42 0.00 0.00 0 0 0 0
Execute 98 0.00 0.00 0 0 0 0
Fetch 98 7.82 7.58 0 4519564 0 0
total 238 7.82 7.58 0 4519564 0 0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 379 (ROGADM) (recursive depth: 1)
Rows Row Source Operation
0 TABLE ACCESS BY INDEX ROWID ROG_TEMP_G5LKL (cr=138354 pr=0 pw=0 time=231295 us)
205593 INDEX RANGE SCAN T_G5LKL_PK (cr=576 pr=0 pw=0 time=411229 us)(object id 433034)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'ROG_TEMP_G5LKL' (TABLE)
205593 INDEX MODE: ANALYZED (RANGE SCAN) OF 'T_G5LKL_ADR_FK' (INDEX)
(...)
What do you think is the reason for such a big value in the 'query' column of 'Fetch'?
The explain plan executed from SQL Developer is different:
explain plan for
SELECT ID
FROM ROG_TEMP_G5LKL
WHERE ID_G5ADR_RADR = :adr
AND pls_id = 630;
PLAN_TABLE_OUTPUT
Plan hash value: 3052821629
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 117 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| ROG_TEMP_G5LKL | 1 | 117 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T_G5LKL_PK | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(ID_G5ADR_RADR=:ADR)
2 - access(PLS_ID=630)
Please help...
Yes, this index can be unique:
SELECT count(*)
FROM ROG_TEMP_G5LKL
GROUP BY pls_id, ID_G5ADR_RADR
HAVING count(*) > 1;
no rows selected
Unfortunately the statistics are not up to date. I can't gather them, because the DBA must fix some DB blocks first.
Edited by: JackK on Aug 5, 2011 7:20 AM
I've found something! When I add a condition to the WHERE clause:
SELECT log_pls_id,log_obk_id,
log_obk_typ
,rog_pck_utl.get_obr_ark_dzi(log_obk_typ, log_obk_id, 'obreb', log_pls_id) AS log_obk_obreb
,rog_pck_utl.get_obr_ark_dzi(log_obk_typ, log_obk_id, 'arkusz', log_pls_id) AS log_obk_AR
,rog_pck_utl.get_obr_ark_dzi(log_obk_typ, log_obk_id, 'dzialka', log_pls_id) AS log_obk_dze
FROM (SELECT distinct log_obk_id, log_obk_typ, log_pls_id
FROM rog_log_ksat_zdarzenia
JOIN rog_log on logkz_log_id = log_id
WHERE logkz_f_obsluzony <> 'T')
where log_obk_typ = 'G5ADR' and log_pls_id = 635 and rownum <= 200
and exists (select 1 from g5lkl where id_g5adr_radr = log_obk_id); -- added condition
the statement runs in about 1 second. Without the condition it takes 22.4 seconds to complete. I think that's because in the latter case the query SELECT ID FROM G5LKL ... did not find any rows.
Edited by: JackK on Aug 8, 2011 6:16 AM
I changed the index T_G5LKL_ADR_FK to:
CREATE INDEX T_G5LKL_ADR_FK_ID ON ROG_TEMP_G5LKL (PLS_ID, ID_G5ADR_RADR, ID);
and it runs fast enough - *200 rows in 1-2 seconds*. Now the plan shows only an INDEX RANGE SCAN. -
SQL query. Please help, it's urgent.
Suppose in table EMP there are 2 columns (Roll_no and Name)
Roll NO Name
00001 A
00002 B
00010 X
My requirement is to truncate the leading zeros. For example: for roll no 00001 the output should be 1, and for 00002 --> 2.
Please help, it's very urgent.
Try this:
select
to_number(roll_no) roll_no,
name
from
emp;
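The effect of that TO_NUMBER conversion on the zero-padded roll numbers can be sketched outside SQL like this (an illustration of the logic only; the helper name is made up here, not part of the original thread):

```python
def strip_leading_zeros(roll_no: str) -> int:
    # Same effect as Oracle's TO_NUMBER for simple digit strings:
    # leading zeros carry no value, so converting to int drops them.
    return int(roll_no)

# The sample roll numbers from the question:
assert strip_leading_zeros("00001") == 1
assert strip_leading_zeros("00002") == 2
assert strip_leading_zeros("00010") == 10
```

Note the result is a number, not a string; if a string is needed back, the display format decides whether padding reappears.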
Regards
Singh -
SQL QUERY, URGENT PLEASE HELP .....
Hi,
There is a table which stores the sales record, weekly basis.
For example
WEEK______ ITEMNO______SALES______QTY
200201_____10001______10,000______50
200202_____10001______18,000______55
200230_____10001______55,000_____330
Now the report should display the week nos and a Cumulative average.
like
ITEM NO - 10001
WEEKNO____WK-AVG____13WK-AVG____26WK-AVG____52WK-AVG
200201
200202
200203
200230
The WK-AVG is calculated for that particular week (that week's sales / that week's qty), but for 13WK-AVG, 26WK-AVG and 52WK-AVG the calculation is (cumulative of the last 13 weeks' sales / cumulative of the last 13 weeks' qty)
for example at week 200230 the 13WK-AVG should be
(cumulative sales from week 200218 to 200230 / cumulative qty from week 200218 to 200230 )
the same holds good for 26WK-AVG and 52WK-AVG. Please suggest how to do it. This is very urgent. Please help me.
Thanks
Feroz
Feroz,
One way is to use subselects. E.g.,
SELECT WK_AVG, 13WK_AVG, 26WK_AVG, 56WK_AVG FROM
(SELECT (SALES/QTY) AS WK_AVG FROM TABLE WHERE ITEMNO=x AND WEEK = ...),
(SELECT (SUM(SALES)/SUM(QTY)) AS 13WK_AVG WHERE ITEMNO=X AND WEEK > Y AND WEEK <= Z),
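The trailing-window arithmetic those subselects aim for (cumulative sales over cumulative qty for the last N weeks, ending at each week) can be sketched outside SQL. This is a Python illustration of the calculation only, with made-up sample rows; the function name is ours, not from the thread:

```python
def trailing_avg(rows, n):
    """rows: list of (week, sales, qty) tuples ordered by week.
    Returns {week: sum of sales over the last n rows / sum of qty over
    the last n rows}, the window ending at that week."""
    out = {}
    for i, (week, _, _) in enumerate(rows):
        window = rows[max(0, i - n + 1): i + 1]   # last n rows incl. current
        total_sales = sum(r[1] for r in window)
        total_qty = sum(r[2] for r in window)
        out[week] = total_sales / total_qty
    return out

# Sample data shaped like the question's table:
rows = [("200201", 10000, 50), ("200202", 18000, 55), ("200203", 12000, 60)]
avg13 = trailing_avg(rows, 13)

# With only 3 weeks of data, the 13-week window covers all rows so far:
assert avg13["200201"] == 10000 / 50
assert abs(avg13["200202"] - 28000 / 105) < 1e-9
assert abs(avg13["200203"] - 40000 / 165) < 1e-9
```

In Oracle the same ratio-of-sums can also be expressed with analytic functions, roughly SUM(sales) OVER (ORDER BY week ROWS BETWEEN 12 PRECEDING AND CURRENT ROW) divided by the matching SUM(qty), which avoids one subselect per window size.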
hope this helps.
regards,
Stewart -
Connection query. Please Help
My set up is....
BT line to Orange livebox. (wireless turned off)
livebox to TC (cat 5 cables)
TC to PC (not being backed up, Ethernet to provide net access)
Wireless to iMac. 802.11n (b/g compatible)
Internet dropouts, "can't find server" messages, "you are not connected to the Internet", and such are becoming the bane of my life.
So someone please help. The AirPort signal from the TC in the menu bar is always strong, so how can I have lost the Internet?? Is it possible to lose the Internet but still have a full signal in AirPort? Where is the weak link? Time Capsule? The cable from the livebox to the TC? Or is this a livebox/Orange (ISP) issue??
Hi, settings as follows...
Internet Connection
Connect Using: Ethernet
Configure IPv4: Using DHCP
IP Address: 192.. (Same as Livebox IP) Although TC has its own on the first page you come to when launching APU.
Subnet Mask: as livebox
Router Address: as livebox
DNS Server: as livebox
Domain Name (blank)
DHCP Client ID: (blank)
Ethernet WAN Port: Automatic (Default)
Connection Sharing: Share a public IP address
DHCP
DHCP Beginning Address: 10.0.1.2
DHCP Ending Address:10.0.1.200
DHCP Lease: 4 hours
The rest are blank
NAT
NAT port mapping is enabled
Funny thing is, the internet connection has been very reliable since last Thursday. I looked in the APU when the connection dropped & got messages about an unreliable IP address, can't remember the exact message. Also got double NAT errors in the Console! But as I say, no trouble since, though I dread it happening again & think it's going to any minute.
Thanks for your help. -
Urgent help needed on query performance, please
HI,
Each iteration is taking four seconds to execute...
What could be the reason for this????
FOR tem_rec IN temp LOOP
UPDATE t_routing_operations_api SET routing= tem_rec.sub_routing
WHERE routing_id=latest_routing_id and routing_id != 3 and routing = tem_rec.routing ;
UPDATE t_next_operations_api SET routing = tem_rec.sub_routing
WHERE routing_id=latest_routing_id and routing_id != 3 and routing = tem_rec.routing;
--COMMIT;
END LOOP;
regards,
Khaleel.
Hi there, thank you all for showing interest in solving my problem....
Here I am being a bit clearer about my question...
My complete procedure is....
CREATE OR REPLACE
procedure rand_route_gen1 is
CURSOR get_routing_id IS SELECT * FROM t_routings_api WHERE routing_id=3;
CURSOR temp IS SELECT * FROM c_rname_tem;
CURSOR get_operation_id(v_routing_id number) IS SELECT *
FROM t_routing_operations_api WHERE routing_id=v_routing_id;
CURSOR get_next_oper(v_operation_id number) IS SELECT * FROM t_next_operations_api
WHERE operation_id=v_operation_id;
random_no INTEGER:=0;
rt_id INTEGER:=0;
latest_routing_id INTEGER:=0;
BEGIN
DELETE FROM c_rname_tem;
--COMMIT;
INSERT INTO c_Rname_tem SELECT distinct routing,null FROM t_routing_operations_api;
--COMMIT;
FOR routing_rec IN get_routing_id LOOP
FOR c in 2..10 LOOP
FOR tem_rec IN temp LOOP
random_no:=dbms_random.value(1,3);
UPDATE c_rname_tem SET sub_routing='T'||random_no WHERE routing LIKE tem_rec.routing;
END LOOP;
INSERT INTO T_ROUTINGS_API values ('R'||rout_id.nextval,'1000',null,null,'R',rout_id.nextval,null,1,null,'Test Routing'
,null,null,null,null,1,null,'N',null,null,'XL Sheet',null,null,null,null,null,null);
--COMMIT;
SELECT max(operation_id) INTO rt_id FROM t_routing_operations_api;
rt_id:=rt_id+1;
FOR Operation_Rec IN get_operation_id(routing_rec.routing_id) LOOP
IF operation_rec.routing is not null THEN
insert into T_ROUTING_OPERATIONS_API values (rout_id.currval,/*'T'||i,*/operation_rec.routing,null,
null,rt_id,null,null,null,90,null,null,operation_rec.primary_flag,Operation_Rec.operation_rank,null,null,null,null,null,
null,null,null,10,null,null,null,null);
ELSE
insert into T_ROUTING_OPERATIONS_API values (rout_id.currval,null,Operation_Rec.operation,null,rt_id,
null,null,null,90,null,null,operation_rec.primary_flag,Operation_Rec.operation_rank,null,null,null,null,null,null,null,
null,10,null,null,null,null);
END IF;
--commit;
For next_oper_rec in get_next_oper(operation_rec.operation_id) loop
BEGIN
IF next_oper_rec.routing is not null THEN
insert into T_NEXT_OPERATIONS_API values ( rt_id,rout_id.currval,null,null,null,null,
next_oper_rec.operation_rank,null,null,null,null,
/*'T'||i*/
next_oper_rec.routing );
ELSE
insert into T_NEXT_OPERATIONS_API values ( rt_id,rout_id.currval,next_oper_rec.operation,null,
null,null,next_oper_rec.operation_rank,null,null,null,null,null);
--COMMIT;
END IF;
EXCEPTION
when no_data_found then null;
END;
END LOOP;
rt_id:=rt_id+1;
--COMMIT;
SELECT max(routing_id) INTO latest_routing_id FROM t_routings_api;
DBMS_OUTPUT.PUT_LINE(' UPDATE STARTED AT::'||TO_CHAR(SYSDATE,'SSSSS'));
/*THIS BLOCK TAKING TOO MUCH TIME*/
FOR tem_rec IN temp LOOP
UPDATE t_routing_operations_api SET routing= tem_rec.sub_routing
WHERE routing_id=latest_routing_id and routing_id != 3 and routing = tem_rec.routing ;
UPDATE t_next_operations_api SET routing = tem_rec.sub_routing
WHERE routing_id=latest_routing_id and routing_id != 3 and routing = tem_rec.routing;
-- COMMIT;
END LOOP;
--COMMIT;
END LOOP;
--COMMIT;
END LOOP;
END LOOP;
COMMIT;
END;
In the above procedure, the block shown is taking around 4 seconds for each iteration.
The main tables contain very few records, around 15 rows, yet one main iteration alone takes about a minute, so the whole procedure takes around 9 minutes.
I have very, very little data in the tables.
I hope this gives you a clearer idea. Awaiting your replies.
Regards,
Khaleel. -
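An aside on the slow block flagged above: it issues one UPDATE per row of c_rname_tem, scanning both target tables on every pass of the loop. A set-based sketch (untested; table and column names taken from the procedure above, with latest_routing_id as the same PL/SQL variable) collapses the loop into one correlated UPDATE per table:

```sql
-- Untested sketch: each table is scanned once instead of once per
-- row of c_rname_tem; only rows with a matching routing are touched.
UPDATE t_routing_operations_api t
SET    t.routing = (SELECT c.sub_routing
                    FROM   c_rname_tem c
                    WHERE  c.routing = t.routing)
WHERE  t.routing_id = latest_routing_id
AND    t.routing_id != 3
AND    t.routing IN (SELECT routing FROM c_rname_tem);

UPDATE t_next_operations_api t
SET    t.routing = (SELECT c.sub_routing
                    FROM   c_rname_tem c
                    WHERE  c.routing = t.routing)
WHERE  t.routing_id = latest_routing_id
AND    t.routing_id != 3
AND    t.routing IN (SELECT routing FROM c_rname_tem);
```

With only ~15 rows per table the per-iteration cost should drop well below 4 seconds; if it does not, an index on the routing columns would be the next thing to check.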
Enter query problem -please help
Hello guys,
Currently I am facing a problem. I have a form which shows the first three columns of the EMP table (empname, empno, dept). The form has one button; when the user presses it, a new form opens showing more details about the employee. That detail form has another button, Return, which brings the user back to the first form.
In the Return button I have written the code
exit_form;
but the problem is that after returning to the form, when I press the Enter Query button it throws the error
FRM-41003: This function cannot be performed here.
But after clicking on an EMP row, pressing the Enter Query button works fine.
Can anybody please tell me how I can use the Enter Query button without first clicking on an EMP row?
Thanks
Rajat

Rajat,
Sounds like when you exit your detail form the focus is not returned to a database block/item. In order to enter "Query" mode, the focus must be on a "Queryable" item. One solution would be to code your Form level Execute-Query trigger to move the focus to the EMP block and then execute_query(). For example:
BEGIN
GO_BLOCK('EMP');
EXECUTE_QUERY();
END;

Hope this helps.
Craig...
-- If my response or the response of another is helpful or answers your question please mark the response accordingly. Thanks! -
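Another hedged variant of the same idea (form and block names here are illustrative, not from the original post): restore focus in the first form immediately after the CALL_FORM that opens the detail form, so that a queryable block already has focus when control returns:

```sql
-- WHEN-BUTTON-PRESSED trigger on the "details" button of the first form
-- (EMP_DETAILS is an assumed form name; EMP is the base-table block)
BEGIN
  CALL_FORM('EMP_DETAILS');  -- control returns here after EXIT_FORM in the detail form
  GO_BLOCK('EMP');           -- put focus back on a queryable database block
END;
```

Either approach works; this one avoids having to override the form-level Execute-Query trigger.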
How to optimize this query? Please help
I have one table (argus) with 80,000 rows and another table (p0f) with 30,000 rows, and I have to join the two tables on a time field. The query is as follows:
select distinct(start_time),res.port, res.dst_port from (select * from argus where argus.start_time between '2007-06-13 19:00:00' and '2007-06-22 20:00:00') res left outer join p0f on res.start_time=p0f.p0f_timestamp ;
The query is taking a very long time. I have created indexes on start_time and p0f_timestamp, which improved performance, but not by much. My date comparisons will vary every time I execute a new query.
Please tell me, is there another way to execute such a query and get the same results?
Please help me, as my records are increasing day by day.
Thanks
Shaveta

From my small testcase it seems that both queries are absolutely identical and don't actually take too much time:
SQL> create table argus as (select created start_time, object_id port, object_id dst_port from all_objects union all
2 select created start_time, object_id port, object_id dst_port from all_objects)
3 /
Table created.
SQL> create table p0f as select created p0f_timestamp, object_id p0f_port, object_id p0f_dst_port from all_objects
2 /
Table created.
SQL> create index argus_idx on argus (start_time)
2 /
Index created.
SQL> create index p0f_idx on p0f (p0f_timestamp)
2 /
Index created.
SQL>
SQL> begin
2 dbms_stats.gather_table_stats(user,'argus',cascade=>true);
3 dbms_stats.gather_table_stats(user,'p0f',cascade=>true);
4 end;
5 /
PL/SQL procedure successfully completed.
SQL>
SQL> select count(*) from argus
2 /
COUNT(*)
94880
SQL> select count(*) from p0f
2 /
COUNT(*)
47441
SQL>
SQL> set timing on
SQL> set autotrace traceonly explain statistics
SQL>
SQL> select distinct (start_time), res.port, res.dst_port
2 from (select *
3 from argus
4 where argus.start_time between to_date('2007-06-13 19:00:00','RRRR-MM-DD HH24:MI:SS')
5 and to_date('2007-06-22 20:00:00','RRRR-MM-DD HH24:MI:SS')) res
6 left outer join
7 p0f on res.start_time = p0f.p0f_timestamp
8 ;
246 rows selected.
Elapsed: 00:00:02.51
Execution Plan
Plan hash value: 1442901002
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 21313 | 520K| | 250 (6)| 00:00:04 |
| 1 | HASH UNIQUE | | 21313 | 520K| 1352K| 250 (6)| 00:00:04 |
|* 2 | FILTER | | | | | | |
|* 3 | HASH JOIN RIGHT OUTER| | 21313 | 520K| | 91 (11)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | P0F_IDX | 3661 | 29288 | | 11 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | ARGUS | 7325 | 121K| | 79 (12)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
HH24:MI:SS')<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD HH24:MI:SS'))
3 - access("ARGUS"."START_TIME"="P0F"."P0F_TIMESTAMP"(+))
4 - access("P0F"."P0F_TIMESTAMP"(+)>=TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
HH24:MI:SS') AND "P0F"."P0F_TIMESTAMP"(+)<=TO_DATE('2007-06-22
20:00:00','RRRR-MM-DD HH24:MI:SS'))
5 - filter("ARGUS"."START_TIME">=TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
HH24:MI:SS') AND "ARGUS"."START_TIME"<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD
HH24:MI:SS'))
Statistics
1 recursive calls
0 db block gets
304 consistent gets
0 physical reads
0 redo size
7354 bytes sent via SQL*Net to client
557 bytes received via SQL*Net from client
18 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
246 rows processed
SQL>
SQL> select distinct start_time, port, dst_port
2 from argus left outer join p0f on start_time = p0f_timestamp
3 where start_time between to_date ('2007-06-13 19:00:00','RRRR-MM-DD HH24:MI:SS')
4 and to_date ('2007-06-22 20:00:00','RRRR-MM-DD HH24:MI:SS')
5 /
246 rows selected.
Elapsed: 00:00:02.47
Execution Plan
Plan hash value: 1442901002
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 21313 | 520K| | 250 (6)| 00:00:04 |
| 1 | HASH UNIQUE | | 21313 | 520K| 1352K| 250 (6)| 00:00:04 |
|* 2 | FILTER | | | | | | |
|* 3 | HASH JOIN RIGHT OUTER| | 21313 | 520K| | 91 (11)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | P0F_IDX | 3661 | 29288 | | 11 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | ARGUS | 7325 | 121K| | 79 (12)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
HH24:MI:SS')<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD HH24:MI:SS'))
3 - access("START_TIME"="P0F_TIMESTAMP"(+))
4 - access("P0F_TIMESTAMP"(+)>=TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
HH24:MI:SS') AND "P0F_TIMESTAMP"(+)<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD
HH24:MI:SS'))
5 - filter("ARGUS"."START_TIME">=TO_DATE('2007-06-13 19:00:00','RRRR-MM-DD
HH24:MI:SS') AND "ARGUS"."START_TIME"<=TO_DATE('2007-06-22 20:00:00','RRRR-MM-DD
HH24:MI:SS'))
Statistics
1 recursive calls
0 db block gets
304 consistent gets
0 physical reads
0 redo size
7354 bytes sent via SQL*Net to client
557 bytes received via SQL*Net from client
18 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
246 rows processed

Can you show us a similar testcase with explain plan and statistics? -
Need example for BAPI query. Please, help.
Hi,
I badly need help with BAPI_ACC_ACTIVITY_ALLOC_POST.
Does anybody have some example code for a JCo call?
Thanks.
Vladimir

Hi,
Try this code...
package jco;
import com.sap.mw.jco.*;

public class jcosample {
    public static void main(String[] args) {
        JCO.Client myConnection = null;
        JCO.Repository mRepository = null;
        JCO.Function myFunction = null;
        try {
            myConnection = JCO.createClient("client", "username", "password",
                                            "language", "ip address", "system no");
            myConnection.connect();
            mRepository = new JCO.Repository("WIPRO", myConnection);
            if (mRepository == null) {
                System.out.println("NULL");
            }
            try {
                IFunctionTemplate ft = mRepository.getFunctionTemplate("BAPI_COMPANYCODE_GETLIST");
                myFunction = ft.getFunction();
            } catch (Exception ex) {
                throw new Exception(ex + " Problem retrieving JCO.Function object.");
            }
            if (myFunction == null) {
                System.exit(1);
            }
            myConnection.execute(myFunction);
            JCO.Table codes = myFunction.getTableParameterList().getTable("COMPANYCODE_LIST");
            int size = codes.getNumRows();
            if (size == 0) {
                System.out.println("No value matches the selection criteria");
            } else {
                // setRow is zero-based, so iterate from 0 to size - 1
                for (int i = 0; i < size; i++) {
                    codes.setRow(i);
                    System.out.print(codes.getString("COMP_CODE"));
                    System.out.println(codes.getString("COMP_NAME"));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        } finally {
            if (myConnection != null) {
                myConnection.disconnect();
            }
        }
    }
}
Hope that helps...
Note: for a BAPI that inserts or modifies data, you must also call BAPI_TRANSACTION_COMMIT for the changes to be reflected in the database.
Please let me know if that helps you.
Cheers
Kathir~ -
Hi All,
I would like to tune the following query in a better manner
SELECT hou.NAME organization_name
,haou.name parent_org_name
,msi.secondary_inventory_name sub_inventory_code
,msi.availability_type nettable_sub_inventory
,msib.segment1 item_name
,msib.description item_description
,mc.concatenated_segments category_name
,msib.primary_uom_code item_uom_code
,XXTEST_TEST_ONHAND(msib.organization_id,msib.inventory_item_id,msi.secondary_inventory_name) AVAILABLE_ONHAND
,NVL((SELECT SUM(quantity_shipped - quantity_received)
FROM rcv_shipment_lines rmlv
WHERE rmlv.to_organization_id = msib.organization_id
AND rmlv.item_id = msib.inventory_item_id
AND rmlv.to_subinventory = msi.secondary_inventory_name
AND source_document_code IN ('REQ','INVENTORY')
AND rmlv.shipment_line_status_code in ('PARTIALLY RECEIVED','EXPECTED')),0) intransit_qunatity
,msib.organization_id
,msib.inventory_item_id
,mic.category_set_id
FROM mtl_system_items_b msib
,hr_organization_units hou
,mtl_secondary_inventories msi
,mtl_item_categories mic
,mtl_categories_b_kfv mc
,per_org_structure_versions posv
,per_org_structure_elements pose
,hr_all_organization_units haou
,per_organization_structures pos
WHERE hou.organization_id = msi.organization_id
AND msib.organization_id = hou.organization_id
AND mic.inventory_item_id = msib.inventory_item_id
AND mic.organization_id = msib.organization_id
AND mc.category_id = mic.category_id
AND mic.category_set_id = FND_PROFILE.VALUE('XXTEST_INV_INVENTORY_CAT_SET')
AND pos.organization_structure_id = posv.organization_structure_id
AND posv.org_structure_version_id = pose.org_structure_version_id
AND haou.organization_id = pose.organization_id_parent
AND pos.name = FND_PROFILE.VALUE('XXTEST_INV_ORG_HIERARCHY')
AND pose.organization_id_child = msib.organization_id;
Purpose:
Actually this is for creating a form view, and the custom function encapsulates an Oracle Apps API. We could also put the custom function in the POST-QUERY trigger of the form block, but I feel it is better to put it in the view itself. We expect this query to fetch around 500,000 records.
Expected record counts:
mtl_system_items_b - less than 100,000 records
hr_organization_units - less than 1,000 records
mtl_secondary_inventories - less than 1,000 records
mtl_item_categories - less than 300,000 records
mtl_categories_b_kfv - less than 100 records
per_org_structure_versions - less than 1,000 records
per_org_structure_elements - less than 1,000 records
hr_all_organization_units - less than 1,000 records
per_organization_structures - less than 1,000 records
Version of DB: 10.2.0.4.0

Aside from what others have said, your WHERE clause isn't doing much filtering, mostly joining, so I would guess (only a guess) that indexes wouldn't be used.
Also, you're doing two things in the SELECT that may hurt performance: a function call and another SELECT statement.
Can you turn that inline SELECT into an outer join in the main query? What does that function call do? Can you embed the logic into your statement? Otherwise it's going to get called 500,000 times.
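To make the outer-join suggestion concrete, here is a hedged, untested sketch (table and column names are taken from the posted query; the rest of the select list and joins are elided): pre-aggregate rcv_shipment_lines once and outer-join the result, instead of running the scalar subquery for each of the ~500,000 rows. This assumes the query is moved to ANSI join syntax:

```sql
SELECT /* ... same select list as the original ... */
       NVL(rsl.intransit_qty, 0) intransit_quantity
FROM   mtl_system_items_b msib
       /* ... the other tables joined as in the original query ... */
       LEFT OUTER JOIN
       (SELECT to_organization_id, item_id, to_subinventory,
               SUM(quantity_shipped - quantity_received) intransit_qty
        FROM   rcv_shipment_lines
        WHERE  source_document_code IN ('REQ', 'INVENTORY')
        AND    shipment_line_status_code IN ('PARTIALLY RECEIVED', 'EXPECTED')
        GROUP BY to_organization_id, item_id, to_subinventory) rsl
       ON  rsl.to_organization_id = msib.organization_id
       AND rsl.item_id            = msib.inventory_item_id
       AND rsl.to_subinventory    = msi.secondary_inventory_name
```

The aggregation then runs once over rcv_shipment_lines rather than once per result row; the NVL still turns non-matching rows into zero, as in the original.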