802.1x Query
Hi, I am implementing 802.1x in my network, where I have some Avaya switches and some Cisco switches. My first question is how I can implement it on the Avaya switches. Second, how do I configure a Cisco switch port that connects to another Cisco switch and is a trunk port?
Can anyone explain 802.1x for my scenario? I am also planning to add a NAC box, so how would it be integrated into this network?
Configuring radius/802.1x on Avaya switch
http://support.avaya.com/elmodocs2/p330/P330/Configuring%20Steel.pdf
Configuring 802.1X Port-Based Authentication
http://www.cisco.com/en/US/docs/switches/lan/catalyst3550/software/release/12.1_8_ea1/configuration/guide/sw8021x.html
You cannot configure 802.1x on a trunk port, because 802.1x is not supported on these port types:
Trunk port
Dynamic-access ports
Dynamic ports
Switched Port Analyzer (SPAN) ports
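For the host-facing access ports (not the switch-to-switch trunk), a minimal IOS configuration looks roughly like this. The RADIUS server address and shared key below are placeholders, and exact commands vary a little between IOS releases:

```
! global configuration
aaa new-model
aaa authentication dot1x default group radius
dot1x system-auth-control
radius-server host 192.0.2.10 auth-port 1812 key <shared-secret>
! per host-facing access port only
interface FastEthernet0/1
 switchport mode access
 dot1x port-control auto
```

Leave the inter-switch trunk out of the dot1x configuration entirely; authentication happens at the edge ports where hosts attach.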
Hope this helps.
Regards,
JK
Similar Messages
-
Left outer join with NVL as part of the join criteria
Hi,
I have query like this:
Select
Main.X_ID,
NVL(Option1.Y_ID, Option2.Y_ID),
Main.Z_ID
from TableMain Main
left join (Select X_ID, Y_ID from TableOption where type_cd = 'Type1') Option1
on Main.X_ID = Option1.X_ID
left join (Select X_ID, Y_ID from TableOption where type_cd = 'Type2') Option2
on Main.X_ID = Option2.X_ID
left join TableSub Sub
on Main.Z_ID = Sub.Z_ID
and Sub.Y_ID = NVL(Option1.Y_ID, Option2.Y_ID)
where Sub.Z_ID is null and Sub.Y_ID is null
Basically, I want to show all Z_IDs in TableMain that are not in TableSub, where the joining Y_ID is the Type1 one; if there is no Y_ID for Type1, then use the Y_ID for Type2.
The query works when a Type1 row exists, but not when only a Type2 row does.
Is NVL the correct function to use in the join? Or is there a better way to write such a query?
Any help would be greatly appreciated. Thanks!!
FYI, all IDs are NUMBERs. Hopefully this is easier to understand :)
I'm using 11g.
DDL and data:
create table tablemain (
x_id number (20),
z_id number (20),
amount number (20,2) -- scale needed: the amounts below have two decimal places
);
insert into tablemain values (1, 1000, 6767.45);
insert into tablemain values (1, 1001, 767.45);
insert into tablemain values (1, 1002, 67.85);
insert into tablemain values (1, 1003, 997.85);
insert into tablemain values (2, 1002, 1997.85);
insert into tablemain values (2, 1004, 197.85);
insert into tablemain values (2, 1005, 7.85);
insert into tablemain values (3, 1000, 7.44);
insert into tablemain values (3, 1006, 447.88);
create table tableoption (
y_id number (20),
x_id number (20),
type_cd varchar2(20)
);
insert into tableoption values (800, 1, 'Type1');
insert into tableoption values (800, 3, 'Type2');
insert into tableoption values (801, 1, 'Type2');
insert into tableoption values (802, 2, 'Type1');
create table tablesub (
y_id number (20),
z_id number (20)
);
insert into tablesub values (800, 1000);
insert into tablesub values (800, 1001);
insert into tablesub values (800, 1004);
insert into tablesub values (800, 1006);
insert into tablesub values (801, 1001);
insert into tablesub values (801, 1002);
insert into tablesub values (801, 1005);
insert into tablesub values (801, 1006);
insert into tablesub values (802, 1005);
insert into tablesub values (802, 1004);
Query:
SELECT Nvl(option1.y_id, option2.y_id) as y_id,
Nvl(option1.x_id, option2.x_id) as x_id,
mains.z_id,
mains.amount
FROM tablemain mains
left join (SELECT x_id,
y_id
FROM tableoption
WHERE type_cd = 'Type1') option1
ON mains.x_id = option1.x_id
left join (SELECT x_id,
y_id
FROM tableoption
WHERE type_cd = 'Type2') option2
ON mains.x_id = option2.x_id
left join tablesub sub
ON mains.z_id = sub.z_id
AND sub.y_id = Nvl(option1.y_id, option2.y_id)
WHERE sub.z_id IS NULL
AND sub.y_id IS NULL
What the output should be:
y_id ---- x_id ---- z_id ---- amount
800 ---- 1 ---- 1002 ---- 67.85
800 ---- 1 ---- 1003 ---- 997.85
801 ---- 1 ---- 1000 ---- 6767.45
801 ---- 1 ---- 1003 ---- 997.85
802 ---- 2 ---- 1002 ---- 1997.85
Currently the output of the query is:
800 ---- 1 ---- 1002 ---- 67.85
800 ---- 1 ---- 1003 ---- 997.85
802 ---- 2 ---- 1002 ---- 1997.85
It is missing the rows where 801 is Type2-only in tableoption:
801 ---- 1 ---- 1000 ---- 6767.45
801 ---- 1 ---- 1003 ---- 997.85
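For what it's worth, the expected output above contains a row for every option of an X_ID (both 800 and 801 for X_ID 1), not just the NVL-preferred one, so one way to get it is to drop NVL entirely: join every matching TableOption row and anti-join TableSub on the (Y_ID, Z_ID) pair. A quick sanity check of that idea in SQLite (the dialect differs from 11g, but the join logic is the same, and no NVL is needed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
create table tablemain   (x_id integer, z_id integer, amount real);
create table tableoption (y_id integer, x_id integer, type_cd text);
create table tablesub    (y_id integer, z_id integer);
insert into tablemain values
 (1,1000,6767.45),(1,1001,767.45),(1,1002,67.85),(1,1003,997.85),
 (2,1002,1997.85),(2,1004,197.85),(2,1005,7.85),
 (3,1000,7.44),(3,1006,447.88);
insert into tableoption values
 (800,1,'Type1'),(800,3,'Type2'),(801,1,'Type2'),(802,2,'Type1');
insert into tablesub values
 (800,1000),(800,1001),(800,1004),(800,1006),
 (801,1001),(801,1002),(801,1005),(801,1006),
 (802,1005),(802,1004);
""")

# Join every option for the x_id, then anti-join tablesub on the
# (y_id, z_id) pair: a row survives only if that exact pair is absent.
rows = cur.execute("""
select o.y_id, o.x_id, m.z_id, m.amount
from tablemain m
join tableoption o on o.x_id = m.x_id
left join tablesub s on s.z_id = m.z_id and s.y_id = o.y_id
where s.y_id is null
order by o.y_id, m.z_id
""").fetchall()

for row in rows:
    print(row)
```

This reproduces all five expected rows, including the two 801/Type2 rows that the NVL version loses.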
Edited by: Hazy on Feb 22, 2012 3:25 PM -
XML Generation using a SQL query in an efficient way - Help needed urgently
Hi
I am facing the following issue while generating XML using a SQL query. The query returns the table below.
CODE ID   MARK
==================================
4    2331 809
4    1772 802
4    2331 845
5    2331 804
5    2331 800
5    2210 801
I need to generate the below given xml using a query
<data>
<CODE>4</CODE>
<IDS>
<ID>2331</ID>
<ID>1772</ID>
</IDS>
<MARKS>
<MARK>809</MARK>
<MARK>802</MARK>
<MARK>845</MARK>
</MARKS>
</data>
<data>
<CODE>5</CODE>
<IDS>
<ID>2331</ID>
<ID>2210</ID>
</IDS>
<MARKS>
<MARK>804</MARK>
<MARK>800</MARK>
<MARK>801</MARK>
</MARKS>
</data>
Can anyone help me with some idea for generating the above CLOB message?
Not sure if this is the right way to do it, but:
/* Formatted on 10/12/2011 12:52:28 PM (QP5 v5.149.1003.31008) */
WITH data AS (SELECT 4 code, 2331 id, 809 mark FROM DUAL
UNION
SELECT 4, 1772, 802 FROM DUAL
UNION
SELECT 4, 2331, 845 FROM DUAL
UNION
SELECT 5, 2331, 804 FROM DUAL
UNION
SELECT 5, 2331, 800 FROM DUAL
UNION
SELECT 5, 2210, 801 FROM DUAL)
SELECT TO_CLOB (
'<DATA>'
|| listagg (xml, '</DATA><DATA>') WITHIN GROUP (ORDER BY xml)
|| '</DATA>')
xml
FROM ( SELECT '<CODE>'
|| code
|| '</CODE><IDS><ID>'
|| LISTAGG (id, '</ID><ID>') WITHIN GROUP (ORDER BY id)
|| '</ID></IDS><MARKS><MARK>'
|| LISTAGG (mark, '</MARK><MARK>') WITHIN GROUP (ORDER BY id)
|| '</MARK></MARKS>'
xml
FROM data
GROUP BY code) -
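Two notes on the LISTAGG approach above: in 11g, LISTAGG cannot take DISTINCT, so for code 4 it would emit &lt;ID&gt;2331&lt;/ID&gt; twice, while the desired output de-duplicates IDs; and in Oracle, XMLELEMENT/XMLAGG is usually safer than hand-concatenated tags. The grouping rules themselves (one &lt;data&gt; per CODE, IDs de-duplicated in first-seen order, all MARKs kept) can be pinned down in a small Python sketch:

```python
# (code, id, mark) rows exactly as shown in the question's table
rows = [(4, 2331, 809), (4, 1772, 802), (4, 2331, 845),
        (5, 2331, 804), (5, 2331, 800), (5, 2210, 801)]

chunks = []
for code in dict.fromkeys(c for c, _, _ in rows):  # codes, first-seen order
    ids = dict.fromkeys(i for c, i, _ in rows if c == code)  # de-duplicated
    marks = [m for c, _, m in rows if c == code]             # all marks kept
    chunks.append(
        "<data><CODE>%d</CODE><IDS>%s</IDS><MARKS>%s</MARKS></data>" % (
            code,
            "".join("<ID>%d</ID>" % i for i in ids),
            "".join("<MARK>%d</MARK>" % m for m in marks)))
xml = "".join(chunks)
print(xml)
```

Whatever SQL is used in the end, it has to implement exactly these three rules to match the requested output.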
SQL Query running really slow, any help in improving will be Great!
Hi,
I am really new to performance tuning and optimization techniques. My knowledge of explain plans is also only theoretical; I have no clue how to find the real issue that is making the query slow. So if anyone can give me a good direction on even where to start, that would be great.
Now, my issue is that I have a query which runs really, really slowly. If I run it for a small subset of data, it is fast (flying, actually), but if I run the same query over everything (the full required data), it runs for ages. (It has actually been running for two days now and is still running.)
I am pasting my query here; the output shows that it gets stuck after "Table created".
SQL> @routinginfo
Table dropped.
Table created.
Please please help!
I also ran explain plan for this query, and there are a number of rows in plan_table now.
Sorry! Is there a way to attach a file here? I would like to attach my explain plan as well.
My query -Routinginfo.sql
set trimspool on
set heading on
set verify on
set serveroutput on
drop table routinginfo;
CREATE TABLE routinginfo
( POST_TOWN_NAME VARCHAR2(22 BYTE),
DELIVERY_OFFICE_NAME VARCHAR2(40 BYTE),
ROUTE_ID NUMBER(10),
ROUTE_NAME VARCHAR2(40 BYTE),
BUILDING_ID NUMBER(10),
SUB_BUILDING_ID NUMBER(10),
SEQUENCE_NO NUMBER(4),
PERSONAL_NAME VARCHAR2(60 BYTE),
ADDRESS VARCHAR2(1004 BYTE),
BUILDING_USE VARCHAR2(1 BYTE),
COMMENTS VARCHAR2(200 BYTE),
EAST NUMBER(17,5),
NORTH NUMBER(17,5)
);
insert into routinginfo
(post_town_name,delivery_office_name,route_id,route_name,
building_id,sub_building_id,sequence_no,personal_name,
address,building_use,comments,east,north)
select
p.name,
d.name,
b.route_id,
r.name,
b.building_id,
s.sub_build_id,
b.sequence_no,
b.personal_name,
ad.addr_line_1||' '||ad.addr_line_2||' '||ad.addr_line_3||' '||ad.addr_line_4||' '||ad.addr_line_5,
b.building_use,
rtrim(replace(b.comments,chr(10),'')),
b.east,
b.north
from t_buildings b,
(select * from t_sub_buildings where nvl(invalid,'N') = 'N') s,
t_routes r,
t_delivery_offices d,
t_post_towns p,
t_address_model ad
where b.building_id = s.building_id(+)
and s.building_id is null
and r.route_id=b.route_id
and (nvl(b.residential_delivery_points,0) > 0 OR nvl(b.commercial_delivery_points,0) > 0)
and r.delivery_office_id=d.delivery_office_id
--and r.delivery_office_id=303
and D.POST_TOWN_ID=P.post_town_id
and ad.building_id=b.building_id
and ad.sub_building_id is null
and nvl(b.invalid, 'N') = 'N'
and nvl(b.derelict, 'N') = 'N'
union
select
p.name,
d.name ,
b.route_id ,
r.name ,
b.building_id ,
s.sub_build_id ,
NVL(s.sequence_no,b.sequence_no),
b.personal_name ,
ad.addr_line_1||' '||ad.addr_line_2||' '||ad.addr_line_3||' '||ad.addr_line_4||' '||ad.addr_line_5,
b.building_use,
rtrim(replace(b.comments,chr(10),'')),
b.east,
b.north
from t_buildings b,
(select * from t_sub_buildings where nvl(invalid,'N') = 'N') s,
t_routes r,
t_delivery_offices d,
t_post_towns p,
t_address_model ad
where s.building_id = b.building_id
and r.route_id = s.route_id
and (nvl(b.residential_delivery_points,0) > 0 OR nvl(b.commercial_delivery_points,0) > 0)
and r.delivery_office_id=d.delivery_office_id
--and r.delivery_office_id=303
and D.POST_TOWN_ID=P.post_town_id
and ad.building_id=b.building_id
and ad.sub_building_id = s.sub_build_id
and nvl(b.invalid, 'N') = 'N'
and nvl(b.derelict, 'N') = 'N'
union
select
p.name,
d.name,
b.route_id ,
r.name ,
b.building_id,
s.sub_build_id ,
NVL(s.sequence_no,b.sequence_no) ,
b.personal_name ,
ad.addr_line_1||' '||ad.addr_line_2||' '||ad.addr_line_3||' '||ad.addr_line_4||' '||ad.addr_line_5 ,
b.building_use,
rtrim(replace(b.comments,chr(10),'')),
b.east,
b.north
from t_buildings b,
(select * from t_sub_buildings where nvl(invalid,'N') = 'N') s,
t_routes r,
t_delivery_offices d,
t_post_towns p,
t_localities l,
t_localities lo,
t_localities loc,
t_tlands tl,
t_address_model ad
where s.building_id = b.building_id
and s.route_id is null
and r.route_id = b.route_id
and (nvl(b.residential_delivery_points,0) > 0 OR nvl(b.commercial_delivery_points,0) > 0)
and r.delivery_office_id=d.delivery_office_id
--and r.delivery_office_id=303
and D.POST_TOWN_ID=P.post_town_id
and ad.building_id=b.building_id
and ad.sub_building_id = s.sub_build_id
and nvl(b.invalid, 'N') = 'N'
and nvl(b.derelict, 'N') = 'N';
commit;
Edited by: Krithi on 16-Jun-2009 01:48
Edited by: Krithi on 16-Jun-2009 01:51
Edited by: Krithi on 16-Jun-2009 02:44
This link is helpful alright, but as a beginner it is taking me too long to understand. I am going to learn the techniques for sure, though.
For the time being, I am pasting my explain plan for the above query here, in the hope that an expert can really help me with this one.
STATEMENT_ID TIMESTAMP REMARKS OPERATION OPTIONS OBJECT_NODE OBJECT_OWNER OBJECT_NAME OBJECT_INSTANCE OBJECT_TYPE OPTIMIZER SEARCH_COLUMNS ID PARENT_ID POSITION COST CARDINALITY BYTES
06/16/2009 09:33:01 SELECT STATEMENT CHOOSE 0 829,387,159,200 829,387,159,200 3,720,524,291,654,720 703,179,091,122,042,000
06/16/2009 09:33:01 SORT UNIQUE 1 0 1 829,387,159,200 3,720,524,291,654,720 703,179,091,122,042,000
06/16/2009 09:33:01 UNION-ALL 2 1 1
06/16/2009 09:33:01 HASH JOIN 3 2 1 11,209 87,591 15,853,971
06/16/2009 09:33:01 FILTER 4 3 1
06/16/2009 09:33:01 HASH JOIN OUTER 5 4 1
06/16/2009 09:33:01 HASH JOIN 6 5 1 5,299 59,325 6,585,075
06/16/2009 09:33:01 VIEW GEO2 index$_join$_006 6 7 6 1 4 128 1,792
06/16/2009 09:33:01 HASH JOIN 8 7 1 5,299 59,325 6,585,075
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 POST_TOWN_NAME_I NON-UNIQUE ANALYZED 9 8 1 1 128 1,792
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 POST_TOWN_PK UNIQUE ANALYZED 10 8 2 1 128 1,792
06/16/2009 09:33:01 HASH JOIN 11 6 2 5,294 59,325 5,754,525
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_DELIVERY_OFFICES 5 ANALYZED 12 11 1 7 586 10,548
06/16/2009 09:33:01 HASH JOIN 13 11 2 5,284 59,325 4,686,675
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_ROUTES 4 ANALYZED 14 13 1 7 4,247 118,916
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_BUILDINGS 1 ANALYZED 15 13 2 5,265 59,408 3,029,808
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_SUB_BUILDINGS 3 ANALYZED 16 5 2 851 278,442 3,898,188
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_ADDRESS_MODEL 7 ANALYZED 17 3 2 3,034 1,582,421 88,615,576
06/16/2009 09:33:01 NESTED LOOPS 18 2 2 10,217 1 189
06/16/2009 09:33:01 NESTED LOOPS 19 18 1 10,216 1 175
06/16/2009 09:33:01 HASH JOIN 20 19 1 10,215 1 157
06/16/2009 09:33:01 HASH JOIN 21 20 1 6,467 80,873 8,168,173
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_ROUTES 11 ANALYZED 22 21 1 7 4,247 118,916
06/16/2009 09:33:01 HASH JOIN 23 21 2 6,440 80,924 5,907,452
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_BUILDINGS 8 ANALYZED 24 23 1 5,265 59,408 3,029,808
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_SUB_BUILDINGS 10 ANALYZED 25 23 2 851 278,442 6,125,724
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_ADDRESS_MODEL 14 ANALYZED 26 20 2 3,034 556,000 31,136,000
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_DELIVERY_OFFICES 12 ANALYZED 27 19 2 1 1 18
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 DELIVERY_OFFICE_PK UNIQUE ANALYZED 1 28 27 1 1
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_POST_TOWNS 13 ANALYZED 29 18 2 1 1 14
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 POST_TOWN_PK UNIQUE ANALYZED 1 30 29 1 1
06/16/2009 09:33:01 MERGE JOIN CARTESIAN 31 2 3 806,976,583,802 3,720,524,291,567,130 703,179,091,106,188,000
06/16/2009 09:33:01 MERGE JOIN CARTESIAN 32 31 1 16,902,296 73,359,971,046 13,865,034,527,694
06/16/2009 09:33:01 MERGE JOIN CARTESIAN 33 32 1 1,860 1,207,174 228,155,886
06/16/2009 09:33:01 MERGE JOIN CARTESIAN 34 33 1 1,580 20 3,780
06/16/2009 09:33:01 NESTED LOOPS 35 34 1 1,566 1 189
06/16/2009 09:33:01 NESTED LOOPS 36 35 1 1,565 1 175
06/16/2009 09:33:01 NESTED LOOPS 37 36 1 1,564 1 157
06/16/2009 09:33:01 NESTED LOOPS 38 37 1 1,563 1 129
06/16/2009 09:33:01 NESTED LOOPS 39 38 1 1,207 178 12,994
06/16/2009 09:33:01 TABLE ACCESS FULL GEO2 T_SUB_BUILDINGS 17 ANALYZED 40 39 1 851 178 3,916
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_BUILDINGS 15 ANALYZED 41 39 2 2 1 51
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 BUILDING_PK UNIQUE ANALYZED 1 42 41 1 1 31
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_ADDRESS_MODEL 25 ANALYZED 43 38 2 2 1 56
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 MODEL_MODEL2_UK UNIQUE ANALYZED 2 44 43 1 1 1
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_ROUTES 18 ANALYZED 45 37 2 1 1 28
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 ROUTE_PK UNIQUE ANALYZED 1 46 45 1 1
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_DELIVERY_OFFICES 19 ANALYZED 47 36 2 1 1 18
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 DELIVERY_OFFICE_PK UNIQUE ANALYZED 1 48 47 1 1
06/16/2009 09:33:01 TABLE ACCESS BY INDEX ROWID GEO2 T_POST_TOWNS 20 ANALYZED 49 35 2 1 1 14
06/16/2009 09:33:01 INDEX UNIQUE SCAN GEO2 POST_TOWN_PK UNIQUE ANALYZED 1 50 49 1 1
06/16/2009 09:33:01 BUFFER SORT 51 34 2 1,579 60,770
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 LOCAL_COUNTY_FK_I NON-UNIQUE ANALYZED 52 51 1 14 60,770
06/16/2009 09:33:01 BUFFER SORT 53 33 2 1,846 60,770
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 LOCAL_COUNTY_FK_I NON-UNIQUE ANALYZED 54 53 1 14 60,770
06/16/2009 09:33:01 BUFFER SORT 55 32 2 16,902,282 60,770
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 LOCAL_COUNTY_FK_I NON-UNIQUE ANALYZED 56 55 1 14 60,770
06/16/2009 09:33:01 BUFFER SORT 57 31 2 806,976,583,788 50,716
06/16/2009 09:33:01 INDEX FAST FULL SCAN GEO2 TLAND_COUNTY_FK_I NON-UNIQUE ANALYZED 58 57 1 11 50,716
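One thing that stands out in the plan: in the third branch of the UNION, t_localities is listed three times (aliases l, lo, loc) and t_tlands once, yet none of those aliases appears anywhere in that branch's WHERE clause. Every table listed in FROM without a join condition multiplies the row count, which is exactly what the MERGE JOIN CARTESIAN steps and the quadrillion-row cardinality estimates are showing. A toy demonstration (table names and row counts here are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
create table t_buildings  (building_id integer);
create table t_localities (locality_id integer);
insert into t_buildings  values (1), (2), (3);
insert into t_localities values (10), (20), (30), (40);
""")

# Base query: 3 rows.
base = cur.execute("select count(*) from t_buildings").fetchone()[0]

# Same query with an unjoined table in FROM: every building row is
# paired with every locality row (a cartesian product), 3 * 4 = 12 rows.
cartesian = cur.execute(
    "select count(*) from t_buildings b, t_localities l").fetchone()[0]

print(base, cartesian)
```

Removing the four unused tables from the third SELECT would be the first thing to try before any deeper tuning.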
Edited by: Krithi on 16-Jun-2009 02:47 -
Urgent Help required in Tunning this query
I have a table, ACCOUNT SPONSOR HOMESTORE (ASH), with more than 30 million rows.
My daily batch needs to update or insert into this table from a temporary table, TEMP_HSTRALCT. The data for the temporary table is populated by the query below, which selects from the TRANSACTION_POINTS and REDEMPTIONS tables. Both of those tables are partitioned on date-time, the job runs daily, and it is running for hours.
Can anyone please help me tune this query?
INSERT INTO temp_hstralct
(tmp_n_collector_account_num, tmp_v_location_id,
tmp_v_sponsor_id, tmp_v_source_file_name,
tmp_n_psc_insert_ind, tmp_n_psc_update_ind,
tmp_n_transaction_amount, tmp_n_transaction_points,
tmp_n_acc_insert_ind, tmp_n_ash_insert_ind,
tmp_n_col_insert_ind, tmp_n_check_digit,
tmp_n_collector_issue_num, tmp_n_csl_insert_ind,
tmp_v_offer_code, tmp_n_psa_insert_ind)
SELECT DISTINCT trp_n_collector_account_num account_num,
trp_v_location_id location_id,
trp_v_sponsor_id sponsor_id,
trp_c_creation_user batch_id, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0
FROM transaction_points, ACCOUNT, locations_master,homestores
WHERE hsr_v_accrual_allowed = 'Y'
AND trp_n_collector_account_num = ACCOUNT.acc_n_account_num(+)
AND ( ( ( ACCOUNT.acc_v_account_type = 'C'
OR ACCOUNT.acc_v_account_type IS NULL
AND hsr_v_b2c_accounts = 'Y'
OR ( ACCOUNT.acc_v_account_type = 'B'
AND hsr_v_nfb_accounts = 'Y'
OR ( ACCOUNT.acc_v_account_type = 'H'
AND hsr_v_hybrid_accounts = 'Y'
AND trp_d_creation_date_time BETWEEN SYSDATE-3
AND SYSDATE
AND trp_v_sponsor_id = 'JSAINSBURY'
AND trp_v_location_id =
locations_master.lnm_v_location_id
AND locations_master.lnm_v_partner_id = 'JSAINSBURY'
AND ( ( ( (INSTR
(hsr_v_store_status,
locations_master.lnm_c_location_status
) > 0
AND (INSTR
(hsr_v_store_type,
locations_master.lnm_c_location_type
) > 0
AND hsr_v_homestore_assignment = 'ST'
OR ( ( locations_master.lnm_c_homestore_ind =
'Y'
AND (INSTR
(hsr_v_store_status,
locations_master.lnm_c_location_status
) > 0
AND hsr_v_homestore_assignment = 'HS'
UNION ALL
SELECT DISTINCT rdm_n_collector_account_num account_num,
rdm_v_location_id location_id,
rom_v_supplier_id sponsor_id,
rdm_c_creation_user batch_id, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0
FROM redemption_details,
reward_offer_master,
ACCOUNT,
locations_master,
HOMESTORES
WHERE hsr_v_redemption_allowed = 'Y'
AND rdm_n_collector_account_num = ACCOUNT.acc_n_account_num(+)
AND ( ( ( ACCOUNT.acc_v_account_type = 'C'
OR ACCOUNT.acc_v_account_type IS NULL
AND hsr_v_b2c_accounts = 'Y'
OR ( ACCOUNT.acc_v_account_type = 'B'
AND hsr_v_nfb_accounts = 'Y'
OR ( ACCOUNT.acc_v_account_type = 'H'
AND hsr_v_hybrid_accounts = 'Y'
AND rdm_d_creation_date_time BETWEEN SYSDATE-3
AND SYSDATE
AND rom_v_reward_offer_id = rdm_v_reward_id
AND rom_v_supplier_id = 'JSAINSBURY'
AND rdm_v_location_id =
locations_master.lnm_v_location_id
AND locations_master.lnm_v_partner_id ='JSAINSBURY'
AND ( ( ( (INSTR
(hsr_v_store_status,
locations_master.lnm_c_location_status
) > 0
AND (INSTR
(hsr_v_store_type,
locations_master.lnm_c_location_type
) > 0
AND hsr_v_homestore_assignment = 'ST'
OR ( ( locations_master.lnm_c_homestore_ind =
'Y'
AND (INSTR
(hsr_v_store_status,
locations_master.lnm_c_location_status
) > 0
AND hsr_v_homestore_assignment = 'HS'
);
I have copied the explain plan as it is; please try pasting it into a text editor. Can you let me know whether a parallel hint will speed up these SELECT queries?
Plan
INSERT STATEMENT CHOOSECost: 410,815 Bytes: 2,798,394 Cardinality: 15,395
32 UNION-ALL
15 SORT UNIQUE Cost: 177,626 Bytes: 2,105,592 Cardinality: 11,896
14 FILTER
13 HASH JOIN Cost: 177,312 Bytes: 2,105,592 Cardinality: 11,896
2 TABLE ACCESS BY INDEX ROWID LMHOLTP.LOCATIONS_MASTER Cost: 37 Bytes: 23,184 Cardinality: 966
1 INDEX RANGE SCAN NON-UNIQUE LMHOLTP.IX_LOCATIONS_MASTER_3 Cost: 3 Cardinality: 1
12 FILTER
11 HASH JOIN OUTER
8 MERGE JOIN CARTESIAN Cost: 155,948 Bytes: 702,656,660 Cardinality: 4,845,908
3 TABLE ACCESS FULL LMHOLTP.HOMESTORES Cost: 2 Bytes: 104 Cardinality: 1
7 BUFFER SORT Cost: 155,946 Bytes: 198,682,228 Cardinality: 4,845,908
6 PARTITION RANGE ITERATOR Partition #: 12
5 TABLE ACCESS BY LOCAL INDEX ROWID LMHOLTP.TRANSACTION_POINTS Cost: 155,946 Bytes: 198,682,228 Cardinality: 4,845,908 Partition #: 12
4 INDEX RANGE SCAN NON-UNIQUE LMHOLTP.IX_TRANSACTION_POINTS_1 Cost: 24,880 Cardinality: 6,978,108 Partition #: 12
10 PARTITION RANGE ALL Partition #: 15 Partitions accessed #1 - #5
9 TABLE ACCESS FULL LMHOLTP.ACCOUNT Cost: 6,928 Bytes: 68,495,680 Cardinality: 8,561,960 Partition #: 15 Partitions accessed #1 - #5
31 SORT UNIQUE Cost: 233,189 Bytes: 692,802 Cardinality: 3,499
30 FILTER
29 FILTER
28 NESTED LOOPS OUTER
24 HASH JOIN Cost: 226,088 Bytes: 664,810 Cardinality: 3,499
16 TABLE ACCESS FULL LMHOLTP.REWARD_OFFER_MASTER Cost: 8 Bytes: 2,280 Cardinality: 114
23 HASH JOIN Cost: 226,079 Bytes: 8,327,280 Cardinality: 48,984
20 TABLE ACCESS BY INDEX ROWID LMHOLTP.LOCATIONS_MASTER Cost: 37 Bytes: 432 Cardinality: 18
19 NESTED LOOPS Cost: 39 Bytes: 2,304 Cardinality: 18
17 TABLE ACCESS FULL LMHOLTP.HOMESTORES Cost: 2 Bytes: 104 Cardinality: 1
18 INDEX RANGE SCAN NON-UNIQUE LMHOLTP.IX_LOCATIONS_MASTER_3 Cost: 3 Cardinality: 966
22 PARTITION RANGE ITERATOR Partition #: 28
21 TABLE ACCESS FULL LMHOLTP.REDEMPTION_DETAILS Cost: 226,019 Bytes: 261,636,270 Cardinality: 6,229,435 Partition #: 28
27 PARTITION RANGE ITERATOR Partition #: 30
26 TABLE ACCESS BY LOCAL INDEX ROWID LMHOLTP.ACCOUNT Cost: 2 Bytes: 8 Cardinality: 1 Partition #: 30
25 INDEX UNIQUE SCAN UNIQUE LMHOLTP.CO_PK_ACCOUNT Cost: 1 Cardinality: 1 Partition #: 30 -
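Separate from the volume question: the WHERE clauses as pasted have lost some of their parentheses, and in SQL, AND binds more tightly than OR, so how the account-type and hsr-flag conditions get re-parenthesized changes which rows qualify. A tiny SQLite illustration of the difference (table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table acc (typ text, b2c text)")
cur.executemany("insert into acc values (?, ?)",
                [("C", "N"), (None, "Y"), ("B", "N")])

# Without parentheses, AND binds tighter than OR, so this means:
#   typ = 'C' OR (typ IS NULL AND b2c = 'Y')
loose = cur.execute(
    "select count(*) from acc where typ = 'C' or typ is null and b2c = 'Y'"
).fetchone()[0]

# With explicit parentheses around the OR, the meaning changes:
#   (typ = 'C' OR typ IS NULL) AND b2c = 'Y'
strict = cur.execute(
    "select count(*) from acc where (typ = 'C' or typ is null) and b2c = 'Y'"
).fetchone()[0]

print(loose, strict)
```

It is worth confirming the intended grouping against the original source before tuning, since a mis-grouped OR changes the result set as well as the plan.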
NOT Using Named Parameters in a Native Query
Hi!
As written in the TopLink JPA extensions reference, TopLink supports using named parameters in native queries.
See example from http://www.oracle.com/technology/products/ias/toplink/jpa/resources/toplink-jpa-extensions.html:
Example 1-11 Specifying a Named Parameter With #
Query queryEmployees = entityManager.createNativeQuery(
"SELECT OBJECT(emp) FROM Employee emp WHERE emp.firstName = #firstName");
queryEmployees.setParameter("firstName", "Joan");
Collection employees = queryEmployees.getResultList();
But I want to use a literal "#" in the SQL statement, so I want to disable this parameter handling somehow and have TopLink execute exactly the SELECT statement I deliver.
I have tried to add the property
<property name="toplink.jdbc.bind-parameters" value="false"/>
into the persistence.xml and I have also tried to add the query hint
query.setHint(TopLinkQueryHints.BIND_PARAMETERS, HintValues.FALSE);
but neither of them prevents the # characters from being treated as a parameter. This causes the exception:
Exception [TOPLINK-6132] (Oracle TopLink Essentials - 9.1 (Build b22)): oracle.toplink.essentials.exceptions.QueryException
Exception Description: Query argument ######' not found in list of parameters provided during query execution.
Query: ReadAllQuery(de.merck.compas.material.SimilarLocalMaterialListItemBV)
The SQL Statement is
SELECT * FROM TABLE (
CAST ( RCGC.PHA_P_COMPASMDM.LookupSimilarMats
( 'COC003','10021500150000000997','30','30','040','standard','','001001','001001','123456','##########','X','X','X','X','U' )
AS RCGC.PHA_COMPAS_SIMILAR_MATS ) )
I am using the latest Toplink essentials build together with an Oracle 9.2i DB in a Java SE web application.
Can anyone give me a little hint what to do?
Thanks and best regards,
Alex
I hope this is what you need:
[TopLink Fine]: 2007.01.19 02:56:48.278--ServerSession(17712783)--Connection(723185)--Thread(Thread[http-8080-Processor2
3,5,main])--SELECT APPL_CGP_MATLOC, CHANGE_REQ_MATLOC, CD_PHA_MAT_LOC, LCOMP, LOC_MAT_NAME, APPL_GDM_MATLOC, LOC_MAT_NAM
E_SHORT, APPL_PDW_MATLOC, PHA_MAT_LOC_LCOMMENT, APPL_TP_MATLOC, DIVISION, APPL_WRS_MATLOC, ARTICLE_ID_MDA, LU_PHA_MAT_LO
C, LOC_MAT_NAME_OLD, LDT_PHA_MAT_LOC, PHA_MAT_LOC_STATUS, CU_PHA_MAT_LOC, MAT_LOC_ID, RCOMP, MAT_ID FROM RCGC.PHA_V_COMP
AS_MAT_MATLOC WHERE ((MAT_LOC_ID = '123456') AND (RCOMP = '001001'))
[TopLink Fine]: 2007.01.19 02:56:48.398--ServerSession(17712783)--Connection(32404901)--Thread(Thread[http-8080-Processo
r23,5,main])--SELECT MAT_ID, DESCRIPTION, PHA_MATERIAL_LCOMMENT, PACKSIZE, LU_PHA_MATERIAL, PACKSIZE_UNIT, LDT_PHA_MATER
IAL, CONTAINER_NAME, CU_PHA_MATERIAL, PURPOSE, CD_PHA_MATERIAL, CONTENT, CONTAINER_ID, AI_FACTOR, PHA_MATERIAL_STATUS, C
MG_SPEC_APP, PROD_LEVEL, PACKAGE_SIZE, PRODUCT_ID FROM RCGC.PHA_V_COMPAS_MATERIAL WHERE (MAT_ID = '10021500150000000997'
[TopLink Fine]: 2007.01.19 02:56:48.468--ServerSession(17712783)--Connection(723185)--Thread(Thread[http-8080-Processor2
3,5,main])--SELECT PRODUCT_ID, PROD_GRP_ID, APPL_FORM_NAME, PRODUCT_NAME, GALENIC_FORM_NAME, APPL_FORM_ID, PHA_PRODUCT_S
TATUS, PHA_PRODUCT_LCOMMENT, GALENIC_FORM_ID, LU_PHA_PRODUCT, INN, LDT_PHA_PRODUCT FROM RCGC.PHA_V_COMPAS_PRODUCT WHERE
(PRODUCT_ID = 'COC003')
[TopLink Fine]: 2007.01.19 02:56:48.568--ServerSession(17712783)--Connection(32404901)--Thread(Thread[http-8080-Processo
r23,5,main])--SELECT APPL_CGP_MATLOC, CHANGE_REQ_MATLOC, CD_PHA_MAT_LOC, LCOMP, LOC_MAT_NAME, APPL_GDM_MATLOC, LOC_MAT_N
AME_SHORT, APPL_PDW_MATLOC, PHA_MAT_LOC_LCOMMENT, APPL_TP_MATLOC, DIVISION, APPL_WRS_MATLOC, ARTICLE_ID_MDA, LU_PHA_MAT_
LOC, LOC_MAT_NAME_OLD, LDT_PHA_MAT_LOC, PHA_MAT_LOC_STATUS, CU_PHA_MAT_LOC, MAT_LOC_ID, RCOMP, MAT_ID FROM RCGC.PHA_V_CO
MPAS_MAT_MATLOC WHERE (MAT_ID = '10021500150000000997')
[TopLink Warning]: 2007.01.19 02:56:48.638--UnitOfWork(4469532)--Thread(Thread[http-8080-Processor23,5,main])--Exception
[TOPLINK-6132] (Oracle TopLink Essentials - 9.1 (Build b22)): oracle.toplink.essentials.exceptions.QueryException
Exception Description: Query argument ######' not found in list of parameters provided during query execution.
Query: ReadAllQuery(de.merck.compas.material.SimilarLocalMaterialListItemBV)
Local Exception Stack:
Exception [TOPLINK-6132] (Oracle TopLink Essentials - 9.1 (Build b22)): oracle.toplink.essentials.exceptions.QueryExcept
ion
Exception Description: Query argument ######' not found in list of parameters provided during query execution.
Query: ReadAllQuery(de.merck.compas.material.SimilarLocalMaterialListItemBV)
at oracle.toplink.essentials.exceptions.QueryException.namedArgumentNotFoundInQueryParameters(QueryException.jav
a:206)
at oracle.toplink.essentials.internal.databaseaccess.DatasourceCall.getValueForInParameter(DatasourceCall.java:6
86)
at oracle.toplink.essentials.internal.databaseaccess.DatasourceCall.getValueForInOutParameter(DatasourceCall.jav
a:704)
at oracle.toplink.essentials.internal.databaseaccess.DatasourceCall.translateQueryString(DatasourceCall.java:630
at oracle.toplink.essentials.internal.databaseaccess.DatabaseCall.translate(DatabaseCall.java:850)
at oracle.toplink.essentials.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQuer
yMechanism.java:212)
at oracle.toplink.essentials.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQuer
yMechanism.java:199)
at oracle.toplink.essentials.internal.queryframework.DatasourceCallQueryMechanism.executeSelectCall(DatasourceCa
llQueryMechanism.java:270)
at oracle.toplink.essentials.internal.queryframework.DatasourceCallQueryMechanism.selectAllRows(DatasourceCallQu
eryMechanism.java:600)
at oracle.toplink.essentials.queryframework.ReadAllQuery.executeObjectLevelReadQuery(ReadAllQuery.java:302)
at oracle.toplink.essentials.queryframework.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:
709)
at oracle.toplink.essentials.queryframework.DatabaseQuery.execute(DatabaseQuery.java:609)
at oracle.toplink.essentials.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:677)
at oracle.toplink.essentials.queryframework.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:7
31)
at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2218)
at oracle.toplink.essentials.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:937)
at oracle.toplink.essentials.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:909)
at oracle.toplink.essentials.internal.ejb.cmp3.base.EJBQueryImpl.executeReadQuery(EJBQueryImpl.java:346)
at oracle.toplink.essentials.internal.ejb.cmp3.base.EJBQueryImpl.getResultList(EJBQueryImpl.java:447)
at de.merck.compas.material.SimilarLocalMaterialListItemBC.selectList(SimilarLocalMaterialListItemBC.java:40)
at de.merck.compas.material.SimilarLocalMaterialListItemBC.selectSimilar(SimilarLocalMaterialListItemBC.java:53)
at de.merck.compas.material.MaterialMaintainFS.init(MaterialMaintainFS.java:158)
at de.merck.compas.material.MaterialListSV.startMaterialMaintain(MaterialListSV.java:186)
at de.merck.compas.material.MaterialListSV.processPage(MaterialListSV.java:97)
at de.merck.jsfw.servletFramework.AbstractServlet.processPage(AbstractServlet.java:141)
at de.merck.jsfw.servletFramework.ControllerServlet.processLastPage(ControllerServlet.java:240)
at de.merck.jsfw.servletFramework.ControllerServlet.processRequest(ControllerServlet.java:206)
at de.merck.jsfw.servletFramework.ControllerServlet.doGet(ControllerServlet.java:181)
at de.merck.jsfw.servletFramework.ControllerServlet.doPost(ControllerServlet.java:191)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at de.merck.comp.ntml.NTLMFilter.negotiate(NTLMFilter.java:384)
at de.merck.comp.ntml.NTLMFilter.doFilter(NTLMFilter.java:165)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:432)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java
:664)
at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
at java.lang.Thread.run(Thread.java:595)
The Toplink manifest file with version number:
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.6.5
Created-By: 1.5.0_09-b03 (Sun Microsystems Inc.)
Specification-Title: Java Persistence API
Specification-Vendor: Sun Microsystems, Inc., Oracle Corp.
Specification-Version: 1.0
Implementation-Title: TopLink Essentials
Implementation-Vendor: Sun Microsystems, Inc., Oracle Corp.
Implementation-Version: 9.1 build: b22
Thanks again for your help! -
Content presenter: content source based on a query resulting in error
Hi all,
I am using a content presenter task flow showing content based on a query. After some changes were made to the WebCenter Content instance (an extra metadata field, extra profiles), the query results in an error. In my WC_Spaces log I see the following message:
Caused By: oracle.webcenter.content.integration.RepositoryException: Jun 13, 2012 11:15:11 AM oracle.webcenter.content.integration.spi.ucm.search.SearchService search
SEVERE: An error occurred when searching repository WebCenterSpaces-UCM. When calling service GET_SEARCH_RESULTS, as user weblogic, at timestamp 6/13/12 11:15 AM, received status code -32. The search was Search[repositoryId=WebCenterSpaces-UCM, max to return=100, useFullTextSearch=true, useCache=true, sort="toProperty('dDocTitle') ASC", fullText="
Metadata criteria((cm_contentType equals IDC:Profile:qblIntranetArtikel)AND(xqblGroep [any] equals Quobell)AND(xqblIntranetSubGroep [any] equals Opleidingen))
isOr=false"] and the parameter map was {ResultCount=51, FolderPathInSearchResults=1, SortField=dDocTitle, IdcService=GET_SEARCH_RESULTS, SortOrder=ASC, vcrAppendObjectClassInfo=1, StartRow=1, QueryText=(xqblGroep <matches> `Quobell`) <AND> (xqblIntranetSubGroep <matches> `Opleidingen`), vcrContentType=IDC:Profile:qblIntranetArtikel}.
The profile qblIntranetArtikel I am using already existed before the changes were made, and the query worked fine then.
In WCC I see the following error:
(internal)/6 06.13 11:15:10.272 IdcServer-551 -1 exception backtrace:java.lang.ArrayIndexOutOfBoundsException: -1(internal)/6 06.13 11:15:10.272 IdcServer-551 at java.util.Vector.get(Vector.java:696)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.data.DataResultSet.getStringValue(DataResultSet.java:2183)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at collections.CollectionFilters.doFilter(CollectionFilters.java:95)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.shared.PluginFilters.filterWithAction(PluginFilters.java:115)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.shared.PluginFilters.filter(PluginFilters.java:68)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.server.Service.executeFilter(Service.java:4095)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.server.SearchService.doResponse(SearchService.java:2081)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.server.ServiceRequestImplementor.doRequest(ServiceRequestImplementor.java:802)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.server.Service.doRequest(Service.java:1956)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.server.ServiceManager.processCommand(ServiceManager.java:437)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.server.IdcServerThread.processRequest(IdcServerThread.java:265)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at intradoc.server.IdcServerThread.run(IdcServerThread.java:160)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
(internal)/6 06.13 11:15:10.272 IdcServer-551 at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
services/3 06.13 11:15:10.274 IdcServer-551 !csUserEventMessage,weblogic,CIS!csSystemCodeExecutionError java.lang.ArrayIndexOutOfBoundsException: -1services/3 06.13 11:15:10.274 IdcServer-551 at java.util.Vector.get(Vector.java:696)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.data.DataResultSet.getStringValue(DataResultSet.java:2183)
services/3 06.13 11:15:10.274 IdcServer-551 at collections.CollectionFilters.doFilter(CollectionFilters.java:95)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.shared.PluginFilters.filterWithAction(PluginFilters.java:115)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.shared.PluginFilters.filter(PluginFilters.java:68)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.server.Service.executeFilter(Service.java:4095)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.server.SearchService.doResponse(SearchService.java:2081)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.server.ServiceRequestImplementor.doRequest(ServiceRequestImplementor.java:802)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.server.Service.doRequest(Service.java:1956)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.server.ServiceManager.processCommand(ServiceManager.java:437)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.server.IdcServerThread.processRequest(IdcServerThread.java:265)
services/3 06.13 11:15:10.274 IdcServer-551 at intradoc.server.IdcServerThread.run(IdcServerThread.java:160)
services/3 06.13 11:15:10.274 IdcServer-551 at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)
services/3 06.13 11:15:10.274 IdcServer-551 at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
services/3 06.13 11:15:10.274 IdcServer-551 at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
When I select a single item to be shown in the content presenter taskflow, it works fine. It also works fine when showing all documents in a folder.
Does anybody have an idea what might be causing the problem I am facing, and how to fix it?
thanks in advance,
Harold
Hi all,
I found what caused the error. Someone had set the CollectionID metadata field in WCC to non-searchable. This resulted in an error in the CollectionsFilter.
Setting the field back to searchable in WCC solved the problem.
regards,
Harold -
Sql query slow due to case statement on Joins
Hi
The SQL query runs very slowly, taking 30 minutes, when the CASE expression below is added to the join conditions. Could you please let me know how to tune it? If the CASE expression is not there, it runs in only 1 minute.
( CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END =DB2.DB2_FDW_MGMT_V.MH_CHILD )
SELECT DISTINCT
D.DB2_FDW_MGMT_V.RC_PARENT,
DT_REQ_ALL.FULL_NAME,
DT_REQ_ALL.EMP_COMPANY_CODE,
DT_REQ_ALL.EMP_COST_CENTER,
PO.PO_VENDORS.VENDOR_NAME,
PO_PO_HEADERS_ALL2.SEGMENT1,
PO_PO_HEADERS_ALL2.CREATION_DATE,
PO_DIST_GL_CODE_COMB.SEGMENT1,
PO_DIST_GL_CODE_COMB.SEGMENT2,
PO_PO_HEADERS_ALL2.CURRENCY_CODE,
PO_INV_DIST_ALL.INVOICE_NUM,
PO_INV_DIST_ALL.INVOICE_DATE,
(PO_INV_DIST_ALL.INVOICE_AMOUNT* PO_Rates_GL_DR.CONVERSION_RATE),
(NVL(to_number(PO_DIST_ALL.AMOUNT_BILLED),0) * PO_Rates_GL_DR.CONVERSION_RATE),
PO_LINES_LOC.LINE_NUM,
GL.GL_SETS_OF_BOOKS.NAME,
CASE
WHEN TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) > PO_INV_DIST_ALL.INVOICE_DATE
THEN 1
ELSE 0
END ,
PO.PO_REQUISITION_LINES_ALL.LINE_LOCATION_ID,
TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE,'WW') + 8 WEEK_Ending
FROM
DB2.DB2_FDW_MGMT_V,
PO.PO_VENDORS,
PO.PO_HEADERS_ALL PO_PO_HEADERS_ALL2,
GL.GL_CODE_COMBINATIONS PO_DIST_GL_CODE_COMB,
AP.AP_INVOICES_ALL PO_INV_DIST_ALL,
PO.PO_DISTRIBUTIONS_ALL PO_DIST_ALL,
PO.PO_LINES_ALL PO_LINES_LOC,
GL.GL_SETS_OF_BOOKS,
PO.PO_REQUISITION_LINES_ALL,
PO.PO_LINE_LOCATIONS_ALL,
AP.AP_INVOICE_DISTRIBUTIONS_ALL PO_DIST_INV_DIST_ALL,
APPS.HR_OPERATING_UNITS,
PO.PO_REQ_DISTRIBUTIONS_ALL,
( SELECT DISTINCT
PO_RDA.DISTRIBUTION_ID,
PO_RLA.requisition_line_id,
PO_RHA.DESCRIPTION PO_Descr,
PO_RHA.NOTE_TO_AUTHORIZER PO_Justification,
Req_Emp.FULL_NAME,
GL_CC.SEGMENT1 Req_Company_Code,
GL_CC.SEGMENT2 Req_Cost_Center,
Req_Emp_CC.SEGMENT1 Emp_Company_Code,
Req_Emp_CC.SEGMENT2 Emp_Cost_Center,
(Case
When GL_CC.SEGMENT2 <> 8000
Then TRUNC(GL_CC.SEGMENT1) || TRUNC(GL_CC.SEGMENT2) || '_' || NVL(GL_CC.SEGMENT6,'000')
Else TRUNC(Req_Emp_CC.SEGMENT1) || TRUNC(Req_Emp_CC.SEGMENT2) || '_' || NVL(Req_Emp_CC.SEGMENT6,'000')
End) EmpMgmtCD
FROM
PO.po_requisition_lines_all PO_rla,
PO.po_requisition_headers_all PO_rha,
PO.PO_REQ_DISTRIBUTIONS_ALL po_RDA,
GL.GL_CODE_COMBINATIONS gl_cc,
HR.PER_ALL_PEOPLE_F Req_Emp,
HR.PER_ALL_ASSIGNMENTS_F Req_Emp_Assign,
HR.hr_all_organization_units Req_Emp_Org,
HR.pay_cost_allocation_keyflex Req_Emp_CC
WHERE
PO_RDA.CODE_COMBINATION_ID = GL_CC.CODE_COMBINATION_ID and
PO_RLA.REQUISITION_LINE_ID = PO_RDA.REQUISITION_LINE_ID AND
PO_RLA.to_person_id = Req_Emp.PERSON_ID AND
PO_RLA.REQUISITION_HEADER_ID = PO_RHA.REQUISITION_HEADER_ID AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp.effective_start_date and Req_Emp.effective_end_date OR
Req_Emp.effective_start_date IS NULL) AND
Req_Emp.PERSON_ID = Req_Emp_Assign.PERSON_ID AND
Req_Emp_Assign.organization_id = Req_Emp_Org.organization_id AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp_Assign.effective_start_date and Req_Emp_Assign.effective_end_date OR
Req_Emp_Assign.effective_start_date IS NULL) AND
Req_Emp_Assign.primary_flag = 'Y' AND
Req_Emp_Assign.assignment_type = 'E' AND
Req_Emp_Org.cost_allocation_keyflex_id = Req_Emp_CC.cost_allocation_keyflex_id
) DT_REQ_ALL,
( SELECT
FROM_CURRENCY,
TO_CURRENCY,
CONVERSION_DATE,
CONVERSION_RATE
FROM GL.GL_DAILY_RATES
UNION
SELECT Distinct
'USD',
'USD',
CONVERSION_DATE,
1
FROM GL.GL_DAILY_RATES
) PO_Rates_GL_DR
WHERE
( PO_DIST_GL_CODE_COMB.CODE_COMBINATION_ID=PO_DIST_ALL.CODE_COMBINATION_ID )
AND ( PO_DIST_ALL.LINE_LOCATION_ID=PO.PO_LINE_LOCATIONS_ALL.LINE_LOCATION_ID )
AND ( PO_PO_HEADERS_ALL2.VENDOR_ID=PO.PO_VENDORS.VENDOR_ID )
AND ( PO_PO_HEADERS_ALL2.ORG_ID=APPS.HR_OPERATING_UNITS.ORGANIZATION_ID )
AND ( GL.GL_SETS_OF_BOOKS.SET_OF_BOOKS_ID=APPS.HR_OPERATING_UNITS.SET_OF_BOOKS_ID )
AND ( PO_PO_HEADERS_ALL2.CURRENCY_CODE=PO_Rates_GL_DR.FROM_CURRENCY )
AND ( trunc(PO_PO_HEADERS_ALL2.CREATION_DATE)=PO_Rates_GL_DR.CONVERSION_DATE )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=PO.PO_REQ_DISTRIBUTIONS_ALL.DISTRIBUTION_ID(+) )
AND ( PO.PO_REQ_DISTRIBUTIONS_ALL.REQUISITION_LINE_ID=PO.PO_REQUISITION_LINES_ALL.REQUISITION_LINE_ID(+) )
AND ( PO_LINES_LOC.PO_HEADER_ID=PO_PO_HEADERS_ALL2.PO_HEADER_ID )
AND ( PO.PO_LINE_LOCATIONS_ALL.PO_LINE_ID=PO_LINES_LOC.PO_LINE_ID )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=DT_REQ_ALL.DISTRIBUTION_ID(+) )
AND ( PO_DIST_ALL.PO_DISTRIBUTION_ID=PO_DIST_INV_DIST_ALL.PO_DISTRIBUTION_ID(+) )
AND ( PO_INV_DIST_ALL.INVOICE_ID(+)=PO_DIST_INV_DIST_ALL.INVOICE_ID )
AND ( PO_INV_DIST_ALL.SOURCE(+) <> 'XML GATEWAY' )
AND
( NVL(PO_PO_HEADERS_ALL2.CANCEL_FLAG,'N') <> 'Y' )
AND
( NVL(PO_PO_HEADERS_ALL2.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' )
AND
( NVL(PO_PO_HEADERS_ALL2.AUTHORIZATION_STATUS,'IN PROCESS') <> 'REJECTED' )
AND
( TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) BETWEEN TO_DATE('01-jan-2011') AND TO_DATE('04-jan-2011') )
AND
PO_Rates_GL_DR.TO_CURRENCY = 'USD'
AND
DB2.DB2_FDW_MGMT_V.RC_PARENT In ( 'Unavailable','Corp','Commercial' )
AND
( CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END = DB2.DB2_FDW_MGMT_V.MH_CHILD )
Explain plan (sorry, I can't get the explain plan from SQL; this is from Toad):
Plan
SELECT STATEMENT ALL_ROWSCost: 53,932 Bytes: 2,607 Cardinality: 1
79 HASH UNIQUE Cost: 53,932 Bytes: 2,607 Cardinality: 1
78 NESTED LOOPS OUTER Cost: 53,931 Bytes: 2,607 Cardinality: 1
75 NESTED LOOPS OUTER Cost: 53,928 Bytes: 2,560 Cardinality: 1
72 NESTED LOOPS Cost: 53,902 Bytes: 2,552 Cardinality: 1
69 NESTED LOOPS OUTER Cost: 53,900 Bytes: 2,533 Cardinality: 1
66 NESTED LOOPS OUTER Cost: 53,898 Bytes: 2,521 Cardinality: 1
63 HASH JOIN OUTER Cost: 53,896 Bytes: 2,509 Cardinality: 1
40 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_DISTRIBUTIONS_ALL Cost: 3 Bytes: 26 Cardinality: 1
39 NESTED LOOPS Cost: 17,076 Bytes: 2,400 Cardinality: 1
37 NESTED LOOPS Cost: 17,073 Bytes: 2,374 Cardinality: 1
34 NESTED LOOPS Cost: 17,070 Bytes: 2,362 Cardinality: 1
31 NESTED LOOPS Cost: 17,066 Bytes: 2,347 Cardinality: 1
29 NESTED LOOPS Cost: 17,066 Bytes: 2,339 Cardinality: 1
26 NESTED LOOPS Cost: 17,065 Bytes: 2,312 Cardinality: 1
23 NESTED LOOPS Cost: 17,064 Bytes: 2,287 Cardinality: 1
20 NESTED LOOPS Cost: 17,062 Bytes: 2,261 Cardinality: 1
17 NESTED LOOPS Cost: 17,056 Bytes: 6,678 Cardinality: 3
15 HASH JOIN Cost: 17,056 Bytes: 6,663 Cardinality: 3
13 MERGE JOIN CARTESIAN Cost: 135 Bytes: 30,352 Cardinality: 14
5 VIEW VIEW DB2.DB2_FDW_MGMT_V Cost: 4 Bytes: 2,128 Cardinality: 1
4 SORT UNIQUE Cost: 4 Cardinality: 1
3 UNION-ALL
1 REMOTE REMOTE SERIAL_FROM_REMOTE PRDFDW.WORLD
2 FAST DUAL Cost: 3 Cardinality: 1
12 BUFFER SORT Cost: 135 Bytes: 560 Cardinality: 14
11 VIEW DB2. Cost: 131 Bytes: 560 Cardinality: 14
10 SORT UNIQUE Cost: 131 Bytes: 310 Cardinality: 14
9 UNION-ALL
7 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_DAILY_RATES Cost: 65 Bytes: 270 Cardinality: 9
6 INDEX SKIP SCAN INDEX (UNIQUE) GL.GL_DAILY_RATES_U1 Cost: 64 Cardinality: 1
8 INDEX SKIP SCAN INDEX (UNIQUE) GL.GL_DAILY_RATES_U1 Cost: 64 Bytes: 4,368 Cardinality: 546
14 TABLE ACCESS FULL TABLE PO.PO_HEADERS_ALL Cost: 16,920 Bytes: 32,754 Cardinality: 618
16 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ORGANIZATION_UNITS_PK Cost: 0 Bytes: 5 Cardinality: 1
19 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ORGANIZATION_INFORMATION Cost: 2 Bytes: 35 Cardinality: 1
18 INDEX RANGE SCAN INDEX HR.HR_ORGANIZATION_INFORMATIO_FK2 Cost: 1 Cardinality: 2
22 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ORGANIZATION_INFORMATION Cost: 2 Bytes: 26 Cardinality: 1
21 INDEX RANGE SCAN INDEX HR.HR_ORGANIZATION_INFORMATIO_FK2 Cost: 1 Cardinality: 1
25 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_SETS_OF_BOOKS Cost: 1 Bytes: 25 Cardinality: 1
24 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_SETS_OF_BOOKS_U2 Cost: 0 Cardinality: 1
28 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_VENDORS Cost: 1 Bytes: 27 Cardinality: 1
27 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_VENDORS_U1 Cost: 0 Cardinality: 1
30 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ALL_ORGANIZATION_UNTS_TL_PK Cost: 0 Bytes: 8 Cardinality: 1
33 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_LINES_ALL Cost: 4 Bytes: 60 Cardinality: 4
32 INDEX RANGE SCAN INDEX (UNIQUE) PO.PO_LINES_U2 Cost: 2 Cardinality: 4
36 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_LINE_LOCATIONS_ALL Cost: 3 Bytes: 12 Cardinality: 1
35 INDEX RANGE SCAN INDEX PO.PO_LINE_LOCATIONS_N1 Cost: 2 Cardinality: 1
38 INDEX RANGE SCAN INDEX PO.PO_DISTRIBUTIONS_N1 Cost: 2 Cardinality: 1
62 VIEW DB2. Cost: 36,819 Bytes: 1,090 Cardinality: 10
61 HASH UNIQUE Cost: 36,819 Bytes: 2,580 Cardinality: 10
60 NESTED LOOPS Cost: 36,818 Bytes: 2,580 Cardinality: 10
57 NESTED LOOPS Cost: 36,798 Bytes: 2,390 Cardinality: 10
54 NESTED LOOPS Cost: 36,768 Bytes: 2,220 Cardinality: 10
51 NESTED LOOPS Cost: 36,758 Bytes: 1,510 Cardinality: 10
48 NESTED LOOPS Cost: 36,747 Bytes: 1,050 Cardinality: 10
45 HASH JOIN Cost: 36,737 Bytes: 960 Cardinality: 10
43 HASH JOIN Cost: 34,602 Bytes: 230,340 Cardinality: 3,490
41 TABLE ACCESS FULL TABLE HR.PER_ALL_PEOPLE_F Cost: 1,284 Bytes: 1,848,420 Cardinality: 44,010
42 TABLE ACCESS FULL TABLE PO.PO_REQUISITION_LINES_ALL Cost: 31,802 Bytes: 18,340,080 Cardinality: 764,170
44 TABLE ACCESS FULL TABLE HR.PER_ALL_ASSIGNMENTS_F Cost: 2,134 Bytes: 822,540 Cardinality: 27,418
47 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ALL_ORGANIZATION_UNITS Cost: 1 Bytes: 9 Cardinality: 1
46 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ORGANIZATION_UNITS_PK Cost: 0 Cardinality: 1
50 TABLE ACCESS BY INDEX ROWID TABLE HR.PAY_COST_ALLOCATION_KEYFLEX Cost: 1 Bytes: 46 Cardinality: 1
49 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.PAY_COST_ALLOCATION_KEYFLE_PK Cost: 0 Cardinality: 1
53 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQUISITION_HEADERS_ALL Cost: 1 Bytes: 71 Cardinality: 1
52 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQUISITION_HEADERS_U1 Cost: 0 Cardinality: 1
56 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQ_DISTRIBUTIONS_ALL Cost: 3 Bytes: 17 Cardinality: 1
55 INDEX RANGE SCAN INDEX PO.PO_REQ_DISTRIBUTIONS_N1 Cost: 2 Cardinality: 1
59 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2 Bytes: 19 Cardinality: 1
58 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
65 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQ_DISTRIBUTIONS_ALL Cost: 2 Bytes: 12 Cardinality: 1
64 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQ_DISTRIBUTIONS_U1 Cost: 1 Cardinality: 1
68 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQUISITION_LINES_ALL Cost: 2 Bytes: 12 Cardinality: 1
67 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQUISITION_LINES_U1 Cost: 1 Cardinality: 1
71 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2 Bytes: 19 Cardinality: 1
70 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
74 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICE_DISTRIBUTIONS_ALL Cost: 26 Bytes: 16 Cardinality: 2
73 INDEX RANGE SCAN INDEX AP.AP_INVOICE_DISTRIBUTIONS_N7 Cost: 2 Cardinality: 37
77 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICES_ALL Cost: 3 Bytes: 47 Cardinality: 1
76 INDEX RANGE SCAN INDEX (UNIQUE) AP.AP_INVOICES_U1 Cost: 2 Cardinality: 1
Thanks
Forming a new table "new_table" with the 3 tables which participate in the CASE statement logic:
with DT_REQ_ALL as (
SELECT DISTINCT
PO_RDA.DISTRIBUTION_ID,
PO_RLA.requisition_line_id,
PO_RHA.DESCRIPTION PO_Descr,
PO_RHA.NOTE_TO_AUTHORIZER PO_Justification,
Req_Emp.FULL_NAME,
GL_CC.SEGMENT1 Req_Company_Code,
GL_CC.SEGMENT2 Req_Cost_Center,
Req_Emp_CC.SEGMENT1 Emp_Company_Code,
Req_Emp_CC.SEGMENT2 Emp_Cost_Center,
(Case
When GL_CC.SEGMENT2 <> 8000
Then TRUNC(GL_CC.SEGMENT1) || TRUNC(GL_CC.SEGMENT2) || '_' || NVL(GL_CC.SEGMENT6,'000')
Else TRUNC(Req_Emp_CC.SEGMENT1) || TRUNC(Req_Emp_CC.SEGMENT2) || '_' || NVL(Req_Emp_CC.SEGMENT6,'000')
End) EmpMgmtCD
FROM
PO.po_requisition_lines_all PO_rla,
PO.po_requisition_headers_all PO_rha,
PO.PO_REQ_DISTRIBUTIONS_ALL po_RDA,
GL.GL_CODE_COMBINATIONS gl_cc,
HR.PER_ALL_PEOPLE_F Req_Emp,
HR.PER_ALL_ASSIGNMENTS_F Req_Emp_Assign,
HR.hr_all_organization_units Req_Emp_Org,
HR.pay_cost_allocation_keyflex Req_Emp_CC
WHERE
PO_RDA.CODE_COMBINATION_ID = GL_CC.CODE_COMBINATION_ID and
PO_RLA.REQUISITION_LINE_ID = PO_RDA.REQUISITION_LINE_ID AND
PO_RLA.to_person_id = Req_Emp.PERSON_ID AND
PO_RLA.REQUISITION_HEADER_ID = PO_RHA.REQUISITION_HEADER_ID AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp.effective_start_date and Req_Emp.effective_end_date OR
Req_Emp.effective_start_date IS NULL) AND
Req_Emp.PERSON_ID = Req_Emp_Assign.PERSON_ID AND
Req_Emp_Assign.organization_id = Req_Emp_Org.organization_id AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp_Assign.effective_start_date and Req_Emp_Assign.effective_end_date OR
Req_Emp_Assign.effective_start_date IS NULL) AND
Req_Emp_Assign.primary_flag = 'Y' AND
Req_Emp_Assign.assignment_type = 'E' AND
Req_Emp_Org.cost_allocation_keyflex_id = Req_Emp_CC.cost_allocation_keyflex_id
)
SELECT DISTINCT
D.DB2_FDW_MGMT_V.RC_PARENT,
DT_REQ_ALL.FULL_NAME,
DT_REQ_ALL.EMP_COMPANY_CODE,
DT_REQ_ALL.EMP_COST_CENTER,
PO.PO_VENDORS.VENDOR_NAME,
PO_PO_HEADERS_ALL2.SEGMENT1,
PO_PO_HEADERS_ALL2.CREATION_DATE,
PO_DIST_GL_CODE_COMB.SEGMENT1,
PO_DIST_GL_CODE_COMB.SEGMENT2,
PO_PO_HEADERS_ALL2.CURRENCY_CODE,
PO_INV_DIST_ALL.INVOICE_NUM,
PO_INV_DIST_ALL.INVOICE_DATE,
(PO_INV_DIST_ALL.INVOICE_AMOUNT* PO_Rates_GL_DR.CONVERSION_RATE),
(NVL(to_number(PO_DIST_ALL.AMOUNT_BILLED),0) * PO_Rates_GL_DR.CONVERSION_RATE),
PO_LINES_LOC.LINE_NUM,
GL.GL_SETS_OF_BOOKS.NAME,
CASE
WHEN TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) > PO_INV_DIST_ALL.INVOICE_DATE
THEN 1
ELSE 0
END ,
PO.PO_REQUISITION_LINES_ALL.LINE_LOCATION_ID,
TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE,'WW') + 8 WEEK_Ending
FROM
( SELECT * FROM
DB2.DB2_FDW_MGMT_V,
GL.GL_CODE_COMBINATIONS PO_DIST_GL_CODE_COMB,
DT_REQ_ALL
WHERE
DB2.DB2_FDW_MGMT_V.RC_PARENT In ( 'Unavailable','Corp','Commercial' )
AND
CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END =DB2.DB2_FDW_MGMT_V.MH_CHILD
) new_table,
PO.PO_VENDORS,
PO.PO_HEADERS_ALL PO_PO_HEADERS_ALL2,
AP.AP_INVOICES_ALL PO_INV_DIST_ALL,
PO.PO_DISTRIBUTIONS_ALL PO_DIST_ALL,
PO.PO_LINES_ALL PO_LINES_LOC,
GL.GL_SETS_OF_BOOKS,
PO.PO_REQUISITION_LINES_ALL,
PO.PO_LINE_LOCATIONS_ALL,
AP.AP_INVOICE_DISTRIBUTIONS_ALL PO_DIST_INV_DIST_ALL,
APPS.HR_OPERATING_UNITS,
PO.PO_REQ_DISTRIBUTIONS_ALL,
( SELECT
FROM_CURRENCY,
TO_CURRENCY,
CONVERSION_DATE,
CONVERSION_RATE
FROM GL.GL_DAILY_RATES
UNION
SELECT Distinct
'USD',
'USD',
CONVERSION_DATE,
1
FROM GL.GL_DAILY_RATES
) PO_Rates_GL_DR
WHERE
( PO_DIST_GL_CODE_COMB.CODE_COMBINATION_ID=PO_DIST_ALL.CODE_COMBINATION_ID )
AND ( PO_DIST_ALL.LINE_LOCATION_ID=PO.PO_LINE_LOCATIONS_ALL.LINE_LOCATION_ID )
AND ( PO_PO_HEADERS_ALL2.VENDOR_ID=PO.PO_VENDORS.VENDOR_ID )
AND ( PO_PO_HEADERS_ALL2.ORG_ID=APPS.HR_OPERATING_UNITS.ORGANIZATION_ID )
AND ( GL.GL_SETS_OF_BOOKS.SET_OF_BOOKS_ID=APPS.HR_OPERATING_UNITS.SET_OF_BOOKS_ID )
AND ( PO_PO_HEADERS_ALL2.CURRENCY_CODE=PO_Rates_GL_DR.FROM_CURRENCY )
AND ( trunc(PO_PO_HEADERS_ALL2.CREATION_DATE)=PO_Rates_GL_DR.CONVERSION_DATE )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=PO.PO_REQ_DISTRIBUTIONS_ALL.DISTRIBUTION_ID(+) )
AND ( PO.PO_REQ_DISTRIBUTIONS_ALL.REQUISITION_LINE_ID=PO.PO_REQUISITION_LINES_ALL.REQUISITION_LINE_ID(+) )
AND ( PO_LINES_LOC.PO_HEADER_ID=PO_PO_HEADERS_ALL2.PO_HEADER_ID )
AND ( PO.PO_LINE_LOCATIONS_ALL.PO_LINE_ID=PO_LINES_LOC.PO_LINE_ID )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=DT_REQ_ALL.DISTRIBUTION_ID(+) )
AND ( PO_DIST_ALL.PO_DISTRIBUTION_ID=PO_DIST_INV_DIST_ALL.PO_DISTRIBUTION_ID(+) )
AND ( PO_INV_DIST_ALL.INVOICE_ID(+)=PO_DIST_INV_DIST_ALL.INVOICE_ID )
AND ( PO_INV_DIST_ALL.SOURCE(+) <> 'XML GATEWAY' )
AND
( NVL(PO_PO_HEADERS_ALL2.CANCEL_FLAG,'N') <> 'Y' )
AND
( NVL(PO_PO_HEADERS_ALL2.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' )
AND
( NVL(PO_PO_HEADERS_ALL2.AUTHORIZATION_STATUS,'IN PROCESS') <> 'REJECTED' )
AND
( TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) BETWEEN TO_DATE('01-jan-2011') AND TO_DATE('04-jan-2011') )
AND
PO_Rates_GL_DR.TO_CURRENCY = 'USD'
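If the rewrite above is still slow, a further option is to encourage Oracle to evaluate the expensive requisition subquery only once, so the CASE expression is computed a single time per row rather than inside the join condition. The sketch below only illustrates the pattern: the table names (dist_codes, gl_codes, mgmt_view) are hypothetical stand-ins for the real objects, the /*+ MATERIALIZE */ hint is undocumented, and whether it helps depends on your data volumes, so test it against your own schema first.

```sql
-- Hypothetical sketch of the pattern only: compute the management code once in
-- a materialized WITH subquery, then join on the precomputed column instead of
-- evaluating the CASE expression inside the join condition.
WITH mgmt_codes AS (
  SELECT /*+ MATERIALIZE */
         d.distribution_id,
         CASE
           WHEN g.segment2 <> '1000'
             THEN g.segment1 || g.segment2 || '_' || NVL(g.segment6, '000')
           ELSE d.empmgmtcd
         END AS mgmt_cd
  FROM   dist_codes d                     -- stand-in for the PO distribution join
  JOIN   gl_codes   g
    ON   g.code_combination_id = d.code_combination_id
)
SELECT m.distribution_id,
       v.rc_parent
FROM   mgmt_codes m
JOIN   mgmt_view  v                       -- stand-in for DB2.DB2_FDW_MGMT_V
  ON   v.mh_child = m.mgmt_cd;
```

The idea is simply to move the CASE out of the join predicate so the optimizer joins on a plain column of the materialized result set.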
-
WLC 5508: 802.1X AAA override; authentication succeeds but no dynamic VLAN assignment
WLC 5508: software version 7.0.98.0
Windows 7 Client
Radius Server: Fedora Core 13 / Freeradius with LDAP storage backend
I have followed the guide at http://www.cisco.com/en/US/tech/tk722/tk809/technologies_configuration_example09186a008076317c.shtml with respect to building the LDAP and FreeRADIUS server. 802.1X authorization and authentication work correctly. The session keys are returned from the RADIUS server, and the WLC sends the appropriate information for the client to generate the WEP key.
However, the WLC does not override the VLAN assignment, even though I believe I set everything up correctly. From the packet capture, you can see that verification that the client is authorized to use the WLAN returns the needed attributes:
AVP: l=4 t=Tunnel-Private-Group-Id(81): 10
AVP: l=6 t=Tunnel-Medium-Type(65): IEEE-802(6)
AVP: l=6 t=Tunnel-Type(64): VLAN(13)
I attached a packet capture and wlc config; any guidance toward the attributes that may be missing or not set correctly in the config would be most appreciated.
Yes, good catch. I had one setting left off in freeradius that allows the inner reply attributes back into the outer tunneled Accept. I wrote up a medium/high-level config for any future viewers of this thread:
The following was tested and verified on a fedora 13 installation. This is a minimal setup; not meant for a "live" network (security issues with cleartext passwords, ldap not indexed properly for performance)
Install Packages
1. Install needed packages.
yum install openldap*
yum install freeradius*
2. Set the services to automatically start at system startup
chkconfig --level 2345 slapd on
chkconfig --level 2345 radiusd on
Configure and start LDAP
1. Copy the needed ldap schema for radius. Your path may vary a bit
cp /usr/share/doc/freeradius*/examples/openldap.schema /etc/openldap/schema/radius.schema
2. Create an admin password for slapd. Record this password for later use when configuring the slapd.conf file
slappasswd
3. Add the ldap user and group, if they don't exist. Depending on the install rpm, they may have already been created
useradd ldap
groupadd ldap
4. Create the directory and assign permissions for the database files
mkdir /var/lib/ldap
chmod 700 /var/lib/ldap
chown ldap:ldap /var/lib/ldap
5. Edit the slapd.conf file.
cd /etc/openldap
vi slapd.conf
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#Default needed schemas
include /etc/openldap/schema/corba.schema
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/duaconf.schema
include /etc/openldap/schema/dyngroup.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/java.schema
include /etc/openldap/schema/misc.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/openldap.schema
include /etc/openldap/schema/ppolicy.schema
include /etc/openldap/schema/collective.schema
#Radius include
include /etc/openldap/schema/radius.schema
#Samba include
#include /etc/openldap/schema/samba.schema
# Allow LDAPv2 client connections. This is NOT the default.
allow bind_v2
# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral ldap://root.openldap.org
pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args
# ldbm and/or bdb database definitions
#Use the berkely database
database bdb
#dn suffix, domain components read in order
suffix "dc=cisco,dc=com"
checkpoint 1024 15
#root container node defined
rootdn "cn=Manager,dc=cisco,dc=com"
# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.
# rootpw secret
rootpw {SSHA}cVV/4zKquR4IraFEU7NTG/PIESw8l4JI
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools. (chown ldap:ldap)
# Mode 700 recommended.
directory /var/lib/ldap
# Indices to maintain for this database
index objectClass eq,pres
index uid,memberUid eq,pres,sub
# enable monitoring
database monitor
# allow onlu rootdn to read the monitor
access to *
by dn.exact="cn=Manager,dc=cisco,dc=com" read
by * none
6. Remove the slapd.d directory
cd /etc/openldap
rm -rf slapd.d
7. Hopefully, if everything is correct, you should be able to start up slapd with no problem
service slapd start
8. Create the initial database in a text file called /tmp/initial.ldif
dn: dc=cisco,dc=com
objectClass: dcobject
objectClass: organization
o: cisco
dc: cisco
dn: ou=people,dc=cisco,dc=com
objectClass: organizationalunit
ou: people
description: people
dn: uid=jonatstr,ou=people,dc=cisco,dc=com
objectClass: top
objectClass: radiusprofile
objectClass: inetOrgPerson
cn: jonatstr
sn: jonatstr
uid: jonatstr
description: user Jonathan Strickland
radiusTunnelType: VLAN
radiusTunnelMediumType: 802
radiusTunnelPrivateGroupId: 10
userPassword: ggsg
9. Add the file to the database
ldapadd -h localhost -W -D "cn=Manager, dc=cisco,dc=com" -f /tmp/initial.ldif
10. Issue a basic query to the ldap db to make sure that we can request and receive results back
ldapsearch -h localhost -W -D cn=Manager,dc=cisco,dc=com -b dc=cisco,dc=com -s sub "objectClass=*"
Configure and Start FreeRadius
1. Configure ldap.attrmap, if needed. This step is only needed if we have to map and pass attributes back to the authenticator (dynamic VLAN assignments, for example). Below is an example for dynamic VLAN assignments
cd /etc/raddb
vi ldap.attrmap
For dynamic vlan assignments, verify the following lines exist:
replyItem Tunnel-Type radiusTunnelType
replyItem Tunnel-Medium-Type radiusTunnelMediumType
replyItem Tunnel-Private-Group-Id radiusTunnelPrivateGroupId
Since we are planning to use the userPassword attribute, we will let the mschap module perform the NT translations for us. Add the following line to check the ldap object for userPassword and store it as Cleartext-Password:
checkItem Cleartext-Password userPassword
2. Configure eap.conf. The attributes in the following sections should be verified. You may change other attributes as needed; they are just not covered in this document.
eap {
default_eap_type = peap
.....
}
tls {
#I will not go into detail here, as this is beyond the scope of setting up freeradius. The defaults will work, as freeradius comes with generated self-signed certificates.
peap {
default_eap_type = mschapv2
#you will have to set this to allow the inner TLS tunnel attributes into the final Accept message
use_tunneled_reply = yes
3. Change the authentication and authorization modules and order.
cd /etc/raddb/sites-enabled
vi default
For the authorize section, uncomment the ldap module.
For the authenticate section, uncomment the ldap module
vi inner-tunnel
Very important: for the authorize section, ensure the ldap module comes first, before mschap. Thus authorize will look like:
authorize {
ldap
mschap
......
}
4. Configure ldap module
cd /etc/raddb/modules
ldap {
server = localhost
identity = "cn=Manager,dc=cisco,dc=com"
password = admin
basedn = "dc=cisco,dc=com"
base_filter = "(objectclass=radiusprofile)"
access_attr = "uid"
............
}
5. Start up radius in debug mode on another console
radiusd -X
6. radtest localhost 12 testing123
You should get a Access-Accept back
7. Now to perform an EAP-PEAP test. This will require a wpa_supplicant test library called eapol_test
First install openssl support libraries, required to compile
yum install openssl*
yum install gcc
wget http://hostap.epitest.fi/releases/wpa_supplicant-0.6.10.tar.gz
tar xvf wpa_supplicant-0.6.10.tar.gz
cd wpa_supplicant-0.6.10/wpa_supplicant
vi defconfig
Uncomment CONFIG_EAPOL_TEST = y and save/exit
cp defconfig .config
make eapol_test
cp eapol_test /usr/local/bin
chmod 755 /usr/local/bin/eapol_test
8. Create a test config file named eapol_test.conf.peap
network={
eap=PEAP
eapol_flags=0
key_mgmt=IEEE8021X
identity="jonatstr"
password="ggsg"
#If you want to verify the server certificate, the below would be needed
#ca_cert="/root/ca.pem"
phase2="auth=MSCHAPV2"
}
9. Run the test
eapol_test -c ~/eapol_test.conf.peap -a 127.0.0.1 -p 1812 -s testing123
-
Hi
I have an AEBS with Gigabit Ethernet, upgraded to firmware 7.2.1. I am running OS X Leopard 10.5.1, and I am unable to successfully forward ports. Every time I query a port that I believe I have mapped successfully, it appears closed according to canyouseeme.org.
I have followed numerous instructions, but they all seem to be instructions for Tiger, so I'm wondering if a setting has been missed. I have successfully assigned a static IP address to my machine, but the port still appears to be closed. I run the Leopard firewall with the specific-applications option (the third one), but I have also tried it with the firewall off, and it hasn't changed anything; the ports are always closed.
So now I'm pretty much stumped. Has anyone actually managed to get a setup similar to mine working, with the ports being open? All advice is appreciated.
Thanks
Message was edited by: dalyboy
Hello dalyboy. Welcome to the Apple Discussions!
Try the following to see if it will help ...
To setup port mapping on an 802.11n AirPort Extreme Base Station (AEBSn), either connect to the AEBSn's wireless network or temporarily connect directly, using an Ethernet cable, to one of the LAN port of the AEBSn, and then use the AirPort Utility, in Manual Setup, to make these settings:
1. Reserve a DHCP-provided IP address for the host device.
Internet > DHCP tab
o On the DHCP tab, click the "+" (Add) button to enter DHCP Reservations.
o Description: <enter the desired description of the host device>
o Reserve address by: MAC Address
o Click Continue.
o MAC Address: <enter the MAC (what Apple calls Ethernet ID if you are using wired or AirPort ID if wireless) hardware address of the host computer>
o IPv4 Address: <enter the desired IP address>
o Click Done.
2. Setup Port Mapping on the AEBSn.
Advanced > Port Mapping tab
o Click the "+" (Add) button
o Service: <choose the appropriate service from the Service pop-up menu>
o Public UDP Port(s): <enter the appropriate UDP port values>
o Public TCP Port(s): <enter the appropriate TCP port values>
o Private IP Address: <enter the IP address of the host server>
o Private UDP Port(s): <enter the same as Public UDP Ports or your choice>
o Private TCP Port(s): <enter the same as Public TCP Ports or your choice>
o Click "Continue"
(ref: "Well Known" TCP and UDP ports used by Apple software products) -
How to improve performance of a query that is based on an xmltype table
Dear Friends,
I have a query that pulls records from an XMLType table with 9,000 rows, and it is running very slowly.
I am using the XMLTABLE function to retrieve the rows. It is taking up to 30 minutes to finish.
Would you be able to suggest how I can make it faster. Thanks.
Below is the query.....
INSERT INTO temp_sap_po_receipt_history_t
(po_number, po_line_number, doc_year,
material_doc, material_doc_item, quantity, sap_ref_doc_no_long,
reference_doc, movement_type_code,
sap_ref_doc_no, posting_date, entry_date, entry_time, hist_type)
SELECT :pin_po_number po_number,
b.po_line_number, b.doc_year,
b.material_doc, b.material_doc_item, b.quantity, b.sap_ref_doc_no_long,
b.reference_doc, b.movement_type_code,
b.sap_ref_doc_no, to_date(b.posting_date,'rrrr-mm-dd'),
to_date(b.entry_date,'rrrr-mm-dd'), b.entry_time, b.hist_type
FROM temp_xml t,
XMLTABLE(XMLNAMESPACES('urn:sap-com:document:sap:rfc:functions' AS "n0"),
'/n0:BAPI_PO_GETDETAIL1Response/POHISTORY/item'
PASSING t.object_value
COLUMNS PO_LINE_NUMBER VARCHAR2(20) PATH 'PO_ITEM',
DOC_YEAR varchar2(4) PATH 'DOC_YEAR',
MATERIAL_DOC varchar2(30) PATH 'MAT_DOC',
MATERIAL_DOC_ITEM VARCHAR2(10) PATH 'MATDOC_ITEM',
QUANTITY NUMBER(20,6) PATH 'QUANTITY',
SAP_REF_DOC_NO_LONG VARCHAR2(20) PATH 'REF_DOC_NO_LONG',
REFERENCE_DOC VARCHAR2(20) PATH 'REF_DOC',
MOVEMENT_TYPE_CODE VARCHAR2(4) PATH 'MOVE_TYPE',
SAP_REF_DOC_NO VARCHAR2(20) PATH 'REF_DOC_NO',
POSTING_DATE VARCHAR2(10) PATH 'PSTNG_DATE',
ENTRY_DATE VARCHAR2(10) PATH 'ENTRY_DATE',
ENTRY_TIME VARCHAR2(8) PATH 'ENTRY_TIME',
HIST_TYPE VARCHAR2(5) PATH 'HIST_TYPE') b;

Based on a response from mdrake on this thread:
Re: XML file processing into oracle
For large XML documents, you can speed up the processing of XMLTABLE by using a registered schema...
declare
SCHEMAURL VARCHAR2(256) := 'http://xmlns.example.org/xsd/testcase.xsd';
XMLSCHEMA VARCHAR2(4000) := '<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" xdb:storeVarrayAsTable="true">
<xs:element name="cust_order" type="cust_orderType" xdb:defaultTable="CUST_ORDER_TBL"/>
<xs:complexType name="groupType" xdb:maintainDOM="false">
<xs:sequence>
<xs:element name="item" type="itemType" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="id" type="xs:byte" use="required"/>
</xs:complexType>
<xs:complexType name="itemType" xdb:maintainDOM="false">
<xs:simpleContent>
<xs:extension base="xs:string">
<xs:attribute name="id" type="xs:short" use="required"/>
<xs:attribute name="name" type="xs:string" use="required"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
<xs:complexType name="cust_orderType" xdb:maintainDOM="false">
<xs:sequence>
<xs:element name="group" type="groupType" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="cust_id" type="xs:short" use="required"/>
</xs:complexType>
</xs:schema>';
INSTANCE CLOB :=
'<cust_order cust_id="12345">
<group id="1">
<item id="1" name="Standard Mouse">100</item>
<item id="2" name="Keyboard">100</item>
<item id="3" name="Memory Module 2Gb">200</item>
<item id="4" name="Processor 3Ghz">25</item>
<item id="5" name="Processor 2.4Ghz">75</item>
</group>
<group id="2">
<item id="1" name="Graphics Tablet">15</item>
<item id="2" name="Keyboard">15</item>
<item id="3" name="Memory Module 4Gb">15</item>
<item id="4" name="Processor Quad Core 2.8Ghz">15</item>
</group>
<group id="3">
<item id="1" name="Optical Mouse">5</item>
<item id="2" name="Ergo Keyboard">5</item>
<item id="3" name="Memory Module 2Gb">10</item>
<item id="4" name="Processor Dual Core 2.4Ghz">5</item>
<item id="5" name="Dual Output Graphics Card">5</item>
<item id="6" name="28inch LED Monitor">10</item>
<item id="7" name="Webcam">5</item>
<item id="8" name="A3 1200dpi Laser Printer">2</item>
</group>
</cust_order>';
begin
dbms_xmlschema.registerSchema(
schemaurl => SCHEMAURL
,schemadoc => XMLSCHEMA
,local => TRUE
,genTypes => TRUE
,genBean => FALSE
,genTables => TRUE
,ENABLEHIERARCHY => DBMS_XMLSCHEMA.ENABLE_HIERARCHY_NONE
);
execute immediate 'insert into CUST_ORDER_TBL values (XMLTYPE(:INSTANCE))' using INSTANCE;
end;
SQL> desc CUST_ORDER_TBL
Name Null? Type
TABLE of SYS.XMLTYPE(XMLSchema "http://xmlns.example.org/xsd/testcase.xsd" Element "cust_order") STORAGE Object-relational TYPE "cust_orderType222_T"
SQL> set autotrace on explain
SQL> set pages 60 lines 164 heading on
SQL> col cust_id format a8
SQL> select extract(object_value,'/cust_order/@cust_id') as cust_id
2 ,grp.id as group_id, itm.id as item_id, itm.inm as item_name, itm.qty as item_qty
3 from CUST_ORDER_TBL
4 ,XMLTABLE('/cust_order/group'
5 passing object_value
6 columns id number path '@id'
7 ,item xmltype path 'item'
8 ) grp
9 ,XMLTABLE('/item'
10 passing grp.item
11 columns id number path '@id'
12 ,inm varchar2(30) path '@name'
13 ,qty number path '.'
14 ) itm
15 /
CUST_ID GROUP_ID ITEM_ID ITEM_NAME ITEM_QTY
12345 1 1 Standard Mouse 100
12345 1 2 Keyboard 100
12345 1 3 Memory Module 2Gb 200
12345 1 4 Processor 3Ghz 25
12345 1 5 Processor 2.4Ghz 75
12345 2 1 Graphics Tablet 15
12345 2 2 Keyboard 15
12345 2 3 Memory Module 4Gb 15
12345 2 4 Processor Quad Core 2.8Ghz 15
12345 3 1 Optical Mouse 5
12345 3 2 Ergo Keyboard 5
12345 3 3 Memory Module 2Gb 10
12345 3 4 Processor Dual Core 2.4Ghz 5
12345 3 5 Dual Output Graphics Card 5
12345 3 6 28inch LED Monitor 10
12345 3 7 Webcam 5
12345 3 8 A3 1200dpi Laser Printer 2
17 rows selected.

Need at least 10.2.0.3 for good performance, i.e. to avoid COLLECTION ITERATOR PICKLER FETCH in the execution plan...
On 10.2.0.1:
Execution Plan
Plan hash value: 3741473841
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 24504 | 89M| 873 (1)| 00:00:11 |
| 1 | NESTED LOOPS | | 24504 | 89M| 873 (1)| 00:00:11 |
| 2 | NESTED LOOPS | | 3 | 11460 | 805 (1)| 00:00:10 |
| 3 | TABLE ACCESS FULL | CUST_ORDER_TBL | 1 | 3777 | 3 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | SYS_IOT_TOP_774117 | 3 | 129 | 1 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | | | | |
Predicate Information (identified by operation id):
4 - access("NESTED_TABLE_ID"="CUST_ORDER_TBL"."SYS_NC0000900010$")
filter("SYS_NC_TYPEID$" IS NOT NULL)
Note
- dynamic sampling used for this statement

On 10.2.0.3:
Execution Plan
Plan hash value: 1048233240
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 17 | 132K| 839 (0)| 00:00:11 |
| 1 | NESTED LOOPS | | 17 | 132K| 839 (0)| 00:00:11 |
| 2 | MERGE JOIN CARTESIAN | | 17 | 131K| 805 (0)| 00:00:10 |
| 3 | TABLE ACCESS FULL | CUST_ORDER_TBL | 1 | 3781 | 3 (0)| 00:00:01 |
| 4 | BUFFER SORT | | 17 | 70839 | 802 (0)| 00:00:10 |
|* 5 | INDEX FAST FULL SCAN| SYS_IOT_TOP_56154 | 17 | 70839 | 802 (0)| 00:00:10 |
|* 6 | INDEX UNIQUE SCAN | SYS_IOT_TOP_56152 | 1 | 43 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | SYS_C006701 | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - filter("SYS_NC_TYPEID$" IS NOT NULL)
6 - access("SYS_NTpzENS1H/RwSSC7TVzvlqmQ=="."NESTED_TABLE_ID"="SYS_NTnN5b8Q+8Txi9V
w5Ysl6x9w=="."SYS_NC0000600007$")
filter("SYS_NC_TYPEID$" IS NOT NULL AND
"NESTED_TABLE_ID"="CUST_ORDER_TBL"."SYS_NC0000900010$")
7 - access("SYS_NTpzENS1H/RwSSC7TVzvlqmQ=="."NESTED_TABLE_ID"="SYS_NTnN5b8Q+8Txi9V
w5Ysl6x9w=="."SYS_NC0000600007$")
Note
- dynamic sampling used for this statement
----------------------------------------------------------------------------------------------------------
-- CLEAN UP
DROP TABLE CUST_ORDER_TBL purge;
exec dbms_xmlschema.deleteschema('http://xmlns.example.org/xsd/testcase.xsd'); -
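The registered-schema demo above shreds each `<item>` element into one relational row. For readers less familiar with XMLTABLE, the same flattening can be sketched client-side with Python's ElementTree (illustrative only, not part of the original thread; the sample document is a cut-down version of the one above):

```python
import xml.etree.ElementTree as ET

DOC = """<cust_order cust_id="12345">
  <group id="1">
    <item id="1" name="Standard Mouse">100</item>
    <item id="2" name="Keyboard">100</item>
  </group>
  <group id="2">
    <item id="1" name="Graphics Tablet">15</item>
  </group>
</cust_order>"""

def shred(xml_text):
    """Flatten cust_order/group/item into (cust_id, group_id, item_id,
    item_name, item_qty) rows, mirroring the two chained XMLTABLE calls."""
    root = ET.fromstring(xml_text)
    cust_id = int(root.get("cust_id"))
    rows = []
    for grp in root.findall("group"):          # outer XMLTABLE: /cust_order/group
        for itm in grp.findall("item"):        # inner XMLTABLE: /item
            rows.append((cust_id, int(grp.get("id")), int(itm.get("id")),
                         itm.get("name"), int(itm.text)))
    return rows

for row in shred(DOC):
    print(row)
```

The point of the object-relational storage in the thread is that Oracle keeps this shredded form on disk, so the database never has to walk the DOM at query time the way this sketch does.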
UNX issue : Failed to initialize Query Panel (service issue?)
Hello all,
For a couple of days now we have been trying to use a universe in Design Studio, with no luck.
We use the eFashion single-source relational universe, on which we can run Webi reports without a problem. When we add a data source in DS we can see and select the .unx, but when we push the "Edit query specification" button DS freezes and returns the error: Failed to initialize Query Panel.
We just upgraded to DS 1.2 SP01 on both the server side and the client side.
This is the error from the error log, which points to a connection problem with a certain service:
Any of you have an idea how to proceed?
All servers run fine when we check the CMC, and an APS is added for Design Studio (Analysis). DS works fine with BEx queries...
Thanks in advance.
!ENTRY com.sap.ip.bi.zen.ui 4 0 2014-03-14 12:43:33.876
!MESSAGE Failed to initialize Query Panel
!STACK 0
com.businessobjects.dsl.services.workspace.impl.WorkspaceException: Unable to find servers in CMS vyouboxd01.you.com:6400 and cluster @vyouboxd01.you.com:6400 with kind connectionserver and service CS_CORBA_NetworkLayer. All such servers could be down or disabled by the administrator. (FWM 01014)
at com.businessobjects.dsl.services.dataprovider.impl.AbstractDataProviderWithUniverse.createDataSource(AbstractDataProviderWithUniverse.java:437)
at com.businessobjects.dsl.services.dataprovider.impl.QuerySpecDataProvider.createInitialDataSource(QuerySpecDataProvider.java:609)
at com.businessobjects.dsl.services.dataprovider.impl.QuerySpecDataProvider.finalizeCreation(QuerySpecDataProvider.java:852)
at com.businessobjects.dsl.services.workspace.impl.AbstractBuiltInDataProviderBuilder.buildDataProvider(AbstractBuiltInDataProviderBuilder.java:36)
at com.businessobjects.dsl.services.dataprovider.impl.DataProviderFactory.createDataProviderWithUniverse(DataProviderFactory.java:190)
at com.businessobjects.dsl.services.dataprovider.impl.DataProviderFactory.createAndAddDataProvider(DataProviderFactory.java:108)
at com.businessobjects.dsl.services.workspace.impl.WorkspaceServiceImpl.addDataProvider(WorkspaceServiceImpl.java:83)
at com.businessobjects.dsl.services.workspace.impl.WorkspaceServiceImpl.addDataProvider(WorkspaceServiceImpl.java:45)
at com.sap.ip.bi.zen.backends.bip.ui.dialogs.QueryPanelDialog.createQueryPanelContext(QueryPanelDialog.java:133)
at com.sap.ip.bi.zen.backends.bip.ui.dialogs.QueryPanelDialog.createDialogArea(QueryPanelDialog.java:97)
at org.eclipse.jface.dialogs.Dialog.createContents(Dialog.java:775)
at com.sap.ip.bi.zen.ui.internal.dialogs.ZenTrayDialog.createContents(ZenTrayDialog.java:63)
at org.eclipse.jface.window.Window.create(Window.java:432)
at org.eclipse.jface.dialogs.Dialog.create(Dialog.java:1104)
at com.sap.ip.bi.zen.ui.internal.dialogs.ZenTrayDialog.open(ZenTrayDialog.java:110)
at com.sap.ip.bi.zen.ui.internal.dialogs.datasource.DataSourceDialogController.editDataSourceClickedInternal(DataSourceDialogController.java:158)
at com.sap.ip.bi.zen.ui.internal.dialogs.datasource.DataSourceDialogController$1.run(DataSourceDialogController.java:147)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
at com.sap.ip.bi.zen.ui.internal.dialogs.datasource.DataSourceDialogController.editDataSourceClicked(DataSourceDialogController.java:144)
at com.sap.ip.bi.zen.ui.internal.dialogs.datasource.DataSourceSelectionPane.editQueryButtonClicked(DataSourceSelectionPane.java:253)
at com.sap.ip.bi.zen.ui.internal.dialogs.datasource.DataSourceSelectionPane$6.widgetSelected(DataSourceSelectionPane.java:233)
at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:248)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1057)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4170)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3759)
at org.eclipse.jface.window.Window.runEventLoop(Window.java:826)
at org.eclipse.jface.window.Window.open(Window.java:802)
at com.sap.ip.bi.zen.ui.internal.dialogs.ZenTrayDialog.open(ZenTrayDialog.java:112)
at com.sap.ip.bi.zen.ui.internal.commands.AddDataSourceHandler.execute(AddDataSourceHandler.java:24)
at org.eclipse.ui.internal.handlers.HandlerProxy.execute(HandlerProxy.java:290)
at org.eclipse.ui.internal.handlers.E4HandlerProxy.execute(E4HandlerProxy.java:90)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:56)
at org.eclipse.e4.core.internal.di.InjectorImpl.invokeUsingClass(InjectorImpl.java:243)
at org.eclipse.e4.core.internal.di.InjectorImpl.invoke(InjectorImpl.java:224)
at org.eclipse.e4.core.contexts.ContextInjectionFactory.invoke(ContextInjectionFactory.java:132)
at org.eclipse.e4.core.commands.internal.HandlerServiceHandler.execute(HandlerServiceHandler.java:167)
at org.eclipse.core.commands.Command.executeWithChecks(Command.java:499)
at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:508)
at org.eclipse.e4.core.commands.internal.HandlerServiceImpl.executeHandler(HandlerServiceImpl.java:213)
at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem.executeItem(HandledContributionItem.java:850)
at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem.handleWidgetSelection(HandledContributionItem.java:743)
at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem.access$7(HandledContributionItem.java:727)
at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem$4.handleEvent(HandledContributionItem.java:662)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1057)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4170)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3759)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$9.run(PartRenderingEngine.java:1113)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:997)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:138)
at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:610)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:567)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)
at com.sap.ip.bi.zen.ui.internal.application.ZenApplication.start(ZenApplication.java:36)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:636)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:591)
at org.eclipse.equinox.launcher.Main.run(Main.java:1450)
Caused by: com.businessobjects.mds.services.solver.ConnectionSolverException: Unable to find servers in CMS vyouboxd01.you.com:6400 and cluster @vyouboxd01.you.com:6400 with kind connectionserver and service CS_CORBA_NetworkLayer. All such servers could be down or disabled by the administrator. (FWM 01014)
at com.businessobjects.mds.services.solver.AbstractConnectionSolver.addCredentialInfo(AbstractConnectionSolver.java:238)
at com.businessobjects.mds.services.solver.AbstractConnectionSolver.solve(AbstractConnectionSolver.java:136)
at com.businessobjects.mds.services.solver.AbstractConnectionSolver.solveRelational(AbstractConnectionSolver.java:151)
at com.businessobjects.mds.services.solver.AbstractConnectionSolver.solveRelational(AbstractConnectionSolver.java:147)
at com.businessobjects.mds.services.solver.AbstractConnectionSolver.solveRelational(AbstractConnectionSolver.java:143)
at com.businessobjects.mds.services.helpers.DataFoundationHelper.getConnection(DataFoundationHelper.java:892)
at com.businessobjects.mds.services.helpers.DataFoundationHelper.isErpDataFoundation(DataFoundationHelper.java:2896)
at com.businessobjects.mds.services.helpers.UniverseHelper.isErpUniverse(UniverseHelper.java:1554)
at com.businessobjects.dsl.services.datasource.DataSourceFactory.createQueryCapability(DataSourceFactory.java:286)
at com.businessobjects.dsl.services.datasource.DataSourceFactory.fillQueryCapability(DataSourceFactory.java:959)
at com.businessobjects.dsl.services.datasource.DataSourceFactory.getDataSource(DataSourceFactory.java:839)
at com.businessobjects.dsl.services.dataprovider.impl.AbstractDataProviderWithUniverse.createDataSourceFromUniverse(AbstractDataProviderWithUniverse.java:572)
at com.businessobjects.dsl.services.dataprovider.impl.AbstractDataProviderWithUniverse.retrieveDataSource(AbstractDataProviderWithUniverse.java:542)
at com.businessobjects.dsl.services.dataprovider.impl.AbstractDataProviderWithUniverse.buildDataSource(AbstractDataProviderWithUniverse.java:448)
at com.businessobjects.dsl.services.dataprovider.impl.AbstractDataProviderWithUniverse.createDataSource(AbstractDataProviderWithUniverse.java:432)
... 72 more
Caused by: com.sap.connectivity.cs.core.CSError: Unable to find servers in CMS vyouboxd01.you.com:6400 and cluster @vyouboxd01.you.com:6400 with kind connectionserver and service CS_CORBA_NetworkLayer. All such servers could be down or disabled by the administrator. (FWM 01014)
at com.sap.connectivity.cs.remoting.corba.proxy.ConnectionManager.<init>(ConnectionManager.java:81)
at com.sap.connectivity.cs.remoting.corba.proxy.ConnectionServer.getConnectionManager(ConnectionServer.java:125)
at com.sap.connectivity.cs.core.Environment.CreateConnectionManager(Environment.java:530)
at com.sap.connectivity.cs.core.Environment.CreateConnectionManager(Environment.java:504)
at com.sap.connectivity.cs.api.trace.EnvironmentLogger.CreateConnectionManager(EnvironmentLogger.java:368)
at com.businessobjects.mds.services.relational.CsService.getAuthenticationMode(CsService.java:180)
at com.businessobjects.mds.services.relational.CsService.addCredentialInfo(CsService.java:485)
at com.businessobjects.mds.services.solver.AbstractConnectionSolver.addCredentialInfo(AbstractConnectionSolver.java:235)
... 86 more
Caused by: com.crystaldecisions.sdk.exception.SDKException$OCAFramework: Unable to find servers in CMS vyouboxd01.you.com:6400 and cluster @vyouboxd01.you.com:6400 with kind connectionserver and service CS_CORBA_NetworkLayer. All such servers could be down or disabled by the administrator. (FWM 01014)
cause:com.crystaldecisions.enterprise.ocaframework.OCAFrameworkException$AllServicesDown: Unable to find servers in CMS vyouboxd01.you.com:6400 and cluster @vyouboxd01.you.com:6400 with kind connectionserver and service CS_CORBA_NetworkLayer. All such servers could be down or disabled by the administrator. (FWM 01014)
detail:Unable to find servers in CMS vyouboxd01.you.com:6400 and cluster @vyouboxd01.you.com:6400 with kind connectionserver and service CS_CORBA_NetworkLayer. All such servers could be down or disabled by the administrator. (FWM 01014)
at com.crystaldecisions.sdk.exception.SDKException.map(SDKException.java:140)
at com.sap.connectivity.cs.remoting.bip.SvcFactoryBase.getManagedService(SvcFactoryBase.java:183)
at com.sap.connectivity.cs.remoting.bip.SvcFactoryBase.getManagedService(SvcFactoryBase.java:128)
at com.sap.connectivity.cs.remoting.corba.svchelpers.NetworkLayerSvcFactory.getServiceFromCriteria(NetworkLayerSvcFactory.java:53)
at com.sap.connectivity.cs.remoting.corba.svchelpers.NetworkLayerLookup.getServiceFromCriteria(NetworkLayerLookup.java:114)
at com.sap.connectivity.cs.remoting.corba.svchelpers.NetworkLayerLookup.getService(NetworkLayerLookup.java:83)
at com.sap.connectivity.cs.remoting.corba.proxy.ConnectionManager.<init>(ConnectionManager.java:68)
... 93 more
Caused by: com.crystaldecisions.enterprise.ocaframework.OCAFrameworkException$AllServicesDown: Unable to find servers in CMS vyouboxd01.you.com:6400 and cluster @vyouboxd01.you.com:6400 with kind connectionserver and service CS_CORBA_NetworkLayer. All such servers could be down or disabled by the administrator. (FWM 01014)
at com.crystaldecisions.enterprise.ocaframework.ServerController.redirectServer(ServerController.java:664)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.redirectServer(ServiceMgr.java:959)
at com.crystaldecisions.enterprise.ocaframework.ManagedSession.redirectServer(ManagedSession.java:338)
at com.crystaldecisions.enterprise.ocaframework.ManagedSession.get(ManagedSession.java:247)
at com.crystaldecisions.enterprise.ocaframework.ManagedSessions.get(ManagedSessions.java:299)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getManagedService_aroundBody4(ServiceMgr.java:520)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getManagedService_aroundBody5$advice(ServiceMgr.java:512)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getManagedService(ServiceMgr.java:1)
at com.sap.connectivity.cs.remoting.bip.SvcFactoryBase.getManagedService(SvcFactoryBase.java:164)
... 98 more

Hi Bram,
I had a look at your Java exception, and a connection server is missing there.
I know that there are two kinds of connection server, 32-bit and 64-bit. Does one of your APS servers have the Connectivity and Connectivity 32-bit services?
I had an issue when I first wanted to use a .unx connection in DS with my split APS.
I had one APS containing only the Analysis service. To get my .unx access working I had to add the Adaptive Connectivity service to the APS containing the Analysis service.
On top of that I had to add the DSL Bridge service (otherwise my Adaptive Connectivity service did not start).
Perhaps this helps you out.
Good luck
Manfred -
Query taking time in Oracle 10g
Hi,
Recently we had a database upgrade from 9.2.0.8 to 10.2.0.4; we use HP-UX B11.23 as the OS. The problem is that a query which used to take 3 minutes on the 9i database returns no output on 10g even after running for 8 hours, at which point we have to kill it. The query is,
SELECT DPPB.CO_CD, DPPB.PRC_BOOK_CD,NVL(PB.CO_PRC_BOOK_CD,'NULL') ,
NVL(BP.BASE_PROD_CD,'NULL'),NVL(FG.FG_CD,'NULL'),DPPB.EFFTV_STRT_DT,
DPPB.EFFTV_END_DT,PRC_BOOK_AMT, PRC_LST_RPT_IND ,
SYSDATE + (RANK () OVER (PARTITION BY PROD_PRC_BOOK_CD ORDER BY DPPB.EFFTV_STRT_DT)/(24*60*60)) "RANK",
SYSDATE FROM
DIM_PROD_PRC_BOOK DPPB,dim_prod FG,dim_prod BP,dim_prc_book PB
WHERE
DPPB.BASE_PROD_OID =BP.BASE_PROD_OID and bp.end_date>sysdate and bp.be_id=bp.base_prod_oid AND
FG.FG_OID=DPPB.FG_OID and fg.end_date>sysdate and fg.be_id=fg.fg_oid
AND DPPB.PRC_BOOK_OID=PB.prc_book_oid and pb.end_date>sysdate and pb.be_id=pb.PRC_BOOK_OID
AND DPPB.EFFTV_END_DT > ADD_MONTHS(TRUNC(SYSDATE), -15)
AND DPPB.CURR_IND='Y'
AND
PROD_PRC_BOOK_CD ||'-'||TO_CHAR(DPPB.END_DATE ,'DD-MM-YYYY hh24:mi:ss')
IN(
SELECT PROD_PRC_BOOK_CD ||'-'||TO_CHAR(MAX(DPPB.END_DATE ),'DD-MM-YYYY hh24:mi:ss')
FROM DIM_PROD_PRC_BOOK DPPB WHERE PROD_PRC_BOOK_CD IS NOT NULL GROUP BY PROD_PRC_BOOK_CD ,EFFTV_STRT_DT
)

The explain plan of the query in 9i is:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 1 2964
WINDOW SORT 1 661 2964
HASH JOIN 1 661 2958
TABLE ACCESS BY INDEX ROWID WHSUSR.DIM_PROD 1 73 1
NESTED LOOPS 1 355 290
NESTED LOOPS 1 282 289
HASH JOIN 164 32 K 284
TABLE ACCESS FULL WHSUSR.DIM_PRC_BOOK 1 57 2
TABLE ACCESS FULL WHSUSR.DIM_PROD_PRC_BOOK 6 K 957 K 281
TABLE ACCESS BY INDEX ROWID WHSUSR.DIM_PROD 1 77 1
INDEX RANGE SCAN WHSUSR.XN15_DIM_PROD 3 1
INDEX RANGE SCAN WHSUSR.XN22_DIM_PROD 5 1
VIEW SYS.VW_NSO_1 132 K 38 M 2665
SORT UNIQUE 132 K 6 M 2665
SORT GROUP BY 132 K 6 M 2665
TABLE ACCESS FULL WHSUSR.DIM_PROD_PRC_BOOK 132 K 6 M 281

And the explain plan of the query in the 10g database is:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=ALL_ROWS 4 1702
WINDOW SORT 4 1 K 1702
FILTER
TABLE ACCESS BY INDEX ROWID WHSUSR.DIM_PROD 1 73 1
NESTED LOOPS 1 339 899
NESTED LOOPS 14 3 K 898
HASH JOIN 2 K 428 K 805
TABLE ACCESS FULL WHSUSR.DIM_PRC_BOOK 1 53 3
TABLE ACCESS FULL WHSUSR.DIM_PROD_PRC_BOOK 93 K 12 M 801
TABLE ACCESS BY INDEX ROWID WHSUSR.DIM_PROD 1 77 1
INDEX RANGE SCAN WHSUSR.XN15_DIM_PROD 2 1
INDEX RANGE SCAN WHSUSR.XN22_DIM_PROD 5 1
FILTER
HASH GROUP BY 1 K 59 K 802
TABLE ACCESS FULL WHSUSR.DIM_PROD_PRC_BOOK 117 K 5 M 794

Please help in identifying the problem and how to tune it.

user605926 wrote:
Thanks Sir for your immense help. I used the hint /*+ optimizer_features_enable('9.2.0.8') */ and the query took only 2 seconds. I am really delighted.
Sorry for not clicking the 'helpful' button earlier; honestly, I did not know about the rules. Going forward I will not forget to do that.

Don't apologise, it wasn't intended as a personal criticism - it's just a footnote I tend to use at present as a general reminder to everyone that feedback is useful.
I have one question. Do I have to use this hint for each and every query that becomes a headache, or is there a permanent solution to fix all the queries that used to run well on the 9.2.0.8 database? Please suggest.

When doing an upgrade it is always valid (in the short term) to set the optimizer_features_enable parameter to the value of the database you're moving from, so that you get the code improvements (or bug fixes) of the newer software without risking execution plan changes.
After that the ideal is to test software and identify generic cases where a change like an index definition, or some statistical information, needs to be corrected for a particular reason in classes of queries. Eventually you get down to the point where you have a few awkward queries which the optimizer can't handle and where you need hints. The optimizer_features_enable hint is very convenient here. In 10g, however, you could then capture the older plan and record it as a SQL Baseline against the unhinted query rather than permanently including hints.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
A general reminder about "Forum Etiquette / Reward Points": http://forums.oracle.com/forums/ann.jspa?annID=718
If you never mark your questions as answered people will eventually decide that it's not worth trying to answer you because they will never know whether or not their answer has been of any use, or whether you even bothered to read it.
It is also important to mark answers that you thought helpful - again it lets other people know that you appreciate their help, but it also acts as a pointer for other people when they are researching the same question, moreover it means that when you mark a bad or wrong answer as helpful someone may be prompted to tell you (and the rest of the forum) what's so bad or wrong about the answer you found helpful. -
Newbie: help with join in a select query
Hi: I need some help with creating a select statement.
I have two tables, t1 (fields: id, time, cost, t2id) and t2 (fields: id, time, cost); t2id in t1 refers to the primary key of t2. I want a single select statement to list all time and cost values from both t1 and t2. I think I need a join, but I can't figure it out even after going through some tutorials.
Thanks in advance.
Ray

t1 has the following records:
pkid, time, cost, product
1,123456,34,801
2,123457,20,802
3,345678,40,801
t2 has the following records
id,productid,time,cost
1,801,4356789,12
2,801,4356790,1
3,802,9845679,100
4,801,9345614,12
I want a query that prints the following from t1 (time and cost for records with product=801):
123456,34
345678,40
followed by the following from t2 (time and cost for records with productid=801):
4356789,12
4356790,1
9345614,12
Is this possible?
Thanks
ray -
Query with diff. explain plans
Hi,
Our query returns different execution plans in Prod and non-prod. It is slow in PROD. The data size is the same in both DBs and stats are gathered at 50% estimate for both schemas:
Prod (slow) explain plan:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 852 | 33 (4)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 1 | 852 | 33 (4)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 852 | 32 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 802 | 31 (0)| 00:00:01 |
| 5 | NESTED LOOPS OUTER | | 1 | 785 | 30 (0)| 00:00:01 |
| 6 | NESTED LOOPS OUTER | | 1 | 742 | 29 (0)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 732 | 29 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 678 | 26 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 666 | 26 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 1 | 623 | 25 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 580 | 24 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 576 | 24 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | 2 | 1076 | 13 (0)| 00:00:01 |
| 14 | NESTED LOOPS | | 2 | 1040 | 13 (0)| 00:00:01 |
| 15 | NESTED LOOPS | | 2 | 1028 | 13 (0)| 00:00:01 |
| 16 | NESTED LOOPS | | 2 | 996 | 13 (0)| 00:00:01 |
| 17 | NESTED LOOPS | | 2 | 988 | 13 (0)| 00:00:01 |
| 18 | NESTED LOOPS | | 2 | 954 | 13 (0)| 00:00:01 |
| 19 | NESTED LOOPS | | 2 | 944 | 13 (0)| 00:00:01 |
| 20 | NESTED LOOPS | | 2 | 920 | 13 (0)| 00:00:01 |
| 21 | NESTED LOOPS | | 2 | 912 | 13 (0)| 00:00:01 |
| 22 | NESTED LOOPS | | 2 | 826 | 11 (0)| 00:00:01 |
| 23 | NESTED LOOPS | | 1 | 370 | 9 (0)| 00:00:01 |
| 24 | NESTED LOOPS | | 1 | 306 | 8 (0)| 00:00:01 |
| 25 | NESTED LOOPS | | 1 | 263 | 7 (0)| 00:00:01 |
| 26 | NESTED LOOPS | | 1 | 220 | 6 (0)| 00:00:01 |
| 27 | NESTED LOOPS | | 1 | 177 | 5 (0)| 00:00:01 |
| 28 | NESTED LOOPS | | 1 | 129 | 4 (0)| 00:00:01 |
| 29 | NESTED LOOPS | | 1 | 86 | 3 (0)| 00:00:01 |
| 30 | TABLE ACCESS BY INDEX ROWID| SYMMETADATA | 1 | 43 | 2 (0)| 00:00:01 |
|* 31 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 1 (0)| 00:00:01 |
| 32 | TABLE ACCESS BY INDEX ROWID| SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 33 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 34 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 35 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 36 | TABLE ACCESS BY INDEX ROWID | TPRODUCT | 1 | 48 | 1 (0)| 00:00:01 |
|* 37 | INDEX UNIQUE SCAN | TPRODUCT_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 38 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 39 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 40 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 41 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 42 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 43 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 44 | TABLE ACCESS BY INDEX ROWID | TPRODUCT | 1 | 64 | 1 (0)| 00:00:01 |
|* 45 | INDEX UNIQUE SCAN | TPRODUCT_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 46 | INLIST ITERATOR | | | | | |
| 47 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 2 | 86 | 2 (0)| 00:00:01 |
|* 48 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 2 | | 1 (0)| 00:00:01 |
| 49 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 50 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
|* 51 | INDEX UNIQUE SCAN | SYMUSERCOUNT_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 52 | INDEX UNIQUE SCAN | SYMUSERCOUNT_PK | 1 | 12 | 0 (0)| 00:00:01 |
|* 53 | INDEX UNIQUE SCAN | SYMSKUTYPE_PK | 1 | 5 | 0 (0)| 00:00:01 |
|* 54 | INDEX UNIQUE SCAN | SYMSKUTYPE_PK | 1 | 17 | 0 (0)| 00:00:01 |
|* 55 | INDEX UNIQUE SCAN | SYMSKULANGUAGE_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 56 | INDEX UNIQUE SCAN | SYMSKULANGUAGE_PK | 1 | 16 | 0 (0)| 00:00:01 |
|* 57 | INDEX UNIQUE SCAN | SYMVENDOR_PK | 1 | 6 | 0 (0)| 00:00:01 |
|* 58 | INDEX UNIQUE SCAN | SYMVENDOR_PK | 1 | 18 | 0 (0)| 00:00:01 |
| 59 | TABLE ACCESS BY INDEX ROWID | SYMPRODUCTSKU | 1 | 38 | 6 (0)| 00:00:01 |
|* 60 | INDEX RANGE SCAN | I_PSKU_MERCH_LOOKUP | 1 | | 5 (0)| 00:00:01 |
|* 61 | INDEX UNIQUE SCAN | SYMMEDIATYPE_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 62 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 63 | INDEX UNIQUE SCAN | SYMMETADATA_PK | 1 | | 0 (0)| 00:00:01 |
| 64 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 65 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
|* 66 | INDEX UNIQUE SCAN | SYMMEDIATYPE_PK | 1 | 12 | 0 (0)| 00:00:01 |
| 67 | TABLE ACCESS BY INDEX ROWID | SYMPRODUCTSKU | 1 | 54 | 3 (0)| 00:00:01 |
|* 68 | INDEX RANGE SCAN | I_PSKU_MERCH_LOOKUP | 1 | | 2 (0)| 00:00:01 |
|* 69 | INDEX UNIQUE SCAN | SYMPCCOUNT_PK | 1 | 10 | 0 (0)| 00:00:01 |
| 70 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
|* 71 | INDEX UNIQUE SCAN | SYMMETADATA_PK | 1 | | 0 (0)| 00:00:01 |
|* 72 | TABLE ACCESS BY INDEX ROWID | TPRODUCTSKU | 1 | 17 | 1 (0)| 00:00:01 |
|* 73 | INDEX UNIQUE SCAN | TPRODUCTSKU_PK | 1 | | 0 (0)| 00:00:01 |
|* 74 | TABLE ACCESS BY INDEX ROWID | TPRODUCTSKU | 1 | 50 | 1 (0)| 00:00:01 |
|* 75 | INDEX UNIQUE SCAN | TPRODUCTSKU_PK | 1 | | 0 (0)| 00:00:01 |
Non Prod (Fast) Plan:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 383 | 28 (0)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | NESTED LOOPS | | 1 | 383 | 17 (0)| 00:00:01 |
| 3 | NESTED LOOPS OUTER | | 1 | 350 | 16 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 308 | 15 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 1 | 266 | 14 (0)| 00:00:01 |
| 6 | NESTED LOOPS OUTER | | 1 | 262 | 14 (0)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 258 | 14 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 2 | 438 | 7 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 2 | 428 | 7 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 2 | 420 | 7 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 2 | 410 | 7 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 2 | 402 | 7 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 159 | 5 (0)| 00:00:01 |
| 14 | NESTED LOOPS | | 1 | 126 | 4 (0)| 00:00:01 |
| 15 | NESTED LOOPS | | 1 | 84 | 3 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID| SYMMETADATA | 1 | 42 | 2 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 1 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID| SYMMETADATA | 1 | 42 | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 20 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 42 | 1 (0)| 00:00:01 |
|* 21 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 22 | TABLE ACCESS BY INDEX ROWID | TPRODUCT | 1 | 33 | 1 (0)| 00:00:01 |
|* 23 | INDEX UNIQUE SCAN | TPRODUCT_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 24 | INLIST ITERATOR | | | | | |
| 25 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 2 | 84 | 2 (0)| 00:00:01 |
|* 26 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 2 | | 1 (0)| 00:00:01 |
|* 27 | INDEX UNIQUE SCAN | SYMUSERCOUNT_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 28 | INDEX UNIQUE SCAN | SYMSKUTYPE_PK | 1 | 5 | 0 (0)| 00:00:01 |
|* 29 | INDEX UNIQUE SCAN | SYMSKULANGUAGE_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 30 | INDEX UNIQUE SCAN | SYMVENDOR_PK | 1 | 5 | 0 (0)| 00:00:01 |
| 31 | TABLE ACCESS BY INDEX ROWID | SYMPRODUCTSKU | 1 | 39 | 4 (0)| 00:00:01 |
|* 32 | INDEX RANGE SCAN | I_PSKU_MERCH_LOOKUP | 1 | | 3 (0)| 00:00:01 |
|* 33 | INDEX UNIQUE SCAN | SYMPCCOUNT_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 34 | INDEX UNIQUE SCAN | SYMMEDIATYPE_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 35 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 42 | 1 (0)| 00:00:01 |
|* 36 | INDEX UNIQUE SCAN | SYMMETADATA_PK | 1 | | 0 (0)| 00:00:01 |
| 37 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 42 | 1 (0)| 00:00:01 |
|* 38 | INDEX UNIQUE SCAN | SYMMETADATA_PK | 1 | | 0 (0)| 00:00:01 |
|* 39 | TABLE ACCESS BY INDEX ROWID | TPRODUCTSKU | 1 | 33 | 1 (0)| 00:00:01 |
|* 40 | INDEX UNIQUE SCAN | TPRODUCTSKU_PK | 1 | | 0 (0)| 00:00:01 |
| 41 | SORT AGGREGATE | | 1 | 252 | | |
| 42 | NESTED LOOPS | | 1 | 252 | 11 (0)| 00:00:01 |
| 43 | NESTED LOOPS | | 1 | 240 | 10 (0)| 00:00:01 |
| 44 | NESTED LOOPS | | 1 | 205 | 7 (0)| 00:00:01 |
| 45 | NESTED LOOPS | | 1 | 200 | 7 (0)| 00:00:01 |
| 46 | NESTED LOOPS | | 1 | 196 | 7 (0)| 00:00:01 |
| 47 | NESTED LOOPS | | 1 | 191 | 7 (0)| 00:00:01 |
| 48 | NESTED LOOPS | | 1 | 187 | 7 (0)| 00:00:01 |
| 49 | NESTED LOOPS | | 1 | 183 | 7 (0)| 00:00:01 |
| 50 | NESTED LOOPS | | 1 | 150 | 6 (0)| 00:00:01 |
| 51 | NESTED LOOPS | | 1 | 120 | 5 (0)| 00:00:01 |
| 52 | NESTED LOOPS | | 1 | 90 | 4 (0)| 00:00:01 |
| 53 | NESTED LOOPS | | 1 | 60 | 3 (0)| 00:00:01 |
| 54 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 2 (0)| 00:00:01 |
|* 55 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 1 (0)| 00:00:01 |
| 56 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 1 (0)| 00:00:01 |
|* 57 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 58 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 1 (0)| 00:00:01 |
|* 59 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 60 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 1 (0)| 00:00:01 |
|* 61 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 62 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 1 (0)| 00:00:01 |
|* 63 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
| 64 | TABLE ACCESS BY INDEX ROWID | TPRODUCT | 1 | 33 | 1 (0)| 00:00:01 |
|* 65 | INDEX UNIQUE SCAN | TPRODUCT_UNIQUE | 1 | | 0 (0)| 00:00:01 |
|* 66 | INDEX UNIQUE SCAN | SYMUSERCOUNT_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 67 | INDEX UNIQUE SCAN | SYMMEDIATYPE_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 68 | INDEX UNIQUE SCAN | SYMSKUTYPE_PK | 1 | 5 | 0 (0)| 00:00:01 |
|* 69 | INDEX UNIQUE SCAN | SYMSKULANGUAGE_PK | 1 | 4 | 0 (0)| 00:00:01 |
|* 70 | INDEX UNIQUE SCAN | SYMVENDOR_PK | 1 | 5 | 0 (0)| 00:00:01 |
| 71 | TABLE ACCESS BY INDEX ROWID | SYMPRODUCTSKU | 1 | 35 | 3 (0)| 00:00:01 |
|* 72 | INDEX RANGE SCAN | I_PSKU_MERCH_LOOKUP | 1 | | 2 (0)| 00:00:01 |
|* 73 | TABLE ACCESS BY INDEX ROWID | TPRODUCTSKU | 1 | 12 | 1 (0)| 00:00:01 |
|* 74 | INDEX UNIQUE SCAN | TPRODUCTSKU_PK | 1 | | 0 (0)| 00:00:01 |
Database version is 10.2.0.4. Can anyone help me understand what else I should be looking at to make this run faster?
Please see the following threads for the information you should ideally provide with a tuning request:
How to post a SQL tuning request:
HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long:
When your query takes too long ...
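In the meantime, one quick way to see where the two plans actually differ at run time (available on 10.2) is to execute the statement with rowsource statistics enabled and then compare estimated vs. actual row counts per step. A minimal sketch, assuming you can run the query from SQL*Plus in the same session; the query body itself is a placeholder:

```sql
-- Run the statement once with per-step rowsource statistics gathered
-- (the hint enables actual row counts for this execution only).
SELECT /*+ GATHER_PLAN_STATISTICS */ ...your query here... ;

-- Then pull the plan for the last statement from the cursor cache,
-- including the actual statistics. Large gaps between E-Rows and
-- A-Rows point at the step whose cardinality estimate is off.
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

With the long chains of NESTED LOOPS in both plans, a badly underestimated step early in the chain multiplies through every join below it, so that is usually the first place to look before comparing optimizer parameters or statistics between the two environments.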