Improving performance for a Rule Based Optimizer DB
Hi,
I am looking for information on improving the current performance of an ancient 35 GB Oracle 7.3.4 database running in RULE-based optimizer mode. It uses a 160 MB SGA, and the system has 512 MB of physical RAM.
As of now, all the major time-consuming tasks are run after peak hours so that the 130 user sessions are not affected significantly.
But recently I was told that some procedures take too long to execute (the procedure truncates tables and re-populates them), and I do see that 54% of the wait-time pie chart is "sequential reads", followed by "scattered reads" at 36%. There are a couple of large tables of around 4 GB in this DB.
Autotrace doesn't help me much in terms of getting an explain plan for the slow queries, since the COST column doesn't show up under the RBO, and I am trying to find ways of improving the performance of the DB in general.
Apart from the "redo log space requests" I run into frequently (which, by the way, I am trying to resolve, thanks to some of you), I don't see much info on exactly how to proceed.
Is there any info that I can look towards in terms of improving performance on this rule-based optimizer DB? Or is identifying the top SQL statements in terms of buffer gets the only way to tune?
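Identifying top SQL by buffer gets is workable on 7.3 through V$SQLAREA. A sketch; the 100000 threshold is an arbitrary starting point, not a recommendation:

```sql
-- Heaviest statements by logical I/O; raise or lower the threshold to taste.
SELECT buffer_gets,
       executions,
       disk_reads,
       buffer_gets / DECODE(executions, 0, 1, executions) gets_per_exec,
       SUBSTR(sql_text, 1, 60) sql_head
  FROM v$sqlarea
 WHERE buffer_gets > 100000
 ORDER BY buffer_gets DESC;
```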
Thank you for any suggestions provided.
Thanks Hemant.
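On the autotrace point: EXPLAIN PLAN itself still works under the RBO; only the COST column stays empty. A sketch, assuming PLAN_TABLE already exists (created by utlxplan.sql) and using a hypothetical slow statement:

```sql
EXPLAIN PLAN SET STATEMENT_ID = 'SLOW1' FOR
SELECT *                         -- hypothetical slow query
  FROM big_table
 WHERE some_col = :b1;

-- Display the plan tree (COST stays null under the RBO):
SELECT LPAD(' ', 2 * (LEVEL - 1)) || operation || ' ' ||
       options || ' ' || object_name plan_step
  FROM plan_table
 WHERE statement_id = 'SLOW1'
 START WITH id = 0 AND statement_id = 'SLOW1'
CONNECT BY PRIOR id = parent_id AND statement_id = 'SLOW1';
```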
This is for a 15-minute interval under moderate load early this morning.
Statistic Total Per Transact Per Logon Per Second
CR blocks created 275 .95 5.19 .29
Current blocks converted fo 10 .03 .19 .01
DBWR buffers scanned 74600 258.13 1407.55 78.44
DBWR free buffers found 74251 256.92 1400.96 78.08
DBWR lru scans 607 2.1 11.45 .64
DBWR make free requests 607 2.1 11.45 .64
DBWR summed scan depth 74600 258.13 1407.55 78.44
DBWR timeouts 273 .94 5.15 .29
OS Integral shared text siz 1362952204 4716097.59 25716079.32 1433177.92
OS Integral unshared data s 308759380 1068371.56 5825648.68 324668.12
OS Involuntary context swit 310493 1074.37 5858.36 326.49
OS Maximum resident set siz 339968 1176.36 6414.49 357.48
OS Page faults 3434 11.88 64.79 3.61
OS Page reclaims 6272 21.7 118.34 6.6
OS System time used 19157 66.29 361.45 20.14
OS User time used 195036 674.87 3679.92 205.09
OS Voluntary context switch 21586 74.69 407.28 22.7
SQL*Net roundtrips to/from 16250 56.23 306.6 17.09
SQL*Net roundtrips to/from 424 1.47 8 .45
background timeouts 646 2.24 12.19 .68
bytes received via SQL*Net 814224 2817.38 15362.72 856.18
bytes received via SQL*Net 24470 84.67 461.7 25.73
bytes sent via SQL*Net to c 832836 2881.79 15713.89 875.75
bytes sent via SQL*Net to d 42713 147.8 805.91 44.91
calls to get snapshot scn: 17103 59.18 322.7 17.98
calls to kcmgas 381 1.32 7.19 .4
calls to kcmgcs 228 .79 4.3 .24
calls to kcmgrs 20845 72.13 393.3 21.92
cleanouts and rollbacks - c 86 .3 1.62 .09
cleanouts only - consistent 40 .14 .75 .04
cluster key scan block gets 1051 3.64 19.83 1.11
cluster key scans 376 1.3 7.09 .4
commit cleanout failures: c 18 .06 .34 .02
commit cleanout number succ 2406 8.33 45.4 2.53
consistent changes 588 2.03 11.09 .62
consistent gets 929408 3215.94 17536 977.3
cursor authentications 1746 6.04 32.94 1.84
data blocks consistent read 588 2.03 11.09 .62
db block changes 20613 71.33 388.92 21.68
db block gets 40646 140.64 766.91 42.74
deferred (CURRENT) block cl 668 2.31 12.6 .7
dirty buffers inspected 3 .01 .06 0
enqueue conversions 424 1.47 8 .45
enqueue releases 1981 6.85 37.38 2.08
enqueue requests 1977 6.84 37.3 2.08
execute count 20691 71.6 390.4 21.76
free buffer inspected 2264 7.83 42.72 2.38
free buffer requested 490899 1698.61 9262.25 516.19
immediate (CR) block cleano 126 .44 2.38 .13
immediate (CURRENT) block c 658 2.28 12.42 .69
logons cumulative 53 .18 1 .06
logons current 1 0 .02 0
messages received 963 3.33 18.17 1.01
messages sent 963 3.33 18.17 1.01
no work - consistent read g 905734 3134.03 17089.32 952.4
opened cursors cumulative 2701 9.35 50.96 2.84
opened cursors current 147 .51 2.77 .15
parse count 2733 9.46 51.57 2.87
physical reads 490258 1696.39 9250.15 515.52
physical writes 2265 7.84 42.74 2.38
recursive calls 37296 129.05 703.7 39.22
redo blocks written 5222 18.07 98.53 5.49
redo entries 10575 36.59 199.53 11.12
redo size 2498156 8644.14 47135.02 2626.87
redo small copies 10575 36.59 199.53 11.12
redo synch writes 238 .82 4.49 .25
redo wastage 104974 363.23 1980.64 110.38
redo writes 422 1.46 7.96 .44
rollback changes - undo rec 1 0 .02 0
rollbacks only - consistent 200 .69 3.77 .21
session logical reads 969453 3354.51 18291.57 1019.4
session pga memory 35597936 123176.25 671659.17 37432.11
session pga memory max 35579576 123112.72 671312.75 37412.8
session uga memory 2729196 9443.58 51494.26 2869.82
session uga memory max 20580712 71213.54 388315.32 21641.13
sorts (memory) 1091 3.78 20.58 1.15
sorts (rows) 12249 42.38 231.11 12.88
table fetch by rowid 57246 198.08 1080.11 60.2
table fetch continued row 111 .38 2.09 .12
table scan blocks gotten 763421 2641.6 14404.17 802.76
table scan rows gotten 13740187 47543.9 259248.81 14448.15
table scans (long tables) 902 3.12 17.02 .95
table scans (short tables) 4614 15.97 87.06 4.85
total number commit cleanou 2489 8.61 46.96 2.62
transaction rollbacks 1 0 .02 0
user calls 15266 52.82 288.04 16.05
user commits 289 1 5.45 .3
user rollbacks 23 .08 .43 .02
write requests 331 1.15 6.25 .35
Wait Events:
Event Name Count Total Time Avg Time
SQL*Net break/reset to client 7 0 0
SQL*Net message from client 16383 0 0
SQL*Net message from dblink 424 0 0
SQL*Net message to client 16380 0 0
SQL*Net message to dblink 424 0 0
SQL*Net more data from client 1 0 0
SQL*Net more data to client 24 0 0
buffer busy waits 169 0 0
control file sequential read 55 0 0
db file scattered read 74788 0 0
db file sequential read 176241 0 0
latch free 6134 0 0
log file sync 225 0 0
rdbms ipc message 10 0 0
write complete waits 4 0 0
I did enable timed_statistics for the session but don't know why the times are all zeros. Since I can't bounce the instance until the weekend, I can't enable the parameter in init.ora either.
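On the zeroed wait times: TIMED_STATISTICS can usually be switched on per session without a bounce, but it only times events for sessions that have it set; instance-wide timing still needs the init.ora parameter. A sketch:

```sql
-- Per-session, effective immediately (times only this session's waits):
ALTER SESSION SET timed_statistics = TRUE;

-- Instance-wide on 7.3: set in init.ora and restart the instance:
--   timed_statistics = true
```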
Similar Messages
-
Hi all,
On one of the production servers we are using the RULE-based optimizer (it's an application requirement).
I have to tune this database as users are complaining about the performance.
Any tips how can I tune for a RULE BASED optimizer database.
Will the tuning strategy remain the same, i.e. examining execution plans for missing indexes, instance parameters, and so on,
except that you can't generate stats?
Regards
Umair
Hi!
One thing about the RBO: you must check the execution plans of all long-running queries, try to find better plans, and then force the RBO to use them. You can use various hints to change execution plans. But tuning an RBO database takes a great deal of time; you essentially have to be the CBO yourself ;) -
Rule based optimizer vs Cost based optimizer - 9i
Is the rule-based optimizer no longer used, or can it still be used depending on the application, etc.?
I think Rule based optimizer still has some advantages. Please give your input if you think otherwise.
Thx
> I think Rule based optimizer still has some advantages. Please give your input if you think otherwise.
You are absolutely correct. There are a few advantages to RBO.
RBO is better for any application that meets the following criteria:
- designed for Oracle version 7;
- has not been updated since Oracle 7;
- was hand tuned in Oracle 7;
- will not be upgraded to Oracle Database 10g (where RBO is obsolete);
- will not use Bitmap Indexes, Materialized Views, Query Rewrite, or virtually anything that was introduced in Oracle 8 and beyond.
CBO, while not perfect, will allow new features to be used. And it is improving with every release. -
hi,
my database is 10.2.0.1; by default optimizer_mode=ALL_ROWS.
For some sessions I need the rule-based optimizer,
so can I use
alter session set optimizer_mode=rule;
Will it affect that session only, or the entire database?
I also want to set the following at session level:
ALTER SESSION SET "_HASH_JOIN_ENABLED" = FALSE;
ALTER SESSION SET "_OPTIMIZER_SORTMERGE_JOIN_ENABLED" = FALSE ;
ALTER SESSION SET "_OPTIMIZER_JOIN_SEL_SANITY_CHECK" = TRUE;
Will those affect only the session or the entire database? Please suggest.
< CBO outperforms RBO ALWAYS! >
I disagree, mildly. When I tune SQL, the first thing I try is a RULE hint, and in very simple databases the RBO still does a good job.
Of course, you should not use RULE hints in production (that's Oracle's job).
When Oracle eBusiness Suite migrated to the CBO, they placed gobs of RULE hints into their own SQL!
Anyway, always adjust your CBO stats to replicate an RBO execution plan . . . specifically CAST() conversions from collections and pipelined functions.
Interesting. Have you tried dynamic sampling for that?
Hope this helps. . .
Don Burleson
Oracle Press author
Author of “Oracle Tuning: The Definitive Reference”
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm -
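As the thread above discusses, ALTER SESSION SET OPTIMIZER_MODE changes only the issuing session, not the whole database. One way to sanity-check from within the session (sketch):

```sql
ALTER SESSION SET optimizer_mode = RULE;

-- V$PARAMETER reflects the current session's effective value:
SELECT name, value
  FROM v$parameter
 WHERE name = 'optimizer_mode';
```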
Rule-Based Optimizer doesn't use the index
Does anybody know why the rule-based optimizer doesn't use the index of all columns in the where clause?
I have a select that uses the RULE hint to force the optimizer to work in rule mode, and also one index on all the columns used in the where clause. Analyzing the execution plan (EXPLAIN PLAN), I observed that the optimizer accesses all tables but one using the index. There's one table (the first in the execution plan) that is accessed using a full table scan (FTS).
I've rebuilt the index for this table, but the execution plan doesn't change.
Any suggestions?
Thanks in advance.
Eliane.
Hi. Oracle may not use an index if it finds that a full table scan is quicker/more efficient. Try the hint /*+ INDEX (table index) */ and compare the query performance with and without it. (As you know, if you force the rule-based approach, the COST column in the EXPLAIN PLAN output will not be populated. You may have to use trace/tkprof.)
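A concrete form of the suggested hint, with hypothetical table and index names (the alias used in the hint must match the alias in the FROM clause):

```sql
SELECT /*+ INDEX(t big_tab_col_idx) */
       t.col1, t.col2
  FROM big_tab t
 WHERE t.col1 = :val;
```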
-
Hi,
Rule-based optimization is a deprecated feature in Oracle 10g. We are in the process of migrating from Oracle 9i to 10g. I have never heard of rule-based optimization before. I have googled for it, but got confused by the results.
Can anybody shed some light on the below things...
Is this Optimization done by Oracle or as a developer do we need to take care of the rules while writing SQL statements?
There is another thing called Cost Based Optimization...
Who instructs Oracle whether to use rule-based or cost-based optimization?
Thanks & Regards,
user569598
Hope the following explanation is helpful.
Whenever a statement is fired, Oracle goes through the following stages:
Parse -> Execute -> Fetch (fetch only for select statements).
During the parse, Oracle first performs syntactic checking (SELECT, FROM, WHERE, ORDER BY, GROUP BY, etc.) and then semantic checking (column names, table names, user permissions on the objects, etc.). Once these two stages pass, it must decide whether to do a soft parse or a hard parse. If a similar cursor (statement) doesn't exist in the shared pool, Oracle does a hard parse, where the optimizer comes into the picture to generate the query plan.
Oracle then has to decide between the RBO and the CBO. This depends on the OPTIMIZER_MODE parameter value. If the RULE hint is used, the RBO will be used; if there are no statistics for the tables involved in the query, Oracle chooses the RBO (conditions apply). If statistics are available, or dynamic sampling is defined, Oracle uses the CBO to prepare the optimal execution plan.
The RBO simply relies on a set of rules, whereas the CBO relies on statistical information.
Jaffar -
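To illustrate the statistics-driven choice described above under OPTIMIZER_MODE = CHOOSE (the table name EMP is just an example; on 10g, DBMS_STATS is the preferred interface and the RBO is desupported):

```sql
-- With statistics present, queries on EMP become CBO-eligible under CHOOSE:
ANALYZE TABLE emp COMPUTE STATISTICS;

-- Removing them sends queries on EMP back to the RBO under CHOOSE:
ANALYZE TABLE emp DELETE STATISTICS;
```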
BADI for MDGF rule based workflow
Hi Experts,
I am really struggling to get a BADI that can route on a field assigned in my single and agent decision tables. I have used the standard BADI that was provided in the RDS documentation for BP and Materials, and just tried to change the entity name and field names, without success.
Can anyone please provide me with an example where someone has used this to route on field in finance.
I am trying to route on Segment for Profit center
Your help will be highly appreciated
Thanks and best regards
Riaan
Hi Abdullah,
I am using an existing attribute in the data model OG in the entity PCTR. The field name is PCTRSEG and element is fb_segment. In my rule based workflow in the Single decision table I have added fb_segment and I have populated the values against step 00. I have also updated my agent decision table with the fb_segment value.
I am attaching the BADI that I am struggling with.
The service name for the change request is in the BADI filter. Thus, when the requestor submits, the BADI will be called and it will route to the person assigned in my agent decision table for the relevant segment.
My problem is the following:
The issue is that I don't know where I should maintain PCTRSEG and where I should maintain fb_segment in the BADI. That is, where do I use the attribute name from the data model, and where do I use the data element from the model?
When the requestor submits the request, it does not go to the next approver and I get the error "Agent could not be determined".
I found that when I change any of the values for the segment in the single and agent decision tables to not-equal, for example <> 1001, the workflow works, but then all change requests go to the same person.
Thus my assumption is that something might be wrong with the BADI
Your help will be highly appreciated
Thanks
Riaan
Please find Badi below
method IF_USMD_SSW_RULE_CNTX_PREPARE~PREPARE_RULE_CONTEXT.
  DATA: lo_crequest    TYPE REF TO if_usmd_crequest_api,
        lt_entities    TYPE usmd_t_crequest_entity,
        ls_entity      TYPE usmd_s_crequest_entity,
        lr_table       TYPE REF TO data,
        lt_sel         TYPE usmd_ts_sel,
        ls_sel         TYPE usmd_s_sel,
        lv_brf_expr_id TYPE if_fdt_types=>id,
        ls_context     TYPE usmd_s_fdt_context_value,
        lv_exit        TYPE c.
  FIELD-SYMBOLS: <lt_fin_int> TYPE ANY TABLE,
                 <ld_fin_int> TYPE any,
                 <pctrseg>    TYPE fb_segment,
                 <value>      TYPE any.

* Prepare export parameters
  CLEAR et_message.
  CLEAR et_rule_context_value.

* Get the CR API for the current CR
  CALL METHOD cl_usmd_crequest_api=>get_instance
    EXPORTING
      iv_crequest          = iv_cr_number
    IMPORTING
      re_inst_crequest_api = lo_crequest.

* Create data instance of the entity PCTR for read access
  CALL METHOD lo_crequest->create_data_reference
    EXPORTING
      iv_entity  = 'PCTR'
      i_struct   = if_usmd_model=>gc_struct_key_attr
    IMPORTING
      er_table   = lr_table
      et_message = et_message.
  CHECK et_message IS INITIAL.
  ASSIGN lr_table->* TO <lt_fin_int>.

* Get the instance keys for entity type PCTR
  CALL METHOD lo_crequest->read_objectlist
    EXPORTING
      iv_entity_type = 'PCTR'
    IMPORTING
      et_entity  = lt_entities
      et_message = et_message.
  CHECK et_message IS INITIAL.

* Read the PCTR entity of the one and only PCTR of the CR
  READ TABLE lt_entities INTO ls_entity INDEX 1.
  CHECK sy-subrc = 0.
  ls_sel-fieldname = 'PCTRSEG'.
  ls_sel-sign      = 'I'.
  ls_sel-option    = 'EQ'.
  ls_sel-low       = ls_entity-usmd_value.
  APPEND ls_sel TO lt_sel.
  CALL METHOD lo_crequest->read_value
    EXPORTING
      i_fieldname      = 'PCTRSEG'
      it_sel           = lt_sel
      if_edition_logic = abap_false
    IMPORTING
      et_data    = <lt_fin_int>
      et_message = et_message.

* Get the one and only FB_SEGMENT of the one PCTR in the CR
  LOOP AT <lt_fin_int> ASSIGNING <ld_fin_int>.
    ASSIGN COMPONENT 'PCTRSEG' OF STRUCTURE <ld_fin_int> TO <pctrseg>.
    EXIT.
  ENDLOOP.
  CHECK sy-subrc = 0.

* Fill out the return table
  get_element_id(
    EXPORTING
      iv_cr_type = lo_crequest->ds_crequest-usmd_creq_type
      iv_name    = 'PCTR'
    IMPORTING
      ev_brf_expr_id = lv_brf_expr_id ).
  ls_context-id = lv_brf_expr_id.
  CREATE DATA ls_context-value TYPE fb_segment.
  ASSIGN ls_context-value->* TO <value>.
  <value> = <pctrseg>.
  APPEND ls_context TO et_rule_context_value.
endmethod. -
How to improve performance for this program
Hi,
I want to improve the performance of this program. Please help me. Here is the code.
Thanks
kumar
--# SCCS: @(#)nscdprpt_rpt_fam_fweq_detail_aggr_dml.sql 2.1 08/17/05 12:16:11
--# Location: /appl/dpdm/src/sql/SCCS/s.nscdprpt_rpt_fam_fweq_detail_aggr_dml.sql
-- =============================================================================
-- Purpose:
-- DML Script to calculate aggregate rows from table NSCDPRPT.RPT_FAM_FWEQ_DETAIL
-- inserts one (1) monthly total row for each comparison type (6 of them)
-- Modification History
-- Date Version Who What
-- 28-Jun-05 V1.00 J. Myers Initial Version
-- 17-Aug-05 gzuerc Replaced UNION ALL code with INSERT's to avoid
-- Oracle error out of undo tablespace.
-- ============================================================================
-- NOTES:
-- This process pulls data from RPT_FAM_FWEQ_DETAIL table.
-- The target table RPT_FAM_FWEQ_DETAIL_AGGR is truncated by plan_ctry and loaded daily;
-- it is the data used in the Forecast Accuracy Measurements Report.
-- The input data is 'wide' (6 quantities), and this script outputs a single
-- row for each quantity as a 'thin' table (single column F1_QTY and F2_QTY shared by all quantities), and
-- with a ROW_TYPE = 'TOTAL'
-- Two (2) other scripts add additional rows to this table as follows:
-- 1) nscdprpt_rpt_fam_FWEQ_detail_aggr_sesn_avg_dml.sql => calculates the
-- total quantities F1_QTY, F2_QTY and AE for each LEVEL_VALUE
-- 2) nscdprpt_rpt_fam_FWEQ_detail_aggr_pct_swg_dml.sql => calculates the
-- percent swing in quantities between DISP_EVENTS for each grouping
-- of PLAN_CTRY, DIVISION, SEASON and TRG_EVENT_NUM for only LEVEL_TYPE = 'SUB_CAT'
-- =============================================================================
-- The result set from joining RPT_FAM_FWEQ_DETAIL rows with comparison types
-- is inserted into the table RPT_FAM_FWEQ_DETAIL_AGGR
INSERT INTO rpt_fam_FWEQ_detail_aggr
SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
compare_name,
first_display_event_num,
level_type,
level_value,
'TOTAL' AS row_type,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd,
NULL AS promo_ind, -- for future use
-- Each RPT_FAM_FWEQ_DETAIL row's eight (8) quantity columns are broken down
-- into F1_QTY from DISP_aaaa_QTY, F2_QTY from TRG_aaaa_QTY and
-- AE from aaaa_AE where 'aaaa' is equal to one of the COMPARE_TYPEs below:
-- F1_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
disp_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
disp_net_qty
WHEN compare_type = 'NET_AO' THEN
disp_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
disp_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
disp_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
disp_auth_futr_qty
END as f1_qty,
-- F2_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
trg_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
trg_net_qty
WHEN compare_type = 'NET_AO' THEN
trg_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
trg_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
trg_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
trg_auth_futures_qty
END as f2_qty,
-- AE
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
abs(disp_dlvry_plan_qty - trg_dlvry_plan_qty)
WHEN compare_type = 'NET' THEN
abs(disp_net_qty - trg_net_qty)
WHEN compare_type = 'NET_AO' THEN
abs(disp_ao_qty - trg_ao_qty)
WHEN compare_type = 'NET_FTRS' THEN
abs(disp_futr_qty - trg_futr_qty)
WHEN compare_type = 'NET_REPLENS' THEN
abs(disp_replen_qty - trg_replen_qty)
WHEN compare_type = 'AUTH_FTRS' THEN
abs(disp_auth_futr_qty - trg_auth_futures_qty)
END as ae,
SYSDATE AS zz_insert_tmst
FROM
-- The following in-line view provides three (3) result sets from the RPT_FAM_FWEQ_DETAIL table
-- This in-line view returns only LEVEL_NUM = 1 or LEVEL_TYPE = 'SUB_CAT' data
(SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
-- The following NULL'd columns' values cannot be saved due to aggregation
NULL AS material,
NULL AS key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
NULL AS sty_colr_sdesc,
NULL AS sty_grp_nbr,
NULL AS sty_grp_desc,
NULL AS matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
NULL AS curr_prod_i2_life_cyc_cd,
sum(disp_net_qty) AS disp_net_qty,
sum(trg_net_qty) AS trg_net_qty,
-- ABS(sum(trg_net_qty) - sum(disp_net_qty)) AS net_AE,
sum(disp_dlvry_plan_qty) AS disp_dlvry_plan_qty,
sum(trg_dlvry_plan_qty) AS trg_dlvry_plan_qty,
-- ABS(sum(trg_dlvry_plan_qty) - sum(disp_dlvry_plan_qty)) AS dlvry_plan_AE,
sum(disp_futr_qty) AS disp_futr_qty,
sum(trg_futr_qty) AS trg_futr_qty,
-- ABS(sum(trg_futr_qty) - sum(disp_futr_qty)) AS futr_AE,
sum(disp_ao_qty) AS disp_ao_qty,
sum(trg_ao_qty) AS trg_ao_qty,
-- ABS(sum(trg_ao_qty) - sum(disp_ao_qty)) AS ao_AE,
sum(disp_replen_qty) AS disp_replen_qty,
sum(trg_replen_qty) AS trg_replen_qty,
-- ABS(sum(trg_replen_qty) - sum(disp_replen_qty)) AS replen_AE,
sum(disp_futr_qty) AS disp_auth_futr_qty,
sum(trg_auth_futures_qty) AS trg_auth_futures_qty --,
-- ABS(sum(trg_auth_futures_qty) - sum(disp_futr_qty)) AS auth_futures_AE
FROM rpt_fam_FWEQ_detail
WHERE plan_ctry &where_plan_ctry
and level_num = 1 -- 'SUB-CAT'
-- AND (promo_ind <> 'Y' OR promo_ind IS null)
GROUP BY plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc -- 'Type Group' in Brio
) dtl,
-- The following in-line view returns all of the different combinations
-- of comparison types in the RPT_FAM_FWEQ_DETAIL table
-- This select returns the pairing of all forecast types
(SELECT event_type, compare_type, compare_name
from
(SELECT event_type, compare_type, compare_type_description || ' to ' || compare_type_description AS compare_name
FROM rpt_fam_compare_types
WHERE event_type = 'FCST'
UNION
-- This select returns the pairing of all bookings types with forecast types
SELECT bkng.event_type, fcst.compare_type, bkng.compare_type_description || ' to ' || fcst.compare_type_description AS compare_name
FROM rpt_fam_compare_types fcst,
rpt_fam_compare_types bkng
WHERE fcst.event_type = 'FCST'
AND bkng.event_type = 'BKNG'
AND fcst.compare_type = bkng.compare_type
)
WHERE event_type || ' ' || compare_type <> 'FCST AUTH_FTRS'
AND compare_type NOT IN ('NET_SHIP', 'SHIP_NET_FTRS', 'NET_ETS', 'GROSS_FTRS')
) cmpr
-- The two (2) in-line views are joined by EVENT_TYPE (i.e. 'FCST' and 'BKNG')
-- to form a product of all RPT_FAM_FWEQ_DETAIL rows with comparison types
WHERE dtl.trg_event_type = cmpr.event_type
ORDER BY plan_ctry,
division,
season,
monthly_seq,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value;
COMMIT;
INSERT INTO rpt_fam_FWEQ_detail_aggr
SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
compare_name,
first_display_event_num,
level_type,
level_value,
'TOTAL' AS row_type,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd,
NULL AS promo_ind, -- for future use
-- Each RPT_FAM_FWEQ_DETAIL row's eight (8) quantity columns are broken down
-- into F1_QTY from DISP_aaaa_QTY, F2_QTY from TRG_aaaa_QTY and
-- AE from aaaa_AE where 'aaaa' is equal to one of the COMPARE_TYPEs below:
-- F1_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
disp_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
disp_net_qty
WHEN compare_type = 'NET_AO' THEN
disp_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
disp_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
disp_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
disp_auth_futr_qty
END as f1_qty,
-- F2_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
trg_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
trg_net_qty
WHEN compare_type = 'NET_AO' THEN
trg_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
trg_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
trg_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
trg_auth_futures_qty
END as f2_qty,
-- AE
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
abs(disp_dlvry_plan_qty - trg_dlvry_plan_qty)
WHEN compare_type = 'NET' THEN
abs(disp_net_qty - trg_net_qty)
WHEN compare_type = 'NET_AO' THEN
abs(disp_ao_qty - trg_ao_qty)
WHEN compare_type = 'NET_FTRS' THEN
abs(disp_futr_qty - trg_futr_qty)
WHEN compare_type = 'NET_REPLENS' THEN
abs(disp_replen_qty - trg_replen_qty)
WHEN compare_type = 'AUTH_FTRS' THEN
abs(disp_auth_futr_qty - trg_auth_futures_qty)
END as ae,
SYSDATE AS zz_insert_tmst
FROM
-- This in-line view returns only LEVEL_NUM = 2 or LEVEL_TYPE = 'STYLE_GROUP' data
(SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
-- The following NULL'd columns' values cannot be saved due to aggregation
NULL AS material,
NULL AS key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
NULL AS sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
NULL AS matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
NULL AS curr_prod_i2_life_cyc_cd,
sum(disp_net_qty) AS disp_net_qty,
sum(trg_net_qty) AS trg_net_qty,
-- ABS(sum(trg_net_qty) - sum(disp_net_qty)) AS net_AE,
sum(disp_dlvry_plan_qty) AS disp_dlvry_plan_qty,
sum(trg_dlvry_plan_qty) AS trg_dlvry_plan_qty,
-- ABS(sum(trg_dlvry_plan_qty) - sum(disp_dlvry_plan_qty)) AS dlvry_plan_AE,
sum(disp_futr_qty) AS disp_futr_qty,
sum(trg_futr_qty) AS trg_futr_qty,
-- ABS(sum(trg_futr_qty) - sum(disp_futr_qty)) AS futr_AE,
sum(disp_ao_qty) AS disp_ao_qty,
sum(trg_ao_qty) AS trg_ao_qty,
-- ABS(sum(trg_ao_qty) - sum(disp_ao_qty)) AS ao_AE,
sum(disp_replen_qty) AS disp_replen_qty,
sum(trg_replen_qty) AS trg_replen_qty,
-- ABS(sum(trg_replen_qty) - sum(disp_replen_qty)) AS replen_AE,
sum(disp_futr_qty) AS disp_auth_futr_qty,
sum(trg_auth_futures_qty) AS trg_auth_futures_qty --,
-- ABS(sum(trg_auth_futures_qty) - sum(disp_futr_qty)) AS auth_futures_AE
FROM rpt_fam_FWEQ_detail
WHERE plan_ctry &where_plan_ctry
and level_num = 2 -- 'STYLE-GRP' or 'STYLE'
-- AND (promo_ind <> 'Y' OR promo_ind IS null)
GROUP BY plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_grp_nbr,
sty_grp_desc,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc -- 'Type Group' in Brio
) dtl,
-- The following in-line view returns all of the different combinations
-- of comparison types in the RPT_FAM_FWEQ_DETAIL table
-- This select returns the pairing of all forecast types
(SELECT event_type, compare_type, compare_name
from
(SELECT event_type, compare_type, compare_type_description || ' to ' || compare_type_description AS compare_name
FROM rpt_fam_compare_types
WHERE event_type = 'FCST'
UNION
-- This select returns the pairing of all bookings types with forecast types
SELECT bkng.event_type, fcst.compare_type, bkng.compare_type_description || ' to ' || fcst.compare_type_description AS compare_name
FROM rpt_fam_compare_types fcst,
rpt_fam_compare_types bkng
WHERE fcst.event_type = 'FCST'
AND bkng.event_type = 'BKNG'
AND fcst.compare_type = bkng.compare_type
)
WHERE event_type || ' ' || compare_type <> 'FCST AUTH_FTRS'
AND compare_type NOT IN ('NET_SHIP', 'SHIP_NET_FTRS', 'NET_ETS', 'GROSS_FTRS')
) cmpr
-- The two (2) in-line views are joined by EVENT_TYPE (i.e. 'FCST' and 'BKNG')
-- to form a product of all RPT_FAM_FWEQ_DETAIL rows with comparison types
WHERE dtl.trg_event_type = cmpr.event_type
ORDER BY plan_ctry,
division,
season,
monthly_seq,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value;
COMMIT;
INSERT INTO rpt_fam_FWEQ_detail_aggr
SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
compare_name,
first_display_event_num,
level_type,
level_value,
'TOTAL' AS row_type,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd,
NULL AS promo_ind, -- for future use
-- Each RPT_FAM_FWEQ_DETAIL row's eight (8) quantity columns are broken down
-- into F1_QTY from DISP_aaaa_QTY, F2_QTY from TRG_aaaa_QTY and
-- AE from aaaa_AE where 'aaaa' is equal to one of the COMPARE_TYPEs below:
-- F1_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
disp_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
disp_net_qty
WHEN compare_type = 'NET_AO' THEN
disp_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
disp_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
disp_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
disp_auth_futr_qty
END as f1_qty,
-- F2_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
trg_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
trg_net_qty
WHEN compare_type = 'NET_AO' THEN
trg_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
trg_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
trg_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
trg_auth_futures_qty
END as f2_qty,
-- AE
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
abs(disp_dlvry_plan_qty - trg_dlvry_plan_qty)
WHEN compare_type = 'NET' THEN
abs(disp_net_qty - trg_net_qty)
WHEN compare_type = 'NET_AO' THEN
abs(disp_ao_qty - trg_ao_qty)
WHEN compare_type = 'NET_FTRS' THEN
abs(disp_futr_qty - trg_futr_qty)
WHEN compare_type = 'NET_REPLENS' THEN
abs(disp_replen_qty - trg_replen_qty)
WHEN compare_type = 'AUTH_FTRS' THEN
abs(disp_auth_futr_qty - trg_auth_futures_qty)
END as ae,
SYSDATE AS zz_insert_tmst
FROM
-- This in-line view returns only LEVEL_NUM = 3 or LEVEL_TYPE = 'MATL' data
(SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd,
sum(disp_net_qty) AS disp_net_qty,
sum(trg_net_qty) AS trg_net_qty,
-- ABS(sum(trg_net_qty) - sum(disp_net_qty)) AS net_AE,
sum(disp_dlvry_plan_qty) AS disp_dlvry_plan_qty,
sum(trg_dlvry_plan_qty) AS trg_dlvry_plan_qty,
-- ABS(sum(trg_dlvry_plan_qty) - sum(disp_dlvry_plan_qty)) AS dlvry_plan_AE,
sum(disp_futr_qty) AS disp_futr_qty,
sum(trg_futr_qty) AS trg_futr_qty,
-- ABS(sum(trg_futr_qty) - sum(disp_futr_qty)) AS futr_AE,
sum(disp_ao_qty) AS disp_ao_qty,
sum(trg_ao_qty) AS trg_ao_qty,
-- ABS(sum(trg_ao_qty) - sum(disp_ao_qty)) AS ao_AE,
sum(disp_replen_qty) AS disp_replen_qty,
sum(trg_replen_qty) AS trg_replen_qty,
-- ABS(sum(trg_replen_qty) - sum(disp_replen_qty)) AS replen_AE,
sum(disp_futr_qty) AS disp_auth_futr_qty,
sum(trg_auth_futures_qty) AS trg_auth_futures_qty --,
-- ABS(sum(trg_auth_futures_qty) - sum(disp_futr_qty)) AS auth_futures_AE
FROM rpt_fam_FWEQ_detail
WHERE plan_ctry &where_plan_ctry
and level_num = 3 -- 'MATERIAL'
-- AND promo_ind <> 'Y'
GROUP BY plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd
) dtl,
-- The following in-line view returns all of the different combinations
-- of comparison types in the RPT_FAM_FWEQ_DETAIL table
-- This select returns the pairing of all forecast types
(SELECT event_type, compare_type, compare_name
FROM
(SELECT event_type, compare_type, compare_type_description || ' to ' || compare_type_description AS compare_name
FROM rpt_fam_compare_types
WHERE event_type = 'FCST'
UNION
-- This select returns the pairing of all bookings types with forecast types
SELECT bkng.event_type, fcst.compare_type, bkng.compare_type_description || ' to ' || fcst.compare_type_description AS compare_name
FROM rpt_fam_compare_types fcst,
rpt_fam_compare_types bkng
WHERE fcst.event_type = 'FCST'
AND bkng.event_type = 'BKNG'
AND fcst.compare_type = bkng.compare_type
)
WHERE event_type || ' ' || compare_type <> 'FCST AUTH_FTRS'
AND compare_type NOT IN ('NET_SHIP', 'SHIP_NET_FTRS', 'NET_ETS', 'GROSS_FTRS')
) cmpr
-- The two (2) in-line views are joined by EVENT_TYPE (i.e. 'FCST' and 'BKNG')
-- to form a product of all RPT_FAM_FWEQ_DETAIL rows with comparison types
WHERE dtl.trg_event_type = cmpr.event_type
ORDER BY plan_ctry,
division,
season,
monthly_seq,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value
COMMIT
/
I agree.
SELECT ticket_no,name_of_the_passenger..
FROM ticket_master
WHERE ticket_no NOT IN
(SELECT Ticket_no
FROM ticket_cancellations
WHERE trip_id = my_trip_id);
This involves creating a little temp table from the subquery and then full-scanning it for each record in ticket_master.
Change it to
SELECT ticket_no,name_of_the_passenger..
FROM ticket_master B
WHERE NOT EXISTS
(SELECT null
FROM ticket_cancellations A
WHERE A.ticket_no = B.ticket_no
AND trip_id = my_trip_id);
Then you get an index hit in both cases. -
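The NOT IN to NOT EXISTS rewrite above can be sketched end to end with SQLite from Python (a toy stand-in for the post's Oracle schema; table contents are made up). It also shows a related correctness caveat worth knowing before rewriting in either direction: once the subquery can return NULL, NOT IN silently returns no rows, while NOT EXISTS keeps working.

```python
import sqlite3

# Toy stand-in for the post's ticket_master / ticket_cancellations tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ticket_master (ticket_no INTEGER, passenger TEXT);
    CREATE TABLE ticket_cancellations (ticket_no INTEGER, trip_id INTEGER);
    CREATE INDEX idx_cancel ON ticket_cancellations (ticket_no, trip_id);
    INSERT INTO ticket_master VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO ticket_cancellations VALUES (2, 10);
""")

not_in = con.execute("""
    SELECT ticket_no FROM ticket_master
    WHERE ticket_no NOT IN
      (SELECT ticket_no FROM ticket_cancellations WHERE trip_id = 10)
""").fetchall()

not_exists = con.execute("""
    SELECT ticket_no FROM ticket_master b
    WHERE NOT EXISTS
      (SELECT NULL FROM ticket_cancellations a
       WHERE a.ticket_no = b.ticket_no AND a.trip_id = 10)
""").fetchall()

print(not_in)      # [(1,), (3,)]
print(not_exists)  # [(1,), (3,)]

# Caveat: after a NULL sneaks into the subquery's column, the NOT IN form
# returns no rows at all, while the NOT EXISTS form still behaves as expected.
con.execute("INSERT INTO ticket_cancellations VALUES (NULL, 10)")
print(con.execute("""
    SELECT ticket_no FROM ticket_master
    WHERE ticket_no NOT IN
      (SELECT ticket_no FROM ticket_cancellations WHERE trip_id = 10)
""").fetchall())   # []
```

Both forms return the same uncancelled tickets on clean data; the NULL behavior is one more reason the NOT EXISTS form is usually the safer rewrite.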
Using Decision Service for existing rules based on Java Fact
Hello
According to the BPEL Process Manager Developer's Guide (10.1.3.1), page 18-28, if we want to call an existing ruleset based on Java facts from BPEL, we should write an RL function that converts the XML data into Java and asserts the facts, and after retrieving the result we should set the data from Java types back into XML types. From BPEL we should call this function.
But I have a question: we know that rules based on Java facts have better performance than XML facts. Instead of using a Decision Service, we have written a Java function that runs the ruleset, exposed it as a web service, and invoked it from BPEL.
What is the performance difference if we use this approach instead of Decision Services?
Thank you
ralf
So I want to write an RL function and call my ruleset (based on my Java facts) from this RL function, and then call this RL function from BPEL Decision Services.
I have a problem: I want to generate an XSD that matches my Java fact A. How can I easily create an XSD from my Java fact class? I saw http://technology.amis.nl/blog/?p=3221, which shows how to create an XSD from Java classes, and I could generate an XSD from Java class A by the recommended method.
I have class A that contains member B as an array of objects of class C. My problem is that when I import the generated XSD into Rule Author, the generated JAXB class does not expose member B as an array of class C.
Is there any way I can modify class A into a JAXB-compatible class and generate the XSD file from its JAXB class?
I don't know the JAXB specification and I do not know how JAXB works. I want to easily generate a JAXB class from my Java class A and then retrieve the corresponding XSD from it.
Can I do such a thing? -
How to avoid a full table scan when using the rule-based optimizer (Oracle 8.1.7)
1. We have an Oracle 8.1.7 DB, and the optimizer_mode is set to "RULE".
2. There are several indexes on table cm_contract_supply, which is a large table with 28,732,830 rows and an average row length of 149 bytes:
COLUMN_NAME INDEX_NAME
PROGRESS_RECID XAK11CM_CONTRACT_SUPPLY
COMPANY_CODE XIE1CM_CONTRACT_SUPPLY
CONTRACT_NUMBER XIE1CM_CONTRACT_SUPPLY
COUNTRY_CODE XIE1CM_CONTRACT_SUPPLY
SUPPLY_TYPE_CODE XIE1CM_CONTRACT_SUPPLY
VERSION_NUMBER XIE1CM_CONTRACT_SUPPLY
CAMPAIGN_CODE XIF1290CM_CONTRACT_SUPPLY
COMPANY_CODE XIF1290CM_CONTRACT_SUPPLY
COUNTRY_CODE XIF1290CM_CONTRACT_SUPPLY
SUPPLIER_BP_ID XIF801CONTRACT_SUPPLY
COMMISSION_LETTER_CODE XIF803CONTRACT_SUPPLY
COMPANY_CODE XIF803CONTRACT_SUPPLY
COUNTRY_CODE XIF803CONTRACT_SUPPLY
COMPANY_CODE XPKCM_CONTRACT_SUPPLY
CONTRACT_NUMBER XPKCM_CONTRACT_SUPPLY
COUNTRY_CODE XPKCM_CONTRACT_SUPPLY
SUPPLY_SEQUENCE_NUMBER XPKCM_CONTRACT_SUPPLY
VERSION_NUMBER XPKCM_CONTRACT_SUPPLY
3. We are querying the table for a particular contract_number and version_number. We want to avoid full table scan.
SELECT /*+ INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
However, despite the hint, the query results are fetched via a full table scan.
Execution Plan
0 SELECT STATEMENT Optimizer=RULE (Cost=1182 Card=1 Bytes=742)
1 0 TABLE ACCESS (FULL) OF 'CM_CONTRACT_SUPPLY' (Cost=1182 Card=1 Bytes=742)
4. I have tried giving
SELECT /*+ FIRST_ROWS + INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
and
SELECT /*+ CHOOSE + INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
But it does not work.
Is there some way, without changing the optimizer mode and without creating an additional index, that we can use the index instead of a full table scan?
David,
Here is my test on a Oracle 10g database.
SQL> create table mytable as select * from all_tables;
Table created.
SQL> set autot traceonly
SQL> alter session set optimizer_mode = choose;
Session altered.
SQL> select count(*) from mytable;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'MYTABLE' (TABLE)
Statistics
1 recursive calls
0 db block gets
29 consistent gets
0 physical reads
0 redo size
223 bytes sent via SQL*Net to client
276 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> analyze table mytable compute statistics;
Table analyzed.
SQL> select count(*) from mytable
2 ;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'MYTABLE' (TABLE) (Cost=11 Card=1788)
Statistics
1 recursive calls
0 db block gets
29 consistent gets
0 physical reads
0 redo size
222 bytes sent via SQL*Net to client
276 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options -
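Returning to the original question: one likely reason the posted hint was ignored is its syntax. Oracle's INDEX hint names the table (or its alias) first and then the index, i.e. /*+ INDEX(cm_contract_supply XAK11CM_CONTRACT_SUPPLY) */; a hint naming only the index resolves to no table in the query and is silently discarded. As a loose, runnable analogy (hypothetical table and index names), SQLite's INDEXED BY clause also requires naming the table's index explicitly, and unlike Oracle hints it raises an error when the name cannot be resolved:

```python
import sqlite3

# Hypothetical mini-version of the contract_supply case: a composite index
# on (contract_number, version_number) and an equality lookup on both columns.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE contract_supply
               (contract_number TEXT, version_number INTEGER, payload TEXT)""")
con.execute("""CREATE INDEX xak1_contract
               ON contract_supply (contract_number, version_number)""")
con.executemany("INSERT INTO contract_supply VALUES (?, ?, 'x')",
                [(f"{i:016d}", 1) for i in range(1000)])

# INDEXED BY pins the named index, much as a correctly written Oracle
# INDEX(table index) hint would; EXPLAIN QUERY PLAN confirms the index search.
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM contract_supply INDEXED BY xak1_contract
    WHERE contract_number = '0000000000000131' AND version_number = 1
""").fetchall()
print(plan[0][-1])
```

A misspelled index name in INDEXED BY fails loudly, which is friendlier than Oracle's behavior of silently ignoring a malformed hint and falling back to the full scan.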
Plan selection ranking of the RULE-BASED OPTIMIZER
Product: ORACLE SERVER
Date written: 1997-01-21
Priority of index use
======================
Even if indexes have been created on a table, depending on how the SQL is coded, more than one index may be used, or none at all.
Therefore, you should always secure a good access path by using an appropriate index.
Also, when all of the columns used in the WHERE clause of a statement are indexed columns, there is a priority among them: not every index is used; the index with the higher rank is used first.
The index rankings, from highest to lowest, are as follows:
1) Rowid = constant
2) Unique indexed column = constant
3) Entire unique concatenated index = constant
4) Entire cluster key = corresponding cluster key in another table in the same cluster
5) Entire cluster key = constant
6) Entire non-unique concatenated index = constant
7) Non-unique index = constant
8) Entire concatenated index >= constant
9) Unique indexed column BETWEEN low value AND high value, or unique indexed column LIKE 'C%'
10) Non-unique indexed column BETWEEN low value AND high value, or non-unique indexed column LIKE 'C%'
11) Unique indexed column < or > constant
12) Non-unique indexed column < or > constant
13) Sort/merge (joins only)
14) MAX or MIN of a single indexed column
15) ORDER BY entire index
16) Full table scan
This means that if indexes matching 2) and 3) are both usable in the WHERE clause of a SQL statement, the higher-ranked index of 2) is the one used. -
Rule based & Cost based optimizer
Hi,
What is the difference between the rule-based & cost-based optimizer?
Thanks
Without an optimizer, all SQL statements would simply do block-by-block, row-by-row table scans and table updates.
The optimizer attempts to find a faster way of accessing rows by looking at alternatives, such as indexes.
Joins add a level of complexity - the simplest join is "take an appropriate row in the first table, scan the second table for a match". However, deciding which is the first (or driving) table is also an optimization decision.
As technology has improved, a lot of different techniques for accessing the rows or joining the tables have been devised, each with its own optimum data-size:performance:cost curve.
Rule-Based Optimizer:
The optimization process follows specific defined rules, and will always follow those rules. The rules are easily documented and cover things like 'when are indexes used', 'which table is the first to be used in a join' and so on. A number of the rules are based on the form of the SQL statement, such as order of table names in the FROM clause.
In the hands of an expert Oracle SQL tuner, the RBO is a wonderful tool - except that it does not support such advanced features as query rewrite and bitmap indexes. In the hands of the typical developer, the RBO is a surefire recipe for slow SQL.
Cost-Based Optimizer:
The optimization process internally sets up multiple execution proposals and extrapolates the cost of each proposal using statistics and knowledge of the disk, CPU and memory usage of each of the proposals. It is not unusual for the optimizer to analyze hundreds, or even thousands, of proposals - remember, something as simple as a different order of table names is a proposal. The proposal with the least cost is generally selected to be executed.
The CBO requires accurate statistics to make reasonable decisions.
Even with good statistics, the complexity of the SQL statement may cause the CBO to make a wrong decision, or ignore a specific proposal. To compensate for this, the developer may provide 'hints' or recommendations to the optimizer. (See the 10g SQL Reference manual for a list of hints.)
The CBO has been constantly improving with every release since its inception in Oracle 7.0.12, but early missteps have given it a bad reputation. Even in Oracle8i and 9i Release 1, there were countless 'opportunities for improvement' (tm). As of Oracle 10g, the CBO is quite decent - sufficiently so that the RBO has been officially deprecated. -
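The statistics-driven behavior described above can be sketched with SQLite's planner from Python, as a stand-in for Oracle's CBO (table and index names here are invented for illustration). EXPLAIN QUERY PLAN plays the role of autotrace's explain plan, and ANALYZE plays the role of ANALYZE TABLE ... COMPUTE STATISTICS:

```python
import sqlite3

# A skewed table: few 'OPEN' rows, many 'CLOSED' ones.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
con.execute("CREATE INDEX idx_status ON orders (status)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, "OPEN" if i % 100 == 0 else "CLOSED") for i in range(10000)])

# The planner weighs candidate access paths and picks the cheapest; for an
# equality predicate on an indexed column it chooses an index search.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'OPEN'"
).fetchall()
print(plan[0][-1])   # e.g. "SEARCH orders USING INDEX idx_status (status=?)"

# Gathering statistics (sqlite_stat1 here; DBMS_STATS / ANALYZE in Oracle)
# records row counts and selectivity that feed later costing decisions.
con.execute("ANALYZE")
print(con.execute(
    "SELECT tbl, idx, stat FROM sqlite_stat1 WHERE idx = 'idx_status'"
).fetchone())
```

The same principle drives the CBO's requirement for accurate statistics: the plan is only as good as the row counts and selectivities the planner is costing with.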
Rule based ATP is not working for Components
Hi All,
Our requirement is to do the availability check through APO for sales orders created in ECC, so we are using gATP.
Requirement: We create a sales order for a BOM header (sales BOM), and the availability check should happen for the components, i.e. product availability & rule-based substitution.
Issue: Product availability is working for the components, but rule-based substitution is not working, meaning rules are not getting determined for the components.
Settings:
- The header does not exist in APO; the components do exist in APO
- Availability check is not enabled for the header item category and is enabled for the item category of the components
- Rules have been created for the components in APO
- Rule-based ATP is activated in the check instructions
We have also tried MATP for this, i.e. a PPM created in APO, but still didn't get the desired result.
If we create a sales order for the component material directly, then rule-based ATP happens; so it is only for components that rule-based ATP is not working.
How do we enable rule-based ATP for components? I mean, is there any different way to do the same?
Thanks for help.
Regards,
Jagadeesh
Hi Jagdeesh,
If you are creating the BOM in ECC and CIFing the PPM of the FG/header material to APO, I think you need to CIF the header material, too, with a material integration model.
Please include the header material in your integration models for material, SO and ATP check as well.
For the component availability check, you can use MATP; but for MATP, the FG should be in APO. You need not CIF any receipts of the FG (stock, planned orders, POs etc.), so that MATP will be triggered directly. Then maintaining rules for the RMs will enable selecting available RMs according to the rule created.
Regards,
Bipin -
Cost Based Optimizer (CBO)
Not sure if this is a daft question or what, but I am trying to find out where exactly it exists.
I know that when performing ST05 and viewing the execution plan, we see what the CBO has used, but is the CBO purely performed at the database server, and not at the SAP application?
When updating the statistics, are these passed to the database server, where once again the CBO utilizes them for the execution plan, or do the database statistics actually reside in the database server?
Finally, in viewing the execution plan, there is the statement "execution costs = xxx" (xxx being a numeric value). What exactly is xxx? Maybe an internal index used to compare execution plans, or maybe the number of blocks required to read the "estimated #rows"?
anyone ??
thanks
glen
Hello Glen,
So far as my knowledge is concerned, the statistics are actually located on the database server. That is what appears to be more logical too: what would be the use of maintaining the access paths on the application server? Most modern database servers are equipped with CBO functionality, and cost-based optimizing is dependent on the database.
Here's what the documentation says:
<i>You can update statistics on the Oracle database using the Computing Center Management System (CCMS). The transactions to be used are DB20 and DB21.
By running update statistics regularly, you make sure that the database statistics are up-to-date, so improving database performance. The Oracle cost-based optimizer (CBO) uses the statistics to optimize access paths when retrieving data for queries. If the statistics are out-of-date, the CBO might generate inappropriate access paths (such as using the wrong index), resulting in poor performance.
From Release 4.0, the CBO is a standard part of the SAP System. If statistics are available for a table, the database system uses the cost-based optimizer. Otherwise, it uses the rule-based optimizer.</i>
Regards,
Anand Mandalika. -
Re: Oracle 8i (8.1.7.4) Rule based v/s Cost based
Hi,
I would like to know the advantages/disadvantages of using RULE based optimizer v/s COST based optimizer in Oracle 8i. We have a production RULE based database and are experiencing performance issues on some queries sporadically.
TKPROF revealed:
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 3 94.67 2699.16 1020421 5692711 51404 0
Fetch 13 140.93 4204.41 688482 4073366 0 26896
total 16 235.60 6903.57 1708903 9766077 51404 26896
Please post your expert suggestions as soon as possible.
Thanks and Regards,
A
I think the answer you are looking for is that the rule-based optimizer is predictable, but cost-based optimizer results may vary depending on statistics of rows, indexes, etc. But at the same time, you can typically get better speed for OLTP relational databases with the CBO, assuming you have correct statistics and correct optimizer settings set.