Cost Based Query
I removed the RULE hints and tried the query below on 2 different databases, both the same version (10g).
On one database the query completed in 4 hours; on the other it took 3 days.
Both databases have similar data.
Where am I going wrong?
Any feedback?
Do any optimizer parameters need to be set?
select /*+ RULE */ 'PO_COMMITMENT' record_type,
b.org_id,
NULL invoice_number,
a.PO_NUMBER,
a.PO_REVISION,
a.RELEASE_NUMBER,
a.CREATION_DATE,
a.APPROVED_DATE,
b.NEED_BY_DATE,
b.PROMISED_DATE,
a.BUYER_NAME,
a.VENDOR_NAME,
a.PO_LINE,
replace(a.ITEM_DESCRIPTION, chr(10),' ') item_description,
a.QUANTITY_ORDERED,
a.AMOUNT_ORDERED,
a.QUANTITY_CANCELLED,
a.AMOUNT_CANCELLED,
a.QUANTITY_DELIVERED,
a.AMOUNT_DELIVERED,
a.QUANTITY_INVOICED ,
a.AMOUNT_INVOICED*nvl(pod.rate,1) AMOUNT_INVOICED,
a.Amount_outstanding_invoice,
a.PROJECT_ID,
a.TASK_ID,
a.EXPENDITURE_ITEM_DATE,
a.ACCT_EXCHANGE_RATE,
a.denom_CURRENCY_CODE,
a.PO_HEADER_ID,
a.PO_RELEASE_ID,
pod.po_line_id REQUISITION_HEADER_ID,
a.po_line_location_id REQUISITION_LINE_ID ,
pod.po_distribution_id invoice_id,
EXPENDITURE_ORGANIZATION ,
null po_status,
pod.accrue_on_receipt_flag po_line_status,
null requisioner_name,
0 commitment_amt,
pod.po_header_id xpo_header_id,
pod.po_distribution_id xpo_distribution_id,
pod.distribution_num DISTRIBUTION_LINE_NUMBER
from pa_proj_appr_po_distributions a,
po_distributions pod,
po_line_locations b
where a.PO_LINE_LOCATION_ID = b.LINE_LOCATION_ID
and a.po_distribution_id = pod.po_distribution_id
and b.line_location_id = pod.line_location_id
and b.org_id = :p_org_id
and a.project_id > 0
UNION ALL
SELECT /*+ RULE */ 'REQ_COMMITMENT' ,
:p_org_id,
NULL,
REQ_NUMBER ,
NULL ,
NULL,
CREATION_DATE ,
to_date(null),
NEED_BY_DATE ,
to_date(null),
null,
vendor_name,
REQ_LINE ,
replace(ITEM_DESCRIPTION, chr(10), ' '),
QUANTITY ,
AMOUNT ,
0,
0,
0,
0,
0,
0,
0,
PROJECT_ID ,
TASK_ID ,
EXPENDITURE_ITEM_DATE ,
ACCT_EXCHANGE_RATE ,
denom_CURRENCY_CODE,
0,
0,
REQUISITION_HEADER_ID,
REQUISITION_LINE_ID ,
0 invoice_id,
EXPENDITURE_ORGANIZATION ,
null po_status,
null po_line_status,
REQUESTOR_NAME requisioner_name,
AMOUNT ,
0,
0,
0 DISTRIBUTION_LINE_NUMBER
FROM pa_proj_appr_req_distributions
union all
select /*+ RULE */ 'INVOICE_COMMITMENT' ,
:p_org_id,
b.INVOICE_NUM,
NULL,
NULL,
NULL,
b.creation_date,
a.INVOICE_DATE,
a.GL_DATE ,
to_date(null),
NULL,
a.VENDOR_NAME,
0,
replace(a.DESCRIPTION,chr(10),' '),
0,
0,
0,
0,
0,
0,
a.QUANTITY,
a.AMOUNT ,
0,
a.PROJECT_ID ,
a.TASK_ID ,
a.EXPENDITURE_ITEM_DATE ,
a.ACCT_EXCHANGE_RATE ,
a.denom_CURRENCY_CODE ,
0,
0,
aid.po_distribution_id,
aid.invoice_distribution_id,
a.invoice_id,
a.EXPENDITURE_ORGANIZATION ,
null,
null,
null,
a.amount ,
0,
0,
a.DISTRIBUTION_LINE_NUMBER
FROM
pa_proj_ap_inv_distributions a,
ap_invoices b,
ap_invoice_distributions aid
where a.invoice_id = b.invoice_id
and aid.invoice_id = b.invoice_id
and a.distribution_line_number = aid.distribution_line_number
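When the same query takes 4 hours on one 10g database and 3 days on another under the CBO, the usual first check is whether both databases have current optimizer statistics on the tables involved. A minimal sketch follows; the schema name APPS and the use of defaults are assumptions, not from the post:

```sql
-- Hypothetical sketch: refresh statistics on the joined tables so the CBO
-- sees comparable data on both databases. Schema name is an assumption.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPS',
                                tabname => 'PO_DISTRIBUTIONS',
                                cascade => TRUE);   -- also gather index stats
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPS',
                                tabname => 'PO_LINE_LOCATIONS',
                                cascade => TRUE);
END;
/
```

Comparing the EXPLAIN PLAN output from both databases after gathering statistics should show whether the slow instance was choosing a different join order or access path.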
Hello,
This is a duplicate post. Please close this thread and provide the requested information in your previous post.
Regards
Similar Messages
-
Re: Oracle 8i (8.1.7.4) Rule based v/s Cost based
Hi,
I would like to know the advantages and disadvantages of using the RULE-based optimizer versus the COST-based optimizer in Oracle 8i. We have a production RULE-based database and are sporadically experiencing performance issues on some queries.
TKPROF revealed:
call     count       cpu    elapsed       disk      query    current       rows
------- ------  --------  ---------  ---------  ---------  ---------  ---------
Parse        0      0.00       0.00          0          0          0          0
Execute      3     94.67    2699.16    1020421    5692711      51404          0
Fetch       13    140.93    4204.41     688482    4073366          0      26896
total       16    235.60    6903.57    1708903    9766077      51404      26896
Please post your expert suggestions as soon as possible.
Thanks and Regards,
A

I think the answer you are looking for is that the rule-based optimizer is predictable, while cost-based optimizer results may vary depending on statistics for rows, indexes, etc. At the same time, you can typically get better speed for OLTP relational databases with the CBO, assuming you have correct statistics and the correct optimizer settings.
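One way to compare the two optimizers on a given query is to switch the mode at session level and look at the resulting plans. A hedged sketch, on 9i or later (8i would use utlxpls.sql instead of DBMS_XPLAN); the table and predicate are placeholders:

```sql
-- Compare the rule-based and cost-based plans for the same statement.
-- emp / deptno are placeholder names, not from the thread.
ALTER SESSION SET optimizer_mode = RULE;
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

ALTER SESSION SET optimizer_mode = CHOOSE;   -- CBO when statistics exist
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Seeing a cost column in the second plan but not the first confirms which optimizer handled each execution.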
-
Top Link Special Considerations in moving to Cost Based Optimizer....
Our current application architecture consists of a Java-based application with Oracle 9i as the database and TopLink as the object-relational mapping tool. This is a hosted application, about 5 years old, with stringent SLA requirements and high-availability needs. We are currently using Rule Based Optimizer (RBO) mode and do not collect statistics for the schemas. We are planning a move to the Cost Based Optimizer (CBO).
What special considerations do we need to be aware of in moving from RBO to CBO from a TopLink perspective? Is TopLink code optimized for one mode over the other? What special parameter settings are needed? Any of your experience in moving TopLink-based applications to CBO, and any best practices, will be very much appreciated.
-Thanks
Ganesan Maha

Ganesan,
Over the 10 years we have been delivering TopLink I do not recall any issues with customizing TopLink for either approach. You do have the ability to customize how the SQL is generated and even replace the generated SQL with custom queries should you need to. This will not require application changes but simply modifications to the TopLink metadata.
As of 9.0.4 you can also provide hints in the TopLink query and expression framework that will be generated into the SQL to assist the optimizer.
Doug -
When SQL statements are switched from RULE to COST-BASED
Product: ORACLE SERVER
Date written: 2004-05-28
When SQL statements are switched from RULE to COST-BASED
==============================================
PURPOSE
This note looks at the cases in which a SQL statement is automatically
switched to cost-based mode.
Explanation
Even when a SQL statement is executed in rule-based mode, the optimizer
may switch it to cost-based mode.
This can happen when the SQL uses any of the following features:
- Partitioned tables
- Index-organized tables
- Reverse key indexes
- Function-based indexes
- SAMPLE clauses in a SELECT statement
- Parallel execution and parallel DML
- Star transformations
- Star joins
- Extensible optimizer
- Query rewrite (materialized views)
- Progress meter
- Hash joins
- Bitmap indexes
- Partition views (release 7.3)
- Hints (any hint other than RULE or DRIVING_SITE)
- FIRST_ROWS or ALL_ROWS optimizer mode (operates as CBO even without statistics)
- A parallel degree or INSTANCES setting on a table or index (including DEFAULT)
- A domain index (e.g., a Text index) created on the table -
I have the following Select Statement:
SELECT FGBTRND_SUBMISSION_NUMBER, FGBTRND_TRANS_AMT, FGBTRND_COAS_CODE, FGBTRND_FUND_CODE, FGBTRND_ORGN_CODE,
FGBTRND_ACCT_CODE, FGBTRND_PROG_CODE, FGBTRND_ACTV_CODE, FGBTRND_LOCN_CODE, FGBTRND_RUCL_CODE
FROM FGBTRND
WHERE FGBTRND_DOC_CODE = 'F0022513'
AND FGBTRND_RUCL_CODE IN ( SELECT FGBTRNH_RUCL_CODE FROM FGBTRNH
WHERE FGBTRNH_DOC_CODE = 'F0022513' )
AND FGBTRND_LEDGER_IND='O'
AND FGBTRND_FIELD_CODE='03' --:B4 01 02 03
AND DECODE('Y','Y',BWFKPROC.F_SECURITY_FOR_WEB_FNC(FGBTRND_COAS_CODE, FGBTRND_FUND_CODE, FGBTRND_ORGN_CODE, 'PBEED'),'Y' ) = 'Y'
AND ((FGBTRND_SUBMISSION_NUMBER IS NULL AND '0' IS NULL) OR (FGBTRND_SUBMISSION_NUMBER='0' ))
This statement is ok without the following:
AND DECODE('Y','Y',BWFKPROC.F_SECURITY_FOR_WEB_FNC(FGBTRND_COAS_CODE, FGBTRND_FUND_CODE, FGBTRND_ORGN_CODE, 'PBEED'),'Y' ) = 'Y'
The call is to a security package which has to evaluate to Y in order for the user to see the result. The statement as a whole works fine provided the DECODE in the WHERE clause is evaluated last. However, the cost-based optimizer is determining that it needs to evaluate it first.
Question is:
How do I get the cost based optimizer to evaluate the decode last and not first?
I am on 10.2.0.3
Patrick Churchill

user3390467 wrote:
" Consider setting your optimizer_index_caching parameter to assist the cost-based optimizer. Set the value of optimizer_index_caching to the average percentage of index segments in the data buffer at any time, which you can estimate from the v$bh view.
Can someone give me the query to use to estimate from v$bh view mentioned above?
What are the other considerations for setting this parameter for optimization?

This post, and the flood of your other posts, appear to be quoting sections of a Statspack Analyzer report. Why are you posting this material here?
If you want to set the optimizer_index_caching initialization parameter, first determine the purpose of the parameter. Next, determine if the current value of the parameter is causing performance problems. Next, determine if there are any unwanted side-effects. Finally, test the changed parameter, possibly at the session level or through an OPT_PARAM hint in affected queries.
Here is a link to the starting point. http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams159.htm
Blindly changing parameters in response to vague advice is likely to lead to problems.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
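The kind of v$bh estimate the poster asked about can be sketched as below. This is a hedged illustration, not from the thread: it assumes DBA privileges, that v$bh.objd joins to dba_objects.data_object_id, and it treats the index share of all mapped buffers as the estimate.

```sql
-- Rough estimate of the percentage of buffer cache blocks that belong
-- to index segments. Buffers not mapped to a current object are excluded
-- by the join; results vary from moment to moment.
SELECT ROUND(100 * SUM(CASE WHEN o.object_type LIKE 'INDEX%' THEN 1
                            ELSE 0 END) / COUNT(*), 1) AS pct_index_blocks
FROM   v$bh b
JOIN   dba_objects o
  ON   o.data_object_id = b.objd;
```

As the reply notes, sample this over time and under representative load before drawing any conclusion from a single snapshot.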
Hi all,
please forward information about rule-based queries. I am not actually getting what a rule-based query is; please help me out.
Thanks and Regards,
Santosh

The rule-based optimizer is an older technique and cannot be used efficiently to optimize a query; nowadays only cost-based optimization is effective, and these are the categories under which a query can be tuned. Usually an EXPLAIN PLAN on a query will show you whether rule- or cost-based optimization was used. Thank you
-
Rule based & Cost based optimizer
Hi,
What is the difference between the rule-based and the cost-based optimizer?
Thanks

Without an optimizer, all SQL statements would simply do block-by-block, row-by-row table scans and table updates.
The optimizer attempts to find a faster way of accessing rows by looking at alternatives, such as indexes.
Joins add a level of complexity - the simplest join is "take an appropriate row in the first table, scan the second table for a match". However, deciding which is the first (or driving) table is also an optimization decision.
As technology improves, a lot of different techniques for accessing the rows or joining the tables have been devised, each with its own optimum data-size:performance:cost curve.
Rule-Based Optimizer:
The optimization process follows specific defined rules, and will always follow those rules. The rules are easily documented and cover things like 'when are indexes used', 'which table is the first to be used in a join' and so on. A number of the rules are based on the form of the SQL statement, such as order of table names in the FROM clause.
In the hands of an expert Oracle SQL tuner, the RBO is a wonderful tool - except that it does not support such advanced features as query rewrite and bitmap indexes. In the hands of the typical developer, the RBO is a surefire recipe for slow SQL.
Cost-Based Optimizer:
The optimization process internally sets up multiple execution proposals and extrapolates the cost of each proposal using statistics and knowledge of the disk, CPU and memory usage of each of the proposals. It is not unusual for the optimizer to analyze hundreds, or even thousands, of proposals - remember, something as simple as a different order of table names is a proposal. The proposal with the least cost is generally selected to be executed.
The CBO requires accurate statistics to make reasonable decisions.
Even with good statistics, the complexity of the SQL statement may cause the CBO to make a wrong decision, or ignore a specific proposal. To compensate for this, the developer may provide 'hints' or recommendations to the optimizer. (See the 10g SQL Reference manual for a list of hints.)
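As a small illustration of such a hint, the sketch below forces an index access path the optimizer might otherwise reject. The table and index names are invented for the example:

```sql
-- INDEX hint: ask the CBO to use the named index on alias e.
-- emp / emp_dept_ix are placeholder names, not from the thread.
SELECT /*+ INDEX(e emp_dept_ix) */ empno, ename
FROM   emp e
WHERE  deptno = 10;
```

A hint is a recommendation, not a command: if the named index does not exist or cannot be used for the predicate, the optimizer silently ignores it.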
The CBO has been constantly improving with every release since its inception in Oracle 7.0.12, but early missteps have given it a bad reputation. Even in Oracle8i and 9i Release 1, there were countless 'opportunities for improvement' <tm>. As of Oracle 10g, the CBO is quite decent - sufficiently so that the RBO has been officially deprecated. -
Cost Based Oracle Volume 1: Fundamentals - The most awaited topic book
List,
I was very happy to learn from the AskTom site that Mr. Jonathan Lewis has written a book on a most awaited topic, i.e. Cost Based Oracle - Volume 1. I say most awaited because I haven't found any book on this topic - a few technical papers, maybe, but a complete book dedicated to this topic is new, at least in my view. What more can you expect when an Oracle expert writes such a book, providing in-depth information and, not to forget, proven tests.
The book should be available in book stalls from November onwards. Following is the link and the book index; if someone is interested, please read it.
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

A couple of years back I had a developer bring me a poorly running query that came from a vendor package. I looked at it, asked a bunch of dumb questions, and then rewrote it to drive differently. It ran great. Then I told the developer that since it was a canned package I didn't know what good my version was going to do him.
A week later the developer came back and told me that the vendor had put my version of the query into their product. Considering that most of their installations are on SQL Server while we run on Oracle I found that amazing.
Sometimes the unexpected happens and it is not a bad thing. But I do not recommend wagering money on a vendor accepting a performance improvement suggestion. Most of them act like your shop is the only one having performance issues with their product.
-- Mark D Powell -- -
Hi,
Is it possible to find out how the cube is aggregated when you use cost-based aggregation?
The cost-based aggregation is giving me reasonable load times, disk usage and query times. But I can't use it because one of my hierarchies changes rather often, causing the complete cube to be re-aggregated. If I use level-based aggregation I can overcome this problem, but I am having trouble finding the best configuration for which levels to aggregate on.
Regards / Magnus

Magnus,
I think you are asking about dynamically aggregating over a hierarchy (or some parts of a hierarchy, like a level or a member).
AWM does not expose that kind of functionality, but it is there in OLAP.
You can set the levels or even parent members for which the cube data is pre-aggregated. For all the other levels or parent members, it will be dynamically aggregated. It's done through PRECOMPUTE.
Here is some explanation. The example is about doing complete dynamic aggregation over a hierarchy. Then I mention other PRECOMPUTE conditions that you can use.
Let's say you want the cube to be dynamically aggregated over a hierarchy at query time (instead of pre-aggregating over that hierarchy): you can set the PrecomputeCondition of the cube by selecting the dimension and setting PrecomputeCondition to NONE. If you describe the AGGMAP for this cube (in OLAP Worksheet), you will then see PRECOMPUTE(NA) for that dimension. In the case of uncompressed cubes, the AGGMAP may still show PRECOMPUTE(<valueset>), but that valueset will be empty.
You can also query ALL_CUBES view to see the PRECOMPUTE settings. For more PRECOMPUTE options look at RELATION statement documentation in AGGMAP at http://docs.oracle.com/cd/E11882_01/olap.112/e17122/dml_commands_1006.htm#i1017474
EXAMPLE:
begin
  dbms_cube.import_xml(q'!
<Metadata Version="1.3">
  <Cube Name="BNSGL_ACTV" Owner="BAWOLAP">
    <Organization>
      <AWCubeOrganization PrecomputeCondition="BAWOLAP.PRODUCT NONE"/>
    </Organization>
  </Cube>
</Metadata> !');
end;
In addition to NONE, the other options for PRECOMPUTE are
(1). ALL
(2). AUTO
(3). n%
(4). levels of dimensions to be precomputed
(5). a list of one or more parent members to be precomputed. For rest of the parent members, dynamic aggregation will be done at query time.
(6). According to the documentation, some conditional statements can be used as well (although I have not tried it). For example:
PRECOMPUTE (geography.levelrel 'L3')
PRECOMPUTE (LIMIT(product complement 'TotalProd'))
PRECOMPUTE (time NE '2001')
Note that there may be a bug because of which the dimensions (over which the dynamic aggregation is desired) should be the last dimensions in the aggregation order.
For your situation, you should look at (4), (5) or (6). -
Adding HINTS produces a cost-based plan?
I have a SQL statement with Oracle hints. If I do an EXPLAIN PLAN report on this SQL, there is data under Rows, Bytes and Cost. If I remove the hints from the SQL, the explain plan has no data under rows, bytes and cost, and a note: rule based optimization.
If I compute statistics on one of the tables used by the SQL, using ANALYZE TABLE as recommended, then I get a third explain plan, with data under rows, bytes and cost.
So how, in the absence of statistics, can hints help produce a cost-based plan?

When you provide hints in a SQL statement you are typically controlling the execution path and the nature of the joins that the statement is choosing. This can give you good results, or it can slow down the performance of your query as time passes and the database is subjected to changes.
If on the other hand you choose COST-based optimization and collect statistics as recommended by Oracle, then you let the optimizer think instead of doing the thinking yourself, which yields competitive performance when you let the optimizer engine decide the execution plan. So if I were you, I would think of performing the following tasks.
1) Collect the statistics for all the tables and indexes referenced in the SQL statement.
2) Set the optimizer goal to CHOOSE.
3) Vary the optimizer sampling size while collecting the statistics using the ANALYZE command. In the past I have noticed that optimizer behavior will change with the sampling, so you might have to adjust your stats while using the ANALYZE command to fine-tune the behavior of the SQL statement.
4) This should improve the performance of your query. -
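Steps (1) and (3) above can be sketched with ANALYZE as below (DBMS_STATS is Oracle's recommended replacement; the table and index names are placeholders):

```sql
-- Full computation of statistics on a table (placeholder name 'orders').
ANALYZE TABLE orders COMPUTE STATISTICS;

-- Sampled statistics: vary the sample size to see how plans change,
-- as suggested in step (3) above.
ANALYZE TABLE orders ESTIMATE STATISTICS SAMPLE 20 PERCENT;

-- Indexes can be analyzed separately (placeholder name 'orders_pk').
ANALYZE INDEX orders_pk COMPUTE STATISTICS;
```

On larger tables, COMPUTE can be expensive; a sampled ESTIMATE is often a reasonable trade-off while experimenting.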
How can I know the database is using Cost Based or Rule Based?
Hi all expertise,
How can I know the database is using Cost Based or Rule Based?
If it is using cost-based optimization, what methods do I need to use to minimize the cost when the database is running? And in which tables can I see the performance of the database?
Thanks
Amy

To see the database setting, use this:
SQL> show parameter optimizer
NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
optimizer_dynamic_sampling           integer     1
optimizer_features_enable            string      9.2.0
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     100
optimizer_max_permutations           integer     2000
optimizer_mode                       string      CHOOSE
CHOOSE means that if table statistics are available it will use the cost-based optimizer;
otherwise
it will use the rule-based optimizer.
To see the performance of a query, use
set autotrace on
and run your query.
If it doesn't show a cost, it means it used the rule-based optimizer.
To use the cost-based optimizer,
you calculate table statistics:
9i:
dbms_stats.gather_table_stats('owner','table');
8i:
analyze table <table_name> compute statistics;
Hope it will help you
kuljeet pal singh -
Costing based CO-PA extraction to DSO
Hello Expert,
I need to load the line item data to a level 1 DSO. On the source system, the system uses the combination of the following key fields to disallow duplicate records:
PALEDGER
VRGAR
VERSI
PERIO
PAOBJNR
PASUBNR
BELNR
POSNR
But, the datasource only includes the following fields from the source system.
PALEDGER
VRGAR
VERSI
PERIO
BELNR
POSNR
How is the system going to handle duplicate records in the absence of these two fields (PAOBJNR and
PASUBNR) in the DataStore Object (DSO), given that they are part of the key fields in the source system?
Please let me know about it.
Thanks.
Bhai.

Hi,
For costing-based CO-PA extraction, please follow these steps:
1. Create a CO-PA DataSource in KEB0 (costing based) and test this extractor in RSA3.
2. On the BW side, create a CO-PA cube using the InfoSource (to which the generated CO-PA DataSource is assigned in BW) as a template; move the characteristics, key figures and time characteristics, and assign the dimensions as required.
I suggest making a copy of any of the existing CO-PA cubes below and then adding/deleting the missing fields.
CO-PA: Published Key Figures
CO-PA: Quickstart (S_GO)
CO-PA: Route Profitability (S_AL)
Please check this link for costing-based CO-PA:
http://help.sap.com/saphelp_nw04/helpdata/en/b0/06644ca87011d1b5750000e82de856/content.htm
Hope this helps,
Regards
CSM Reddy -
Activating Account based Copa in the existing Cost-based COPA
Hi SAP CO gurus!
Please advise me on the following issue.
Presently, my client is using cost-based COPA for segment-wise reporting (already activated).
Now they want the reports in account-based COPA. Is it possible to activate account-based COPA now? Will the system allow us to activate it or not?
If yes, what will be the implications? I want to know the pros and cons.
If not, how can I edit the COPA settings (operating concern)? Or shall I delete the operating concern and create a new one?
Can you please explain the impact of each situation?
Thanks a lot in advance
Rama

Hi Joe
You can deactivate account-based COPA, but you will have to do thorough testing so that you are aware of the issues that can crop up.
The IMG menu is SPRO > Controlling > Prof Analysis
You will have to test how the open sales orders would behave... Open sales orders means sales orders where PGI has been done but billing is pending, AS WELL AS sales orders which have just been created but where no logistics movement (PGI) has taken place... You will have to test both types of sales orders.
- When you do PGI after deactivating account-based COPA, you may face an error, because the COGS GL account is a cost element in account-based COPA, whereas in costing-based COPA it is usually not a cost element.
- Also do a billing from an open sales order and see if you get any error there...
A similar issue can arise during variance settlement as well, because the variance account is not a cost element in costing-based COPA.
Test out the above scenarios and do share your experiences
Regards
Ajay M -
Does the cost of a query depend on the number of rows in the table?
Also, does the cost change from database version to version?
For example, if I check the cost of a query in Oracle 8i and then check the same
query in Oracle 9i, will there be any difference in the cost of the query?

Yogesh,
All pedantry aside, the cost does broadly reflect the resource consumption associated with a given query. Accessing a greater number of rows will usually result in a greater cost. And yes, the cost may change greatly from one version of the database to another. Up through 9i the cost was meant to reflect i/o consumption. As of 10g it reflects both i/o and cpu consumption.
However, you probably can't do anything useful with the cost and it can in fact mask problems. Here's one reasonably common problem. You have a staging table into which you load data before processing it. You gather statistics when the table is empty, and then load several million rows into it. Now the optimizer associates a very low cost with fetching data from the table. It will generate queries with very low costs that involve repeatedly reading the entire contents of the table. These queries will perform miserably.
The solution is to gather statistics on this table after loading it. At this point appropriate plans will be generated that perform well. The cost of these queries will be much larger than the cost of the old queries that performed miserably.
And in all of the above I'm being terribly simplistic. I use the cost as follows: when I see an execution plan with a substantially lower or higher cost for a given step than makes sense when compared with the other steps in the plan then I assume that the optimizer does not have enough information. -
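The staging-table pattern described above can be sketched as follows. The table names, the APPEND hint and the use of defaults are placeholders for illustration, not from the thread:

```sql
-- Load the staging table first (placeholder names stg_orders / orders_src),
-- then gather statistics so the optimizer costs subsequent queries against
-- the real row count instead of the empty-table statistics.
INSERT /*+ APPEND */ INTO stg_orders
SELECT * FROM orders_src;
COMMIT;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                tabname => 'STG_ORDERS');
END;
/
```

Queries compiled before the gather may keep their old low-cost plans, so it pays to run the gather before the processing step that reads the staging table.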
Respected Guru's
I have a problem with a block-based query. I have already used this block-based query and it was working too.
But here I am facing a problem, because I have used a global variable to pass the value needed to execute it,
and used SET_BLOCK_PROPERTY to execute this query:
go_block('block_name');
execute_query;
But it is showing "unable to perform the query".
How do I fix this? Please help me out...

Dear...
There are some possible reasons.
1- Among the items in the data block, some field is not a database field, yet you have set the item property
"Database Item" = 'Yes', even though this field is not in the table.
2- A user rights problem.
3- Check the block property: the property QUERY may be set to NO.
THX.