Cost optimizer in 9i R2

Are there any known bugs in Oracle 9iR2's cost-based optimizer? I'm having an issue where queries run slowly against a partitioned database but quickly against a non-partitioned one, and it isn't just one query; several others behave the same way. I have searched Metalink but did not find a straight answer from Oracle. I don't know whether this is an optimizer problem or a bug. Has anyone seen this issue or can anyone suggest anything? I'm not sure whether I should post the query; I just need some feedback.

In case you overlooked the note at the top of this forum, this one is oriented toward discussion of how to improve the forums experience, so I'm not quite sure how your question relates to that topic.
Perhaps you would get a relevant answer by asking in a database-related forum?

Similar Messages

  • Regarding Azure Usage Cost optimization required

    Hi
    This post is about how to reduce my Azure usage costs. We are using Microsoft Azure Cloud Services (PaaS), specifically Web and Worker Roles.
    For each client we deploy 2 instances of a Web Role for the web application, 2 instances of a Service Role for the web services, and 2 instances of a Worker Role for running jobs, in order to meet the Azure SLA of 99.95% availability.
    Every client gets this same per-client deployment (apart from the test setup), so our Azure costs keep increasing.
    We have been using Microsoft Azure for more than 2 years, and because costs have risen recently with the added instances, we are looking for options to reduce them.
    Please suggest some options:
    Q1: Is Cloud Services (PaaS) a good option for smaller firms, or are there alternatives?
    Q2: What options do I have to optimize costs while continuing to use Cloud Services (PaaS)?
    Q3: Currently each client's web app is exposed as client1.cloudapp.net, client2.cloudapp.net, etc. Is there any way to combine all clients into one web app deployed on 2 instances and expose something like mainsite.cloudapp.net/client1, mainsite.cloudapp.net/client2,
    etc.?
    Q4: Is there a way to combine all the client-specific web services into one, and connect to client-specific SQL Azure databases?
    Q5: Similarly, for the scheduled jobs run by client-specific worker roles: can I combine them all into one and connect to the client-specific SQL Azure databases to run the jobs?
    Q6: I am currently on Azure Cloud Services (Web/Worker Roles); would it be worthwhile to move to alternatives such as Azure Websites or Azure VMs? If so, would that help with cost optimization even as the client count grows?
    I have gone through some websites/forums but did not find a good, optimized solution.
    Any options for cost reduction will be highly appreciated.

    Hi PVL_Narayana,
    This is a very complex question, but in the end it all comes down to application design (architecture). You can certainly design your web and worker roles to serve all your customers in a multi-tenant approach, and you can do the same for your databases.
    However, in my experience there are scenarios where a one-bucket-fits-all approach doesn't work for those customers who are willing to pay extra. It's entirely understandable to look at cutting costs now that your user pool has grown; thanks to Microsoft continuously dropping Azure prices, this moment usually arrives later than originally expected.
    There are some patterns I've blogged about and spoken about at various conferences that could benefit your multi-tenant application, such as the priority-queue pattern, the valet-key pattern, the event-based triggering pattern, the CQRS pattern, and especially the sharding pattern.
    As for moving away from Cloud Services to Azure Websites or VMs, I would personally advise against it, considering that: 1. You're happy with the no-management-required approach; otherwise you'll end up updating and patching your VMs' OS regularly and rebooting every once in a while, whereas with PaaS Microsoft does that for you. 2. You don't really want to mix your worker-role resources with your web resources. If your pool of users has grown, your services probably have to satisfy more requests per second as well, so allocating your web servers' resources to running WebJobs, rather than keeping dedicated servers that run only the worker roles, could upset your hardware sizing and end in serious performance impacts.
    Of course, these answers are quite generic given the lack of insight into your application.
    I'm happy to chat with you more on this subject, should you require any more intel.
    Alex

  • Bridge Cost Optimizer

    So here's a little background.
    I'm making a program that optimizes the cost of a bridge in a bridge designing program, given the length, the compression it has to support, and the tension it has to support.
    Each bar in the bridge (truss bridge design) has two variables you can edit (for the purposes of this program): size (x mm by x mm) and material.
    With each size, each material gets stronger, but the strongest of the size before is weaker than the weakest of the current size. Ex: 120x120 M3 is weaker than 130x130 M1 (M3 being the strongest, M1 being weakest).
    There are 32 somewhat arbitrary sizes, which I have already put into my program, along with the three materials.
    I have also input the 96 cost values, one for each possible cross-section/material combination.
    What I have done so far:
    My program will take in any number of members, given their length, compression force, and tensile force, and optimize them individually--that is, make each bar the lowest it can be. I have thoroughly tested this part of the program, and it is very successful. It will correctly output the type of material and size needed to minimize individual costs.
    How it works:
    I have a main class and a Member class. The Member class holds basic information:
    -The size "number" (the sizes are held in an array in the main class, and the Members just hold the index)
    -The needed compression strength
    -The needed tensile strength
    -The length
    -The current material
    The main class iterates through each member (all are stored in one array) with a basic for loop. Inside this loop, a while statement checks whether the bar will hold; if not, it steps up to the next strongest material, or to the next size with the weakest material.
    I am very confident that this part of the program works.
    What I need to do:
    Optimize lowest cost. Now, it sounds like my program already does it, but here's the catch: each unique kind of bar (120x120 M1 and 120x120 M2 are different) costs an extra $1000. What I was originally going to do was to have the program iterate through every single possibility and calculate the cost, giving the lowest cost and the corresponding members. I soon realized that with an average of about 25 members, this would be (32 x 3)^25 possibilities, which would take years to compute. I have never really worked with a program that needs such a high efficiency as this one.
    In addition, 25 "for" loops nested within each other wouldn't be practical, as the number of members can vary greatly.
    So, I'm not asking for help in the form of code. I need someone to help me create an algorithm that would be able to test only a few reasonable possibilities, in addition to making a program that can work for a varying number of members. Don't worry about the specifics, I only need the concepts, as I have much experience coding. I guess I'm just having trouble solving this problem.
    Thanks for your time! (even if you're not going to answer, you still read a lot of my rambling :P)
    P.S.--Adding filters like "has to be higher than the lowest and lower than the highest" doesn't work, as this still leaves you with 25 nested "for" loops and a long, long time to wait.

    Timbydude wrote:
    With each size, each material gets stronger, but the strongest of the size before is weaker than the weakest of the current size. Ex: 120x120 M3 is weaker than 130x130 M1 (M3 being the strongest, M1 being weakest).
    Increasing bar size should not affect material strength! And you should use standard bar sizes; see the blue book on the Corus website.
    Timbydude wrote:
    I have also input the 96 values for each of the costs for each cross section of material (all combinations) possible.
    Wrong approach:
    bar volume = C.S.A. x bar length
    bar mass = density of steel x volume
    bar cost = mass x cost per kg
    or see the blue book for cost per length of each section.
    Timbydude wrote:
    -The needed compression strength
    -The needed tensile strength
    What does this mean? Each bar should have an axial force, which is worked out by either the stiffness or the flexibility method. If this axial force is negative, the bar is in tension and really only needs to be checked against the stress criterion (F/A).
    If the axial force is positive, the bar needs various checks depending on the restraint conditions. For example, are you assuming your bars to be pinned-pinned or fixed-fixed?
    Timbydude wrote:
    What I need to do:
    Optimize lowest cost. Now, it sounds like my program already does it, but here's the catch: each unique kind of bar (120x120 M1 and 120x120 M2 are different) costs an extra $1000. What I was originally going to do was to have the program iterate through every single possibility and calculate the cost, giving the lowest cost and the corresponding members. I soon realized that with an average of about 25 members, this would be (32 x 3)^25 possibilities, which would take years to compute. I have never really worked with a program that needs such a high efficiency as this one.
    The minimal-weight truss problem has been covered extensively; you really need to read some journals. I would advise you to have a look at JGAP, which is a genetic algorithm framework. It shouldn't take you long to configure.
    Calypso
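
    For what it's worth, the $1000-per-distinct-bar-kind charge gives this problem enough structure that neither brute force nor a GA is strictly needed. The charge depends only on which kinds are purchased, and once a set of kinds is fixed, each member simply takes the cheapest purchased kind that is strong enough, so the search reduces to choosing a subset of the ~96 ordered types, which a small dynamic program solves exactly. A hedged sketch in Java (all class and field names hypothetical; it assumes a bar's base cost is length times the type's per-unit cost, and the single strength ordering the original post describes):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class BridgeCostDp {
        // A catalogue entry: strength is its rank in the size/material ordering,
        // unitCost is cost per unit length. Both names are hypothetical.
        record BarType(double strength, double unitCost) {}
        record Member(double length, double requiredStrength) {}

        static double optimize(List<BarType> catalogue, List<Member> members) {
            final double SETUP = 1000.0; // charge per distinct bar kind used

            // 1) Sort by strength, then drop dominated types: a type is useless
            //    if some stronger type costs no more per unit length.
            List<BarType> sorted = new ArrayList<>(catalogue);
            sorted.sort(Comparator.comparingDouble(BarType::strength));
            List<BarType> types = new ArrayList<>();
            double cheapestStronger = Double.POSITIVE_INFINITY;
            for (int i = sorted.size() - 1; i >= 0; i--) {
                if (sorted.get(i).unitCost() < cheapestStronger) {
                    cheapestStronger = sorted.get(i).unitCost();
                    types.add(sorted.get(i));
                }
            }
            Collections.reverse(types); // ascending in strength AND in unit cost
            int t = types.size();

            // 2) Bucket total member length by weakest feasible type (1-based).
            double[] len = new double[t + 1];
            for (Member m : members) {
                int k = lowestFeasible(types, m.requiredStrength());
                if (k < 0) throw new IllegalArgumentException("no type is strong enough");
                len[k + 1] += m.length();
            }
            double[] prefix = new double[t + 1];
            for (int k = 1; k <= t; k++) prefix[k] = prefix[k - 1] + len[k];
            if (prefix[t] == 0) return 0.0; // no members at all
            int need = t; // strongest type any member actually requires
            while (len[need] == 0) need--;

            // 3) dp[k] = cheapest cover of all members needing index <= k, given
            //    that k is the strongest type purchased. Because unit costs ascend
            //    after pruning, each member takes the weakest purchased type that
            //    suffices, so each purchased type serves a contiguous band.
            double[] dp = new double[t + 1]; // dp[0] = 0: nothing purchased yet
            double best = Double.POSITIVE_INFINITY;
            for (int k = 1; k <= t; k++) {
                dp[k] = Double.POSITIVE_INFINITY;
                for (int p = 0; p < k; p++) { // p = previously purchased type
                    double cand = dp[p] + SETUP
                            + types.get(k - 1).unitCost() * (prefix[k] - prefix[p]);
                    if (cand < dp[k]) dp[k] = cand;
                }
                if (k >= need && dp[k] < best) best = dp[k];
            }
            return best;
        }

        private static int lowestFeasible(List<BarType> ascending, double required) {
            for (int i = 0; i < ascending.size(); i++)
                if (ascending.get(i).strength() >= required) return i;
            return -1;
        }
    }

    With 96 types the double loop is under ten thousand iterations, so it runs instantly no matter how many members there are; the pruning step is what makes the "weakest feasible purchased type" assignment provably optimal.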

  • SQL Optimization with join and in subselect

    Hello,
    I am having trouble finding a way to optimize a query that joins a fact table to several dimension tables (star schema) and has a constraint expressed as an IN (SELECT ...). I was hoping this constraint would filter the fact table first and then perform the joins, but I am seeing just the opposite: the optimizer joins first and filters at the very end. I am using the cost-based optimizer and saw that it places IN subselects last in the predicate order. I tried the PUSH_SUBQ hint with no success.
    Does anyone have any other suggestions?
    Thanks in advance,
    David
    example sql:
    select ....
    from fact, dim1, dim2, ..., dim<n>
    where
    fact.dim1_fk in ( select pk from dim1 where code = '10' )
    and fact.dim1_fk = dim1.pk
    and fact.dim2_fk = dim2.pk
    and fact.dim<n>_fk = dim<n>.pk

    The original query probably shouldn't use the IN clause, because in this example it is not necessary. There is no limit on the values returned when a sub-select is used; the limit is only an issue with hard-coded literals like
    .. in (1, 2, 3, 4 ...)
    Something like this is okay even in 8.1.7:
    SQL> select count(*) from all_objects
      2  where object_id in
      3    (select object_id from all_objects);
      COUNT(*)
         32378
    The IN clause has its uses and performs better than EXISTS under some conditions. Blanket statements to avoid IN and always use EXISTS instead are just nonsense.
    Martin
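
    For reference, a sketch of the rewrite implied above: in the posted example the IN subquery is redundant, because dim1 is already joined and the filter can sit directly on it (assuming code = '10' was the subselect's only purpose):
    select ....
    from fact, dim1, dim2, ..., dim<n>
    where dim1.code = '10'
    and fact.dim1_fk = dim1.pk
    and fact.dim2_fk = dim2.pk
    and fact.dim<n>_fk = dim<n>.pk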

  • CTM or optimizer in Pharma Industry

    Dear Gurus,
    Are CTM or the Optimizer usually used in the pharma industry? I understand the reasoning behind selecting an SNP solution: if you just need simple supply planning with capacity levelling, you go for the heuristics; if you need rule-based planning, you go for CTM; and if you need cost optimization, you go for the Optimizer.
    In your experience, do pharma companies go for CTM or the Optimizer? While I understand it's specific to the business model, do pharma industry processes support CTM or the Optimizer?
    Thanks.

    Visu,
    I shall put forth a different argument, based on my experience with one of the pharma companies mentioned by Ken and another that has not been listed in the discussion so far.
    In my experience, the pharma industry is supply-driven rather than demand-driven.
    As a result, PPDS is pretty much the most important planning engine, where the long-term sequencing of orders is carried out. Campaign-based planning is pretty much a mandatory requirement for API (active ingredient) manufacturing, and SNP is not capable of campaign-based planning to begin with, so it does not matter whether it's CTM or the Optimizer. Frankly speaking, I find the heuristic much easier to understand and handle.
    One global pharma major uses SNP for long-term planning (and heuristics at that), basically to pass demand from the markets all the way to the API manufacturing plants. PPDS is then run over a shorter horizon depending on the product level. For APIs the PPDS plan is almost a year out, because in campaign planning each campaign (for a particular API) typically lasts 2-3 months; that means if a manufacturing line (a complex series of pressure vessels and piping) has to produce 4 different APIs, you will make the same API again after about a year (3 months per campaign x 4 APIs). For semi-finished products (such as forming the tablets) the production process is faster, so PPDS planning is done perhaps 6 months out rather than a year. As for the final finishing and packing lines, each production run lasts a few days, so the PPDS plan there is only about 3 months out. Given these different production levels, you need to maintain the master data parameters suitably to match the PPDS and SNP planning runs.
    This client did not use the SNP Optimizer or the CTM engine in their planning while I was working there, and as far as I know they still don't.
    On the other hand, another Indian pharma major started its APO implementation using SNP, especially the CTM engine, and over time, as they rolled out more and more production planning at the plants, they realized CTM was not serving the purpose. As far as I know, they were in the process of switching to PPDS for production planning and keeping CTM only close to the demand side, i.e. the distribution centers.
    Hope this helps.
    Somnath

  • Discoverer Optimizer

    One of our applications uses the rule-based optimizer at the DB level, and by default Discoverer uses the cost-based optimizer. We do a lot of ad-hoc reporting in this area. Will changing the optimizer in Discoverer help our performance?

    and one more ...
    ==================================================================
    Subject: Which Optimizer Is Being Used In Discoverer? Cost or Rule
    Doc ID: Note:1055786.6    Type: PROBLEM
    Last Revision Date: 26-NOV-2002    Status: PUBLISHED
    PROBLEM DESCRIPTION:
    ====================
    Does Discoverer use the cost-based or rule-based optimizer?
    SOLUTION DESCRIPTION:
    =====================
    There have been a number of questions recently about how the optimizer
    chooses execution paths based on the presence of hints, statistics and the
    optimizer_mode init.ora parameter. The rules are described in detail in
    Chapter 8 of the Oracle 8 Tuning Guide.
    There have also been questions about the Discoverer registry entry
    "UseOptimizerHints".
    If you do not need the details, for best results:
    * leave "UseOptimizerHints" to its default value of 0,
    * ensure the init.ora parameter optimizer_mode is set to CHOOSE,
    * analyze the application tables, using ANALYZE TABLE command.
    Which optimizer will be used:
    The optimizer used will be the COST optimizer if there is a valid optimizer
    hint in the SQL statement that Discoverer generates, regardless of all other
    factors. Check the SQL inspector to look at the SQL generated. The kernel
    parses SQL comments to determine if the comment is really a hint, and if so,
    will ALWAYS, UNDER ALL CIRCUMSTANCES use the cost optimizer, even if there
    are no statistics, and even if the init.ora parameters are set to use the
    rule optimizer by default. Only if there is no optimizer hint in the SQL
    statement, the following table shows what happens:
    Optimizer Mode   Got           Optimizer used
    parameter        Statistics?
    CHOOSE           Y             Cost
    CHOOSE           N             Rule
    RULE             Y             Rule
    RULE             N             Rule
    ALL_ROWS         Y             Cost
    ALL_ROWS         N             Cost (stats estimated from table storage info)
    FIRST_ROWS       Y             Cost
    FIRST_ROWS       N             Cost (stats estimated from table storage info)
    In general, it makes sense to set the optimizer to CHOOSE, when the
    presence of statistics (generated from the analyze table command) will
    determine which optimizer is best. In Discoverer, we will not get Query
    Performance prediction unless the COST optimizer is used, since the QPP
    algorithm depends on getting an estimated cost for the query.
    How do I get an optimizer hint in the query?
    First, only consider this if you are familiar with Oracle tuning. You
    don't need this for query prediction; you only do this if you want to
    force a particular access path method on the optimizer. Set the hint in
    the folder properties dialog. See the Oracle 8 Tuning Guide for details
    of syntax.
    NOTE: Any correct hint will force use of the cost optimizer under
    ALL circumstances, even if the tables are not analyzed.
    What does the registry entry "UseOptimizerHints" do?
    The default value is 0 and it should not be changed. The registry entry
    is only there to allow compatibility with Discoverer 3.0.7 which always
    generated an optimizer hint (+ALL_ROWS or +FIRST_ROWS) in the SQL. If
    you set "UseOptimizerHints" = 1, it will revert to this behavior. The
    effect of this is that the cost optimizer will be used under ALL
    circumstances (see above). If your tables have not been analyzed, this
    can result in poor query performance. UseOptimizerHints should always be
    set to 0. If the tables are analyzed, you will use the cost optimizer,
    and if they are not you will use the rule based optimizer.
    What is the effect on Query Prediction?
    Query prediction depends on Discoverer getting a cost for the query from
    the cost optimizer. If you follow the guidelines above, and the tables are
    analyzed, you will get query prediction, without having to worry about
    optimizer hints. If the rule optimizer is used, due to one of the factors
    above, you will not get query prediction. Setting "UseOptimizerHints = 1"
    forces use of the cost optimizer, and you WILL then get query prediction,
    but, unless the tables are analyzed you will get poor accuracy and poor
    query performance, since the cost optimizer will not have the information
    it needs to determine the best access path. And, of course, if the tables
    ARE analyzed, and your optimizer_mode is set to CHOOSE, as recommended, you
    will get query prediction and good performance, so you don't need the
    UseOptimizerHints setting.
    NOTE: To get query prediction, you need to ensure the steps outlined
    in the release notes are also followed.
    The behavior of the OPTIMIZER_MODE parameter is the same in Oracle7 and
    Oracle8, so this solution is correct for both.
    REFERENCES:
    ===========
    Discoverer Release Notes (as of 3.0.8)
    Oracle Server Tuning Guide, Release 8.0.x
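
    Putting the note's three recommendations into practice looks roughly like this (the table names are placeholders for the application tables):
    REM init.ora:
    REM   optimizer_mode = CHOOSE
    ANALYZE TABLE sales COMPUTE STATISTICS;
    ANALYZE TABLE products COMPUTE STATISTICS;
    and leave the "UseOptimizerHints" registry entry at its default of 0.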

  • What is meant by estimated costs and estimated rows in SQL explain (ST05)?

    Hi
    I was just wondering if someone could explain/clarify exactly what is meant by estimated costs and estimated rows in the 'explain' / execution path functionality of ST05.
    For example, we could see a SQL statement was very inefficient accessing a table:
    Estimated Costs = 6.006.615 , Estimated #Rows = 0
    Does this literally mean that for 6 million costs / reads / effort, 0 results were returned??
    Is this a ratio of efficiency?
    We built an appropriate index and now we have:
    Estimated Costs = 2 , Estimated #Rows = 1
    A lot better! The job was taking 40+ hours and being cancelled; now it takes 5 minutes. So a 3 million times improvement sounds realistic...
    However, we had another instance where the explain showed:
    ( Estim. Costs = 195.077 , Estim. #Rows = 538.660 )
    and we built an index, and now the explain is:
    ( Estimated Costs = 41.867 , Estimated #Rows = 538.660 )
    What exactly does this mean - as the costs has been reduced, but the rows is the same?
    Thanks
    Ross

    Hi Ross,
    >I was just wondering if someone could explain/clarify exactly what is meant by estimated costs and estimated rows in the >'explain' / execution path functionality of ST05
    Take a look at note 766349, point 20.
    >An EXPLAIN displays "Estimated Costs" and "Estimated Rows", which
    >are simply the CBO's calculation results (refer to Note 7550631).
    >Since these results are based upon a number of assumptions (column
    >values are distributed equally, statistics), and depend upon the
    >database parameter settings, the calculated costs and rows are
    >useful only within a margin for error. High "Estimated Costs" and
    >"Estimated Rows" are therefore neither a satisfactory nor a
    >necessary indication of an expensive SQL statement. Also, the
    >calculated costs have no actual effect upon the performance - the
    >deciding costs are always the actual ones, in the form of BUFFER
    >GETs, DISK READs, or processed rows.
    So the costs and rows are values conjured up by the cost optimizer when calculating the access path that is most likely to be efficient. THEY ARE ESTIMATES!!!
    >Does this literally mean that for 6 million costs / reads / effort, 0 results were returned??
    As per the above, no. The costs and rows are estimated before the rows are fetched so there are no actual results yet.
    >What exactly does this mean - as the costs has been reduced, but the rows is the same?
    An efficient database access is exactly that: it reads only the blocks that contain the rows it needs and nothing else. An inefficient access spends time reading blocks that contain no data that ends up in the result set.
    This question would be better placed in the Oracle forum...
    Regards,
    Peter
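
    On the Oracle side, a minimal way to see the CBO's estimated cost and rows for a statement (9i and later bundle DBMS_XPLAN; the statement and names below are placeholders):
    EXPLAIN PLAN FOR
      SELECT * FROM ztable WHERE zcol = :val;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    The Cost and Rows columns of the output are exactly the estimates that ST05's explain shows - predictions made before any row is fetched.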

  • About the ORACLE7 Optimizer

    Product: SQL*Plus
    Date written: 1995-11-02
    If the index you expect does not appear in the execution plan of the trace output, you need to analyze the table:
    analyze table table_name compute statistics;
    To check the optimizer mode:
    svrmgrl
    connect internal;
    show parameter optimize
    The reason for doing this is as follows.
    The optimizer's job is to choose the best way to execute a query.
    Compared with Oracle V6, Oracle7 adds two major query-optimizer capabilities: the cost-based approach and the hint mechanism.
    With the cost-based approach, the optimizer determines the available access paths and builds several execution plans based on them. A cost is estimated for each of these execution plans, and the optimizer selects the plan with the lowest cost.
    The optimizer uses statistics to estimate the cost of an execution plan. The statistics are generated by the ANALYZE command and can be viewed through the following data dictionary views:
    DBA/ALL/USER_TABLES, DBA/ALL/USER_TAB_COLUMNS,
    DBA/USER_CLUSTERS, DBA/ALL/USER_INDEXES
    ANALYZE command syntax:
    ANALYZE { INDEX index | TABLE table | CLUSTER cluster }
            { COMPUTE | ESTIMATE | DELETE } STATISTICS
    [COMPUTE ]: gives exact figures, but puts a considerable load on the database, so apply it to tables with a small number of records (roughly 20,000 records or fewer).
    [ESTIMATE]: apply to tables with more than 20,000 records.
    The Oracle7 optimizer uses the cost-based optimizer if statistics exist for at least one table, and the rule-based optimizer if no table has statistics.
    Oracle7 also offers a hint mechanism with which the user can force the optimizer to use a desired execution plan.
    The hints include COST (cost-based approach), NOCOST (rule-based approach), FULL (full table scan), INDEX (use a certain index), and ORDERED (join the tables in the order they appear in the FROM clause).
    The following examples illustrate how the optimizer behaves.
    CASE 1) Oracle Version 6 optimizer
    Table TAB1: 16,384 rows
    Table TAB2: 1 row
    a. With TAB2 as the driving table:
    select count(*) from TAB1, TAB2        -- 0.96 seconds elapsed
             Count   Phys     cr    cw2   rows
    Parse        1      0      0      0
    Execute      1     95    100      4      0
    Fetch        1      0      0      0      1
    b. With TAB1 as the driving table:
    select count(*) from TAB2, TAB1        -- 26.09 seconds elapsed
             Count   Phys     cr    cw2   rows
    Parse        1      0      0      0
    Execute      1     95  49247  32770      0
    Fetch        1      0      0      0      1
    In this case driving from TAB2 is more efficient, so the application should be written as in (a).
    CASE 2) Oracle Version 7 optimizer
    If the tables are not analyzed, the results are similar to the Version 6 case. If both tables are analyzed, you get the following results:
    a. select count(*) from TAB1, TAB2     -- 1.20 seconds elapsed
             Count   Phys     cr    cw2   rows
    Parse        1      0      0      0
    Execute      1      0      0      0      0
    Fetch        1    123    124      5      1
    b. select count(*) from TAB2, TAB1     -- 1.32 seconds elapsed
             Count   Phys     cr    cw2   rows
    Parse        1      0      0      0
    Execute      1      0      0      0      0
    Fetch        1    123    124      5      1
    These results show that in Oracle7, once the tables have been analyzed, performance is improved and stays essentially the same even if the table order is swapped, which reduces the application-tuning burden.
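
    A hedged illustration of the hint mechanism described above, using standard Oracle hint-comment syntax and the TAB1/TAB2 tables from the example:
    -- Join the tables in the order they appear in the FROM clause:
    SELECT /*+ ORDERED */ COUNT(*) FROM TAB1, TAB2;
    -- Force a full table scan of TAB1:
    SELECT /*+ FULL(TAB1) */ COUNT(*) FROM TAB1;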


  • SNP Optimizer

    Hi All,
    When we run the SNP Optimizer in the background scheduler, several buckets don't get planned and are missing planned orders. When we run it in the foreground, that is in Interactive SNP Planning, it works fine and plans for every bucket.
    I checked the parameters and everything looks fine. I also checked the Optimizer log data. When we run it in the background I get a warning message saying that the non-delivery penalty cost is too low, but again, when we run it from the Interactive SNP Planning screen it works fine.
    Thanks
    Rob

    Check whether the optimization profile used in interactive planning is different from the one used in background mode.
    For the background-mode optimization profile, try maintaining considerably higher relative penalty costs.
    If you don't maintain any penalty costs, the optimizer will skip creating orders, because no cost is incurred if that specific requirement is not met.
    Also check material and capacity availability; if some other order (for a different material, probably using the same resource) carries higher penalty costs, the optimizer will consider that one first.
    Hope this helps

  • How to use indexes correctly in oracle

    Hi guys
    On one table I have a composite index on 3 columns in the following order:
    a
    b
    c
    When I'm writing a select statement, what should the order in my where clause be? Like:
    where
    a='some value' and
    b='some value' and
    c='some value';
    or in reverse order, like this:
    c='some value' and
    b='some value' and
    a='some value';
    please let me know.
    Thanks

    If you have an index on (a,b,c), any performance difference comes from the order of columns in the index, not from the order of predicates in the WHERE clause; with equality predicates on all three columns, the two orderings you show are equivalent to the optimizer.
    If column "a" has only 2 unique values, each index sub-tree still holds 50% of the remaining values, and Oracle's cost-based optimizer will probably skip the index, since an index scan over 50% of a table is slower than a full table scan.
    If "a" has 100 unique values, the remaining search covers only 1% of the values, so the index is likely to be used.
    As with any optimisation, check what the optimizer actually does:
    explain plan for
    select x, y, z
    from a_table
    where
    a = 'some value' and
    b = 'some value' and
    c = 'some value';
    If the index is not being used, first gather statistics:
    analyze table a_table estimate statistics
    for all indexes
    for all indexed columns
    sample 10 percent;
    and failing that, try different index column orders in an acceptance environment.
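
    To read the plan back after the EXPLAIN PLAN above (available from Oracle 9i onward):
    select * from table(dbms_xplan.display);
    and check whether the index appears as an INDEX RANGE SCAN step.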

  • Problems with execution plans of OL queries in MGP

    I'm facing some strange behavior in the OL MGP process. Its performance is really poor on one of our servers, and when I ran Consperf I found that the execution plans look really weird. It looks like OL doesn't use the available indexes at all, even though the statistics are fine; when I execute the same SQL manually, the execution plan looks totally different, with almost no TABLE ACCESS FULL lookups. Is there any OL setup property that could cause this strange behavior?
    Consperf explain plan output for one of the snapshots:
    ********** BASE - Publication item query ***********
    SELECT d.VISITID, d.TASKID, d.QTY FROM HEINAPS.PA_TASKS d
    WHERE d.VisitID IN (SELECT h.VisitID FROM HEINAPS.PA_VISITS_H_LIST h WHERE h.DSM = ?)
    | Operation              | Name             | Rows | Bytes | Cost  | Optimizer |
    | SELECT STATEMENT       |                  |    1 |    24 |     0 | ALL_ROWS  |
    |  FILTER                |                  |      |       |       |           |
    |   HASH JOIN RIGHT SEMI |                  |   2M |   61M | 20743 |           |
    |    TABLE ACCESS FULL   | PA_VISITS_H_LIST | 230K |    2M |   445 | ANALYZED  |
    |    TABLE ACCESS FULL   | PA_TASKS         |  11M |  134M |  6522 | ANALYZED  |
    explain plan result of the same query executed in Pl/SQL Developer:
    UPDATE STATEMENT, GOAL = ALL_ROWS                               Cost=3345  Cardinality=39599  Bytes=2969925
     UPDATE  MOBILEADMIN.CMP$JPHSK_PA_TASKS
      HASH JOIN ANTI                                                Cost=3345  Cardinality=39599  Bytes=2969925
       TABLE ACCESS BY INDEX ROWID  MOBILEADMIN.CMP$JPHSK_PA_TASKS  Cost=1798  Cardinality=39599  Bytes=910777
        INDEX RANGE SCAN  MOBILEADMIN.CMP$1527381C                  Cost=239   Cardinality=49309
       VIEW  SYS.VW_SQ_1                                            Cost=1547  Cardinality=29101  Bytes=1513252
        NESTED LOOPS                                                Cost=1547  Cardinality=29101  Bytes=640222
         INDEX RANGE SCAN  HEINAPS.IDX_PAVISITSHL_DSM_VISITID       Cost=39    Cardinality=1378   Bytes=16536
         INDEX RANGE SCAN  HEINAPS.PK_PA_TASKS                      Cost=2     Cardinality=21     Bytes=210
    This query and also few others run in MGP for few minutes for each user, because of the poor execution plan. Is there any method how to force OL to use "standard" execution plans the DB produces to get MGP back to usable performance?

    The problem is that the MGP process does not run the publication item query as such. What it does is wrap it up inside insert and update statements and then execute those via Java, and this is what can cause problems.
    Set the trace to ALL for MGPCOMPOSE on a user, wait for the MGP cycle, and you will find a series of trace files for that user. Look through these and you should find the actual wrapped-up query that is executed. This should also be in the Consperf file. Consperf should give a few different execution stats for the query (ins_1, ins_2); if one of these is better, set it in c$consperf. The automatic setting does not always choose the best one.
    If all else fails, try expressing the query in other ways and testing them in the MGP process. I have found that this kind of trial and error is the only approach.
    A couple of points about the query below:
    1) Do you specifically need to restrict the columns from HEINAPS.PA_TASKS? If not, use SELECT * in the publication item select statement, as it tends to bind better.
    2) What is the data type of HEINAPS.PA_VISITS_H_LIST.DSM? If it is numeric, then do a TO_NUMBER() on the bind variable, as the implicit type casting is not very efficient.
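
    On point 2, the idea is to convert the bind variable explicitly rather than letting Oracle implicitly cast the indexed column (which defeats the index). A sketch, assuming DSM is numeric:
    SELECT d.VISITID, d.TASKID, d.QTY
    FROM HEINAPS.PA_TASKS d
    WHERE d.VisitID IN (SELECT h.VisitID
                        FROM HEINAPS.PA_VISITS_H_LIST h
                        WHERE h.DSM = TO_NUMBER(?))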

  • Invisible Indexes

    Oracle DB 11g Enterprise Edition
    Release 11.2.0.3.0 - 64bit
    Folks,
    We have a job for which we created invisible indexes. We use ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE; however, our job is still not picking up the indexes.
    The job is run as a PeopleSoft process. We have tried other session commands in the past and they have worked fine. Other than an /*+ index(table index) */ hint, is there another way to make sure invisible indexes are used?
    Thank you
    Aj

    Aj09 wrote:
    Oracle DB 11g Enterprise Edition
    Release 11.2.0.3.0 - 64bit
    Folks,
    We have a job for which we created invisible indexes. We use ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE; however, our job is still not picking up the indexes.
    The job is run as a PeopleSoft process. We have tried other session commands in the past and they have worked fine. Other than an /*+ index(table index) */ hint, is there another way to make sure invisible indexes are used?
    Thank you
    Aj
    No - Oracle isn't going to use ANY index unless that yields the lowest-cost plan. Why would you want to use an index if there is a lower-cost option?
    See the FAQ for how to post a tuning request and the information that you need to provide. That information includes:
    1. the DDL for the tables and indexes
    2. the query being used
    3. the statement you execute when you collect the stats for the tables and indexes
    4. row counts for the tables and query predicates being used
    5. the actual execution plan
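
    For completeness, the moving parts under discussion, as a sketch (object names hypothetical); note that even with the session parameter set, the index is used only when it produces the cheapest plan:
    CREATE INDEX emp_dept_ix ON emp (deptno) INVISIBLE;
    ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;
    EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- does EMP_DEPT_IX appear?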

  • Top Link Special Considerations in moving to Cost Based Optimizer....

    Our current application architecture consists of a Java-based application running against an Oracle 9i database, with TopLink as the object-relational mapping tool. This is a hosted application, about 5 years old, with stringent SLA requirements and high-availability needs. We currently use the rule-based optimizer (RBO) mode and do not collect statistics for the schemas. We are planning a move to the cost-based optimizer (CBO).
    What special considerations do we need to be aware of when moving from the RBO to the CBO, from a TopLink perspective? Is TopLink code optimized for one mode over the other? What special parameter settings are needed? Any experience you have in moving TopLink-based applications, and any best practices, would be very much appreciated.
    -Thanks
    Ganesan Maha

    Ganesan,
    Over the 10 years we have been delivering TopLink, I do not recall any issues with customizing TopLink for either approach. You have the ability to customize how the SQL is generated, and even to replace the generated SQL with custom queries should you need to. This requires no application changes, simply modifications to the TopLink metadata.
    As of 9.0.4 you can also provide hints in the TopLink query and expression framework that will be generated into the SQL to assist the optimizer.
    Doug
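
    A hedged sketch of the 9.0.4-era hint support Doug mentions (package and method names as I recall them from the TopLink query framework; verify against your release, and Employee here is a hypothetical mapped class):
    import oracle.toplink.queryframework.ReadAllQuery;
    // Attach an optimizer hint; TopLink injects the string into the SQL
    // it generates for this query. "session" is an active TopLink Session.
    ReadAllQuery query = new ReadAllQuery(Employee.class);
    query.setHintString("/*+ FIRST_ROWS */");
    List results = (List) session.executeQuery(query);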

  • Penalty costs in the optimizer

    I have a requirement in my project that sales orders should have a higher priority than the forecast. I have defined the delay penalty for sales orders as 1 per unit per day, and the delay penalty for the forecast as 0.01 per unit per day. I have defined the maximum delay as 45 days for
    both the sales orders and the forecast. This is how I have set up all the products.
    Does this setting ensure that the sales orders are always satisfied before the forecast is
    satisfied by the deployment optimizer? Could there be a scenario where the forecast is satisfied before the sales orders under the above penalty structure?
    Thanks in advance.
    Regards,
    Venkat

    Hi Venkat,
    You must maintain distinctly higher penalty costs in the SNP 1 tab for customer demand than for the demand forecast.
    But I do not understand the statement "sales orders should always be fulfilled before the forecast". If you are talking about the same location, what difference does it make?
    If you are saying that there is a forecast for one product and a sales order for another, both manufactured on the same resource, and that the product with the sales order should be produced in preference to the product with the forecast, then the above solution should work.
    Regards,
    Nitin Thatte

  • Rule based & Cost based optimizer

    Hi,
    What is the difference Rule based & Cost based optimizer ?
    Thanks

    Without an optimizer, all SQL statements would simply do block-by-block, row-by-row table scans and table updates.
    The optimizer attempts to find a faster way of accessing rows by looking at alternatives, such as indexes.
    Joins add a level of complexity; the simplest join is "take an appropriate row in the first table, scan the second table for a match". However, deciding which is the first (or driving) table is also an optimization decision.
    As technology has improved, many different techniques for accessing the rows or joining the tables have been devised, each with its own optimum data-size:performance:cost curve.
    Rule-Based Optimizer:
    The optimization process follows specific defined rules, and will always follow those rules. The rules are easily documented and cover things like 'when are indexes used' and 'which table is used first in a join'. A number of the rules are based on the form of the SQL statement, such as the order of table names in the FROM clause.
    In the hands of an expert Oracle SQL tuner, the RBO is a wonderful tool, except that it does not support such advanced features as query rewrite and bitmap indexes. In the hands of the typical developer, the RBO is a surefire recipe for slow SQL.
    Cost-Based Optimizer:
    The optimization process internally sets up multiple execution proposals and extrapolates the cost of each proposal using statistics and knowledge of the disk, CPU, and memory usage of each proposal. It is not unusual for the optimizer to analyze hundreds, or even thousands, of proposals; remember, something as simple as a different order of table names is a proposal. The proposal with the lowest cost is generally selected for execution.
    The CBO requires accurate statistics to make reasonable decisions.
    Even with good statistics, the complexity of the SQL statement may cause the CBO to make a wrong decision, or to ignore a specific proposal. To compensate for this, the developer may provide 'hints', i.e. recommendations, to the optimizer. (See the 10g SQL Reference manual for a list of hints.)
    The CBO has been improving constantly with every release since its inception in Oracle 7.0.12, but early missteps gave it a bad reputation. Even in Oracle8i and 9i Release 1, there were countless 'opportunities for improvement' <tm>. As of Oracle 10g, the CBO is quite decent - sufficiently so that the RBO has been officially deprecated.
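
    Since the CBO lives or dies by its statistics, the usual companion step is gathering them; a sketch using DBMS_STATS, the successor to ANALYZE for this purpose (schema, table, and index names are placeholders):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SCOTT',
        tabname => 'EMP',
        cascade => TRUE);  -- gather statistics for the table's indexes too
    END;
    /
    And an example of a hint nudging the CBO toward a specific access path:
    SELECT /*+ INDEX(e emp_name_ix) */ *
    FROM scott.emp e
    WHERE e.ename = 'KING';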
