Usage of Hints

Hello all,
I need suggestions on tuning this SQL statement.
Example:
Select * from dept
where deptno = (select max(a.deptno)
from dept a
where a.deptno = deptno.deptno)
The query is a simplified stand-in for my requirement.
Can using the FIRST_ROWS hint on the correlated subquery make
my query execute faster?
Normally we use FIRST_ROWS for better response time rather
than better throughput.
For the correlated subquery we get only one record which
qualifies for the parent query.
Please suggest whether there is a better way to improve the performance of
the query. The table contains 100,000 records.

I don't understand the meaning of your query.
The result of:
select * from dept d
where deptno = (select max(a.deptno)
from dept a
where a.deptno = d.deptno);
is logically the same as:
select * from dept;
Which version of Oracle are you using?
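If the intent was simply to return the row with the highest DEPTNO, a non-correlated subquery expresses that directly. A minimal sketch of that assumed intent (not necessarily the poster's requirement):
select *
  from dept
 where deptno = (select max(deptno) from dept);
With an index on DEPTNO the inner query is a cheap MIN/MAX index access, so on a 100,000-row table this form typically needs no FIRST_ROWS hint at all.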

Similar Messages

  • Use of hints in query performance

    Hi
    Please let me know the actual usage of hints in query tuning, and how we write hints to increase performance.
    Will the query below give better performance with these hints? If the hints are not used, will performance degrade?
    SELECT /*+ ORDERED INDEX (b, jl_br_balances_n1) USE_NL (j b)
    USE_NL (glcc glf) USE_MERGE (gp gsb) */
    b.application_id ,
    b.set_of_books_id ,
    b.personnel_id,
    p.vendor_id Personnel,
    p.segment1 PersonnelNumber,
    p.vendor_name Name
    FROM jl_br_journals j,
    jl_br_balances b,
    gl_code_combinations glcc,
    fnd_flex_values_vl glf,
    gl_periods gp,
    gl_sets_of_books gsb,
    po_vendors p

    942919 wrote:
    Please let me know the actual usage of hints in query tuning, and how we write hints to increase performance.
    The majority of hints would be used to diagnose a performance problem by identifying a better query plan and fixing the underlying reason that the optimizer did not select that plan automatically. Hints used in this way would be removed from the query after the cause of the performance problem was fixed.
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#i8327
    Hints change the access paths and methods the optimizer chooses, so before using a hint, you need to understand what the optimizer does, what access methods are, when they are chosen, and what they are best used for.
    To do that you need to read the Performance Tuning Guide
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/toc.htm
    At a minimum, read and understand these sections:
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/perf_overview.htm#i1006218
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/optimops.htm#i21299
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/ex_plan.htm#i19260
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/stats.htm#i13546
    Then you should be able to use a hint safely.
    Will the query below give better performance? If the hints are not used, will performance degrade?
    Not true; hints change the performance of queries: they can make them slower as well as faster. Here is an example of an index hint slowing down a query:
    {message:id=1989089}
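    As a minimal sketch of that diagnostic workflow (the table and index names here are made up for illustration), compare the hinted and unhinted plans, then remove the hint once the underlying cause, such as stale statistics, has been fixed:
    -- plan the optimizer picks on its own
    EXPLAIN PLAN FOR
    SELECT * FROM emp e WHERE e.deptno = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- plan when the index access path is forced
    EXPLAIN PLAN FOR
    SELECT /*+ INDEX(e emp_deptno_ix) */ * FROM emp e WHERE e.deptno = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);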

  • Use of Hints in performance tuning

    Hi
    Please let me know the actual usage of hints in query tuning, and how we write hints to increase performance.
    Will the query below give better performance with these hints? If the hints are not used, will performance degrade?
    SELECT /*+ ORDERED INDEX (b, jl_br_balances_n1) USE_NL (j b)
    USE_NL (glcc glf) USE_MERGE (gp gsb) */
    b.application_id ,
    b.set_of_books_id ,
    b.personnel_id,
    p.vendor_id Personnel,
    p.segment1 PersonnelNumber,
    p.vendor_name Name
    FROM jl_br_journals j,
    jl_br_balances b,
    gl_code_combinations glcc,
    fnd_flex_values_vl glf,
    gl_periods gp,
    gl_sets_of_books gsb,
    po_vendors p

    You'll likely have better luck posting this to an Oracle database forum, such as this one: PL/SQL
    The forum you have posted this to is for Oracle Policy Automation.
    Kind regards,
    Davin.

  • Subquery Factoring and Materialized Hint

    WITH t AS
            (SELECT MAX (lDATE) tidate
               FROM rate_Master
              WHERE     Code = 'G'
                    AND orno > 0
                    AND TYPE = 'L'
                    AND lDATE <= ':entereddate')
    SELECT DECODE (:p1,  'B', RateB,  'S', RateS,  Rate)
      FROM rate_Master, t
    WHERE     Code = 'G'
           AND orno > 0
           AND TYPE = 'L'
           AND NVL (lDATE, SYSDATE) = tidate;
    In the given example the subquery returns just one row because of the aggregate function MAX. Will making it into a WITH clause be of any benefit? Also, I presume that subquery factoring is really useful only when the subquery placed in the WITH clause returns more rows. Is my interpretation right?
    Secondly, is adding the /*+ MATERIALIZE */ hint to a WITH query mandatory, or will the optimizer do it by itself and perform a temp table transformation? In my example I am forced to give the hint in the query. Please discuss and help.
    Thanks in advance.

    ramarun wrote:
    WITH t AS
    (SELECT MAX (lDATE) tidate
    FROM rate_Master
    WHERE     Code = 'G'
    AND orno > 0
    AND TYPE = 'L'
    AND lDATE <= ':entereddate')
    SELECT DECODE (:p1,  'B', RateB,  'S', RateS,  Rate)
    FROM rate_Master, t
    WHERE     Code = 'G'
    AND orno > 0
    AND TYPE = 'L'
    AND NVL (lDATE, SYSDATE) = tidate;
    In the given example the subquery returns just one row because of the aggregate function MAX. Will making it into a WITH clause be of any benefit? Also, I presume that subquery factoring is really useful only when the subquery placed in the WITH clause returns more rows. Is my interpretation right?
    I am not aware of any performance benefit due to the use of the WITH clause. IMO, it just eases the job of writing a subquery multiple times in a query.
    The solution you adopted has to hit the cache twice and hence does not look very performant. I would advise you to opt for analytic functions (like the suggestion I provided in another thread). If the solution does not yield correct results, then provide a script that we can replicate (create table, sample insert statements and the expected output).
    select decode(:p1, 'B', RateB, 'S', RateS, Rate)
      from (
             select RateB, RateS, Rate, NVL(ldate, sysdate) ldate,
                    dense_rank() over (order by case
                                                  when NVL(lDATE, SYSDATE) <= ':entereddate' then NVL(lDATE, SYSDATE)
                                                  else to_date('01/01/1970', 'DD/MM/YYYY')
                                                end DESC) rn
               from rate_Master
              where Code = 'G'
                and orno > 0
                and type = 'L'
           ) a
     where a.rn = 1;
    Secondly, is adding the /*+ MATERIALIZE */ hint to a WITH query mandatory, or will the optimizer do it by itself and perform a temp table transformation? In my example I am forced to give the hint in the query. Please discuss and help.
    Usage of hints is only for debugging purposes and is not meant for production code. It is when you have to ascertain why the CBO chooses a plan you do not expect that you use hints to force your plan, find its cost and analyze it. Hence, I do not support the idea of hints in production code.
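    A quick way to see whether the optimizer materializes the factored subquery on its own is to compare the plans with and without the hint and look for a TEMP TABLE TRANSFORMATION step. A sketch against the rate_Master query above (the hint placement is the only assumption here):
    EXPLAIN PLAN FOR
    WITH t AS (SELECT /*+ MATERIALIZE */ MAX(ldate) tidate
                 FROM rate_Master
                WHERE code = 'G' AND orno > 0 AND type = 'L')
    SELECT r.rate
      FROM rate_Master r, t
     WHERE r.code = 'G' AND NVL(r.ldate, SYSDATE) = t.tidate;
    -- a TEMP TABLE TRANSFORMATION line in this output means the WITH clause was materialized
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);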

  • Performance Issue in SQL Statement Using Hints

    Hi,
    I am using Oracle version 10G.
    I am using an INSERT statement:
    INSERT /*+ PARALLEL (TEST,4) */ INTO TEST(COL1,COL2,COL3) SELECT COL1,COL2,COL3 FROM DUMMY;
    I am using the above statement to increase performance.
    Will the usage of hints increase the performance?
    Any help would be much appreciated.
    Thanks and Regards

    user598986 wrote:
    I am using Oracle version 10G.
    I am using an INSERT statement:
    INSERT /*+ PARALLEL (TEST,4) */ INTO TEST(COL1,COL2,COL3) SELECT COL1,COL2,COL3 FROM DUMMY;
    For increasing the performance I am using the above statement.
    Will the usage of hints increase the performance?
    The way you're asking the question suggests that you're not sure what this particular hint is supposed to imply.
    Using this hint suggests that you want to take advantage of direct-path parallel DML operations. Note that you explicitly need to enable parallel DML in your session; it is disabled by default because it has some significant implications and restrictions. You should think about parallelizing the query on DUMMY, too, if it is not marked as PARALLEL in the dictionary, because otherwise you're combining a parallel DML operation with a serial query, which might not be that efficient.
    Note that there is no general answer to the question if this particular hint will actually increase the performance of the DML statement. There are many things to consider, among them are if your system scales reasonably with parallel operations and if the underlying object structure actually allows to benefit from the parallel operation. There are cases where a serial operation might be faster than a parallel operation.
    For more information about direct-path and parallel execution, its implications and restrictions, see the documentation:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_9014.htm#i2163698
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/usingpe.htm#CACEJACE
    You can use the EXPLAIN PLAN and the DBMS_XPLAN.DISPLAY function to get the execution plan of your statement that shows you what kind of parallel operations the optimizer estimates to perform.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
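    To make the reply above concrete, here is a minimal sketch (assuming the TEST and DUMMY tables from the post) that enables parallel DML in the session, parallelizes both the insert and the query side, and checks the plan:
    ALTER SESSION ENABLE PARALLEL DML;
    EXPLAIN PLAN FOR
    INSERT /*+ PARALLEL(TEST,4) */ INTO TEST (COL1, COL2, COL3)
    SELECT /*+ PARALLEL(DUMMY,4) */ COL1, COL2, COL3 FROM DUMMY;
    -- PX row sources plus LOAD AS SELECT indicate a direct-path parallel insert
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);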

  • Are hints useful for report performance?

    Hi all,
    My question is: I want to use Oracle hints while retrieving data from the Oracle database (11g) so that the SQL behind an OBIEE report executes fast.
    Is that a good idea?
    Does it increase performance or degrade it?
    regards,
    ramakrishna

    Hi,
    1) the use of hints can have a positive, negative or zero effect on a statement's performance, depending on many things
    2) there are several problems that can be caused by usage of hints
    - it becomes more difficult to support the code
    - the effect of a hint can change dramatically after an upgrade
    - if using a hint to lock down a part of the plan (e.g. require a certain access method to a certain table) then there is a danger
      that the rest of the plan can change in such a way that it would make the effect of the hint adverse on the performance
      (e.g. you use INDEX hint to force the optimizer to use a specific index, and everything works fine as long as the index is used
      inside a NESTED LOOP, but then for some reason the join method changes to HASH JOIN and using the same index no longer
      makes sense)
    - hints require code change which can be difficult or impossible
    3) nevertheless, in some situations there is no viable alternative to using hints
    I could also express what I said above in just two words: it depends. 
    Best regards,
    Nikolay
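    To illustrate the INDEX-hint scenario above with a sketch (table, index and alias names are hypothetical): a lone INDEX hint pins only the access path, so to keep the plan stable you usually end up hinting the join order and join method as well, which is exactly why hinted code becomes hard to maintain:
    -- pins the join order, the join method and the access path together;
    -- an INDEX hint alone would leave the optimizer free to switch to a hash join
    SELECT /*+ LEADING(d e) USE_NL(e) INDEX(e emp_deptno_ix) */
           d.dname, e.ename
      FROM dept d, emp e
     WHERE e.deptno = d.deptno
       AND d.loc = 'DALLAS';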

  • JPA -- How can I turn off the caching for an entity?

    Hi,
    I have a problem that I will illustrate with a simplified example. I have created an entity:
    @Entity(name="Customer")
    @Table(name="CUSTOMERS")
    public class Customer implements Serializable {
    }
    I have also set the following properties in persistence.xml:
    <property name="toplink.cache.type.default" value="NONE"/>
    <property name="toplink.cache.size.default" value="0"/>
    <property name="toplink.cache.type.Customer" value="NONE"/>
    <property name="toplink.cache.size.Customer" value="0"/>
    <property name="toplink.cache.shared.Customer" value="false"/>
    And then I run the following code:
    Customer cust = em.find(Customer.class, 1L);
    System.out.println(cust);
    cust = em.find(Customer.class, 1L);
    System.out.println(cust);
    The problem: the second call to em.find does NOT generate a query to the database. Here's a fragment from the console log:
    [TopLink Fine]: 2007.05.11 02:55:05.656--ServerSession(2030438)--Connection(5858953)--Thread(Thread[Main Thread,5,main])--SELECT ID, SEX, NAME, MANAGER FROM CUSTOMERS WHERE (ID = ?)
         bind => [1]
    Customer: id=1, name=Customer #1, sex=MALE
    Customer: id=1, name=Customer #1, sex=MALE
    Can anyone help me? Why isn't the caching turned off? I tried various combinations of properties. Nothing worked. I was expecting to see two queries to the database. I can see only one.
    I tried with TopLink Essentials Version 2 Build 39 and Version 2 Build 41.
    Best regards,
    Bisser

    The cache is likely turned off, but you can't tell because you are using the same transactional EntityManager instance for the two queries. The EntityManager requires its own cache for object identity and transactional purposes, as once you read an object in through the EM, the spec requires that all subsequent reads return the same instance. Only the EntityManager refresh will cause a refresh, that or setting your queries to use the toplink.refresh and toplink.cache-usage query hints.
    I would strongly recommend you use a query cache for performance, but there are of course reasons why one might not be the best option.
    http://weblogs.java.net/blog/guruwons/archive/2006/09/understanding_t.html
    is a good blog on understanding the caching used in TopLink Essentials.
    Best Regards,
    Chris

  • Performance problem with slow VIEW from JDBC (fast from SQL Developer)

    Hi all,
    I'm experiencing the following problem and would like to know if someone else has also hit this one before and has a suggestion how to solve it:
    I have a pretty complicated SELECT statement that by definition returns only a few rows (~30). With no further optimization it takes ~20 seconds to return the full dataset in Oracle SQL Developer. If I add the /*+ PUSH_PRED(name_of_some_inner_view) */ hint, the statement takes less than 0.5s to execute (still in SQL Developer). I saved the statement with the hint as a VIEW. Selecting from the VIEW in SQL Developer is also fast.
    Now if I call the statement from JDBC (Tomcat webapp), I can see from the server console that the statement is 1:1 100% the same as the one I execute in SQL Developer. Nevertheless it takes about 20 seconds to complete.
    Here my details:
    SELECT banner FROM v$version;
    BANNER                                                                        
    Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production             
    PL/SQL Release 11.2.0.2.0 - Production                                          
    CORE     11.2.0.2.0     Production                                                        
    TNS for 32-bit Windows: Version 11.2.0.2.0 - Production                         
    NLSRTL Version 11.2.0.2.0 - Production                                          
    JDBC Driver used: some old ojdbc14.jar as well as the current ojdbc6.jar for 11.2.0.2.0 from http://www.oracle.com/technetwork/da...10-090769.html
    SQL Developer: current version 3.2.20.09
    From my reading, this could go wrong:
    - JDBC doesn't know the VIEW's column data types and Oracle behaves mysteriously because of this (i.e. there must be more to the SELECT than just the string, some meta-information)
    - For some reason the hint inside the VIEW is not used (unlikely)
    I also tried a Table Function/Pipelined table and selected from it as a workaround, but the result is the same: Selecting from Function is fast from SQL Developer, but slow from JDBC. All other statements that come from JDBC are as fast as they should be. I really don't know what to think of this and where the error might be.
    Is there some setting that tells Oracle not to use hints when called from JDBC?
    Thank you & Best regards,
    Blama

    Hi Bawer,
    that's what I'm thinking. Unfortunately I can't post it, as it is library code (not my lib). But in the debug-output I can see the SQL-String sent to the DB (which does include the hint).
    But I find the 2nd option you mention more likely anyway: Even if I put the hint into a VIEW and select from the view, the time-difference is there (it's even there if I use Table Functions/Pipelined table and select from the function).
    So I'd think it is more likely that something else is happening (e.g. Oracle is configured in a way that it does not use hints when called from JDBC or similar. Or the library sets some session options in order to prevent the usage of hints). But I don't know if there is even the possibility of doing so.
    Does the Oracle JDBC driver have the option to set these options?
    Does the Oracle DB have an option to set something like "ALTER SESSION SET dontUseHints = 'Y';"?
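    One way to verify, from the database side, which plan the JDBC session actually received is to look the statement up in the shared pool. A sketch (the sql_id placeholder must be taken from the first query's output):
    -- find the cursor produced by the JDBC-submitted text
    SELECT sql_id, child_number, ROUND(elapsed_time/1e6, 1) AS seconds
      FROM v$sql
     WHERE sql_text LIKE '%PUSH_PRED%';
    -- then display the plan that child actually used
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));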

  • Optimistic transaction - Recovering after exception

    I'm looking for a usage pattern / hint for the following problem:
    We are using long-running optimistic transactions, where a large number of
    JDO objects is modified during a transaction. Since multiple end users can
    modify the same JDO objects, we can get OptimisticVerificationExceptions.
    The following recovery strategies are trivial to implement:
    - accept changes of the user who committed first by calling refresh() on the
    conflicting JDO objects, discarding all changes made during the transaction
    - step back to the state where the transaction began (with restoreValues set
    to true), discarding all changes made during the transaction
    However, we did not find a simple solution for recovering in the following way:
    - commit the changes of the user who received the exception, thus
    overwriting the changes of the user who committed first
    After retrieving a conflicting JDO object via getFailedObject(), it is
    either in hollow or persistent-nontransactional state (depending on the
    restoreValue property), so all changes that were made during the
    transaction are lost. The only way I can think of is to copy/clone mapped
    fields in preStore/preDelete instance callbacks and storing them in
    transient fields within the JDO object. After receiving the exception, the
    mapped fields could be set back to the values stored in the transient
    fields. This approach appears somewhat clumsy, however. Are there better
    ideas / proven usage patterns around?
    Thanks,
    Contus

    Hi,
    The JDO specification mandates that an OptimisticVerificationException is a
    fatal exception and thus any transaction is implicitly rolled back and there
    will no longer be an active optimistic transaction (hence why you are seeing
    objects as hollow or PNT depending on RestoreValues).
    The only thing that I can think of is to maybe use the detach() API to take
    a copy of the objects you've changed prior to trying the commit. Then if
    there is a failure you can begin a new tx and attach the detached
    copies...not something I've tried but might work?
    Cheers
    - Keiron

  • Public DC error

    Hi all,
    I am getting the following exception in DCs which use another public DC. In this case it is trying to use another DC called InterfaceMgrComp:
      com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Cannot find repository information for component usage InterfaceMgrComp (Hint: Does a component usage of that name really exist?)
        at com.sap.tc.webdynpro.progmodel.controller.Component.createComponentUsage(Component.java:816)
        at com.sap.tc.webdynpro.progmodel.controller.Component.getComponentUsageInternal(Component.java:460)
        at com.sap.tc.webdynpro.progmodel.controller.Component.getComponentUsage(Component.java:448)
        at com.ngl.bp.partner.appl.wdp.InternalPartnerRailShipRepListView.wdGetInterfaceMgrCompInterface(InternalPartnerRailShipRepListView.java:379)
        at com.ngl.bp.partner.appl.PartnerRailShipRepListView.wdDoModifyView(PartnerRailShipRepListView.java:176)
        ... 28 more
    See full exception chain for details.
    I normally get this error when I move the code to the next system.
    Can someone help me resolve this issue?
    Regards,
    NArahari

    Hi Narahari,
    Reason and Prerequisites
    There can be many reasons for this error. The more obvious ones are incorrectly built and deployed Web Dynpro DCs.
    A qualified name (for a class, Web Dynpro Application, or Web Dynpro Component) must be used in only one DC. The DC concept supports this goal by assigning packages uniquely to a DC. However, the DC tools do not enforce this restriction. If a Web Dynpro Component (or Application) with the same qualified name is contained in more than one DC, then deploying one of the DCs destroys the metadata of the other DC in the runtime repository.
    Solution
    Use unique names for all Web Dynpro Components and Applications.
    Please refer to this thread:
    Re: WD Java: Problem with building a Web Dynpro Development Component
    Thanks,
    Raj.
    Message was edited by: Raj

  • WebDynpro runtime exception

    Hi. In our quality server, after moving code from the dev server, I am getting the following exception:
    com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Cannot find repository information for component usage PP_SalesOrder (Hint: Does a component usage of that name really exist?)
    This PP_SalesOrder is a public part of the SalesOrder DC being used by another DC. Any help?
    regards,
    Sujesh

    Hi,
    I am also facing a similar issue.
    If I deploy again then the respective component works.
    I am on NW2004 EP 6.0 SP19.
    After transports are done to the other systems, it works only if I deploy into the respective systems again.
    My transports are controlled by NWDI.
    Please let me know if we can resolve the issue.
    Thank you
    With Wishes
    Krishna Kanth

  • List of hints and their usage

    Can anybody give me a link or document listing the SQL hints in Oracle 9i or 10g
    with a good description of their usage?
    Regards
    Gagan

    Agree with you John - especially on the DRIVING_SITE hint.
    But I still feel the message itself needs to be strong. Nothing is as frustrating as having developers release code that, for example, forces a PQ degree of 20 simply because that was the first number that sprang to mind. Or forcing a global index ("indexes are good") because they did not like the CBO doing an FTS on the partition ("FTS is bad"). Etc.
    Sometimes I even get a mix of nonsense hints, like forcing an index range scan and trying to force PQ too. When asked, the answer is along the lines of "oh, it is faster in parallel". Tuning by observation. "Oh, look, it is faster the 2nd time around! PQ works great!!" And not even considering that there are now fewer PIOs due to the db cache.
    It becomes really messy to fix. What frustrates me is that such developers usually hold Oracle in low regard and see hints as a necessity to force it to behave correctly. And they tend to complain loudly, and very quickly, when it does not work the way they expect, and blame Oracle as a poor product. If only they spent that energy and effort on learning Oracle concepts and fundamentals instead... sigh.

  • FULL hint usage

    Hi,
    I would like to know the benefits of using the FULL table scan hint.
    SELECT /*+ FULL(A) */ * FROM TEST A;
    In our production system, when we join two bulk tables using the FULL hint, even though there are indexes on the join fields,
    performance is much better.
    If the indexes are dropped and the FULL hint is removed, performance is pretty slow.
    However, when indexes are available and I use the FULL hint, how do we get this performance improvement? Does the processing
    happen in memory itself? Please share your views on this.
    Regards,
    Sarathy

    Can you post your queries and EXPLAIN PLAN output? Otherwise we cannot be of much help.
    Make sure you post using the [pre] and [/pre] tags to format the output.
    cheers,
    Anthony
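    For reference, one way to capture the output Anthony is asking for (a sketch, using the table and hint from the original post):
    EXPLAIN PLAN FOR
    SELECT /*+ FULL(A) */ * FROM TEST A;
    -- paste this output between the [pre] and [/pre] tags
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);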

  • Report Developer Control Of Applying Hints to Analytics Queries

    There are numerous ways to apply hints to the queries generated by Analytics:
    - Table level
    - Join level
    - Evaluate calculation
    Each has its advantages and drawbacks.
    - Table level: applies the hint to every query that references the table.
    - Join level: applies the hint whenever the join is used in the query.
    - Evaluate: allows the report developer to include a hint, but can't control where Analytics decides to apply the hint.
    I propose another method for the report developer to apply hints, when needed, that uses join level hints. All the report developer
    does is add the hint column to the Answer or add a filter based on the hint column to the Answer to apply the hint.
    Setup
    NOTE: I suggest you do consistency checks along the way, especially before starting work in the next Layer, to be sure that all setup errors are resolved early.
    1) Start by defining a Logical SQL table in the Physical Layer using the following SQL: Select 1 Hint from dual
    2) Alias this table for each hint to be defined for report developer usage. As an example, alias the hint table, creating
    No Star and Parallel alias tables.
    3) Join each alias to the physical layer fact tables where the hint could be applied, using a complex join. In the Join definition screen, put the hint in the HINT field and enter 1=1
    in the Expression section. Yes, we are creating a cartesian join between the hint table and the other joining table. As the hint table always returns one and only one row, there
    is no effect on the rows returned by the query. For No Star, you
    put NO_STAR_TRANSFORMATION in the Hint field. For Parallel, you put PARALLEL(<physical table name>, default, default), where the physical table name
    is the name of the actual database table, not the name of the alias table (Analytics will put the alias in place of the database table name
    when it generates the SQL). Additionally, for hints that have no parameters, you only need to join the hint table
    to the fact tables in a query and not necessarily to the dimensions. If you include fields from multiple fact tables, the hint will be applied
    for each fact table, so you may see the hint multiple times in the SQL (something like SELECT /*+ NO_STAR_TRANSFORMATION NO_STAR_TRANSFORMATION */ t00001.col1...).
    4) Add the hint alias tables to the BMM Layer.
    5) Rename the Hint field in each of the BMM hint tables to identify the hint being applied. For No Star, change the column name from Hint to No Star Hint. For Parallel,
    change the column name from Hint to Parallel Hint.
    6) Set the hint column as a key.
    7) Join the BMM hint tables to the appropriate fact tables, using a complex join.
    8) Define each hint table as a dimension.
    9) Set the Logical Level in the Content tab in each of the sources of the joined tables to use the Detail of the hint dimension.
    10) Create a folder in the Presentation Layer called Hints
    11) Place each BMM hint field into the Presentation Layer Hints folder.
    To apply a hint to your Answer, either add the Hint field to your Answer or create a filter where the Hint field is equal to/is in 1 (the number one). To check that the generated SQL
    contains the hint, in Answers go into Administration, Session Manager, and view the log for the user (the user's log level will need to have been set to 7 to see the SQL generated).
    Use of hints in more complex setups can be done by performing a setup of the hints that is parallel to the fact table setup. As an example, if you specify fragmentation content and a where
    clause in your BMM for your fact tables, you would setup parallel physical layer hint tables and joins, BMM objects and joins, fragmentation content, and where clauses based on the
    hint tables (only hint tables are referenced in the where clause).
    As any database person knows, hints can either help or degrade the performance of queries. So, taking the SQL of the pre-hint Answer and figuring out which hints give the best
    performance is suggested, prior to adding the hint fields/filters to the Answer.
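    For illustration only, the physical SQL that Analytics might generate once the hint join is in place could look roughly like this (the fact and dimension names are hypothetical; only the hint and the 1=1 join come from the setup above):
    SELECT /*+ NO_STAR_TRANSFORMATION */
           t00002.calendar_year, SUM(t00001.sales_amount) AS sales
      FROM w_sales_f t00001,
           w_day_d   t00002,
           (SELECT 1 hint FROM dual) t00003  -- the aliased hint table
     WHERE t00001.day_wid = t00002.row_wid
       AND 1 = 1                             -- the cartesian join adds no rows
     GROUP BY t00002.calendar_year;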

    Hi Oliver,
    I would suggest you have a look at the WLST scripts below, which would give you the required report of the active threads and send it by email too.
    Topic: Sending Email Alert for Threads Pool Health Using WLST
    http://middlewaremagic.com/weblogic/?p=5433
    Topic: Sending Email Alert for Hogger Threads Count Using WLST
    http://middlewaremagic.com/weblogic/?p=5423
    Also, you can use the script below in case of stuck threads; it would send you an email with the thread dumps taken while the issue occurred.
    Topic: Sending Email Alert For Stuck Threads With Thread Dumps
    http://middlewaremagic.com/weblogic/?p=5582
    Regards,
    Ravish Mody

  • QMASTER hints 4 usual trouble (QM not running / clustered nodes / networks etc.)

    All, I just posted this with some hints & workarounds for very common issues people have on this forum and keep asking about concerning the use of APPLE QMASTER with FCP, SHAKE, COMPRESSOR and MOTION. I've hit many of them over the last 2 years and see them coming up frequently.
    Perhaps these symptoms are fixed in FCS2 as of MAY 2007 (now). However, if not, here are some rules of thumb that I used for FCP to Compressor via a QMASTER cluster, for example. In no special order, but they might help someone get around the issues with QMASTER V2.3, FCP V5.1.4, Compressor.app V2.3.
    I saw the latest QMASTER UI and usage at NAB2007 and it looked a little more solid with some "EASY SETUP" stuff. I hope it has been reworked underneath.. I guess I will know soon if it has.
    For most FCP, COMPRESSOR, SHAKE and MOTION workflows:
    • provide access from ALL nodes to ALL the source and target objects (files) on their VOLUMES. Simply MOUNT those volumes through the APPLE file system (via NFS) using cmd+k or Finder/Go/Connect to Server, OR use an SSAFS such as XSAN™ where the file systems are all shared over FC, not the network. You will notice the CPUs going very busy for a short while. This is the APPLE FILE SYSTEM task; I guess it's doing 'Spotlight stuff'. This goes away after a few minutes.
    • set the COMPRESSOR preferences for "CLUSTER OPTIONS" to "Never copy source to Cluster". This means that all nodes can access your source and target objects (files) over NFS (as above). Failure to do this means LENGTHY times to COPY material back and forth, in some cases undermining the pleasure gained from initially using clustering (reduced job times).
    • DON'T mix the PHYSICAL or LOGICAL networks in your local cluster. I don't know why, but I could never get this to work. Physical means stick with either ETHERNET or FIREWIRE or your other option (AirPort etc., which will generally be way too slow and useless); logical means keeping all nodes on the SAME subnet. You can do this simply by setting it up in System Preferences/QMASTER/Advanced tab under "Use Network Interfaces". In my current QUAD I set this to use BUILT-IN ETHERNET 1, and in the MBP DCs I set this to their BUILT-IN ETHERNET.
    • LOGICAL NETWORKS (subnet): simply HARDCODE an IP address on the ETHERNET (for example) for your cluster nodes and the service controller. For example 3.1.1.x .... it will all connect fine.
    • Physical networks: As above, (1) DON'T MIX FireWire (IPoFW) and Ethernet (IPoE). (2) If you have more than one extra service node, USE A HUB or SWITCH. I went and bought a 10-port GbE HUB for about HK$400 (€40) and it worked fine. I was NEVER able to get a stable QMASTER system mixing FW and ETHERNET. (3) FWIW, using IP over FW caused me a LOAD of DISK errors and timeouts (I/O errors) on those DISKs that were FW400 (all gone now), but it showed this was not stable overall.
    • for the cluster controller node, MAKE SURE the CLUSTER STORAGE (System Preferences/QMASTER/shared cluster storage) for the CLUSTER CONTROLLER NODE IS ON A SHARED volume (see above). This seems essential for SHAKE to work (if not, check the Qmaster errors in Console.app [see below]). IF you have an SSAFS like XSAN™ then just put this cluster storage on a shared file path. Note that QMASTER does not permit the cluster storage to be on a NETWORK NODE for some reason. So in short, just MOUNT the volume where the SHARED CLUSTER file is maintained for the CLUSTER controller.
    • FCP - avoid EXPORT to COMPRESSOR from the TIMELINE - it never seems to work properly (see later). Instead EXPORT FROM SEQUENCE in the BROWSER - consistent results
    • FCP - "media missing" messages on EXPORT to COMPRESSOR: this seems to be a defect in FCP 5.1 when you EXPORT using a sequence that is NOT in the "root" or primary tree of the FCP PROJECT BROWSER. Simply put, if your browser has Bin A (contains Bin B (contains Bin C (contains sequence X))), "EXPORT TO COMPRESSOR" will FAIL (won't work) if you use it from an FCP browser PANE that is separately OPEN. To get around this, simply OPEN/EXPOSE the triangles/trees in the BROWSER PANE for the PROJECT, select the SEQUENCE you want and "EXPORT to COMPRESSOR" from there. This has been documented in a few places in this forum I think.
    • FCP -> COMPRESSOR -> .M2V (for DVDSP3): some things here. EXPORTING from an FCP SEQUENCE with CHAPTER MARKERS to an MPEG2 .M2V encoding USING A CLUSTER causes errors in the placement of the chapter markers when it is imported into DVDSP3. In fact, CONSISTENTLY, ALL the chapter markers are PLACED AT THE END of the TRACK in DVDSP3, which is somewhat useless. This seems to happen ALSO when the source is an FCP reference movie, although inconsistently. A simple workaround, if you have the machines, is to TURN OFF SEGMENTING in the COMPRESSOR ENCODER inspector and let each .M2V transcode run on the same service node. For the jobs at hand just set up a CLUSTER and controller for each machine and then SELECT the cluster (myclusterA, hisclusterB, herclusterC) for each transcode job. Anyway, for me, in the time spent resolving all this I could have TRANSCODED all of it on my QUAD and it would all have been done sooner! (LOL)
    • CONSOLE logs: IF QMASTER fails, I would suggest your first port of call for diagnosis should be /Library/Logs/Qmaster. In there you will see (on the controller node) compressor.log, jobcontroller.com.apple.qmaster.cluster.admin.log, and lots of others including service controller.com.apple.qmaster.executorX.log (for each cpu/core and node) and qmasterca.log. All these are worth a look and helped me solve 90% of my qmaster errors and failures.
    • MOTION 3 - fwiw.. EXPORT USING COMPRESSOR to a CLUSTER seems to fail EVERY TIME.. seems MOTION is writing stuff out to a /var/spool/qmaster
    TROUBLESHOOTING QMASTER: IF QMASTER seems buggered up (hosed), then follow these steps PRIOR to restarting your machines.
    Go read the TROUBLESHOOTING sections in the published APPLE docs for COMPRESSOR, SHAKE and "SET UP FOR DISTRIBUTED PROCESSING" and search these forums CAREFULLY.. the answer is usually there somewhere.
    ELSE, try these steps....
    You'll feel that QMASTER is in trouble when you
    • see that the QMASTER ICON at the top of the screen says "NO SERVICES" even though that node is started, and
    • see that the APPLE QMASTER ADMINISTRATOR is VERY SLOW after an "APPLY" (like minutes with a SPINNING BEACHBALL), or it WON'T LET YOU DELETE a cluster, or you see 'undefined' nodes in your cluster (meaning that one was shut down or had a network failure)..... all this means it's going to get worse and worse. SO DON'T submit any more work to QMASTER... best count your gains and follow this list next.
    (a) in COMPRESSOR.app, RESET BACKGROUND PROCESSES (it's under the COMPRESSOR name list box) and see if things get kick-started, but you will lose all the work that has been done up to that point in COMPRESSOR.app.
    (b) if that doesn't work, then on EACH node in that cluster, STOP QMASTER (System Preferences/QMASTER/Setup [set 0 minutes in the prompt and OK]). Then, when STOPPED, RESET the shared services by OPTION+CLICKing the "START" button to reveal "RESET SERVICES". Then click "START" on each node to start the services. This has the action of REMOVING, or in the case where the CLUSTER CONTROLLER node is "RESET", of terminating the cluster that's under its control. If so, simply go to APPLE QMASTER ADMINISTRATOR and REDEFINE it. Go restart your cluster.
    (c) if step (b) is no help, consult the QMASTER logs in /Library/Logs/Qmaster (using Console.app) for any FILE MISSING, FILE not found or FILE ERROR messages. Look carefully for the NODENAME (the machine_name.local) where the error may have occurred. Sometimes it's very chatty; other times it is not. Also look in the BATCH MONITOR OUTPUT for error messages. Often these are NEVER written (or I can't find them) in /var/logs... try to resolve any issues you can see (mostly VOLUME or FILE path issues in my experience).
    (d) if still no joy, then try removing all the 'dead' cluster files from /var/tmp/qmaster, /var/spool/qmaster and also the file directory that you specified above for the controller to share the clustering. For SHAKE issues, go do the same (note also where the SHAKE shared cluster file path is; it can also be specified in the RENDER FILEOUT node's prompt).
    (e) if all this doesn't help you, it's time to get the BIG hammer out. Simply STOP all nodes if not stopped (if the status/mode is "STOPPING" then it [QMASTER] is truly buggered). DISMOUNT the network volumes you had mounted, and RESTART ALL YOUR NODES. This has the effect of RESTARTING all the QMASTERD tasks. Yes, sure, you can go in and SUDO restart them, but that is dodgy at best because they never seem to terminate cleanly; FORCE QUIT (kill -9 etc.) is what one ends up doing and then STILL having to restart.
    (f) after the restart, perform the steps from (b) again and it will usually (but not always) be right after that.
    Lastly, here are some posts I have made that may help others with QMASTER 2.3 .. and not the NEW QMASTER as at May 2007...
    Topic "qmasterd not running" - how this happened and what we did to fix it. - http://discussions.apple.com/message.jspa?messageID=4168064#4168064
    Topic: IP over Firewire AND Ethernet connected cluster? http://discussions.apple.com/message.jspa?messageID=4171772#4171772
    Lastly, spend some DEDICATED time using OBJECTIVE keywords to search the FINAL CUT PRO, SHAKE, COMPRESSOR, MOTION and QMASTER forums.
    Hope that helps.
    G5 QUAD 8GB ram w/3.5TB + 2 x 15in MBPCore   Mac OS X (10.4.9)   FCS1, SHAKE 4.1

    Warwick,
    Thanks for joining the forum and for doing all this work and posting your results for our benefit.
    As FCP2 arrives in our shop, we will try once again to make sense of it and to see if we can boost our efficiencies in rendering big projects and getting Compressor to embrace five or six idle Macs.
    Nonetheless, I am still in "Major Disbelief Mode" that Apple has done so little to make this software actually useful.
    bogiesan
