Query vs listcube performance

Hey Everyone.  Wondering if anyone else has seen this issue.  We keep getting BWA errors regarding memory, although we really are not coming close to the memory limit on any blade.  That's not really what this post is about, but I thought I would mention it.  SAP has been trying to help out but hasn't come up with anything yet.  So here is what I am wondering...
Generally speaking, shouldn't performance of a query on a BWA cube be similar to running the same selections off that cube through listcube?  Mainly, we are seeing these error messages thrown when a particular query/group of queries is run.  I have tried to make local copies of the queries and have tried to redesign them, but it hasn't helped.  Here is the thing... I built the listcube lookup on that cube to be the same as the query.  I made the query only three objects and put select-option variables in the default values.  Nothing in the filter area.  The cube has a bit over 300 million records and this query returns around 30 records.  Through listcube, it takes about 6 seconds.  But the query completely times out and then the errors start happening.
I've been in the BW area for a long time, but am new to BWA.  On the surface it almost seems like the query variables are being treated as hard-coded filters and not being used in the selection of data.  So all 300 million records come back, get added to the query cache, the query times out, and now memory is hit because the cache isn't deleted.  I am not saying that is exactly what is happening, but it just seems like it... although I cannot think of any reason why this would be the case.
Has anyone else seen this type of behavior?  Just to reiterate, I made this a very simplistic query for the purpose of matching it via listcube.
Any suggestions are greatly appreciated.  
Thank you!!

Hey Everyone.  Thanks for the replies and sorry to be slow in answering your questions!  I am not on the network at the moment so I can't look up our patch levels.  The query doesn't use exception aggregation.  The only key figure is taken directly from the cube with no other manipulation.
I haven't let the query run long enough through RSRT to see the messages, but I have seen a screen print from a user which referenced a memory problem and also said something about attributes.  I will have to post later what the true message was.  But as soon as this query is executed, we start seeing the BWA swap memory rising and then we start seeing the typical alert emails.  The query does not use any InfoObject attributes or navigational attributes.
I'm just at a loss as to why such a straightforward data selection takes seconds off listcube but starts causing errors when executing the actual query.  I realize the query has a generated program that will make it perform differently than the listcube lookup, but the difference is dramatic.
I will keep poking around and will post the actual error message first chance I get.
Thanks again everyone!

Similar Messages

  • Can anyone improve this query for maximum performance

    select g_com_bu_entity bunt_entity
         , g_com_rep_cd srep_cd
         , effdt from_dt
         , eff_status
         , g_com_role role
         , g_com_pgm prgm
         , g_com_district district
         , g_com_draw_status draw_status
         , decode(g_com_primary_pgm, 'Y',1, 0) pri_prgm_flag
    FROM ps_g_com_assign_vw@commissions c1
    WHERE effdt =
    (SELECT MAX (effdt)
    FROM ps_g_com_assign_vw@commissions c2
    WHERE c1.g_com_bu_entity = c2.g_com_bu_entity
    AND c1.g_com_rep_cd = c2.g_com_rep_cd);
    Can anyone rewrite it as a regular query for maximum performance?
    Thanks,
    Sreekanth

    Hi Sreekant,
    Try this, if it helps (the inline view has to join back on the grouping columns, since it cannot reference c1 directly):
    select c1.g_com_bu_entity bunt_entity
         , c1.g_com_rep_cd srep_cd
         , c1.effdt from_dt
         , c1.eff_status
         , c1.g_com_role role
         , c1.g_com_pgm prgm
         , c1.g_com_district district
         , c1.g_com_draw_status draw_status
         , decode(c1.g_com_primary_pgm, 'Y', 1, 0) pri_prgm_flag
    FROM ps_g_com_assign_vw@commissions c1,
         (SELECT g_com_bu_entity, g_com_rep_cd, MAX (effdt) effdt_max
            FROM ps_g_com_assign_vw@commissions
           GROUP BY g_com_bu_entity, g_com_rep_cd) t2
    WHERE c1.g_com_bu_entity = t2.g_com_bu_entity
      AND c1.g_com_rep_cd    = t2.g_com_rep_cd
      AND c1.effdt           = t2.effdt_max;
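    Another way to write it, as a rough sketch, is with an analytic function, so the remote view is only scanned once:
    select bunt_entity, srep_cd, from_dt, eff_status, role, prgm, district, draw_status, pri_prgm_flag
    from (select g_com_bu_entity bunt_entity
               , g_com_rep_cd srep_cd
               , effdt from_dt
               , eff_status
               , g_com_role role
               , g_com_pgm prgm
               , g_com_district district
               , g_com_draw_status draw_status
               , decode(g_com_primary_pgm, 'Y', 1, 0) pri_prgm_flag
               , max(effdt) over (partition by g_com_bu_entity, g_com_rep_cd) effdt_max
            from ps_g_com_assign_vw@commissions)
    where from_dt = effdt_max;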

  • Please help to modify this query for better performance

    Please help to rewrite this query for better performance. It is taking a long time to execute.
    Table t_t_bil_bil_cycle_change contains 1200000 rows and table t_acctnumberTab contains 200000 rows.
    I have created an index on ACCOUNT_ID.
    The query is shown below:
    update rbabu.t_t_bil_bil_cycle_change a
       set account_number =
           ( select distinct b.account_number
             from rbabu.t_acctnumberTab b
              where a.account_id = b.account_id );
    Table structure  is shown below
    SQL> DESC t_acctnumberTab;
    Name           Type         Nullable Default Comments
    ACCOUNT_ID     NUMBER(10)                            
    ACCOUNT_NUMBER VARCHAR2(24)
    SQL> DESC t_t_bil_bil_cycle_change;
    Name                    Type         Nullable Default Comments
    ACCOUNT_ID              NUMBER(10)                            
    ACCOUNT_NUMBER          VARCHAR2(24) Y    

    Ishan's solution is good. I would avoid updating rows which already have the right value - it's a waste of time.
    You should have a UNIQUE or PRIMARY KEY constraint on t_acctnumberTab.account_id
    merge into rbabu.t_t_bil_bil_cycle_change a
    using
          ( select distinct account_number, account_id
            from   rbabu.t_acctnumberTab
          ) t
    on    ( a.account_id = t.account_id )
    when matched then
      update set a.account_number = t.account_number
      where decode(a.account_number, t.account_number, 0, 1) = 1;
    (The DECODE check goes in the WHEN MATCHED ... WHERE clause rather than the ON clause, because a column referenced in the ON clause cannot be updated.)
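    If you would rather keep it as an UPDATE, a sketch along the same lines (table and column names taken from the post, untested) that only touches rows whose value would actually change:
    update rbabu.t_t_bil_bil_cycle_change a
       set account_number = ( select b.account_number
                                from rbabu.t_acctnumberTab b
                               where b.account_id = a.account_id )
     where exists ( select null
                      from rbabu.t_acctnumberTab b
                     where b.account_id = a.account_id
                       and decode(a.account_number, b.account_number, 0, 1) = 1 );
    As with the MERGE, this assumes account_id is unique in t_acctnumberTab, so the scalar subquery returns at most one row.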

  • Help to rewrite query for best performance

    Hi All,
    Can you kindly help me to rewrite the below-mentioned query for best performance? It is taking more than 20 min in our production server.
    SELECT cp.name,mis.secondary_type U_NAME,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-161,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-154,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-154,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-147,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-147,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-140,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-140,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-133,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-133,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-126,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-126,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-119,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-119,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-112,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-112,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-105,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-105,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-98,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-98,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-91,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-91,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-84,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-84,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-77,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-77,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-70,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-70,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-63,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-63,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-56,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-56,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-49,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-49,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-42,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-42,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-35,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-35,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-28,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-28,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-21,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-21,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-14,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-14,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-7,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage
    FROM mis_event_audit mis,USER u,com_pros cp where
    mis.user_id=u.email_address and u.cp_id=cp.cp_id
    and (mis.start_time between To_DATE(to_char(next_day (sysdate-161,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-7,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))
    GROUP BY cp.name, mis.secondary_type;
    Thanks,
    krish

    Hi, Krish,
    Something like this will probably be faster, because it cuts out most of the function calls:
    WITH got_cnt AS
    (
         SELECT    cp.name
         ,         mis.secondary_type             AS u_name
         ,         COUNT (mis.event_audit_id)     AS cnt
         ,         ( TRUNC (mis.start_time, 'IW')
                   - TRUNC (SYSDATE,        'IW')
                   ) / 7                          AS week_num
         FROM      mis_event_audit  mis
         JOIN      user_table       u     ON   mis.user_id  = u.email_address     -- USER is not a good table name
         JOIN      com_pros         cp    ON   u.cp_id      = cp.cp_id
         WHERE     mis.start_time   >= TRUNC (SYSDATE, 'IW') - 161
         AND       mis.start_time   <  TRUNC (SYSDATE, 'IW')
         GROUP BY  cp.name
         ,         mis.secondary_type
         ,         TRUNC (mis.start_time, 'IW')
    )
    SELECT    name
    ,         secondary_type
    ,         SUM (CASE WHEN week_num = 22 THEN cnt END)     AS week_23
    ,         SUM (CASE WHEN week_num = 21 THEN cnt END)     AS week_22
    ,         SUM (CASE WHEN week_num = 20 THEN cnt END)     AS week_21
    ...
    ,         SUM (CASE WHEN week_num =  0 THEN cnt END)     AS week_1
    FROM      got_cnt
    GROUP BY  name
    ,         secondary_type
    ;
    TRUNC (d, 'IW') is midnight on the last Monday before or equal to the DATE d. It does not depend on your NLS settings.
    Whenever you're tempted to write an expression as complicated as
    ,     COUNT ( CASE
                      WHEN ( mis.start_time BETWEEN TO_DATE ( TO_CHAR ( NEXT_DAY ( SYSDATE - 161
                                                                                 , 'monday'
                                                                                 )
                                                                      , 'MM/DD/YYYY'
                                                                      )
                                                            , 'MM/DD/YYYY'
                                                            )
                                            AND     TO_DATE ( TO_CHAR ( NEXT_DAY ( SYSDATE - 154
                                                                                 , 'monday'
                                                                                 )
                                                                      , 'MM/DD/YYYY'
                                                                      )
                                                            , 'MM/DD/YYYY'
                                                            )
                           )
                      THEN mis.event_audit_id
                  END
                )               AS usage
    seek alternate ways. Oracle provides several handy functions, especially for manipulating DATEs. In particular, "TO_DATE (TO_CHAR ...)" is almost never needed; think very carefully before doing a round-trip conversion like that.
    Besides being more efficient, this will be easier to debug and maintain.
    If you're using Oracle 11.1 (or higher), then you can also use SELECT ... PIVOT in the main query, but I doubt that will be any faster, and it might not be any simpler.
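    For what it's worth, here is a little self-contained sketch of what that PIVOT form looks like (made-up sample rows standing in for got_cnt, and only three of the week columns shown):
    WITH got_cnt AS
    (   SELECT 'ACME' AS name, 'WEB' AS secondary_type, 22 AS week_num, 10 AS cnt FROM dual UNION ALL
        SELECT 'ACME', 'WEB', 21, 7 FROM dual UNION ALL
        SELECT 'ACME', 'WEB',  0, 3 FROM dual
    )
    SELECT  *
    FROM    got_cnt
    PIVOT   ( SUM (cnt)
              FOR week_num IN (22 AS week_23, 21 AS week_22, 0 AS week_1)
            );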
    I hope this answers your question.
    If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables involved, and also post the results you want from that data.
    Explain, using specific examples, how you get those results from that data.
    Simplify the problem as much as possible. For example, instead of posting a problem that covers the last 23 weeks, pretend that you're only interested in the last 3 weeks. You'll get a solution that's easy to adapt to any number of weeks.
    Always say which version of Oracle you're using (e.g., 11.2.0.2.0).
    See the forum FAQ {message:id=9360002}
    For performance problems, there's another page of the forum FAQ {message:id=9360003}, but, before you start that process, let's get a cleaner query, without so many functions.
    Edited by: Frank Kulash on Oct 2, 2012 11:50 AM
    Changed week_num to be non-negative

  • Multi Select Choice on af:query has severe performance issue

    Multi-select choice used with af:query through a view criteria is causing a severe performance issue on deselection of the "All" checkbox if the data in the list is around 550 rows. The same component works absolutely fine when used in a form layout.
    I can provide a reproducible test case if anyone needs it!
    ***: This is a customer environment issue, and the customer is eager to have multi-select in this case. Appreciate any help!

    Glimpse of repetitive lines from console for the above scenario:
    <DCUtil> <findSpelObject> [2208] DCUtil, returning:oracle.jbo.uicli.binding.JUApplication, for TestSelectChoiceDefaultAMDataControl
    <ADFLogger> <begin> Attaching an iterator binding to a datasource
    <DCIteratorBinding> <getViewObject> [2209] Resolving VO:TestSelectChoiceDefaultAM._SESSION_SHARED_APPMODULE_NAME_.SessionAM.DeptReadOnly1 for iterator binding:noCtrl_oracle_adfinternal_view_faces_model_binding_FacesCtrlListBinding_59List_60
    <DCUtil> <findSpelObject> [2210] DCUtil, RETURNING: <null> for TestSelectChoiceDefaultAM._SESSION_SHARED_APPMODULE_NAME_.SessionAM.DeptReadOnly1
    <ADFLogger> <addContextData> Attaching an iterator binding to a datasource
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    <DCUtil> <findSpelObject> [2211] DCUtil, returning:oracle.jbo.uicli.binding.JUApplication, for TestSelectChoiceDefaultAMDataControl
    <ADFLogger> <begin> Attaching an iterator binding to a datasource
    <DCIteratorBinding> <getViewObject> [2212] Resolving VO:TestSelectChoiceDefaultAM._SESSION_SHARED_APPMODULE_NAME_.SessionAM.DeptReadOnly1 for iterator binding:noCtrl_oracle_adfinternal_view_faces_model_binding_FacesCtrlListBinding_123List_124
    <DCUtil> <findSpelObject> [2213] DCUtil, RETURNING: <null> for TestSelectChoiceDefaultAM._SESSION_SHARED_APPMODULE_NAME_.SessionAM.DeptReadOnly1
    <ADFLogger> <addContextData> Attaching an iterator binding to a datasource
    <ADFLogger> <addContextData> Get LOV list
    .....many times followed by
    <ADFLogger> <addContextData> Attaching an iterator binding to a datasource
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    ...many times

  • Data pump, Query "1=2" performance?

    Hi guys
    I am trying to export a schema using data pump however I need no data from a few of the tables since they are irrelevant but I'd still like to have the structure of the table itself along with any constraints and such.
    I thought of using the QUERY parameter with a "1=2" query making it so that I can filter out all data from certain tables in the export while giving me everything else.
    While this works I wonder if data pump/oracle is smart enough to not run this query through the entire table? If it does perform a full table scan then can anybody recommend any other way of excluding just the data of certain tables while still getting the table structure itself along with anything else related to it?
    I have been unable to find such information after searching the net for a good while.
    Regards
    Alex

    Thanks.
    Does that mean 1=2 actually scans the entire table so it should be avoided in the future?
    Regards
    Alex

  • Impact of Query Logging on Performance of Queries in OBIEE

    I see from [An Oracle BI Blog post|http://obieeblog.wordpress.com/2009/01/19/obiee-performance-tuning-tip-%e2%80%93-turn-off-query-logging/] that Query Logging has a performance impact in OBIEE.
    What is the experience with Query Logging at different levels in a Production environment with, say, 50 or 100 or 500 concurrent users ?
    I am completely new to OBIEE, I know the Database. So, please bear with me.
    Hemant K Chitale

    Kumar's blog that you reference says it all really.
    I don't know if anyone's going to be able to give you the kind of information you're looking for, because it's a no-brainer not to enable this level of logging :)
    Is there a reason you're even considering it?
    Imagine running a low-level trace or debug log in the database for every user session... you just wouldn't do it.

  • How can we rewrite this query for better performance

    Hi All,
    The below query is taking more time to run. Any ideas how to improve the performance by rewriting the query using NOT EXISTS or any other way...
    Help Appreciated.
    /* Formatted on 2012/04/25 18:00 (Formatter Plus v4.8.8) */
    SELECT vendor_id
    FROM po_vendors
    WHERE end_date_active IS NULL
    AND enabled_flag = 'Y'
    and vendor_id NOT IN ( /* Formatted on 2012/04/25 18:25 (Formatter Plus v4.8.8) */
    SELECT vendor_id
    FROM po_headers_all
    WHERE TO_DATE (creation_date) BETWEEN TO_DATE (SYSDATE - 365)
    AND TO_DATE (SYSDATE))
    Thanks

    Try this one :
    This will help you for partial fetching of data
    SELECT /*+ first_rows(50) no_cpu_costing */
    vendor_id
    FROM po_vendors
    WHERE end_date_active IS NULL
    AND enabled_flag = 'Y'
    AND vendor_id NOT IN (
    SELECT vendor_id
    FROM po_headers_all
    WHERE TO_DATE (creation_date) BETWEEN TO_DATE (SYSDATE - 365)
    AND TO_DATE (SYSDATE))
    Overall, your query is also fine, because in this query the subquery will always contain less data than the main query.
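    Since the original post asks about NOT EXISTS, here is a sketch of that form (untested; it also drops the TO_DATE() calls on creation_date, since wrapping the column in a function prevents a plain index on it from being used):
    SELECT v.vendor_id
      FROM po_vendors v
     WHERE v.end_date_active IS NULL
       AND v.enabled_flag = 'Y'
       AND NOT EXISTS ( SELECT NULL
                          FROM po_headers_all h
                         WHERE h.vendor_id = v.vendor_id
                           AND h.creation_date >= SYSDATE - 365
                           AND h.creation_date <= SYSDATE );
    Keep in mind that NOT IN and NOT EXISTS only return the same rows when po_headers_all.vendor_id can never be NULL.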

  • SCOM SQL query killing instance performance

    Hi!
    Since a few days ago, a new query running against our SCOM data warehouse DB has been killing the performance of the SQL instance. The query runs 1,000,000 times per hour for a total of 100,000,000 logical reads per hour, and this continuously. The query is part of the following procedure: OperationsManagerDW.dbo.fn_BuildInstancePropertyDelta
    Any idea what's wrong ?
    What could trigger this new behavior ? 
    Rgds
    /Chris
    Waiting on statement:
    RETURN
       SELECT 1 AS Tag ,
          NULL AS Parent ,
          ISNULL(S1.PropertyGuid, S2.PropertyGuid) AS [Property!1!Guid] ,
          S1.PropertyValue AS [Property!1!NewValue!ELEMENT] ,
          S2.PropertyValue AS [Property!1!OldValue!ELEMENT]
       FROM
          ( SELECT PropertyGuid = Set1.PropertyXml.value('@Guid' ,'nvarchar(256)') ,
               PropertyValue = Set1.PropertyXml.value('.' ,'nvarchar(max)')
            FROM @PropertyXml1.nodes('Root/Property') AS Set1(PropertyXml)
          ) AS S1
       FULL JOIN
          ( SELECT PropertyGuid = Set1.PropertyXml.value('@Guid' ,'nvarchar(256)') ,
               PropertyValue = Set1.PropertyXml.value('.' ,'nvarchar(max)')
            FROM @PropertyXml2.nodes('Root/Property') AS Set1(PropertyXml)
          ) AS S2
       ON (S1.PropertyGuid = S2.PropertyGuid)
       WHERE (S1.PropertyValue <> S2.PropertyValue)
       OR ( (ISNULL(S1.PropertyValue, S2.PropertyValue) IS NOT NULL)
      AND ((S1.PropertyValue IS NULL)
       OR (S2.PropertyValue IS NULL)) ) FOR XML EXPLICIT

    Hello,
    We opened a ticket with Microsoft Support.
    In the SCOM application logs, this huge SQL CPU activity correlates with error 31553, "Violation of unique key constraint".
    We were told that if the following query returns some rows, you are hitting the same issue and should open a ticket to get information on how to fix the problem.
    select *
    from ManagedEntityStage mes
    inner join ManagedEntity me
       on mes.ManagedEntityGuid = me.ManagedEntityGuid
    inner join ManagedEntityProperty mep
       on me.ManagedEntityRowId = mep.ManagedEntityRowId
    where mep.FromDateTime = mes.ChangeDateTime
    order by mep.ManagedEntityRowId
    Rgds
    /Chris

  • How to update this query and avoid performance issue?

    Hi, guys:
    I wonder how to update the following query to make it weekend-aware. My boss wants the query to consider business days only. Below is just a portion of the query:
    select count(distinct cmv.invoicekey ) total ,'3' as type, 'VALID CALL DATE' as Category
    FROM cbwp_mv2 cmv
    where cmv.colresponse=1
    And Trunc(cmv.Invdate)  Between (Trunc(Sysdate)-1)-39 And (Trunc(Sysdate)-1)-37
    And Trunc(cmv.Whendate) Between cmv.Invdate+37 And cmv.Invdate+39
    CBWP_MV2 is a materialized view created to tune the query. This query is written for a data warehouse application, and CBWP_MV2 is refreshed every evening. My boss wants the condition in the query to consider only business days: for example, if (Trunc(Sysdate)-1)-39 falls on a weekend, the start of the range should move to the next coming business day, and if (Trunc(Sysdate)-1)-37 falls on a weekend, the end of the range should move to the next coming business day, but the range should always stay within 3 business days. If there is overlap with a weekend, always push to the later business days.
    Question: how can I implement this and avoid a performance issue? I am afraid that if I use a function it will greatly reduce performance. This view already contains more than 100K rows.
    thank you in advance!
    Sam
    Edited by: lxiscas on Dec 18, 2012 7:55 AM
    Edited by: lxiscas on Dec 18, 2012 7:56 AM

    You are already using a function, since you're using TRUNC on invdate and whendate.
    If you have indexes on those columns, then they will not be used because of the TRUNC.
    Consider omitting the TRUNC or testing with Function Based Indexes.
    Regarding business days:
    If you search this forum, you'll find lots of examples.
    Here's another 'golden oldie': http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:185012348071
    Regarding performance:
    Steps to take are explained from the links you find here: {message:id=9360003}
    Read them, they are more than worth it for now and future questions.
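    As a rough illustration of keeping the weekend adjustment inline (my own sketch, not from the links above; it assumes Saturday/Sunday weekends and an English date language), a plain CASE expression can push a weekend date forward to the next business day without a PL/SQL call:
    select d
         , case to_char(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH')
               when 'SAT' then d + 2   -- Saturday -> Monday
               when 'SUN' then d + 1   -- Sunday   -> Monday
               else d
           end as adjusted_d
      from ( select (trunc(sysdate) - 1) - 39 + level - 1 as d
               from dual
            connect by level <= 3 );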

  • Query vs View Performance

    Hello All,
    I am looking for some guidance on the performance overhead of using queries versus views.
    If we create a "Super Query" Query_1 :
    Characteristic A
    Characteristic B
    Characteristic X
    KF C
    Then a "minimal view" Query_1_View_1:
    Characteristic A
    KF C
    Then a "minimal query" Query_2:
    Characteristic A
    KF C
    So view Query_1_View_1 and query Query_2 deliver the same results to the end-user ......
    Which will be the best performer?
    Query_1_View_1 or Query_2
    I am guessing that the answer will be Query_2, because the view Query_1_View_1 has to return all the characteristics of Query_1 before forming the view representation Query_1_View_1 and returning the result to the end-user
    ( and any aggregates that satisfy Query_2, but not Query_1, will never be invoked)
    Are these assumptions correct, or is "view processing" smarter than we think?
    Will Query_1_View_1 and Query_2 in fact perform the same?
    All comments appreciated, thanks.
    Ian Reid

    Performance-wise it may not be significant.
    But if the queries differ only by the characteristic values, it is better to keep them as views, so that there are fewer queries to maintain when upgrading.
    Ravi Thothadri

  • Querying a partition performance.

    Hi Oracle Gurus,
    I have a small question. I was wondering which of the 2 techniques will give me better performance.
    1). Querying a table and using the partition name in the FROM clause, like
    select count(*) from table_name partition (partition_name);
    The partition is on the business_date for that table.
    2). Querying the partitioned table with a WHERE condition (here the column in the WHERE condition is the partitioning date column).
    select count(*) from table_name where table_name.business_date='1-Dec-2009'
    Please let me know if you need any clarification.

    The performance difference is not at all related to the data volume. It is solely related to the cost of parsing the query. If anything, the static SQL solution will be more efficient in a production environment.
    - If you are using static SQL, Oracle only has to parse the SQL statement once and you can pass in whatever bind values you'd like. If you are using dynamic SQL, Oracle has to parse every SQL statement separately which means generating a brand new query plan (if you repeat the query for the same date, you'd probably only need a soft parse). Generating a query plan takes some time (on the order of tenths of a second probably) in addition to requiring various latches on the shared pool. Those tenths of a second will dwarf whatever potential benefit you get in not having Oracle prune partitions. Plus, you're increasing serialization, pressure on the shared pool, etc.
    - Even beyond that, I find it extremely unlikely that your code could determine the partition to use more quickly than Oracle could. Even if it is a simple algorithm to convert the date into a partition name, that's not appreciably more work than Oracle will have to figure out what partition the date value would be in. And Oracle's lookup is happening in highly optimized and compiled kernel code-- your application is likely using a higher level language with less optimized code. At best, if you've got hundreds of thousands of partitions, it's probably a wash.
    Justin
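    To illustrate the point with a sketch (the bind variable name is made up, not from the thread): with static SQL and the predicate on the partition key, Oracle prunes to the single matching partition on its own, so naming the partition explicitly buys you nothing.
    select count(*)
      from table_name
     where business_date = :p_business_date;  -- partition pruning happens automatically on the partition key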

  • Does Coloring Of Rows and Column in Query effect the Performance of Query

    Hi to all,
    Does coloring of rows and columns in a query via WAD or Report Designer affect the performance of the query?
    If yes, then how?
    What are the key factors we should consider while designing a query with regard to query performance?
    I shall be thankful to you for this.
    Regards
    Pavneet Rana

    There will not be any significant performance impact from colouring the rows or columns...
    But there are various performance parameters which should be looked into while designing a query...
    Hope this PPT helps you.
    http://www.comeritinc.com/UserFiles/file/tips%20tricks%20to%20speed%20%20NW%20BI%20%202009.ppt
    rgds, Ghuru

  • API to access query structure / bad performance Bex query processor

    Hi, we are using a big P&L query structure. Each query structure node selects a hierarchy node of the account.
    This setup makes the performance incredibly bad. The BEx query processor caches and selects per structure node, which creates an awful mass of unnecessary SQL statements. (It would be more useful to merge the SQL statements as far as possible, with a group by on account, to generate bigger SQL statements.)
    The structure is necessary to cover percentage calculations in the query; the hierarchy is used to “calculate” subtotals by selecting different nodes on different levels.
    I am now searching for a different approach to cover the reporting requirement - or - for an API to generate, out of the master structure, smaller query structures per area of the P&L. Is there any class to access the query structure?
    We already tried generating data entries per node level (duplicating one data record per node where it appears, with a characteristic for the node name). But this approach generates too many data records.
    Not using hierarchy nodes would make the maintenance terrible. To generate "hard" selections in the structure out of the hierarchy, an API to change the structure would also be useful.

    The problem came from an incorrectly developed exit variable used in Analysis Authorization.
    Edited by: SSE-BW-Team SSE-BW-Team on Feb 28, 2011 1:46 PM

  • Query result pagination performance

    Hi
    I have CQ5.4 code (extract below) which uses QueryBuilder to create a query.  The result set is quite large (~2000 nodes) and ordered. I set the hits per page and the start page as I display the results using a pager.
    Performance is good for the first page of results, but performance degrades quite significantly as the page start value is increased towards the end of the result set.  I find this strange as all nodes must always be accessed as the result set is sorted.
    Does anyone have suggestions as to how I might resolve this performance issue?
    Thanks
    Simon
         QueryBuilder queryBuilder = resource.getResourceResolver().adaptTo(QueryBuilder.class);  
         Session session = resource.getResourceResolver().adaptTo(Session.class);
         Query query = queryBuilder.createQuery(PredicateGroup.create(map), session);
         if (query != null) {
             int hitLimit = (pageMaximum>0) ? pageMaximum : limit;
             if (limit>0 && hitLimit>limit)
                 hitLimit = limit;
             query.setHitsPerPage(hitLimit);
             query.setStart(pageStart);
             SearchResult result = query.getResult();
             totalMatches = result.getTotalMatches();
             actLimit = (limit>0) ? Math.min(limit, (int)totalMatches) : (int)totalMatches;

    Sure, I can provide lots more information.  I've tried many different styles of query, but found the behaviour to always be the same.  The log extracts below show an example query and how increasing the offset using query.setStart() impacts the time taken by query.getResult().
    I modified the code a little for some extra debug so it's clear where the time is being spent, snippet below.
    I guess my question really is: has anyone else used query.setStart() to set the offset for paging, did they see similar increases in response times, and did they find a resolution?
    Does anyone else agree that it's strange that all the hard work of searching and sorting is fast and, as expected, takes the same time irrespective of the offset, but returning the results takes longer when the offset is increased?
    Regards
    Simon
         Query query = queryBuilder.createQuery(PredicateGroup.create(map), session);
         if (query != null) {
            int hitLimit = (pageMaximum>0) ? pageMaximum : limit;
            if (limit>0 && hitLimit>limit)
               hitLimit = limit;
            query.setHitsPerPage(hitLimit);
            query.setStart(pageStart);
             log.debug("Time to prepare : " + ((new Date()).getTime()-start) + "ms");
             SearchResult result = query.getResult();
             log.debug("Prep and getResult : " + ((new Date()).getTime()-start) + "ms");
    Example 1:  Offset 0, query takes 328 ms, results returned in 328 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.xxx.wcm.components.List Time to prepare : 0ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (URL):
    group.0_property=jcr%3acontent%2fpathTags&group.0_property.0_value=testing&group.0_property.1_value=portal&group.0_property.2_value=de&group.0_property.and=true&group.p.or=true&orderby=%40jcr%3acontent%2fcq%3alastModified&orderby.index=true&orderby.sort=desc&p.limit=25&p.offset=0
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (predicate tree):
    ROOT=group: limit=25, offset=0[
        {group=group: or=true[
            {0_property=property: 1_value=portal, 2_value=de, property=jcr:content/pathTags, 0_value=testing, and=true}
        {orderby=orderby: index=true, sort=desc, orderby=@jcr:content/cq:lastModified}
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query: //*[((jcr:content/@pathTags = 'portal' and jcr:content/@pathTags = 'de' and jcr:content/@pathTags = 'testing'))] order by jcr:content/@cq:lastModified descending
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query took 328 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl >> xpath query returned 2643 results
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl entire query execution took 328 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.xxx.wcm.components.List Prep and getResult : 328ms
    Example 2:  Offset 50, query takes 312ms, results returned in 890ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.xxx.wcm.components.List Time to prepare : 0ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (URL):
    group.0_property=jcr%3acontent%2fpathTags&group.0_property.0_value=testing&group.0_property.1_value=portal&group.0_property.2_value=de&group.0_property.and=true&group.p.or=true&orderby=%40jcr%3acontent%2fcq%3alastModified&orderby.index=true&orderby.sort=desc&p.limit=25&p.offset=50
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (predicate tree):
    ROOT=group: limit=25, offset=50[
        {group=group: or=true[
            {0_property=property: 1_value=portal, 2_value=de, property=jcr:content/pathTags, 0_value=testing, and=true}
        {orderby=orderby: index=true, sort=desc, orderby=@jcr:content/cq:lastModified}
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query: //*[((jcr:content/@pathTags = 'portal' and jcr:content/@pathTags = 'de' and jcr:content/@pathTags = 'testing'))] order by jcr:content/@cq:lastModified descending
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query took 312 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl >> xpath query returned 2643 results
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl entire query execution took 312 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.xxx.wcm.components.List Prep and getResult : 890ms
    Example 3: Offset 2625, query takes 359ms, results returned in 10250ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.xxx.wcm.components.List Time to prepare : 0ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (URL):
    group.0_property=jcr%3acontent%2fpathTags&group.0_property.0_value=testing&group.0_property.1_value=portal&group.0_property.2_value=de&group.0_property.and=true&group.p.or=true&orderby=%40jcr%3acontent%2fcq%3alastModified&orderby.index=true&orderby.sort=desc&p.limit=25&p.offset=2625
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (predicate tree):
    ROOT=group: limit=25, offset=2625[
        {group=group: or=true[
            {0_property=property: 1_value=portal, 2_value=de, property=jcr:content/pathTags, 0_value=testing, and=true}
        {orderby=orderby: index=true, sort=desc, orderby=@jcr:content/cq:lastModified}
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query: //*[((jcr:content/@pathTags = 'portal' and jcr:content/@pathTags = 'de' and jcr:content/@pathTags = 'testing'))] order by jcr:content/@cq:lastModified descending
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query took 359 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl >> xpath query returned 2643 results
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl entire query execution took 359 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.xxx.wcm.components.List Prep and getResult : 10250ms
