Query run: takes ages to execute!

Hi,
I created a query on a cube which has roughly 497,000 records.
When I try to execute it, a pop-up message comes up after half an hour or so saying that the result has exceeded xxxxxx rows, and then after a while a confirmation that the result is incomplete.
Even after these two messages it has never produced any output; it is still running.
What can be done to see the results of the query?
I removed all the characteristics from the rows area except one and placed them in the free characteristics area, but the issue remains.
Any suggestions?
Thanks,
Ravi

1. Run your query in RSRT with the Execute + Debug option.
2. Check "Display SQL Query" on the Database tab.
3. View the result and you will see which aggregates / tables the query uses and which selections it passes to the database (a sketch of what that SQL can look like follows below).
4. Based on the generated SQL you can fine-tune your selection.
5. Try running the query with a smaller selection range.
Hope this helps...
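For orientation only, the statement RSRT displays on the Database tab is typically a star join between the cube's fact table and its dimension tables. The sketch below is hypothetical: the cube name ZSALES and every table and column name in it are made up for illustration, and the real statement generated for your cube will differ.

-- Illustrative only: the kind of star-join SQL RSRT can show for a cube "ZSALES".
SELECT d1."SID_0CALMONTH",
       SUM(f."/BIC/ZAMOUNT")
  FROM "/BIC/FZSALES"  f                -- fact table of the cube
  JOIN "/BIC/DZSALEST" d1               -- time dimension table of the cube
    ON f."KEY_ZSALEST" = d1."DIMID"
 WHERE d1."SID_0CALMONTH" BETWEEN 201401 AND 201412
 GROUP BY d1."SID_0CALMONTH";

Comparing the table names in the real statement with your aggregates tells you whether an aggregate was hit or the query went straight to the cube's fact table.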

Similar Messages

  • SSRS 2008 R2 is extremely slow. The query runs in less than a second in the dataset designer but if you try to view the report it takes over 10 minutes. I have read this is a bug in SSRS 2008 R2. We installed the most recent patches and service packs.

    SSRS 2008 R2 is extremely slow.  The query runs in less than a second in the dataset designer but if you try to view the report it takes over 10 minutes.  I have read this is a bug in SSRS 2008 R2.  We installed the most recent patches and
    service packs.  Nothing we've done so far has fixed it and I see that I'm not the only person with this problem.  However I don't see any answers either.

    Hi Kim Sharp,
    According to your description, viewing the report is extremely slow in SSRS 2008 R2, but the same query runs in less than a second in the dataset designer, right?
    I have tested on my local environment and cannot reproduce the issue. This is clearly a performance issue; rendering performance can be affected by a combination of factors that include hardware, the number of concurrent users accessing reports, the amount
    of data in a report, the design of the report, and the output format. If your report has parameters whose value lists contain many entries, the bad performance you mention is a known issue on 2008 R2 that already has a hotfix:
    http://support.microsoft.com/kb/2276203
    If any issue remains after applying the update, I recommend submitting feedback at https://connect.microsoft.com/SQLServer/
    If that is not your situation, you can still improve performance through report design, because how you create and update reports affects how fast the report renders.
    The Report Server ExecutionLog2 view contains report performance data. You can use the query below to see where the report processing time is being spent:
    USE ReportServer
    SELECT TOP 10 ReportPath, Parameters,
           TimeDataRetrieval + TimeProcessing + TimeRendering AS [total time],
           TimeDataRetrieval, TimeProcessing, TimeRendering,
           ByteCount, [RowCount], Source, AdditionalInfo
    FROM ExecutionLog2
    ORDER BY TimeStart DESC
    After you determine whether the delay is in data retrieval, report processing, or report rendering, use the methods below to troubleshoot accordingly:
    Troubleshooting Reports: Report Performance
    Besides this, you could also follow these articles for more information about this issue:
    Report Server Catalog Best Practices
    Performance, Snapshots, Caching (Reporting Services)
    Similar thread for your reference:
    SSRS slow
    Any problem, please feel free to ask
    Regards
    Vicky Liu

  • My new iPhone is running extremely slow: internet takes ages to load, apps take ages to install, and it has trouble connecting to the App Store. The apps don't work properly either (eBay won't open or refresh items). Please help

    My new iPhone is running extremely slow; the internet takes ages to load, apps take ages to install, and it has trouble connecting to the App Store. The apps don't work properly either (eBay won't open or refresh items). Please help.
    My iPhone 4 is 10 times faster.
    I have backed up on iTunes and restored, but still no luck.
    Even Siri is lagging.

  • FOR UPDATE causing query to take very long to execute.. What can we do ??

    SELECT cell_data
    FROM csv_workfile
    WHERE cell_row = p_r
    AND cell_column = p_c
    FOR UPDATE;
    This is our query; it takes very long to execute.
    What can we do to get it working properly?
    This is really urgent.
    Regards

    Hi,
    first ask yourself whether a SELECT FOR UPDATE is really necessary. If so, try FOR UPDATE OF <attribute>. If many users access and update this table, try the NOWAIT option: your process will not block when another session holds a lock; instead you get error ORA-00054 and can do other things while waiting.
    Keep in mind that locks are only released after COMMIT.
    But keep asking yourself whether you really need it; row locking can be very time consuming, so avoid it if you can.
    Bye,
    Holger
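    To make that concrete, here is a minimal sketch using the table and columns from the question; the choice of cell_data as the locked column and the NOWAIT handling are only illustrative:
    -- Lock only the rows you need, name the column you intend to update, and fail
    -- immediately with ORA-00054 instead of waiting if another session holds a lock.
    SELECT cell_data
      FROM csv_workfile
     WHERE cell_row = p_r
       AND cell_column = p_c
       FOR UPDATE OF cell_data NOWAIT;
    In PL/SQL you can trap ORA-00054 (exception code -54) and retry later instead of letting the whole call hang.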

  • Need to write a query that takes more than 2 minutes to execute

    Hi ,
    I am using Oracle 10g as my DataBase.
    I am writing a small program for testing purposes, and my requirement is to write a query that takes more than 2 minutes to execute. Right now I have only a small table called "Users" with very little data.
    Please let me know how I can achieve this.
    Thanks .

    So please tell me, how can I achieve this. Thanks in advance.
    P. Forstman's example above will probably be more reliable, but here is an example of my idea (harder to control the timing, untested):
    -- The ||'' concatenations stop the optimizer from using indexes on these
    -- dictionary views, forcing a slow join across three large views.
    select count(*)
      from dba_objects o, dba_tables t, dba_tab_columns tc
     where o.object_name||'' = t.table_name||''
       and o.owner||'' = t.owner||''
       and t.table_name||'' = tc.table_name
       and t.owner||'' = tc.owner||'';
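    If the goal is simply a predictable delay rather than a genuinely heavy statement, another option (not mentioned in the thread) is to call DBMS_LOCK.SLEEP from a function, so that a plain SELECT takes as long as you want. This assumes you have EXECUTE privilege on DBMS_LOCK, which is not granted by default, and take_your_time is just a made-up name:
    -- A function that sleeps, callable from SQL, so the SELECT itself runs slowly.
    CREATE OR REPLACE FUNCTION take_your_time(p_seconds IN NUMBER) RETURN NUMBER IS
    BEGIN
      DBMS_LOCK.SLEEP(p_seconds);
      RETURN p_seconds;
    END;
    /
    SELECT take_your_time(130) FROM dual;   -- a little over 2 minutes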

  • Since I've updated my iPad to iOS 6 it's running a lot slower especially when I scroll up and down and takes ages to load up on the App Store I know they got glitches but are they gonna be sorting this out as well ?

    Since I've updated my iPad to iOS 6 it's running a lot slower especially when I scroll up and down and takes ages to load up on the App Store I know they got glitches but are they gonna be sorting this out as well ?

    Hi!
    I am also having issues with the device since updating to iOS 6. The time between tapping an icon, e.g. a picture, and that picture loading is long enough to notice. It did not happen before the update.
    I am also having problems with some applications, especially the Facebook application. When I tap my picture in the right bar of the app, it prompts a window asking me to share my location, etc., and freezes there.
    Regarding resetting the device: already done. It did not help.
    Also, Siri is very nice, but sometimes it stops working and I have to restart the device in order to be able to use Siri again. When that happens, it looks like Siri is sending the information, but nothing happens afterwards.
    It is understandable that they need some time to get the new iOS version working properly, but I don't like being a beta tester.

  • FAST enabled MView takes ages to refresh

    Hi All,
    I have a materialized view which is based on a single fact table (OAS_BALANCE_UNIONED, 11,445,156 rows) and a bunch of dimension tables, each containing a limited number of rows (a typical star schema).
    The conditions to make the mview fast refreshable have been met, so the mview was created successfully with the FAST option.
    The mview then worked just fine and had been refreshing FAST (6 minutes) for a whole day, until other mviews started to refresh at the same time. Some of those mviews read from some of the base tables of the mview in question. Since then, the mview has been taking ages (3 hours) to refresh. Something is surely wrong, because even a COMPLETE refresh was faster than that.
    Later, I stopped all the refresh jobs of the other mviews. Yet the FAST mview still takes ages to refresh.
    When I monitored the session that refreshes the mview, I noticed in the "Long Ops" view that most of the time is spent on a step named "Hash Join" (it appears in the Message column). The hash join occurs because of the outer joins.
    Why has my FAST mview suddenly started to take so long to refresh?
    How can I make my FAST mview really refresh fast?
    Technical Details are below:
    OS: Windows 2003 32-bit
    DB version: Oracle 10g R1 ( 10.1.0.2.0 )
    CREATE MATERIALIZED VIEW FACT_BAL_MV
    BUILD DEFERRED
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    AS
    SELECT B.cmpcode  BAL_KEY,
           B.CMPCODE,
           B.YR,
           B.PERIOD,
           B.BALCODE,
           B.CURCODE,
           B.REPBASIS,
           TO_NUMBER (B.YR || LPAD (B.PERIOD, 2, 0)) DIM_PERIOD_MV,
           D_CMP.DIM_KEY DIM_COMPANY,
           D_CUR.DIM_KEY DIM_CURRENCY,
           D_EL_1.DIM_KEY DIM_EL_1,
           D_EL_2.DIM_KEY DIM_EL_2,
           D_EL_3.DIM_KEY DIM_EL_3,
           D_EL_4.DIM_KEY DIM_EL_4,
           GRP_1_P_EL.DIM_KEY DIM_GRP_1_P_EL,
           D_GRP_1_B_EL.DIM_KEY DIM_GRP_1_B_EL,
           D_GRP_1_K_EL.DIM_KEY DIM_GRP_1_K_EL,
           D_GRP_2_D_EL.DIM_KEY DIM_GRP_2_D_EL,
           D_GRP_3_E_EL.DIM_KEY D_GRP_3_E_EL,
           B.ROWID X_OB_ROWID,
           D_CMP.ROWID X_CMP_ROWID,
           D_CUR.ROWID X_CUR_ROWID,
           D_EL_1.ROWID X_EL_1_ROWID,
           D_EL_2.ROWID X_EL_2_ROWID,
           D_EL_3.ROWID X_EL_3_ROWID,
           D_EL_4.ROWID X_EL_4_ROWID,
           GRP_1_P_EL.ROWID X_GRP_1_P_EL_ROWID,
           GRP_1_P.ROWID X_GRP_1_P_ROWID,
           D_GRP_1_B_EL.ROWID X_GRP_1_B_EL_ROWID,
           D_GRP_1_B.ROWID X_GRP_1_B_ROWID,
           D_GRP_1_K_EL.ROWID X_GRP_1_K_EL_ROWID,
           D_GRP_1_K.ROWID X_GRP_1_K_ROWID,
           D_GRP_2_D_EL.ROWID X_GRP_2_D_EL_ROWID,
           D_GRP_2_D.ROWID X_GRP_2_D_ROWID,
           D_GRP_3_E_EL.ROWID X_GRP_3_E_EL_ROWID,
           D_GRP_3_E.ROWID X_GRP_3_E_ROWID
      FROM CLVE_STAGING.OAS_BALANCE_UNIONED B,
           FINANCE.DW_DIM_COMPANY D_CMP,
           FINANCE.DW_DIM_CURRENCY D_CUR,
           FINANCE.DW_DIM_EL_1 D_EL_1,
           FINANCE.DW_DIM_EL_2 D_EL_2,
           FINANCE.DW_DIM_EL_3 D_EL_3,
           FINANCE.DW_DIM_EL_4 D_EL_4,
           FINANCE.DW_DIM_GRP_1_P_EL GRP_1_P_EL,
           FINANCE.DW_DIM_GRP_1_P GRP_1_P,
           FINANCE.DW_DIM_GRP_1_B_EL D_GRP_1_B_EL,
           FINANCE.DW_DIM_GRP_1_B D_GRP_1_B,
           FINANCE.DW_DIM_GRP_1_K_EL D_GRP_1_K_EL,
           FINANCE.DW_DIM_GRP_1_K D_GRP_1_K,
           FINANCE.DW_DIM_GRP_2_D_EL D_GRP_2_D_EL,
           FINANCE.DW_DIM_GRP_2_D D_GRP_2_D,
           FINANCE.DW_DIM_GRP_3_E_EL D_GRP_3_E_EL,
           FINANCE.DW_DIM_GRP_3_E D_GRP_3_E
    WHERE     B.CMPCODE = D_CMP.CODE
           AND (B.CMPCODE = D_CUR.CMPCODE(+) AND B.CURCODE = D_CUR.CODE(+))
           AND (B.CMPCODE = D_EL_1.CMPCODE(+) AND B.EL1 = D_EL_1.EL_CODE(+))
           AND (B.CMPCODE = D_EL_2.CMPCODE(+) AND B.EL2 = D_EL_2.EL_CODE(+))
           AND (B.CMPCODE = D_EL_3.CMPCODE(+) AND B.EL3 = D_EL_3.EL_CODE(+))
           AND (B.CMPCODE = D_EL_4.CMPCODE(+) AND B.EL3 = D_EL_4.EL_CODE(+))
           AND (GRP_1_P_EL.CMPCODE = GRP_1_P.CMPCODE(+)
                AND GRP_1_P_EL.GRP_CODE = GRP_1_P.GRP_CODE(+))
           AND (B.CMPCODE = GRP_1_P_EL.CMPCODE(+)
                AND B.EL1 = GRP_1_P_EL.EL_CODE(+))
           AND (D_GRP_1_B_EL.CMPCODE = D_GRP_1_B.CMPCODE(+)
                AND D_GRP_1_B_EL.GRP_CODE = D_GRP_1_B.GRP_CODE(+))
           AND (B.CMPCODE = D_GRP_1_B_EL.CMPCODE(+)
                AND B.EL1 = D_GRP_1_B_EL.EL_CODE(+))
           AND (D_GRP_1_K_EL.CMPCODE = D_GRP_1_K.CMPCODE(+)
                AND D_GRP_1_K_EL.GRP_CODE = D_GRP_1_K.GRP_CODE(+))
           AND (B.CMPCODE = D_GRP_1_K_EL.CMPCODE(+)
                AND B.EL1 = D_GRP_1_K_EL.EL_CODE(+))
           AND (D_GRP_2_D_EL.CMPCODE = D_GRP_2_D.CMPCODE(+)
                AND D_GRP_2_D_EL.GRP_CODE = D_GRP_2_D.GRP_CODE(+))
           AND (B.CMPCODE = D_GRP_2_D_EL.CMPCODE(+)
                AND B.EL2 = D_GRP_2_D_EL.EL_CODE(+))
           AND (D_GRP_3_E_EL.CMPCODE = D_GRP_3_E.CMPCODE(+)
                AND D_GRP_3_E_EL.GRP_CODE = D_GRP_3_E.GRP_CODE(+))
           AND (B.CMPCODE = D_GRP_3_E_EL.CMPCODE(+)
                AND B.EL3 = D_GRP_3_E_EL.EL_CODE(+));
    CREATE INDEX FINANCE.DW_FACT_BAL_MV_INX
    ON FACT_BAL_MV (X_OB_ROWID);
    SQL> select count(X_OB_ROWID) from FACT_BAL_MV ;
    COUNT(X_OB_ROWID)
             11444816
    Elapsed: 00:00:27.85
    SQL> select bytes/1024/1024/1024 GB from
    user_segments where segment_name='FACT_BAL_MV';
            GB
    4.4296875

    Hard to say why the refresh is now slow. Possibilities include:
    * system resources consumed by the fast refresh
    * something changed that makes the underlying query run slowly
    Look at the query first: 18 tables, with almost every join an outer join, and some of the tables appear more than once in the query.
    Get an execution plan to see how the query is being executed.
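    As a rough sketch of that last step (the v$ views need SELECT privileges, and DBMS_XPLAN.DISPLAY_CURSOR is available from 10g onwards), you can find the statement the refresh is running, pull its plan from the cursor cache, and watch the hash join progress:
    -- Find the SQL that the running refresh is executing.
    SELECT sql_id, child_number, sql_text
      FROM v$sql
     WHERE sql_text LIKE '%FACT_BAL_MV%'
       AND sql_text NOT LIKE '%v$sql%';
    -- Show the actual execution plan of that statement.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id'));
    -- Progress of the long-running hash join mentioned in the question.
    SELECT opname, message, sofar, totalwork, time_remaining
      FROM v$session_longops
     WHERE time_remaining > 0;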

  • Query Timeout on stored procedure executed from Access 2010

    I am trying to delete old jobs from a SQL 2008 R2 database. I created a stored procedure on the server that deletes all jobs over 1 year old. The delete can take several minutes to run, as it deletes records in several related tables. I am getting an error:
    "2147217871 - [Microsoft][ODBC SQL Server Driver]Query timeout expired". The query runs fine on the server. I have tried setting the client timeout to 300 sec, but it times out well before 5 minutes. I am not sure what else I can do to fix this problem. I am hoping someone has seen this and figured out a fix.
    Here is the code:
    Public Sub Cleanup_Database()
    On Error GoTo CleanUp_Err
        Dim cmd As ADODB.Command
        Dim ODBC_conn As String
        Set cmd = New ADODB.Command
        ODBC_conn = "ODBC;Description=testbox2;DRIVER=SQL Server;" & _
                    "SERVER=O2GMSAPPTEST\SQL122DEVL;Trusted_Connection=Yes;" & _
                    "APP=Microsoft Office 2010;DATABASE=IMB_TraceData;StatsLog_On=Yes"
        cmd.ActiveConnection = ODBC_conn
        cmd.CommandType = adCmdStoredProc
        cmd.CommandText = "DataBase_Cleanup"
        cmd.Execute
        Exit Sub    ' don't fall through into the error handler on success
    CleanUp_Err:
        Dim i As Long
        Dim str As String
        str = ""
        For i = 0 To Errors.Count - 1
            str = str & Errors(i).Number & "-" & Errors(i).Description & " " & vbNewLine
        Next i
        If str = "" Then
            str = Err.Number & " - " & Err.Description
        End If
        MsgBox str, , "Trace Update"
    End Sub
      

    You didn't say how you were setting the client timeout, but this has worked for me in an adp.
    'Temporarily increase query timeout, which is an application-wide setting
    'CurrentProject.Connection.CommandTimeout = 60 'Too late to change this setting- no effect
    Const cstrTimeoutOptionName As String = "OLE/DDE Timeout (Sec)"
    Const clngTimeoutSecondsForQuery As Long = 300
    strQueryTimeOutOriginal = Application.GetOption(OptionName:=cstrTimeoutOptionName)
    Application.SetOption cstrTimeoutOptionName, CStr(clngTimeoutSecondsForQuery)
    Paul

  • Stopping a Query taking more time to execute in runtime in Oracle Forms.

    Hi,
    In the present application, one of the Oracle Forms screens takes a long time to execute a query, and the user wants an option to stop the query partway through and browse the result (whatever has been fetched before stopping the query).
    We have tried three approaches:
    1. Set the maximum fetched records at form and block level.
    2. Set the maximum fetch time at form and block level.
    The above two methods do not provide an appropriate solution for us.
    3. The third approach was setting the interaction mode to "NON BLOCKING" at the form level.
    This seemed to work: while the query took a long time to execute, the Oracle application server prompted a message to press Esc to cancel the query and displayed the results fetched up to that point.
    The drawback is that pressing Esc kills the session itself, which causes the entire application to collapse.
    Please suggest whether there is any alternative approach or how to overcome this particular scenario.
    This kind of facility is already present in TOAD and PL/SQL Developer, where we can stop an executing query and browse the results fetched up to that point. Is a similar facility available in Oracle Forms? Please suggest.
    Thanks and Regards,
    Suraj
    Edited by: user10673131 on Jun 25, 2009 4:55 AM

    Hello Friend,
    Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations; maybe they can help:
    1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V: never use a view inside such a long query, because a view is just a window onto the records,
    and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
    First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
    Please check the possibility of finding that through other AR tables.
    2. Remove the _ALL tables and instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
    For example: for ra_cust_trx_types_all use ra_cust_trx_types.
    This ensures that the query executes only for those ORG_IDs which are assigned to that responsibility.
    3. Check with the DBA whether GATHER SCHEMA STATS has been run, at least for the ONT and RA tables (a short sketch follows at the end of this reply).
    You can also check this yourself using
    SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL';
    If the tables are not analyzed, the CBO will not be able to tune your query.
    4. Try to remove the DISTINCT keyword. This is the MAJOR reason for this problem.
    5. If it is a report, try to separate the logic into separate queries (using a procedure), populate the whole data set into a custom table, and use this custom table for generating the report.
    Thanks,
    Neeraj Shrivastava
    [email protected]
    Edited by: user9352949 on Oct 1, 2010 8:02 PM
    Edited by: user9352949 on Oct 1, 2010 8:03 PM
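    A minimal sketch for point 3 above. The schema name AR is the usual EBS owner of RA_CUSTOMER_TRX_ALL; adjust it to your environment, and run the gather step only with the DBA's agreement (or via the standard "Gather Schema Statistics" concurrent program):
    -- When were the table's statistics last gathered?
    SELECT owner, table_name, last_analyzed
      FROM all_tables
     WHERE table_name = 'RA_CUSTOMER_TRX_ALL';
    -- Refresh the statistics for this one table, including its indexes.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'AR',
                                    tabname => 'RA_CUSTOMER_TRX_ALL',
                                    cascade => TRUE);
    END;
    /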

  • Slow query running against DBA_OBJECTS in 10.1.0.4

    Running the following query takes ages to return a result (in fact I haven't even bothered waiting for it to return a result):
    SELECT a.object_type, a.object_name, b.owner, b.object_type,
           b.object_name, b.object_id, b.status
      FROM SYS.dba_objects a,
           SYS.dba_objects b,
           (SELECT object_id, referenced_object_id
              FROM public_dependency
             START WITH object_id =
                        (SELECT object_id
                           FROM SYS.dba_objects
                          WHERE owner = :owner
                            AND object_name = :object
                            AND object_type = :type)
           CONNECT BY PRIOR referenced_object_id = object_id) c
     WHERE a.object_id = c.object_id
       AND b.object_id = c.referenced_object_id
       AND a.owner NOT IN ('SYS', 'SYSTEM')
       AND b.owner NOT IN ('SYS', 'SYSTEM')
       AND a.object_name <> 'DUAL'
       AND b.object_name <> 'DUAL';
    If I add an /*+ ALL_ROWS */ hint though I get a result almost instantly. Hints in Oracle 10g are considered to be bad form (?) so what do I do? Is this a badly written query or might it be a database configuration issue?
    Any hints (pun intended) would be greatly appreciated.
    Richard

    The data dictionary views are not really meant to be joined together, not even to themselves, although everybody does it.
    Hints, per se, are not necessarily bad. There are "good" hints that give the optimizer more information, either about the tables involved or about the intent of the query, without unduly limiting its scope for generating a query plan. The ALL_ROWS hint (and its opposite, FIRST_ROWS) are such hints. The ALL_ROWS hint tells the optimizer that I am willing to wait longer for the first row to come back if that means I get the last row faster. It will make the optimizer tend to use more full scans and hash joins as opposed to nested loop joins and index access. Another "good" hint would be the CARDINALITY hint when used on a GTT or a TABLE() cast.
    An index hint, or USE_NL, or hints of that type tend to be "bad" hints in that they tell the optimizer how to access the data. They might work today, but if something changes tomorrow, the hint is still forcing an execution plan.
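    For reference, this is roughly what those "good" hints look like in practice. The second statement is purely illustrative: my_id_list_fn stands in for any function returning a collection of numbers, and is only there to show where a CARDINALITY hint goes.
    -- Optimize for total throughput rather than time to first row.
    SELECT /*+ ALL_ROWS */ a.object_type, a.object_name
      FROM dba_objects a
     WHERE a.owner NOT IN ('SYS', 'SYSTEM');
    -- Tell the optimizer how many rows a TABLE() cast will really return,
    -- since it cannot estimate that on its own.
    SELECT /*+ CARDINALITY(t 10000) */ t.column_value
      FROM TABLE(my_id_list_fn()) t;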
    Try searching asktom for "good hints", you should get a few hits with his opinions on good and bad hints.
    HTH
    John

  • What to check in my query running slow

    Hi,
    Our 11g database was migrated from one platform to another. A query which ran very quickly before on the source platform now takes ages to run on the target platform. Please could you point me in the right direction as to what things I should check to fix this problem?
    thank you

    Hi,
    Following are the areas which can be explored; I hope this helps (a short sketch of steps 1 and 5 follows at the end of this reply).
    1) Check the explain plan on both servers. I hope you know how to gather the explain plan for queries; it is better to use DBMS_XPLAN.
    - Most probably the explain plans will be different.
    2) Check the init parameters of the source and target databases.
    3) Check the optimizer parameters of the source and target databases: parameters like optimizer_index_cost_adj, sort_area_size, hash_area_size, optimizer_mode, cursor_sharing, etc.
    4) Check the data volume of the source and target tables.
    5) Check the statistics of the source and target tables. It is cumbersome to compare statistics between source and target, so what you can do is:
    a) Export the statistics of the target tables.
    b) Import those statistics into the source tables (before importing, take a backup of the source table statistics).
    c) Once done, check the explain plan; it should now be the same as on the target database.
    - If step c) confirms that statistics are the problem, then export the source table statistics and import them into the target to get the same explain plan.
    HTH
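    A minimal sketch of steps 1 and 5; MYSCHEMA, MY_SLOW_TABLE and MYSTATS are placeholder names, and the EXPLAIN PLAN should be run on both databases so the plans can be compared.
    -- Step 1: capture and display the plan of the slow statement.
    EXPLAIN PLAN FOR
    SELECT COUNT(*) FROM myschema.my_slow_table;   -- put the real slow query here
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- Step 5: move optimizer statistics between databases via a statistics table.
    BEGIN
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'MYSCHEMA', stattab => 'MYSTATS');
      DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MY_SLOW_TABLE',
                                    stattab => 'MYSTATS');
    END;
    /
    -- ...copy the MYSTATS table to the other database (e.g. with export/import), then:
    BEGIN
      DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MY_SLOW_TABLE',
                                    stattab => 'MYSTATS');
    END;
    /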

  • Manage query - run time errors

    Hi,
    We have one user, working at one desktop, who receives the error "run-time error 5" followed by "401 Automation error" when selecting the Manage Query wizard from the BPC Excel action pane.
    We only see this on one desktop. Any ideas?
    Dries

    Hi,
    You can check the query run time using transaction RSRT:
    1. Go to transaction RSRT.
    2. Enter your query name.
    3. Click the "Execute + Debug" button.
    4. Various check boxes for debug options will appear on the screen.
    5. Under the "Others" node you will find the check box "Display Statistics Data".
    6. Tick this check box and click Continue.
    7. This will execute the query and show the selection screen, if any.
    8. Once the query has completely executed, click the "Back" button or simply hit F3.
    9. This takes you to the "Statistics Data for Query Runtime" screen.
    10. Here you can take the total of the "Duration" column to get the total duration of the query execution.
    Please refer following link for details:
    [http://help.sap.com/saphelp_nw70/helpdata/en/43/e3807a6df402d3e10000000a1553f7/frameset.htm]
    Hope this answers your query.
    - Geetanjali

  • How to measure query run time and monitor performance

    Hi All,
    A simple question: how do I measure query run time and monitor performance? I want to see parameters like how long the query took to execute, how much space it used, etc.
    Thank you.

    hi,
    some ways:
    1. use transaction ST03, expert mode.
    2. tables RSDDSTAT*.
    3. install BW statistics (technical content).
    There are docs on this; see also the BI performance knowledge center.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    BW Performance Tuning Knowledge Center - SAP Developer Network (SDN)
    Business Intelligence Performance Tuning [original link is broken]
    also take a look
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/31b6b490-0201-0010-e4b6-a1523327025e
    Prakash's weblog on this topic..
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'.

  • Query Running Slow due to nvl.

    I have a cursor-based query written as a procedure. When I invoke that procedure, I found that two conditions are what make my query run very slow.
    Because these are handled with NVL, the query runs very slowly: currently it takes more than one hour to execute, but if I comment out these two conditions and run the query, it takes only 20 seconds to complete.
    The two conditions are
    'and rbsa.batch_source_id = nvl(p_source_type_id, rbsa.batch_source_id)'
    'and rsa.salesrep_id between nvl(p_from_salesrep_id, rsa.salesrep_id) and nvl(p_to_salesrep_id, rsa.salesrep_id)'
    Is there any alternative way to express these two conditions?
    Thanks in advance...

    Dear Friend,
    Please try replacing nvl(p_source_type_id, rbsa.batch_source_id) with decode(p_source_type_id, NULL, rbsa.batch_source_id, p_source_type_id); see the fragment sketched below.
    It should speed up your query.
    Regards
    Ahamed Rafeeque Cherkala
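    For illustration, here are both predicates from the question rewritten along those lines. This is only the WHERE-clause fragment to slot back into the existing cursor; the second form makes the "parameter not supplied" case explicit, which the optimizer can often handle better still (it differs from the original only if salesrep_id itself can be NULL):
    -- DECODE form of the two filters.
    AND rbsa.batch_source_id = DECODE(p_source_type_id, NULL, rbsa.batch_source_id, p_source_type_id)
    AND rsa.salesrep_id BETWEEN DECODE(p_from_salesrep_id, NULL, rsa.salesrep_id, p_from_salesrep_id)
                            AND DECODE(p_to_salesrep_id,   NULL, rsa.salesrep_id, p_to_salesrep_id)
    -- Equivalent form with explicit NULL checks.
    AND (p_source_type_id IS NULL OR rbsa.batch_source_id = p_source_type_id)
    AND (p_from_salesrep_id IS NULL OR rsa.salesrep_id >= p_from_salesrep_id)
    AND (p_to_salesrep_id   IS NULL OR rsa.salesrep_id <= p_to_salesrep_id)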

  • To optimize query run-time

    Hi,
    I have an SQL query that accesses a single table containing approx. 10 million records. The table has 4 columns (there are no indexes defined on any of them): 'new_id', 'time', 'access_code' and 'graph_qty', with 'time' being of the form 'dd-mm-yyyy hh:mm:ss'. The idea is to scan the table to retrieve records whose 'time' and 'access_code' match the conditions in the where-clause and then de-normalize these records based on 'access_code'. The resulting set has 'access_code' de-normalized into columns, with each 'access_code' getting its own column.
    The query runs for about 30 minutes.
    /* '&current_date' is a pre-defined substitution variable containing a date (assume sysdate) */
    SELECT
    new_id
    , trunc(time) AS time
    , SUM(CASE WHEN access_code = '100' THEN graph_qty END ) AS graph_1
    , SUM(CASE WHEN access_code = '200' THEN graph_qty END ) AS graph_2
    , SUM(CASE WHEN access_code = '300' THEN graph_qty END ) AS graph_3
    , SUM(CASE WHEN access_code = '400' THEN graph_qty END ) AS graph_4
    , SUM(CASE WHEN access_code = '500' THEN graph_qty END ) AS graph_5
    , SUM(CASE WHEN access_code = '600' THEN graph_qty END ) AS graph_6
    , SUM(CASE WHEN access_code = '700' THEN graph_qty END ) AS graph_7
    , SUM(CASE WHEN access_code = '800' THEN graph_qty END ) AS graph_8
    , SUM(CASE WHEN access_code = '900' THEN graph_qty END ) AS graph_9
    , SUM(CASE WHEN access_code = '1000' THEN graph_qty END ) AS graph_10
    FROM
    dummy_table
    WHERE trunc(time) IN ( '&current_date'
    , ADD_MONTHS('&current_date',-1)
    , ADD_MONTHS('&current_date',-3)
    , ADD_MONTHS('&current_date',-6)
    , ADD_MONTHS('&current_date',-12)
    , ADD_MONTHS('&current_date',-13)
    , ADD_MONTHS('&current_date',-15)
    , ADD_MONTHS('&current_date',-18)
    , ADD_MONTHS('&current_date',-24) )
    AND access_code IN ('100','200','300','400','500','600','700','800','900','1000')
    GROUP BY
    new_id
    , time
    Please suggest ways to reduce the query run-time.
    Thanks,
    kartik

    Does this 20 min include the time to display the data?
    How long does it take to execute the following queries?
    select /*+ full(t) */ count(*) from dummy_table t;
    select count(*) from (
    SELECT
    new_id
    , trunc(time) AS time
    FROM
    dummy_table
    WHERE trunc(time) IN ( '&current_date'
    , ADD_MONTHS('&current_date',-1)
    , ADD_MONTHS('&current_date',-3)
    , ADD_MONTHS('&current_date',-6)
    , ADD_MONTHS('&current_date',-12)
    , ADD_MONTHS('&current_date',-13)
    , ADD_MONTHS('&current_date',-15)
    , ADD_MONTHS('&current_date',-18)
    , ADD_MONTHS('&current_date',-24) )
    AND access_code IN ('100','200','300','400','500','600','700','800','900','1000')
    );
    select count(*) from (
    SELECT
    new_id
    , trunc(time) AS time
    , SUM(CASE WHEN access_code = '100' THEN graph_qty END ) AS graph_1
    , SUM(CASE WHEN access_code = '200' THEN graph_qty END ) AS graph_2
    , SUM(CASE WHEN access_code = '300' THEN graph_qty END ) AS graph_3
    , SUM(CASE WHEN access_code = '400' THEN graph_qty END ) AS graph_4
    , SUM(CASE WHEN access_code = '500' THEN graph_qty END ) AS graph_5
    , SUM(CASE WHEN access_code = '600' THEN graph_qty END ) AS graph_6
    , SUM(CASE WHEN access_code = '700' THEN graph_qty END ) AS graph_7
    , SUM(CASE WHEN access_code = '800' THEN graph_qty END ) AS graph_8
    , SUM(CASE WHEN access_code = '900' THEN graph_qty END ) AS graph_9
    , SUM(CASE WHEN access_code = '1000' THEN graph_qty END ) AS graph_10
    FROM
    dummy_table
    WHERE trunc(time) IN ( '&current_date'
    , ADD_MONTHS('&current_date',-1)
    , ADD_MONTHS('&current_date',-3)
    , ADD_MONTHS('&current_date',-6)
    , ADD_MONTHS('&current_date',-12)
    , ADD_MONTHS('&current_date',-13)
    , ADD_MONTHS('&current_date',-15)
    , ADD_MONTHS('&current_date',-18)
    , ADD_MONTHS('&current_date',-24) )
    AND access_code IN ('100','200','300','400','500','600','700','800','900','1000')
    GROUP BY
    new_id
    , time
    );
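    One option the thread does not spell out: since the table has no indexes and the filter is on TRUNC(time) plus access_code, a function-based index matching that predicate may cut the full scan down considerably. This is a sketch only; compare the counts above and the execution plan before and after, since the index only pays off if the nine selected days are a small fraction of the 10 million rows.
    -- Index matching the WHERE clause: TRUNC(time) first, then access_code.
    CREATE INDEX dummy_table_trunc_time_idx
        ON dummy_table (TRUNC(time), access_code);
    -- Refresh statistics so the optimizer can consider the new index.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'DUMMY_TABLE', cascade => TRUE);
    END;
    /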
