Poor report query performance

Hi Team, below is the report view that is causing slowness. Please let me know if you have any suggestions.
CREATE VIEW [REPORT].[View_MachinePerformanceBySlot] AS  
SELECT  
      VA.SITE_NUM, 
       VA.SLOT_NUMBER, 
       VA.AREA_NAME,        
       VA.MANUFACTURER_NAME, 
       VA.ATYP_ID, 
       VA.HOLD_PERCENTAGE, 
       VA.SLOT_DENOM, 
       VA.SERIAL_NUM, 
       VA.AREA_ID, 
       a.THEM_NAME, 
    a.GAME_NAME, 
       a.TCAT_LONG_NAME, 
       a.TGRP_LONG_NAME, 
       VA.MTYP_NAME, 
       VA.OWNER_LABEL_KEY, 
       vsmr.SDS_Bets AS HANDLE, 
       vsmr.SDS_Plays AS HANDLE_PULL, 
       vsmr.Days_On AS DAYS_ACTIVE, 
       VA.GAME_TYPE, 
       Mtr_NamedAsstID, 
       Mtr_GameDay AS MVR_GDAY_DATE, 
    vsmr.PTYP_ID AS MVR_PTYP_ID, 
    Mtr_PeriodType AS PERIOD_TYPE, 
    CAST(SDS_Bets AS FLOAT)  AS MVR_BETS, 
    CAST(SLIP_APJP_JACKPOT + SLIP_PROGRESSIVE_JAKPT 
   + SLIP_MYST_JACKPOT + SLIP_CC_JACKPOT 
   + SLIP_CELEBRATION_JACKPOT + vsmr.CASH_PROG + vsmr.NONCASH_PROG 
     AS FLOAT) AS JACKPOTS, 
    SDS_MachinePaidProgressiveWins, 
    MVR_THEORETICAL_WIN, 
    (CASE WHEN vsmr.SDS_Bets = 0 THEN 0 
   ELSE (MVR_THEORETICAL_WIN * 100) / vsmr.SDS_Bets 
    END)  AS ACTUAL_PERCENTAGE, 
    ((CASE WHEN vsmr.SDS_Bets = 0 THEN 0 ELSE (MVR_THEORETICAL_WIN * 100) / vsmr.SDS_Bets END)  
  - VA.HOLD_PERCENTAGE) AS VAR_HOLD_PERCENT, 
    Days_On AS MVR_DAYS_ONLINE_VAL, 
    (vsmr.SDS_1_bills + vsmr.SDS_5_bills + vsmr.SDS_10_bills + vsmr.SDS_20_bills  
       + vsmr.SDS_50_bills + vsmr.SDS_100_bills + vsmr.SDS_CoinDrop) AS Bills_Coins, 
       CAST(vsmr.SDS_1_bills + vsmr.SDS_5_bills + vsmr.SDS_10_bills + vsmr.SDS_20_bills 
            + vsmr.SDS_50_bills + vsmr.SDS_100_bills + vsmr.SDS_CoinDrop 
            + vsmr.SDS_EFTInCashablePromo + vsmr.SDS_EFTInNonCashable 
            + vsmr.SDS_EFTInCashable + vsmr.SDS_TicketInCashable + vsmr.SDS_TicketInNonCashable 
            + vsmr.SDS_TicketInPromoCashable 
            - (vsmr.SLIP_APJP_JACKPOT + vsmr.SLIP_PROG_JAKPT_ST + vsmr.SLIP_CC_JACKPOT + vsmr.SLIP_MYST_JACKPOT 
               + vsmr.SLIP_DISPUTE + vsmr.SLIP_FILL + vsmr.SLIP_CELEBRATION_JACKPOT + vsmr.CASH_PROG 
               + vsmr.NONCASH_PROG - vsmr.SLIP_BLEED) 
            - (vsmr.SDS_EFTOutCashablePromo + vsmr.SDS_EFTOutNonCashable + vsmr.SDS_EFTOutCashable 
               + SDS_TicketOutNonCashable + SDS_TicketOutCashable) 
        AS FLOAT) AS SDS_WIN, 
    (vsmr.SLIP_APJP_JACKPOT + vsmr.SLIP_PROG_JAKPT_ST + vsmr.SLIP_CC_JACKPOT+vsmr.SLIP_MYST_JACKPOT  
   + vsmr.SLIP_DISPUTE + vsmr.SLIP_FILL+vsmr.SLIP_CELEBRATION_JACKPOT - vsmr.SLIP_BLEED 
  ) AS SLIP_EXPENSES, 
    -- WIN 
    CAST((vsmr.SDS_1_bills + vsmr.SDS_5_bills + vsmr.SDS_10_bills + vsmr.SDS_20_bills 
            + vsmr.SDS_50_bills + vsmr.SDS_100_bills + vsmr.SDS_CoinDrop 
            + vsmr.SDS_EFTInCashablePromo + vsmr.SDS_EFTInNonCashable 
            + vsmr.SDS_EFTInCashable + vsmr.SDS_TicketInCashable + vsmr.SDS_TicketInNonCashable 
            + vsmr.SDS_TicketInPromoCashable 
        ) AS FLOAT) AS WIN, 
        -- SHORTS 
        (vsmr.SLIP_APJP_JACKPOT +vsmr.SLIP_PROG_JAKPT_ST + vsmr.SLIP_CC_JACKPOT 
   +vsmr.SLIP_DISPUTE +vsmr.SLIP_CELEBRATION_JACKPOT  
   +(vsmr.SLIP_FILL - vsmr.SLIP_BLEED) 
   +(vsmr.SDS_EFTOutCashablePromo+vsmr.SDS_EFTOutNonCashable+vsmr.SDS_EFTOutCashable 
    +SDS_TicketOutNonCashable+SDS_TicketOutCashable) 
  ) AS SHORTS,  
  -- MYSTERY_SHORT 
        vsmr.SLIP_MYST_JACKPOT + vsmr.CASH_PROG + vsmr.NONCASH_PROG  AS MYSTERY_SHORT, 
  -- SLIP_LINK_PROG_JAKPT 
  ISNULL(SLIP_LINK_PROG_JAKPT, 0)  AS SLIP_LINK_PROG_JAKPT,     
        -- ACTUAL_WIN 
        (ISNULL(ActualMtr.ACTUAL_CASH_COUPON_VAL, 0) 
        +ISNULL(ActualMtr.ACTUAL_NONCASH_COUPON_VAL, 0)  
        +ISNULL((ActualMtr.ACTUAL_1_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_5_BILLS), 0)  
        +ISNULL((ActualMtr.ACTUAL_10_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_20_BILLS), 0)  
        +ISNULL((ActualMtr.ACTUAL_50_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_100_BILLS), 0) 
        +ISNULL(vamcr.SCALE_AMT, 0)  
        +ISNULL(ActualMtr.ACTUAL_TKTINCASH, 0) 
        +ISNULL(ActualMtr.ACTUAL_TKTINNONCASH, 0) 
        +ISNULL(ActualMtr.ACTUAL_TKTINPROMOCASH, 0) 
        +(vsmr.SDS_EFTInCashablePromo + vsmr.SDS_EFTInNonCashable + vsmr.SDS_EFTInCashable ) 
        -(vsmr.SLIP_APJP_JACKPOT + vsmr.SLIP_PROG_JAKPT_ST + vsmr.SLIP_CC_JACKPOT+vsmr.SLIP_MYST_JACKPOT  
            + vsmr.SLIP_DISPUTE + vsmr.SLIP_FILL +vsmr.SLIP_CELEBRATION_JACKPOT  
            +vsmr.CASH_PROG+vsmr.NONCASH_PROG - vsmr.SLIP_BLEED) 
        -(vsmr.SDS_EFTOutCashablePromo+vsmr.SDS_EFTOutNonCashable+vsmr.SDS_EFTOutCashable 
    +SDS_TicketOutNonCashable+SDS_TicketOutCashable) 
  -ISNULL(SLIP_LINK_PROG_JAKPT, 0) 
         ) AS ACTUAL_WIN 
        ,(ISNULL(ActualMtr.ACTUAL_CASH_COUPON_VAL, 0) 
        +ISNULL(ActualMtr.ACTUAL_NONCASH_COUPON_VAL, 0)  
        +ISNULL((ActualMtr.ACTUAL_1_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_5_BILLS), 0)  
        +ISNULL((ActualMtr.ACTUAL_10_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_20_BILLS), 0)  
        +ISNULL((ActualMtr.ACTUAL_50_BILLS), 0) + ISNULL((ActualMtr.ACTUAL_100_BILLS), 0) 
        +ISNULL(vamcr.SCALE_AMT, 0)  
        +ISNULL(ActualMtr.ACTUAL_TKTINCASH, 0) 
        +ISNULL(ActualMtr.ACTUAL_TKTINNONCASH, 0) 
        +ISNULL(ActualMtr.ACTUAL_TKTINPROMOCASH, 0) 
        +ISNULL(vsmr.SDS_EFTInCashablePromo,0 )  
        +ISNULL(vsmr.SDS_EFTInNonCashable, 0)  
        +ISNULL(vsmr.SDS_EFTInCashable, 0)) AS PHY_WIN 
        ,(vsmr.CASH_PROG_SLOT_CONTRIBUTION + vsmr.CASH_PSR_ARV_AMT_VAL+vsmr.NON_CASH_PROG_SLOT_CONTRIBUTION  
        +vsmr.NON_CASH_PSR_ARV_AMT_VAL ) AS PROVISION, 
        ISNULL(a.PTBL_NO_OF_PAYLINES,0) AS PTBL_NO_OF_PAYLINES -- check with ARV 
  FROM  ACCOUNTING.VIEW_SDS_METER_ROLLUP_WITH_PROG AS vsmr    
 JOIN REPORT.VIEW_ASSET VA 
    ON  VA.NAMED_ASSET_ID = vsmr.Mtr_NamedAsstID  
 LEFT JOIN Accounting.VIEW_ACTUAL_METER_PERIODIC_ROLLUP AS ActualMtr 
             ON  (ActualMtr.NAMEDASSTID = vsmr.Mtr_NamedAsstID) 
             AND (ActualMtr.GAMEDAY = vsmr.Mtr_GameDay) 
             AND (ActualMtr.PTYP_ID = vsmr.PTYP_ID) 
 LEFT JOIN ACCOUNTING.VIEW_ACTUAL_METER_COIN_ROLLUP vamcr 
             ON vamcr.CN_NAMEDASSTID = vsmr.Mtr_NamedAsstID 
             AND vamcr.CN_GAMEDAY = vsmr.Mtr_GameDay 
             AND vamcr.CN_PTYP_ID = vsmr.PTYP_ID 
LEFT JOIN  
(SELECT NAGI_NAST_ID,AT.THEM_NAME,TC.TCAT_LONG_NAME,Tg.TGRP_LONG_NAME ,PTBL.PTBL_NO_OF_PAYLINES,aco.USER_CUSTOM10 AS GAME_NAME FROM  
 ACCOUNTING.NAMED_ASSET_GAME_INFO   
 LEFT  JOIN ACCOUNTING.GAME_INFO gf 
            ON  gf.GINFO_ID = NAGI_GINFO_ID 
            AND NAGI_IS_LATEST=1 
         JOIN  ACCOUNTING.PAYTABLE PTBL 
            ON  PTBL.PTBL_ID = gf.GINFO_PTBL_ID 
          JOIN  ASSET.THEME AT 
            ON  AT.THEM_ID= gf.GINFO_ASST_THME_ID 
         JOIN  ASSET.THEME_CATEGORY tc 
            ON  TC.TCAT_ID = AT.THEME_PARENT_ID 
         JOIN  asset.THEME_GROUP tg 
            on tc.TCAT_TGRP_ID=tg.TGRP_ID 
         JOIN  asset.THEME_TYPE TT  
            on TT.TTYP_ID=AT.TTYP_ID 
  JOIN ACCOUNTING.NAMED_ASSET na 
   on na.NAST_ID = NAGI_NAST_ID 
  JOIN ASSET.ASSET_CONFIGURATION ac 
   on ac.ACNF_NUMBER = na.NAST_NAME AND ac.ACNF_DELETED_TS is null 
  JOIN ASSET.ASSET_CONFIGURATION_OPTION aco 
   on aco.ACNF_ID = ac.ACNF_ID 
            ) as  a  
            ON  vsmr.Mtr_NamedAsstID =a.NAGI_NAST_ID 
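
Before restructuring anything, it is worth measuring where the time actually goes. A generic T-SQL sketch (not specific to this view) that captures I/O and timing statistics while sampling it:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- sample only; TOP changes the plan, so also test with your real report predicates
SELECT TOP (100) * FROM REPORT.View_MachinePerformanceBySlot;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

The tables showing the highest logical reads in the Messages output are usually the ones worth indexing or pre-aggregating.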

I would replace the part below with a CTE:
LEFT JOIN  
(SELECT NAGI_NAST_ID,AT.THEM_NAME,TC.TCAT_LONG_NAME,Tg.TGRP_LONG_NAME ,PTBL.PTBL_NO_OF_PAYLINES,aco.USER_CUSTOM10 AS GAME_NAME FROM  
 ACCOUNTING.NAMED_ASSET_GAME_INFO   
 LEFT  JOIN ACCOUNTING.GAME_INFO gf 
            ON  gf.GINFO_ID = NAGI_GINFO_ID 
            AND NAGI_IS_LATEST=1 
         JOIN  ACCOUNTING.PAYTABLE PTBL 
            ON  PTBL.PTBL_ID = gf.GINFO_PTBL_ID 
          JOIN  ASSET.THEME AT 
            ON  AT.THEM_ID= gf.GINFO_ASST_THME_ID 
         JOIN  ASSET.THEME_CATEGORY tc 
            ON  TC.TCAT_ID = AT.THEME_PARENT_ID 
         JOIN  asset.THEME_GROUP tg 
            on tc.TCAT_TGRP_ID=tg.TGRP_ID 
         JOIN  asset.THEME_TYPE TT  
            on TT.TTYP_ID=AT.TTYP_ID 
  JOIN ACCOUNTING.NAMED_ASSET na 
   on na.NAST_ID = NAGI_NAST_ID 
  JOIN ASSET.ASSET_CONFIGURATION ac 
   on ac.ACNF_NUMBER = na.NAST_NAME AND ac.ACNF_DELETED_TS is null 
  JOIN ASSET.ASSET_CONFIGURATION_OPTION aco 
   on aco.ACNF_ID = ac.ACNF_ID 
            ) as  a  
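
For reference, here is what that looks like (a sketch; the long column list of the outer SELECT is unchanged and abbreviated here). Note that in SQL Server a non-recursive CTE is inlined by the optimizer just like the derived table, so on its own this is a readability improvement rather than a performance fix; if this game-info lookup turns out to be the expensive part, persisting its result (for example into an indexed table refreshed on a schedule) is the kind of change that actually alters the plan.

CREATE VIEW [REPORT].[View_MachinePerformanceBySlot] AS
WITH GameInfo AS
(
    SELECT NAGI_NAST_ID, AT.THEM_NAME, TC.TCAT_LONG_NAME, TG.TGRP_LONG_NAME,
           PTBL.PTBL_NO_OF_PAYLINES, aco.USER_CUSTOM10 AS GAME_NAME
    FROM ACCOUNTING.NAMED_ASSET_GAME_INFO
    LEFT JOIN ACCOUNTING.GAME_INFO gf
           ON gf.GINFO_ID = NAGI_GINFO_ID
          AND NAGI_IS_LATEST = 1
    JOIN ACCOUNTING.PAYTABLE PTBL
           ON PTBL.PTBL_ID = gf.GINFO_PTBL_ID
    JOIN ASSET.THEME AT
           ON AT.THEM_ID = gf.GINFO_ASST_THME_ID
    JOIN ASSET.THEME_CATEGORY tc
           ON TC.TCAT_ID = AT.THEME_PARENT_ID
    JOIN asset.THEME_GROUP tg
           ON tc.TCAT_TGRP_ID = tg.TGRP_ID
    JOIN asset.THEME_TYPE TT
           ON TT.TTYP_ID = AT.TTYP_ID
    JOIN ACCOUNTING.NAMED_ASSET na
           ON na.NAST_ID = NAGI_NAST_ID
    JOIN ASSET.ASSET_CONFIGURATION ac
           ON ac.ACNF_NUMBER = na.NAST_NAME
          AND ac.ACNF_DELETED_TS IS NULL
    JOIN ASSET.ASSET_CONFIGURATION_OPTION aco
           ON aco.ACNF_ID = ac.ACNF_ID
)
SELECT VA.SITE_NUM,
       VA.SLOT_NUMBER,
       -- ... rest of the original column list, unchanged ...
       ISNULL(a.PTBL_NO_OF_PAYLINES, 0) AS PTBL_NO_OF_PAYLINES
FROM ACCOUNTING.VIEW_SDS_METER_ROLLUP_WITH_PROG AS vsmr
JOIN REPORT.VIEW_ASSET VA
       ON VA.NAMED_ASSET_ID = vsmr.Mtr_NamedAsstID
LEFT JOIN ACCOUNTING.VIEW_ACTUAL_METER_PERIODIC_ROLLUP AS ActualMtr
       ON ActualMtr.NAMEDASSTID = vsmr.Mtr_NamedAsstID
      AND ActualMtr.GAMEDAY = vsmr.Mtr_GameDay
      AND ActualMtr.PTYP_ID = vsmr.PTYP_ID
LEFT JOIN ACCOUNTING.VIEW_ACTUAL_METER_COIN_ROLLUP vamcr
       ON vamcr.CN_NAMEDASSTID = vsmr.Mtr_NamedAsstID
      AND vamcr.CN_GAMEDAY = vsmr.Mtr_GameDay
      AND vamcr.CN_PTYP_ID = vsmr.PTYP_ID
LEFT JOIN GameInfo AS a
       ON vsmr.Mtr_NamedAsstID = a.NAGI_NAST_ID;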

Similar Messages

  • Why am I Observing Poor Geospatial Query Performance?

    Our HANA Rev. 72 Amazon instance is performing very poorly when selecting geospatial polygons from tables.  I'm hoping someone out there knows why.
    Here's one example. The table below has just 51 records, one for each state (including Puerto Rico). The table has a GEOMETRY column that contains a polygon outline of the state.
    The query below uses ST_Covers() to find the state at a given latitude, longitude. It takes more than 9 seconds to execute!
    SELECT  STATE_NAME, STATE_FIPS as STATE_ID, SHAPE.ST_AsGeoJSON() as STATE_POLYGON
    from GEO_SHAPES.US_STATES where SHAPE.ST_Covers(new ST_Point('POINT(-105.123 39.456)')) = 1
    Statement 'SELECT STATE_NAME, STATE_FIPS as STATE_ID, SHAPE.ST_AsGeoJSON() as STATE_POLYGON from ...'  successfully executed in 9.326 seconds  (server processing time: 8.629 seconds)
    Essentially all the time goes into the SHAPE.ST_Covers() function.  The same query without this function runs in 6 ms.
    Does anybody have any idea why?

    JeffKasper wrote:
    So I have been running activity monitor and what I am seeing is 40 MB of "Free" RAM with 630 MB "Wired", 2.5 GB "Inactive" and about 5 GB of "Active".
    Both "Free" and "Inactive" are (supposed to be) available for use by any process that wants it.
    When you first start up, most memory is, of course, "Free." As apps or system processes need memory, that's where OSX gets it. When they release it, however, it does not go back to Free, but to "Inactive" and is identified with the last process that used it. This is done to speed up assigning it back to the previous process if it requests it (which of course is quite common).
    So as time passes, you'll see less and less Free memory and more and more Inactive memory; this means your Mac is working properly. In fact, after running for a long time, if there's much Free memory left, it is, in a sense, wasted!
    The thing to watch for is Paging. If the "Page outs" figure is high, or changing rapidly, then OSX is having to page stuff out because it's out of both Free and Inactive memory.
    A better way to monitor page-outs is via a Terminal command. (The Terminal app is in your Applications/Utilities folder.) Enter the following, exactly as shown, at the prompt:
    sar -g 60 10
    Leave the terminal window open, then try to re-create the unresponsive problem.
    This should tell you if you are doing pageouts. You'll see a line in the Terminal window every 60 seconds for 10 minutes (or until you quit Terminal), showing the number of pageouts per second. A few pageouts is normal. If you have large numbers of pageouts, then you have a memory problem.

  • Unexplained poor table query performance

    Hi All
    I am really open to any advice as I have hit a kind of brick wall. A developer came to me asking why a procedure was performing so slowly in beta as opposed to dev, and after looking at exactly what it did I identified the offending select statement.
    The query was basically passing some ids into a user-defined table and using those ids to filter.
    Select gc.id
    From temperatures as gcm
    left outer join gauges gc
        ON ( gc.id = gcm.id Or gc.id IS NULL )
       AND ( gc.countryid = gcm.countryid or gcm.countryid is null )
    where sourceid = 3
    So the gauges table has around 90K rows, whereas temperatures has around 3 million.
    OK, the test on the development server: the above returns in under 3 seconds, whereas on beta it is just over 1 minute.
    The beta box in terms of processing power is much faster, and both have the same version of SQL 2012 SP1 (11.0.3128 (x64)).
    Having run a quick query on index fragmentation, I found there are a few indexes within the temperatures table that are reasonably highly fragmented. I then rebuilt them and saw they were pretty much back to an acceptable level. Again I tried the select a few times and got a range of times.
    I then tried a restore from the weekend just to see if there was anything that may have changed, wondering if I was beginning to clutch at straws.
    Lo and behold, the restore was not only quick but, from an index fragmentation point of view, not in as great shape.
    I've compared the two tables, which are identical, the only difference being the data, which I copied over to the restore, and got the same 2-second result.
    Any help on what to do next would be great. I could replace the table with the restored one, but I would like to know why this is happening.
    Many Thanks
    Robert

    The query is a bit strange with the NULL checks on gc.id and gcm.countryid.
    Since temperatures is the retained (outer) table, you can remove the part "or gcm.countryid is null".
    Also, if table gauges does not allow NULLs (or does not have NULLs) in column id, you should remove the part "OR gc.id IS NULL".
    If the query can be simplified as stated above, then all you need is a compound index on (id, countryid) or on (countryid, id) on both tables; see the sketch below.
    If the problem still persists, you can check the query plan to see what is different, and that should give you a clue about the issue.
    Please note that for performance related queries, it is essential to show the exact query you are using. For example, if you are using a local variable or a parameter instead of "3" in your query, that makes a big difference.
    If you need more help, then please post DDL for the tables and indexes that are involved.
    Gert-Jan
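
    To make that concrete, here is a minimal sketch under those assumptions (gauges.id declared NOT NULL, and sourceid assumed to live on temperatures; the index names are made up):

    Select gc.id
    From temperatures as gcm
    left outer join gauges gc
        ON gc.id = gcm.id
       AND gc.countryid = gcm.countryid
    where gcm.sourceid = 3

    CREATE INDEX IX_temperatures_sourceid ON temperatures (sourceid) INCLUDE (id, countryid);
    CREATE INDEX IX_gauges_id_countryid ON gauges (id, countryid);

    With the OR branches gone, the optimizer can use an index seek or a hash join instead of evaluating the OR predicate row by row against 3 million rows.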

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; it involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = ‘Posted’, we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I’ve been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
    I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this? (See the sketch after the DECODE listing.)
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
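
    If you can get at the SQL directly (for example through a custom folder), the idea is to filter on the raw status code so the predicate applies to the indexed STATUS column instead of to the result of DECODE evaluated for every row. A sketch, assuming the standard GL_JE_BATCHES columns (the column list here is illustrative):

    SELECT journal_batch1.je_batch_id, journal_batch1.name
    FROM   gl.gl_je_batches journal_batch1
    WHERE  journal_batch1.status = 'P';  -- 'P' is the code that DECODE translates to 'Posted'

    In the Discoverer worksheet, that corresponds to putting the condition on the underlying STATUS item with the value 'P' rather than on the decoded description.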

  • How to improve query performance at the report level and designer level

    How can I improve query performance at the report level and at the designer level?
    Please explain in detail.

    First, it is all based on the design of the database, the universe, and the report.
    At the universe level, you have to check your contexts very well to get the optimal performance out of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on).
    And when you create a parameter, try to match it with the key fields in the database.
    good luck
    Amr

  • Report burst: to increase query performance in Xcelsius

    Is there any way to increase query performance in Xcelsius by using report bursting?

    Fremlin,
    Report bursting is only for distributing your reports to your end users.
    You can improve performance only by following the [Best practices|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac] in xcelsius.
    -Anil

  • Performance problem with report query

    Hi,
    I am encountering a performance issue with a page returning a report.
    I have a page that has a region which joins 2 tables. One table has about 220,000 rows, while the other contains roughly 60,000 rows. In the region source of the report region, the query includes join conditions with local variables. For example, the page is page 70, and some join conditions are:
    and a.id=:P70_ID
    and a.name like :P70_NAME
    I run the query that returns a large number of rows from sqlplus, and it takes less than 30 sec to complete.
    When I run the page, the report took about 3 minutes to return.
    In this case, :P70_NAME is initialized to '%' on the page.
    I then tried to substitute variable value directly in the query:
    and a.id=1000
    and a.name like '%'
    this time the report returned in about 30 sec.
    I then tried another thing which specified the region as "PL/SQL Function returning sql query", and modified the region as follows:
    l_sql := '.......';
    l_sql := l_sql || 'and a.id=' || v('P70_ID')
    and similarly substituted :P70_NAME with v('P70_NAME'), appending its value to the l_sql string.
    The report query page also returned in 30 sec.
    Is there any known performance issue with using the bind variable (:PXX_XXX) in the report region?

    If you are able, flush the shared_pool, run your report, then query the v$sqlarea or v$sqltext views.
    Or do a Google search and look up Cary Millsap's piece on enabling extended trace; there is your sure-fire way of finding the problem SQL. I am still learning HTML DB, but is there a way to alter session to enable trace in some pre-query block?
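
    A sketch of that approach (flushing the shared pool needs the ALTER SYSTEM privilege, so try it on a test instance rather than production):

    ALTER SYSTEM FLUSH SHARED_POOL;

    -- run the report page, then look for its statement:
    SELECT sql_text, executions, disk_reads, buffer_gets
    FROM   v$sqlarea
    WHERE  UPPER(sql_text) LIKE '%P70_NAME%';

    For the extended trace that Cary Millsap describes, a PL/SQL page process running before the report region could enable tracing for the session, for example:

    EXECUTE IMMEDIATE 'ALTER SESSION SET sql_trace = TRUE';

    The resulting trace file in user_dump_dest can then be formatted with tkprof to show exactly where the three minutes go.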

  • Poor query performance in Prod.

    I am facing lots of issues in my queries.
    The query is working fine in Dev, but after I transported it to Prod the query takes too much time to retrieve the result.
    Why am I facing this issue?
    How can I do the performance tuning for the query?
    The query is built on a multiprovider, and it also jumps to the ODS for the ODS query.
    But the query performance is really low and poor in Production.
    And to my surprise, the query works perfectly and faster in Dev.
    What can be the suggestion?
    Please send documents for performance tuning, note numbers, etc.

    Are data volumes huge in the Prod box? That may be the cause of the slow runtimes.
    <b>Look at below performance improving techs</b>
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/aec09790-0201-0010-8eb9-e82df5763455
    Business Intelligence Performance Tuning [original link is broken]
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    Note 565725 - Optimizing the performance of ODS objects

  • How to improve query performance when reporting on ods object?

    Hi,
    Can anybody give me the answer: how can I improve my query performance when reporting on an ODS object?
    Thanks in advance,
    Ravi Alakuntla.

    Hi Ravi,
    Check these links, which may cater to your requirement:
    Re: performance issues of ODS
    Which criteria to follow to pick InfoObj. as secondary index of ODS?
    PDF on BW performance tuning,
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Regards,
    Mani.

  • Poor query performance when joining CONTAINS to another table

    We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20+ million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi column datastore) and provide a score for each search result so that we can sort the search results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH we find that the query performance degrades significantly.
    For example, we can find all the records a user has access to from our base table by the following query:
    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;
    This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
    Our search query looks like this:
    SELECT score(1), d.*
    FROM duns d
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
    2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
    WITH subset
    AS
    (SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID)
    SELECT score(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of the contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user doesn't have access to view.
    Has anyone ever ran into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis than let me know and i'll be happy to produce it here.
    Thanks!!

    Sometimes it can be good to separate the tables into separate sub-query factoring (WITH) clauses or inline views in the FROM clause, or an IN clause as a WHERE condition. Although there are some differences, using a sub-query factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query.
    You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and should periodically be rebuilt or dropped and recreated to keep it performing with maximum efficiency.
    The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various others. Your original query has nested loops; all of the others have the same plan without the nested loops. You could also add index hints.
    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc  NUMBER,
      3       text_key  VARCHAR2 (30))
      4  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc  NUMBER,
      3       emp_id       NUMBER)
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO duns
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> -- indexes:
    SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
      2  ON duns (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
      2  ON primary_contact (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
    SCOTT@orcl_11gR2> -- as suggested by Roger:
    SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- original query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> WITH
      2    subset AS
      3        (SELECT d.duns_loc
      4         FROM      duns d
      5         JOIN      primary_contact pc
      6         ON      d.duns_loc = pc.duns_loc
      7         AND      pc.emp_id = :employeeID)
      8  SELECT score(1), d.*
      9  FROM   duns d
    10  JOIN   subset s
    11  ON     d.duns_loc = s.duns_loc
    12  WHERE  CONTAINS (TEXT_KEY, :search,1) > 0
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 4228563783
    | Id  | Operation                      | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |   1 |  SORT ORDER BY                 |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |*  2 |   HASH JOIN                    |                   |     2 |    84 |   120   (3)| 00:00:02 |
    |   3 |    NESTED LOOPS                |                   |    38 |  1292 |    50   (2)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  5 |      DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | DUNS_DUNS_LOC_IDX |     1 |     5 |     1   (0)| 00:00:01 |
    |*  7 |    TABLE ACCESS FULL           | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
       5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
       6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
       7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
    SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
    SCOTT@orcl_11gR2> WITH
      2    subset1 AS
      3        (SELECT pc.duns_loc
      4         FROM      primary_contact pc
      5         WHERE  pc.emp_id = :employeeID),
      6    subset2 AS
      7        (SELECT score(1), d.*
      8         FROM      duns d
      9         WHERE  CONTAINS (TEXT_KEY, :search,1) > 0)
    10  SELECT subset2.*
    11  FROM   subset1, subset2
    12  WHERE  subset1.duns_loc = subset2.duns_loc
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
    SCOTT@orcl_11gR2> SELECT subset2.*
      2  FROM   (SELECT pc.duns_loc
      3            FROM   primary_contact pc
      4            WHERE  pc.emp_id = :employeeID) subset1,
      5           (SELECT score(1), d.*
      6            FROM   duns d
      7            WHERE  CONTAINS (TEXT_KEY, :search,1) > 0) subset2
      8  WHERE  subset1.duns_loc = subset2.duns_loc
      9  ORDER  BY score(1) DESC
    10  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- ansi join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  JOIN   primary_contact
      4  ON     duns.duns_loc = primary_contact.duns_loc
      5  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      6  AND    primary_contact.emp_id = :employeeid
      7  ORDER  BY SCORE(1) DESC
      8  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- old join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns, primary_contact
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc = primary_contact.duns_loc
      5  AND    primary_contact.emp_id = :employeeid
      6  ORDER  BY SCORE(1) DESC
      7  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- in clause:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc IN
      5           (SELECT primary_contact.duns_loc
      6            FROM   primary_contact
      7            WHERE  primary_contact.emp_id = :employeeid)
      8  ORDER  BY SCORE(1) DESC
      9  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 3825821668
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN SEMI              |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2>

  • Is it possible to perform a submit after calling a report query in a branch

    I have a page which has a number of buttons, and a report on status records.
    Behind each button is some javascript which calls impromptu to force the user to enter a description via a modal text box.
    After the user has entered the text, the javascript submits the page setting the request to a status value.
    The page has a process which populates a number of collections and another process which inserts a status record.
    The page also has a number of branches, which call report queries (with individual layouts) depending upon the status of the request.
    I think the problem is that, because the branches call the report queries, the page is not refreshing and the user is not seeing the newly created status record.
    Possible solutions?
    1. Is it possible to call the report queries from javascript?
    2. Can the branch for the report query be forced to submit the page?
    Thanks
    Paul

    Hi Ravi,
    Thanks for your prompt response.
    Settings are as follows:
    POSITION MANAGEMENT  (STANDARD SETTINGS)
    3500 - MTM, FX trans, post to same components, Step Type 4, Procedure 1000 (MTM)
    4000 - Spot/Spot, FX trans Spot/spot, post to used components, Step Type 6, Procedure 1000
    Which is preferable?
    In my example, I am using 3500
    FOREIGN CURRENCY VALUATION PROCEDURE
    1000 - Mark to Market P + L
    Price/Rate Type = M
    Comp for valuation = Book value
    Write up rule = write up to MV/PV
    Write down to MV/PV
    Clear exchange rate gains/losses = NULL
    Many thanks for your assistance.
    Regards,
    Chris

  • Query Performance for OLE DB OLAP Reporting

    Hi Experts,
    what are the advantages of enhancing query performance by
    A) building Aggregates or
    B) using Information Broadcaster Query Precalculation?
    Since the settings in Information Broadcaster can be done by any user - will the precalculated version be used only for that user, or for all users executing the query?
    Are these settings also used if the query is executed via a 3rd party Frontend tool?
    Thanks,
    Angie

    Hi Angie,
    Which is the third-party tool that's accessing the query? Is it BO? If so, there's a lot of information available.

  • Poor query performance only with migrated 7.0 queries

    Dear Team,
    We are facing a serious query performance issue after migration of queries from 3.5 to 7.0.
    I executed a query in 3.5 with some variable values, and it takes a fraction of a second to display the output. But the same migrated query with the same variable entries takes a very long time and gives a time-out error.
    We are not using any aggregates at the InfoProvider level.
    Both queries are based on the same cube, but the 3.5 query takes less time while the 7.0 query takes a very long time if more selection is done.
    I checked for notes but didn't find a specific note for this particular scenario; I found notes only for general query performance improvement.
    I want to know why only in 7.0 the same 3.5 query takes a long time and gives a time-out error. Please suggest some notes or ideas related to this scenario.
    Regards,
    Chan

    Hi,
    Queries in BI 7.0 are almost the same as queries in 3.x format.
    In order to check whether the problem is in the query runtime (database time) or the Java runtime (probably rendering), you should try running it from RSRT once in Java web and once in ABAP web.
    If the problem is only with Java web, then you should take the URL and add &profiling=X at the end.
    After the query execution you can use the statistics, which will be shown at the top of the page.
    In my experience, the problem is in the rendering phase of the query. One thing that can be done is to limit the number of rows shown on each page; that can be done by changing the 0ANALYSIS web template - it's one of the web template parameters.
    Tomer.

  • How to improve query performance using infoset

    I created one InfoSet including 4 characteristics and 3 DSOs, which are all time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows in BEx Analyzer. In that case I have to close BEx Analyzer first and then open it again; after that it shows real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!

    Hi
    As the InfoSet itself doesn't hold any data, it improves performance.
    Also go through the tips below.
    Where to find the query runtime:
    Note 557870 - FAQ BW Query Performance
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips
    Using aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in the rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03 > switch to expert mode > from left side menu > and there in system load history and distribution for a particular day > check query execution time.
    Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
    How to read ST03N datasets from DB
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This will ensure use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and records transferred to the front end versus records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates, see the columns valuation and usage in the aggregates.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The "---" sign is the valuation of the aggregate; for example, -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also frequent (in effect, performance is good); if you check its compression ratio, it should be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    In the valuation column, more positive signs mean the aggregate performs well and is useful to have; more negative signs mean we had better not use that aggregate.
    In the usage column, we can see how much the aggregate has been used by queries.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed it data, and then analyze through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    use tool RSDDK_CHECK_AGGREGATE in se38 to check for the corrupt aggregates
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with "execute with statistics" and come back; you will get a STATUID. Copy this and check it in the table.
    This tells you exactly which InfoObjects it hits; if any one of the objects is missing, it's a useless aggregate.
    6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Inventory Ageing query performance

    Hi All,
    I have created an inventory ageing query on our custom cube, which is a replica of 0IC_C03. We have data from 2003 onwards. The performance of the query is very poor; the system almost hangs. I tried to create aggregates to improve performance, but that failed. What should I do to improve the performance, and why did the aggregate filling fail? The cube has compressed data. Please guide.
    Regards:
    Jitendra

    In addition to the above posts,
    check the below points and take action accordingly to increase the query performance.
    Mainly check: is the cube data compressed? That will increase the performance of the query.
    1)If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2)Check code for all exit variables used in a report.
    3)Check the read mode for the query. recommended is H.
    4)If Alternative UOM solution is used, turn off query cache.
    5)Use Constant Selection instead of SUMCT and SUMGT within formulas.
    6)Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    7)Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
    Use SE16 on the inclusion tables and use the List of Values feature on the columns successor and predecessor to see which entry level of the hierarchy is used.
    8)Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    9)If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing.
    10) Check the user exit usage involved at OLAP runtime.
    11) Turn on the BW Statistics: RSA1, choose Tools -> BW Statistics for InfoCubes (choose OLAP and WHM for your relevant cubes).
    To check the Query Performance problem
    Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
    You need to run ST03N in expert mode to get these values
    based on the analysis and the values taken from the above  - Check if an aggregate is suitable or setting OLAP etc.
    Edited by: prashanthk on Nov 26, 2010 9:17 AM
