MDX query performance problem

Hi
Is there any way to tune the performance of MDX expressions that use the Filter function? The following MDX statement is an example of a query we generate to return filtered characteristic values for users to make selections for variables.
Note: It is intentional that the column axis is not populated as we are interested only in the returned characteristic values.
SELECT {} ON COLUMNS,
Order(
     Filter(
          {[ZPLANTYPE].[All].Children},
          (([ZPLANTYPE].CurrentMember.Name >= 'a' AND [ZPLANTYPE].CurrentMember.Name < 'b') OR
          ([ZPLANTYPE].CurrentMember.Name >= 'A' AND [ZPLANTYPE].CurrentMember.Name < 'B'))
     ),
     [ZPLANTYPE].CurrentMember.Name, BASC
) ON ROWS FROM [$IC_FLT]
In a real example with 162,000 characteristics this query takes up to 5 minutes to run - clearly unacceptable as part of a user interface. It appears that behind the scenes a sequential read of the underlying dimension table is being carried out.
It is difficult to create a more sophisticated query due to the lack of string-handling logic in the raw MDX language.

Similar Messages

  • Essbase MDX Query Performance Problem

    Hello,
    I'm running an analysis in OBIEE against Essbase cubes, but I don't know why OBIEE generates two MDX queries against Essbase. The first one returns in a reasonable time (5 minutes) but the second one never returns.
    With
    set [_Year] as '[Year].Generations(2).members'
    set [_Month] as '[Mês Caixa].Generations(2).members'
    set [_Product2] as 'Filter([Product].Generations(2).members, (([Product].CurrentMember.MEMBER_Name = "SPECIAL" OR [Product].CurrentMember.MEMBER_ALIAS = "SPECIAL") OR ([Product].CurrentMember.MEMBER_Name = "EXECUTIVE" OR [Product].CurrentMember.MEMBER_ALIAS = "EXECUTIVE")))'
    set [_Client Name] as 'Filter([Client Name].Generations(2).members, (([Client Name].CurrentMember.MEMBER_Name = "JOHN DOE" OR [Client Name].CurrentMember.MEMBER_ALIAS = "JOHN DOE")))'
    set [_Service Name] as 'Generate([Service Name].Generations(2).members, Descendants([Service Name].currentmember, [Service Name].Generations(4), leaves))'
    select
    { [Accounts].[Paid Amount]
    } on columns,
    NON EMPTY {crossjoin({[_Year]},crossjoin({[_Month]},crossjoin({[_Product2]},crossjoin({[_Client Name]},{[_Service Name]}))))} properties MEMBER_NAME, GEN_NUMBER, [Year].[MEMBER_UNIQUE_NAME], [Year].[Memnor], [Mês Caixa].[MEMBER_UNIQUE_NAME], [Mês Caixa].[Memnor], [Product].[MEMBER_UNIQUE_NAME], [Product].[Memnor], [Client Name].[MEMBER_UNIQUE_NAME], [Client Name].[Memnor], [Service Name].[Member_Alias] on rows
    from [cli.Client]
    With
    set [_Year] as '[Year].Generations(2).members'
    set [_Month] as '[Mês Caixa].Generations(2).members'
    set [_Product2] as 'Filter([Product].Generations(2).members, (([Product].CurrentMember.MEMBER_Name = "SPECIAL" OR [Product].CurrentMember.MEMBER_ALIAS = "SPECIAL") OR ([Product].CurrentMember.MEMBER_Name = "EXECUTIVE" OR [Product].CurrentMember.MEMBER_ALIAS = "EXECUTIVE")))'
    set [_Client Name] as 'Filter([Client Name].Generations(2).members, (([Client Name].CurrentMember.MEMBER_Name = "JOHN DOE" OR [Client Name].CurrentMember.MEMBER_ALIAS = "JOHN DOE")))'
    set [_Service Name] as 'Generate([Service Name].Generations(2).members, Descendants([Service Name].currentmember, [Service Name].Generations(4), leaves))'
    member [Accounts].[_MSCM1] as 'AGGREGATE({[_Product2]}, [Accounts].[Paid Amount])'
    select
    { [Accounts].[_MSCM1]
    } on columns,
    NON EMPTY {crossjoin({[_Year]},crossjoin({[_Month]},crossjoin({[_Client Name]},{[_Service Name]})))} properties MEMBER_NAME, GEN_NUMBER, [Year].[MEMBER_UNIQUE_NAME], [Mês Caixa].[MEMBER_UNIQUE_NAME], [Client Name].[MEMBER_UNIQUE_NAME], [Service Name].[Member_Alias] on rows
    from [cli.Client]
    Does anyone know why OBIEE generates these two queries, and how to optimize them, since they are generated automatically by OBIEE?
    Thanks,

    Hi,
    I have been through the queries, and understand that "_MSCM1" is being aggregated across Product and Paid Amount, per the query extract below:
    member [Accounts].[_MSCM1] as 'AGGREGATE({[_Product2]}, [Accounts].[Paid Amount])'
    If I am getting it right, there is an aggregation rule missing for [Paid Amount] (I think that's the reason; the query is to aggregate _MSCM1 by "Paid Amount", i.e. just like any other dimension).
    Could you please check this? This is why I think BI is generating two queries. I am sorry if I got this wrong.
    Hope this helps.
    Thank you,
    Dhar

  • Query performance problem

    I am having performance problems executing a query.
    System:
    Windows 2003 EE
    Oracle 9i version 9.2.0.6
    DETAIL table with 120 million rows, partitioned into 19 partitions by the SD_DATEKEY field.
    We are trying to retrieve the info for an account (SD_KEY) ordered by date (SD_DATEKEY). This account has about 7000 rows and it takes about 1 minute to return the first 100 rows ordered by SD_DATEKEY. This time should be around 5 seconds to be acceptable.
    There is a partitioned index on SD_KEY and SD_DATEKEY.
    This is the query:
    SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' AND ROWNUM < 101 ORDER BY SD_DATEKEY
    The problem is that all 7000 rows are read prior to being ordered. I think it should not be necessary for the optimizer to access all the partitions to read all the rows, because only the first 100 are needed and the partitions are bounded by SD_DATEKEY.
    Any idea to accelerate this query? I know that including a WHERE clause on SD_DATEKEY would improve performance, but I need the first 100 rows and I don't know the date to limit the query.
    Does anybody know whether this is a normal response time for this query, or should it be improved?
    Thanks to all in advance for your help.

    Thanks to all for the replies.
    - We have computed statistics, with no change in the response time.
    - We are discussing restricting the query to some partitions, but for the moment this is not the best solution because we don't know where the latest 100 rows are.
    - The query from Maurice had more or less the same response time:
    select * from
    (SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' ORDER BY SD_DATEKEY)
    where ROWNUM < 101
    - We have a local index on SD_DATEKEY. Do we need another one on SD_KEY? Should it be created as a BITMAP index?
    I can't immediately test your suggestions because this is a problem at one of our customers. In our test system (which has only 10 million records) the indexes accelerate the query, but this is not the case in the customer's system. I think the problem is the total number of records in the table.
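    A minimal sketch of the top-N approach already hinted at by Maurice's rewrite: sort inside an inline view, apply ROWNUM outside, and back it with a composite index so Oracle can walk the rows for one SD_KEY already ordered by SD_DATEKEY and stop after 100. The index name is an illustrative assumption, and whether a global or local index is appropriate depends on your partition-maintenance needs.
    -- Composite index covering the filter and the sort (name is a placeholder)
    CREATE INDEX detail_key_date_ix ON detail (sd_key, sd_datekey);
    -- Top-N pattern: ORDER BY inside the view, ROWNUM outside, so the
    -- optimizer can stop fetching after the first 100 rows (COUNT STOPKEY)
    SELECT /*+ FIRST_ROWS(100) */ *
      FROM (SELECT d.*
              FROM detail d
             WHERE d.sd_key = 'xxxxxxxx'
             ORDER BY d.sd_datekey)
     WHERE ROWNUM < 101;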

  • Query Performance Problem!! Oracle 25 minutes || SQLServer 3 minutes

    Hi all,
    I'm having a performance problem with the query below. It runs in 3 minutes on SQL Server and 25 minutes in Oracle.
    SELECT
    CASE WHEN (GROUPING(a.estado) = 1) THEN 'TOTAL'
    ELSE ISNULL(a.estado, 'UNKNOWN')
    END AS estado,
    CASE WHEN (GROUPING(m.id_plano) = 1) THEN 'GERAL'
    ELSE ISNULL(m.id_plano, 'UNKNOWN')
    END AS id_plano,
    sum(m.valor_2s_parcelas) valor_2s_parcelas,
    convert(decimal(15,2),convert(int,sum(convert(int,(m.valor_2s_parcelas+.0000000001)*100)*
    isnull(e.percentual,0.0))/100.0+.0000000001))/100 BB_Educar
    FROM
    movimento_dco m ,
    evento_plano e,
    agencia_tb a
    WHERE
    m.id_plano = e.id_plano
    AND m.agencia *= a.prefixo
    --AND  m.id_plano LIKE     'pm60%'
    AND m.data_pagamento >= '20070501'
    AND m.data_pagamento <= '20070531'
    AND m.codigo_retorno = '00'
    AND m.id_parcela > 1
    AND m.valor_2s_parcelas > 0.
    AND e.id_evento = 'BB-Educar'
    AND a.banco_id = '001'
    AND a.ordem = '00'
    group by m.id_plano, a.estado WITH ROLLUP
    order by a.estado, m.id_plano DESC
    Can anyone help me with this query?

    What version of Oracle, what version of SQL Server? Are the tables the exact same size? Are they both indexed the same? Are you running on the same or similar hardware? Are the Oracle parameters similar, like SGA size and PGA_AGGREGATE_TARGET? Did you gather statistics in Oracle?
    Did you compare execution plans in SQL Server vs. Oracle to see if SQL Server's execution plan is superior to the one Oracle is trying to use (most likely stale statistics)?
    There are many variables and we need more information than just the query :).
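    Beyond the environment questions above, note that the query as posted uses SQL Server / Sybase-specific constructs (ISNULL, CONVERT, the *= outer join, GROUP BY ... WITH ROLLUP), so it cannot run unchanged in Oracle anyway. A hedged sketch of an Oracle-syntax equivalent, assuming data_pagamento is a DATE column, moving the outer-join filters into the join clause, and approximating the cent-level truncation with ROUND:
    SELECT CASE WHEN GROUPING(a.estado)   = 1 THEN 'TOTAL'
                ELSE NVL(a.estado, 'UNKNOWN')   END AS estado,
           CASE WHEN GROUPING(m.id_plano) = 1 THEN 'GERAL'
                ELSE NVL(m.id_plano, 'UNKNOWN') END AS id_plano,
           SUM(m.valor_2s_parcelas) AS valor_2s_parcelas,
           -- percentual is assumed to be expressed as a percentage, hence the /100
           ROUND(SUM(m.valor_2s_parcelas * NVL(e.percentual, 0)) / 100, 2) AS bb_educar
      FROM movimento_dco m
      JOIN evento_plano  e ON e.id_plano = m.id_plano
      LEFT JOIN agencia_tb a ON a.prefixo  = m.agencia
                            AND a.banco_id = '001'
                            AND a.ordem    = '00'
     WHERE m.data_pagamento BETWEEN DATE '2007-05-01' AND DATE '2007-05-31'
       AND m.codigo_retorno = '00'
       AND m.id_parcela > 1
       AND m.valor_2s_parcelas > 0
       AND e.id_evento = 'BB-Educar'
     GROUP BY ROLLUP (m.id_plano, a.estado)
     ORDER BY a.estado, m.id_plano DESC;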

  • VAL_FIELD selection to determine RSDRI or MDX query: performance tuning

    According to one of the HTGs, I am working on performance tuning. One of the tips is to query base members by using BAS(xxx) in the expansion pane of a BPC report.
    I did so and found an interesting issue in one of the COPA reports.
    With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
    I checked that DIRECT_INCOME has three members: GROSS_PROFIT, SGA, and REV_OTHER. None of them has any formulas.
    Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT), BAS(SGA), BAS(REV_OTHER), and I got an RSDRI query again.
    So in summary:
    BAS(PARENT) => MDX query.
    BAS(CHILD1) => RSDRI query.
    BAS(CHILD2) => RSDRI query.
    BAS(CHILD3) => RSDRI query.
    BAS(CHILD1), BAS(CHILD2), BAS(CHILD3) => RSDRI query.
    I know VAL_FIELD is an SAP-reserved name for BPC dimensions. My question is: why does BAS(PARENT) generate an MDX query?
    Interestingly, I can repeat this behavior in my system. My intention is to always get an RSDRI query.
    George

    Ok - it turns out that Crystal Reports disregards BEx Query variables when they are put in the Default Values section of the filter selection.
    I had mine there, and even though CR prompted me for the variables and the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
    I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that the "Variable located in Default Values will be ignored in the MDX Access".
    After moving the variables to the Characteristic Restrictions my report worked as expected. The slow response time is still an issue, but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting fewer than 2k.
    Hope this helps someone else.

  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s) and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are:
    persistent cache across each app server -> cluster table,
    update cache in delta process is checked -> group on InfoProvider type,
    use cache despite virtual characteristics/key figures checked (one InfoCube has 1 virtual key figure which should have a static result for a day).
    => Do you know how I can get more detail than what's in 0TCT_C02 to break down the read- and write-cache event times, or do you have any recommendations?
    I have checked that no data loads were in progress on the InfoProviders and no master data loads (change run). Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • Fuzzy searching and concatenated datastore query performance problems.

    I am using the concatenated datastore and indexing two columns.
    The query I am executing includes an exact match on one column and a fuzzy match on the second column.
    When I execute the query, performance should improve as the exact match column is set to return fewer values.
    This is the case when we execute an exact match search on both columns.
    However, when one column is an exact match and the second column is a fuzzy match, this is not true.
    Is this normal processing, and why? Is this a bug?
    If you need more information please let me know.
    We are under a deadline and this is our final road block.
    TIA
    Colleen Geislinger

    I see that you have posted the message in the Oracle Text forum, good! You should get a better, more timely answer there.
    Larry
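    For reference, a hedged sketch of the kind of mixed exact/fuzzy CONTAINS query described above, assuming a MULTI_COLUMN_DATASTORE over two columns exposed as FIELD sections (table, column, preference, and index names are all illustrative, not taken from the original post):
    BEGIN
      ctx_ddl.create_preference('my_ds', 'MULTI_COLUMN_DATASTORE');
      ctx_ddl.set_attribute('my_ds', 'COLUMNS', 'last_name, first_name');
      ctx_ddl.create_section_group('my_sg', 'BASIC_SECTION_GROUP');
      ctx_ddl.add_field_section('my_sg', 'last_name',  'last_name',  TRUE);
      ctx_ddl.add_field_section('my_sg', 'first_name', 'first_name', TRUE);
    END;
    /
    CREATE INDEX people_txt_ix ON people (last_name)
      INDEXTYPE IS CTXSYS.CONTEXT
      PARAMETERS ('datastore my_ds section group my_sg');
    -- Exact match on one section, fuzzy match on the other
    SELECT id
      FROM people
     WHERE CONTAINS(last_name, 'SMITH WITHIN last_name AND fuzzy(JONSON) WITHIN first_name') > 0;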

  • Query performance problem when using hierarchies

    Hello All,
    I have a query which is built on a hierarchy with the following structure, for example:
    A
    |_B
    | |_L1
    |
    |_C
      |_L2
    When I restrict the query to hierarchy levels B and C simultaneously, the query executes fine. But when I directly restrict the query to hierarchy level A, the query runs endlessly.
    Could someone please help me out as to why this is the case?
    I don't have aggregates built on any of the hierarchy levels.
    Best Regards,
    Sanjay

    Hi Roberto,
    thanks for your response. However, the problem is not solved even after applying the suggestions of note 738098 :(. These queries used to execute fine until yesterday, and there have been no major additions to the hierarchy. Please let me know if there is anything else that can be done. We are planning to bounce the system and see if there are any performance improvements.
    PS: I've awarded points to you nevertheless, as the option suggested in the note seems useful and should be tried in case of these kinds of performance issues.
    Best Regards,
    Sanjay

  • MDX query performance on ASO cube with dynamic members

    We have an ASO cube and we are using MDX queries to extract data from that cube. We are doing some performance testing on the MDX data extract.
    Recently we made around 15-20 account dimension members dynamic in the ASO cube, and it now takes around an hour and a half for the query to run on an empty cube. Earlier the query ran in 1 minute on the empty cube when there were no dynamic members in the cube.
    I am not clear why it takes so much time to extract data via MDX from an empty cube when there is nothing to extract. Performance has also degraded when extracting data from the cube with data in it.
    Do dynamic members in the outline affect MDX performance? Is there a way to exclude dynamic members from the MDX extract?
    I appreciate any insights on this issue.

    I guess it depends on what the formulas of those members in the dynamic hierarchy are doing.
    As an extreme example, I can write a member formula that counts every unique member combination in the cube and assigns it to multiple members; regardless of whether I have any data in the database or not, that formula is going to resolve itself when you query it, and it is going to take a lot of time. You are probably somewhere between that and a simple formula that doesn't require any overhead. So without seeing the MDX it is hard to say what about it might be causing an issue.
    As far as excluding members, there are various functions in MDX to narrow down the set you are querying:
    Filter(), Contains(), Except(), Is(), Subset(), UDA(), etc.
    Keep in mind you did not make members dynamic, you made a hierarchy dynamic; that is not the same thing, and it does affect the way Essbase internally optimizes the database based on stored vs. dynamic hierarchies. So that alone can have an impact as well.

  • Query Performance problem after upgrade from 8i to 10g

    The following query takes longer in 10g.
    SELECT LIC_ID,FSCL_YR,KEY_NME,CRTE_TME_STMP,REMT_AMT,UNASGN_AMT,BAD_CK_IND,CSH_RCPT_PARTY_ID,csh_rcpt_id,REC_TYP,XENT_ID,CLNT_CDE,BTCH_CSH_STA,file_nbr,
    lic_nbr,TAX_NBR,ASGN_AMT FROM (
    SELECT /*+ FIRST_ROWS*/
         cpty.lic_id,
    cpty.clnt_cde,
    cpty.csh_rcpt_party_id,
    cpty.csh_rcpt_id,
    cpty.rec_typ,
    cpty.xent_id,
    cr.fscl_yr,
    cbh.btch_csh_sta,
    nam.key_nme,
    lic.file_nbr,
    lic.lic_nbr,
    cr.crte_tme_stmp,
    cr.remt_amt,
    cr.unasgn_amt,
    ee.tax_nbr,
    cr.asgn_amt,
    cr.bad_ck_ind
    FROM lic lic
    ,csh_rcpt_party cpty
    ,name nam
    ,xent ee
    ,csh_rcpt cr
    ,csh_btch_hdr cbh
    WHERE 1 = 1
    AND ee.xent_id = nam.xent_id
    AND cbh.btch_id = cr.btch_id
    AND cr.csh_rcpt_id = cpty.csh_rcpt_id
    AND ee.xent_id = cpty.xent_id
    AND cpty.lic_id = lic.lic_id(+)
    AND (cpty.clnt_cde IN ( SELECT clnt_cde
    FROM clnt
                   START WITH clnt_cde = '4006'
    CONNECT BY PRIOR clnt_cde_prnt = clnt_cde)
    OR cpty.clnt_cde IS NULL)
    AND nam.cur_nme_ind = 'Y'
    AND nam.ent_nme_typ = 'P' AND nam.key_nme LIKE 'WHITE%')
    order by lic_id
    Explain Plan in 8i
    0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=17 Card=1 Bytes=107)
    1 0 FILTER
    2 1 NESTED LOOPS (Cost=17 Card=1 Bytes=107)
    3 2 NESTED LOOPS (Cost=15 Card=1 Bytes=101)
    4 3 NESTED LOOPS (OUTER) (Cost=13 Card=1 Bytes=73)
    5 4 NESTED LOOPS (Cost=11 Card=1 Bytes=60)
    6 5 NESTED LOOPS (Cost=6 Card=1 Bytes=35)
    7 6 INDEX (RANGE SCAN) OF 'NAME_WBSRCH1_I' (NON-UNIQUE) (Cost=4 Card=1 Bytes=26)
    8 6 TABLE ACCESS (BY INDEX ROWID) OF 'XENT' (Cost=2 Card=4649627 Bytes=41846643)
    9 8 INDEX (UNIQUE SCAN) OF 'EE_PK' (UNIQUE) (Cost=1 Card=4649627)
    10 5 TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT_PARTY' (Cost=5 Card=442076 Bytes=11051900)
    11 10 INDEX (RANGE SCAN) OF 'CPTY_EE_FK_I' (NON-UNIQUE) (Cost=2 Card=442076)
    12 4 TABLE ACCESS (BY INDEX ROWID) OF 'LIC' (Cost=2 Card=3254422 Bytes=42307486)
    13 12 INDEX (UNIQUE SCAN) OF 'LIC_PK' (UNIQUE) (Cost=1 Card=3254422)
    14 3 TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT' (Cost=2 Card=6811443 Bytes=190720404)
    15 14 INDEX (UNIQUE SCAN) OF 'CR_PK' (UNIQUE) (Cost=1 Card=6811443)
    16 2 TABLE ACCESS (BY INDEX ROWID) OF 'CSH_BTCH_HDR' (Cost=2 Card=454314 Bytes=2725884)
    17 16 INDEX (UNIQUE SCAN) OF 'CBH_PK' (UNIQUE) (Cost=1 Card=454314)
    18 1 FILTER
    19 18 CONNECT BY
    20 19 INDEX (UNIQUE SCAN) OF 'CLNT_PK' (UNIQUE) (Cost=1 Card=1 Bytes=4)
    21 19 TABLE ACCESS (BY USER ROWID) OF 'CLNT'
    22 19 TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (Cost=2 Card=1 Bytes=7)
    23 22 INDEX (UNIQUE SCAN) OF 'CLNT_PK' (UNIQUE) (Cost=1 Card=1)
    Explain Plan in 10g
    0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=19 Card=1 Bytes=112)
    1 0 SORT (ORDER BY) (Cost=19 Card=1 Bytes=112)
    2 1 FILTER
    3 2 NESTED LOOPS (Cost=18 Card=1 Bytes=112)
    4 3 NESTED LOOPS (Cost=16 Card=1 Bytes=106)
    5 4 NESTED LOOPS (OUTER) (Cost=14 Card=1 Bytes=78)
    6 5 NESTED LOOPS (Cost=12 Card=1 Bytes=65)
    7 6 NESTED LOOPS (Cost=6 Card=1 Bytes=34)
    8 7 INDEX (RANGE SCAN) OF 'NAME_WBSRCH1_I' (INDEX) (Cost=4 Card=1 Bytes=25)
    9 7 TABLE ACCESS (BY INDEX ROWID) OF 'XENT' (TABLE) (Cost=2 Card=1 Bytes=9)
    10 9 INDEX (UNIQUE SCAN) OF 'EE_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    11 6 TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT_PARTY' (TABLE) (Cost=6 Card=1 Bytes=31)
    12 11 INDEX (RANGE SCAN) OF 'CPTY_EE_FK_I' (INDEX) (Cost=2 Card=4)
    13 5 TABLE ACCESS (BY INDEX ROWID) OF 'LIC' (TABLE) (Cost=2 Card=1 Bytes=13)
    14 13 INDEX (UNIQUE SCAN) OF 'LIC_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    15 4 TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT' (TABLE) (Cost=2 Card=1 Bytes=28)
    16 15 INDEX (UNIQUE SCAN) OF 'CR_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    17 3 TABLE ACCESS (BY INDEX ROWID) OF 'CSH_BTCH_HDR' (TABLE) (Cost=2 Card=1 Bytes=6)
    18 17 INDEX (UNIQUE SCAN) OF 'CBH_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    19 2 FILTER
    20 19 CONNECT BY (WITH FILTERING)
    21 20 TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (TABLE) (Cost=2 Card=1 Bytes=15)
    22 21 INDEX (UNIQUE SCAN) OF 'CLNT_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    23 20 NESTED LOOPS
    24 23 BUFFER (SORT)
    25 24 CONNECT BY PUMP
    26 23 TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (TABLE) (Cost=2 Card=1 Bytes=7)
    27 26 INDEX (UNIQUE SCAN) OF 'CLNT_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    28 20 TABLE ACCESS (FULL) OF 'CLNT' (TABLE) (Cost=5 Card=541 Bytes=5951)
    The explain plan looks different in steps 19 to 28. I am not sure why 10g has more steps.

    Hi
    I have no experience with 8i. I do know 10g does costing differently from 8i, so I think the other plan might have been eliminated.
    Normally when I see differences, I just collect statistics on the tables and the indexes and remove the hints. Hints are not good. This has helped me solve a few problems.
    Thanks
    CT
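    A minimal sketch of CT's suggestion, i.e. refreshing optimizer (and index) statistics on the tables in the join before retesting without the FIRST_ROWS hint. The schema name APPOWNER is a placeholder:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPOWNER', tabname => 'CSH_RCPT',
                                    cascade => TRUE,  -- also gather index statistics
                                    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPOWNER', tabname => 'CSH_RCPT_PARTY',
                                    cascade => TRUE,
                                    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
      -- repeat for the remaining tables in the join:
      -- LIC, NAME, XENT, CSH_BTCH_HDR and CLNT
    END;
    /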

  • Spatial query performance problem after upgrade to 10G

    I am in the process of converting my database from a 9i box to a new 10G 64-bit box. But I have found a problem which is causing some reports to be slower on the new box. I have simplified the queries down to having the user_sdo_geom_metadata table joined to use the diminfo in the queries (I know that I am not using them in these queries, but I simplified for testing purposes...)
    If I run the following and look at the explain plan, I get full table scans for both spatial tables and index lookups for the user_sdo_geom_metadata queries, and it runs for about 14 seconds.
    SELECT ROWNUM
    from COUNTIES s,
    NOMINATIONS O,
    (select diminfo from user_sdo_geom_metadata where table_name='COUNTIES') S_DIM,
    (select diminfo from user_sdo_geom_metadata where table_name='NOMINATIONS') O_DIM
    where sdo_filter(S.GEOM,o.geom, 'querytype=WINDOW')='TRUE'
    and sdo_geom.within_distance(o.geom,0,S.GEOM,.5)='TRUE';
    If I just remove the two user_sdo_geom_metadata joins, I get spatial index usage on COUNTIES and the whole thing runs in less than a second.
    SELECT ROWNUM
    from COUNTIES s,
    NOMINATIONS O
    where sdo_filter(S.GEOM,o.geom, 'querytype=WINDOW')='TRUE'
    and sdo_geom.within_distance(o.geom,0,S.GEOM,.5)='TRUE';
    I have rebuilt the indexes, gathered stats, and tried hints to force the first query to use the spatial index. None of which made any change.
    Has anyone else seen this?
    Gerard Vidrine

    Hi Gerard,
    When the query window comes from a table Oracle always recommends:
    1) Use the /*+ ordered */ hint
    2) Put the table the query window comes from (geometry-2 in the query) first in the FROM clause
    However, your query is also written very strangely. Do you know about SDO_WITHIN_DISTANCE? Or are you trying to do SDO_ANYINTERACT (since the distance is 0)?
    So I would write the query you have as:
    SELECT ROWNUM
    from NOMINATIONS O, COUNTIES s
    where sdo_relate(S.GEOM,o.geom, 'querytype=WINDOW mask=anyinteract')='TRUE';
    or in Oracle10g:
    SELECT ROWNUM
    from NOMINATIONS O, COUNTIES s
    where sdo_anyinteract(S.GEOM,o.geom)='TRUE';

  • Spatial query performance problems

    In preparation for making using of spatial data in our oracle database, I wanted to create a view (materialised) that brings together data from a few different tables into one place ready for publishing as a WMS layer.
    I'm stumped at first base by the crippling performance of an Oracle Spatial function. Later joins on ordinary fields are OK, but the spatial join of two tables using the following SQL runs for an absurd length of time (I've given up - I don't know how long it actually takes, only that it takes far too long).
    SELECT /*+ ordered */
    lg.GRIDREF, lg.SYSTEM, lg.PARENT, lg.TYPE,
    lrd.REGION_CODE
    FROM TABLE (SDO_JOIN('L_GRIDS','BOUNDARY','L_REGION_DEFINITION','BOUNDARY','mask=COVERS')) c,
    L_GRIDS lg, L_REGION_DEFINITION lrd
    WHERE c.rowid1 = lg.rowid AND c.rowid2 = lrd.rowid
    ORDER BY lrd.REGION_CODE
    Both tables have spatial indexes. L_REGION_DEFINITION contains 200 rows with complex boundaries stored as spatial objects. L_GRIDS contains 475,000 rows, each with a trivially simple spatial object consisting of a square polygon of 4 points.
    The database is 10g patched to the latest release. The server has dual quad-core Xeon processors with 16 GB of RAM. I didn't expect it to be a lightning-fast query, but surely it should be usable?
    Any ideas?

    Try to upgrade to at least 11.2.0.2 and use the following query
    SELECT /*+ leading(lrd lg) */
    lg.GRIDREF, lg.SYSTEM, lg.PARENT, lg.TYPE,
    lrd.REGION_CODE
    FROM L_GRIDS lg, L_REGION_DEFINITION lrd
    WHERE sdo_relate(lg.boundary, lrd.boundary, 'mask=COVEREDBY') = 'TRUE'
    ORDER BY lrd.REGION_CODE;
    And since I'm not sure about your query's intention, maybe it should be "mask=INSIDE+COVEREDBY";
    please check out the Oracle Spatial developer guide for details about the different masks.

  • Physical query (performance) problem in OBIEE 11.1.1.6.8

    Hi all,
    I have built the same logical model for a SQL Server database and for an Oracle database. I have a performance issue with SQL Server: the physical query (in the NQS query log) that hits the SQL Server DB does not include the filter, so it takes more time to respond. With the same model used against the Oracle DB, the physical query is passed to Oracle with the filter and it does not take more time to respond. This is on OBIEE 11.1.1.6.8.
    Please help me.

    Can you run the same query directly on the physical SQL Server DB and let us know the outcome?
    Thanks,

  • Query performance problem in APEX

    Hi All,
    I am using
    select address1,address2,address3,city,place,pincode,siteid,bpcnum_0, contactname,fax,mobile,phone,website,rn from (select address1,address2,address3,city,place,pincode,siteid,bpcnum_0, contactname,fax,mobile,phone,website,dense_rank() over(order by contactname,address1)as rn, row_number() over (partition by contactname, address1 order by contactname, address1) as rn1 from vw_sub_cl_add1 where siteid=v('P10_SITENO') and bpcnum_0 = v('P10_CLNO')) emp where rn1 =1 and rn >= v('P10_RN')
    the above query to extract the details from a view, in a PL/SQL region.
    Pagination is also working fine, i.e. only 4 records at a time are displayed.
    The problem is that it is taking 1 minute and 5 seconds to display the next set of records.
    Please, could anyone tell me how to reduce the page rendering time?
    Thanks in advance
    bye
    Srikavi

    If it's really true that the query is fast when using bind variables, then can you rewrite your query in your application to use bind variables?
    Try to rewrite your query to use :P10_SITENO instead of v('P10_SITENO') etc. where possible.
    You can find more details at Patrick Wolf's blog http://www.inside-oracle-apex.com/2006/12/drop-in-replacement-for-v-and-nv.html
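    Following that advice, a hedged sketch of the same query with the v() calls replaced by bind variables (the item names come straight from the original query; nothing else is changed):
    SELECT address1, address2, address3, city, place, pincode, siteid, bpcnum_0,
           contactname, fax, mobile, phone, website, rn
      FROM (SELECT address1, address2, address3, city, place, pincode, siteid, bpcnum_0,
                   contactname, fax, mobile, phone, website,
                   DENSE_RANK() OVER (ORDER BY contactname, address1) AS rn,
                   ROW_NUMBER() OVER (PARTITION BY contactname, address1
                                      ORDER BY contactname, address1) AS rn1
              FROM vw_sub_cl_add1
             WHERE siteid   = :P10_SITENO
               AND bpcnum_0 = :P10_CLNO) emp
     WHERE rn1 = 1
       AND rn >= :P10_RN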

  • Simple query performance problem

    Hey!
    I'm using two simple XQUpdate queries in my wholedoc container.
    a) insert nodes <node name="my_name"/> as last into collection('xml_content.dbxml')[dbxml:metadata('dbxml:name')='document.xml']/nodes[1]
    b) delete node collection('xml_content.dbxml')[dbxml:metadata('dbxml:name')='document.xml']/nodes[1]/node[@name='my_name'][last()]
    The queries are operating on the same document.
    1) First a bunch of 'insert' queries is executed (ca. 50),
    2) then a bunch of 'delete' queries (ca. 50).
    The name attribute of the node element varies.
    After a couple of iterations of 1) and 2), each XQUpdate statement takes a long time to complete (ca. 5-10 seconds, whereas before it took much less than a second).
    The number of node elements in the nodes element never exceeds 50. And eventually it works very slowly even with 2 node elements.
    Does anybody have an idea what goes wrong after a certain number of queries? What are the possible solutions here? How can I examine what is wrong?
    I didn't find relevant information in the DB XML docs. Maybe I should look at the BDB docs?
    Thanks in advance,
    Vyacheslav

    Here is a patch to fix the problem in 2.4.16. Note that the slowdown that this patch fixes only applies to whole document containers.
    Lauren Foutz
    diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.cpp dbxml-2.4.16/dbxml/src/dbxml/Indexer.cpp
    --- dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.cpp     2008-10-21 17:27:22.000000000 -0400
    +++ dbxml-2.4.16/dbxml/src/dbxml/Indexer.cpp     2009-04-27 14:06:40.000000000 -0400
    @@ -477,7 +477,8 @@
                        if(updateStats_) {
                             // Get the size of the node
                             size_t nodeSize = 0;
    -                         if(ninfo != 0) {
    +                         // Node size is kept only for node containers
    +                         if(ninfo != 0 && container_->isNodeContainer()) {
                                  const NsFormat &fmt =
                                       NsFormat::getFormat(NS_PROTOCOL_VERSION);
                                  nodeSize = ninfo->getNodeDataSize();
    @@ -487,18 +488,22 @@
                                                        0, /*count*/true);
    -                         // Store the node stats for this node
    +                         /* Store the node stats for this node, only the descendants
    +                          * of the node being partially indexed are being removed/added
    +                          */
                             StructuralStats *cstats = &cis->stats[0];
    -                         cstats->numberOfNodes_ = 1;
    +                         cstats->numberOfNodes_ = this->getStatsNumberOfNodes(ninfo);
                             cstats->sumSize_ = nodeSize;
                             // Increment the descendant stats in the parent
                             StructuralStats *pstats = 0;
                             if (pis) {
                                  pstats = &pis->stats[0];
    -                              pstats->sumChildSize_ += nodeSize;
    -                              pstats->sumDescendantSize_ +=
    -                                   nodeSize + cstats->sumDescendantSize_;
    +                              if (container_->isNodeContainer()) {
    +                                   pstats->sumChildSize_ += nodeSize;
    +                                   pstats->sumDescendantSize_ +=
    +                                        nodeSize + cstats->sumDescendantSize_;
    +                              }
                                  pstats = &pis->stats[k.getID1()];
                                  pstats->sumNumberOfChildren_ += 1;
    diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.hpp dbxml-2.4.16/dbxml/src/dbxml/Indexer.hpp
    --- dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.hpp     2008-10-21 17:27:18.000000000 -0400
    +++ dbxml-2.4.16/dbxml/src/dbxml/Indexer.hpp     2009-04-27 14:08:20.000000000 -0400
    @@ -19,6 +19,7 @@
    #include "OperationContext.hpp"
    #include "KeyStash.hpp"
    #include "StructuralStatsDatabase.hpp"
    +#include "nodeStore/NsNode.hpp"
    namespace DbXml
    @@ -181,6 +182,8 @@
         void checkUniqueConstraint(const Key &key);
         void addIDForString(const unsigned char *strng);
    +     
    +     virtual int64_t getStatsNumberOfNodes(const IndexNodeInfo *ninfo) const { return 1; }
    protected:     
         // The operation context within which the index keys are added
    diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.cpp dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.cpp
    --- dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.cpp     2008-10-21 17:27:22.000000000 -0400
    +++ dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.cpp     2009-04-27 14:04:42.000000000 -0400
    @@ -103,6 +103,7 @@
              const DocID &did = document_.getID();
              DbWrapper &db = *document_.getDocDb();
              ElementIndexList nodes(*this);
    +          partialIndexNode_ = node->getNid();
              do {
                   bool hasValueIndex = false;
                   bool hasEdgePresenceIndex = false;
    @@ -124,6 +125,7 @@
              nodes.generate(*this);
    +     partialIndexNode_ = 0;
         return ancestorHasValueIndex;
    @@ -203,6 +205,19 @@
    +
    +int64_t NsReindexer::getStatsNumberOfNodes(IndexNodeInfo *ninfo) const
    +{
    +     /* Get the number of this node being removed or added, only the descendants
    +      * of the node being partially indexed are being removed/added
    +      */
    +     DBXML_ASSERT(!partialIndexNode_ || (ninfo != 0));
    +     if (!partialIndexNode_ || (partialIndexNode_.compareNids(ninfo->getNodeID()) < 0)) {
    +          return 1;     
    +     }
    +     return 0;
    +}
    +
    const char *NsReindexer::lookupUri(int uriIndex)
    diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.hpp dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.hpp
    --- dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.hpp     2008-10-21 17:27:18.000000000 -0400
    +++ dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.hpp     2009-04-27 14:09:04.000000000 -0400
    @@ -45,6 +45,7 @@
         const char *lookupUri(int uriIndex);
         void indexAttribute(const char *aname, int auri,
                       NsNodeRef &parent, int index);
    +     virtual int64_t getStatsNumberOfNodes(IndexNodeInfo *ninfo) const;
    private:
         IndexSpecification is_;
         KeyStash stash_;
    @@ -54,6 +55,9 @@
         // this is redundant wrt Indexer, but dict_ in Indexer triggers
         // behavior that this class does not want
         DictionaryDatabase *dictionary_;
    +          
    +     // The node being indexed in partial indexing
    +     NsNid partialIndexNode_;
    }
