BICS performance vs. MDX

Does anyone know what the performance improvement is of using BICS over MDX?  I expect it to be significant.
Thanks,
John

Hi John,
I'm not sure there is an official benchmark for this, but from what I've heard the performance is similar on small result sets (a few thousand rows). For large result sets, however, the datatype conversion that MDX requires can have an impact on performance.
thx,
Romaric

Similar Messages

  • Performance of MDX Calculation

    HI All,
    What are the steps and ways to optimize the performance of the calculated Member created in the Cube.
    I have the below Code for the calculated member:
    CREATE MEMBER CURRENTCUBE.[Measures].[K3001 - Loaded Freight Rate (USD/FFE)]
     AS IIF([Measures].[Actual/Forecast FFE Loaded] = 0, NULL, 
    (([Accounts].[LMB Lvl4].&[LMB.4110],[Accounts].[Account Type].&[GR],[Measures].[Actual/Forecast USD])
    +
    ([Accounts].[LMB Lvl5].&[LMB.5157],[Accounts].[Account Type].&[Rest of PnL],[Measures].[Actual/Forecast USD]))
    / ([Measures].[Actual/Forecast FFE Loaded])),
    VISIBLE = 1 ,  DISPLAY_FOLDER = 'Revenue' ,  ASSOCIATED_MEASURE_GROUP = 'Key Figures';   
    Here Accounts is a dimension.
    Now this calculation shows me the correct value, but when I drill into it with another dimension, say a STRING attribute, it takes 40-50 seconds to return the result.
    I want to know whether my query needs some correction or whether some other optimization technique is required.
    Thanks
    Sudipta Ghosh Tata Consultancy Services

    Hi Sudipta,
    IIF is one of the most popular MDX functions. Yet, it can cause significant performance degradation, which is often blamed on other parts of the system. Many times it is simple to rewrite the MDX expression to get rid of IIF altogether, and other times it
    is possible to slightly change the IIF to increase performance.
    http://sqlblog.com/blogs/mosha/archive/2007/01/28/performance-of-iif-function-in-mdx.aspx
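    As a sketch of the kind of rewrite that article describes (member names are taken from the question; the hidden helper member [K3001 - Numerator] is hypothetical), factoring the expensive sum out keeps each IIF branch a plain member reference, which the engine can often evaluate in block mode:

    ```mdx
    -- Sketch only: hypothetical helper member so IIF's branches stay simple.
    CREATE MEMBER CURRENTCUBE.[Measures].[K3001 - Numerator] AS
        ([Accounts].[LMB Lvl4].&[LMB.4110], [Accounts].[Account Type].&[GR],
         [Measures].[Actual/Forecast USD])
      + ([Accounts].[LMB Lvl5].&[LMB.5157], [Accounts].[Account Type].&[Rest of PnL],
         [Measures].[Actual/Forecast USD]),
    VISIBLE = 0;

    CREATE MEMBER CURRENTCUBE.[Measures].[K3001 - Loaded Freight Rate (USD/FFE)] AS
        IIF([Measures].[Actual/Forecast FFE Loaded] = 0,
            NULL,
            [Measures].[K3001 - Numerator] / [Measures].[Actual/Forecast FFE Loaded]),
    VISIBLE = 1, DISPLAY_FOLDER = 'Revenue', ASSOCIATED_MEASURE_GROUP = 'Key Figures';
    ```

    Whether this actually helps depends on the cube; verify with Profiler as suggested below.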
    In addition, I'd suggest you enable SQL Server Profiler to monitor the queries fired by the process; once you find queries that take a very long time to run, consider creating smaller cube partitions or optimizing the query by adding an index or
    partition to improve query performance. Here are some links about performance tuning.
    http://www.mssqltips.com/sqlservertip/2565/ssas--best-practices-and-performance-optimization--part-1-of-4/
    http://sqlmag.com/t-sql/top-9-analysis-services-tips
    http://channel9.msdn.com/Events/TechEd/NewZealand/2013/DBI414
    Hope this helps.
    Regards,
    Charlie Liao
    TechNet Community Support

  • SUBSELECT vs WHERE performance issues MDX

    Hello,
    I've built SSRS report that gets CustomerID as parameter and runs MDX query with it.
    With WHERE clause it only takes 1 second to run, while if I pass it to SUBSELECT clause it takes 13 seconds!
    And I have to use SUBSELECT because I want to show the member's name in the results
    The syntax of the long query is:
        SELECT NON EMPTY { [Measures].[Revenue] } ON COLUMNS,
        NON EMPTY { ([CUBE DIM DATE].[Month CD].[Month CD].ALLMEMBERS *
        [CUBE DIM CUSTOMER].[Account MNG].[Account MNG].ALLMEMBERS *
        [CUBE DIM PRODUCT].[Product CD].[Product CD].ALLMEMBERS ) }
        DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME, MEMBER_KEY ON ROWS FROM
        ( SELECT ({ [CUBE DIM CUSTOMER].[Customer No].&[111111]})   on 0 from   [CUBE_Prod] )
    So if instead of the last line I use:
        [CUBE_Prod]  WHERE [CUBE DIM CUSTOMER].[Customer No].&[111111]
    ...leaving all the rest the same then it only takes 1 second.
    Obviously, I am missing something...
    Please help...
    Thank you in advance
    Michael

    http://stackoverflow.com/questions/21463364/mdx-sub-select-vs-where-performance-issues
    no sense in leaving points on the table ;-)
    BI Developer and lover of data (Blog | Twitter)
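    A sketch of a possible middle ground (reusing the names from Michael's query; untested): keep the fast WHERE clause and surface the customer's caption as a calculated measure, since the slicer still sets the current member of the [Customer No] hierarchy:

    ```mdx
    WITH MEMBER [Measures].[Customer Name] AS
        [CUBE DIM CUSTOMER].[Customer No].CurrentMember.MEMBER_CAPTION
    SELECT NON EMPTY { [Measures].[Customer Name], [Measures].[Revenue] } ON COLUMNS,
    NON EMPTY { ([CUBE DIM DATE].[Month CD].[Month CD].ALLMEMBERS *
                 [CUBE DIM CUSTOMER].[Account MNG].[Account MNG].ALLMEMBERS *
                 [CUBE DIM PRODUCT].[Product CD].[Product CD].ALLMEMBERS ) }
    DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME, MEMBER_KEY ON ROWS
    FROM [CUBE_Prod]
    WHERE [CUBE DIM CUSTOMER].[Customer No].&[111111]
    ```

    This avoids the subselect entirely while still returning the member's name in the result set.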

  • Mdx Query performance problem

    Hi
    Is there any way to control the performance of Mdx expressions that use the Filter function? The following Mdx statement is an example of a query we are generating to return filtered characteristic values for users to make selections for variables.
    Note: It is intentional that the column axis is not populated as we are interested only in the returned characteristic values.
    SELECT {} ON COLUMNS,
    Order(
         Filter(
              {[ZPLANTYPE].[All].Children},
              (([ZPLANTYPE].CurrentMember.Name >= 'a' AND [ZPLANTYPE].CurrentMember.Name < 'b') OR
              ([ZPLANTYPE].CurrentMember.Name >= 'A' AND [ZPLANTYPE].CurrentMember.Name < 'B'))
         ),
         [ZPLANTYPE].CurrentMember.Name, BASC
    ) ON ROWS FROM [$IC_FLT]
    In a real example with 162,000 characteristics this query is taking up to 5 minutes to run - clearly unacceptable as part of a user interface. It appears that behind the scenes a sequential read of the underlying dimension table is being carried out.
    It is difficult to create a more sophisticated query due to the lack of string handling logic in the raw MDX language.
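    If the MDX provider exposes the VBA-style string functions (Microsoft Analysis Services does; SAP BW's MDX subset may not, so this is an assumption worth verifying), the two case-sensitive ranges collapse into one comparison on the upper-cased first character:

    ```mdx
    SELECT {} ON COLUMNS,
    Order(
      Filter(
        {[ZPLANTYPE].[All].Children},
        UCase(Left([ZPLANTYPE].CurrentMember.Name, 1)) = "A"
      ),
      [ZPLANTYPE].CurrentMember.Name, BASC
    ) ON ROWS
    FROM [$IC_FLT]
    ```

    Note this only simplifies the predicate; it would not by itself avoid the sequential read of the dimension table.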

    Hi,
    I have been through the queries, and understand that the "_MSCM1" is being aggregated across Product and Paid Amount from the query extract below:
    member [Accounts].[_MSCM1] as 'AGGREGATE({[_Product2]}, [Accounts].[Paid Amount])'
    If I am getting it right, there is an aggregation rule missing for [Paid Amount] (I think that's the reason; the query aggregates _MSCM1 by "Paid Amount", i.e. just like any other dimension).
    Could you please check this? This is why I think BI is generating two queries. I am sorry if I got this wrong.
    Hope this helps.
    Thank you,
    Dhar

  • SSAS Tabular. MDX slow when reporting high cardinality columns.

    Even with small fact tables (~20 million rows), MDX is extremely slow when there are high cardinality columns in the body of the report.
    The equivalent DAX query, for example, is subsecond.
    Evaluate
    SUMMARIZE (
    CALCULATETABLE('Posted Entry',
    'Cost Centre'[COST_CENTRE_ID]="981224"
    , 'Vendor'[VENDOR_NU]="100001"
    ,'Posted Entry'[DR_CR]="S")
    ,'Posted Entry'[DOCUMENT_ID]
    ,'Posted Entry'[DOCUMENT_LINE_DS]
    ,'Posted Entry'[TAX_CODE_ID]
    ,"Posted Amount",[GL Amount]
    ,"Document Count",[Document Count]
    ,"Record Count",[Row Count]
    ,"Document Line Count",[Document Line Count]
    ,"Vendor Count",[Vendor Count]
    )
    order by
    'Posted Entry'[GL Amount] desc
    The MDX equivalent takes 1 minute 13 seconds.
    Select
    { [Measures].[Document Count],[Measures].[Document Line Count],[Measures].[GL Amount], [Measures].[Row Count],[Measures].[Vendor Count]} On Columns ,
    NON EMPTY [Posted Entry].[DOCUMENT_ID_LINE].[DOCUMENT_ID_LINE].AllMembers * [Posted Entry].[DOCUMENT_LINE_DS].[DOCUMENT_LINE_DS].AllMembers * [Posted Entry].[TAX_CODE_ID].[TAX_CODE_ID].AllMembers On Rows
    From [Scrambled Posted Entry]
    WHERE ( [Cost Centre].[COST_CENTRE_ID].&[981224] ,[Vendor].[VENDOR_NU].&[100001] ,{[Posted Entry].[DR_CR].&[S]})
    I've tried this under 2012 SP1 and it is still a problem. The slow MDX happens when there is a high cardinality column in the rows and selection is done on joined tables. DAX performs well; MDX doesn't. Using client generated MDX or bigger fact tables makes
    the situation worse.
    Is there a go fast switch for MDX in Tabular models?

    Hi,
    There are only 50 rows returned. The MDX is still slow even if you only return a couple of rows.
    It comes down to the fact that DAX produces much more efficient queries against the engine.
    FOR DAX
    e.g.
    after a number of reference queries in the trace, the main VertiPaq SE query is
    SELECT
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID],
    SUM([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[POSTING_ENTRY_AMT])
    FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
    WHERE
     ([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]) IN {('0273185857', 'COUOXKCZKKU:CKZTCO CCU YCOT
    XY UUKUO ZTC', 'P0'), ('0272325356', 'ZXOBWUB ZOOOUBL CCBW ZTOKKUB:YKB 9T KOD', 'P0'), ('0271408149', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT BUY', 'P0'), ('0273174968', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0273785256', 'ZOUYOWU ZOCO CLU:Y/WTC-KC
    YOBT 3ZT JXO', 'P0'), ('0273967993', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KCB', 'P0'), ('0272435413', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0'), ('0273785417', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0'), ('0272791529', 'ZOUYOWU ZOCO CLU:Y/WTC-KC
    YOBT 7.3ZT JXO', 'P0'), ('0270592030', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 89.3Z JXO', 'P0')...[49 total tuples, not all displayed]};
    showing a CPU time of 312 and duration of 156. It looks like it has constructed an in clause for every row it is retrieving.
    The total for the DAX query from the profiler is 889 CPU time and duration of 1669
    For the MDX
    after a number of reference queries in the trace, the expensive VertiPaq SE query is
    SELECT
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]
    FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
    WHERE
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DR_CR] = 'S';
    showing a CPU time of 49213 and duration of 25818.
    It looks like it is only filtering by a debit/credit indicator .. this will be half the fact table.
    After that it fires up some tuple based queries (similar to the MDX but with crossjoins)
    SELECT
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]
    FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
    LEFT OUTER JOIN [Vendor_6b7b13d5-69b8-48dd-b7dc-14bcacb6b641] ON [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[VENDOR_NU]=[Vendor_6b7b13d5-69b8-48dd-b7dc-14bcacb6b641].[VENDOR_NU]
    LEFT OUTER JOIN [Cost Centre_f181022d-ef5c-474a-9871-51a30095a864] ON [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[COST_CENTRE_ID]=[Cost Centre_f181022d-ef5c-474a-9871-51a30095a864].[COST_CENTRE_ID]
    WHERE
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DR_CR] = 'S' AND
    ([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]) IN {('0271068437/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT ZTC', 'P0'), ('0272510444/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0272606954/1', null, 'P0'), ('0273967993/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KCB', 'P0'), ('0272325356/1', 'ZXOBWUB ZOOOUBL CCBW ZTOKKUB:YKB 9T KOD', 'P0'), ('0272325518/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KUW', 'P0'), ('0273231318/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT ZWB', 'P0'), ('0273967504/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0274055644/1', 'YBUCC OBUC YTT OYX:OD 5.3F81.3ZT TOZUT', 'P5'), ('0272435413/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0')...[49 total tuples, not all displayed]};
    This query takes 671 CPU and duration 234; more expensive than the most expensive part of the DAX query but still insignificant compared to the expensive part of the MDX.
    The total for the MDX query from the profiler is 47206 CPU time and duration of 73024.
    To me the problem looks like the MDX fires a very expensive query against the fact table, filtering by only one element of the fact table, and then goes about refining the set later on.

  • Secured Dimension is blocking my mdx allocation

    I needed to set my account dimension as secured (READ and WRITE), because my security is defined by account member.
    But the problem is that with this, I can no longer perform my MDX allocation, even if I execute the procedure with a user whose profile has all task profile permissions (read and write) on all members.
    Hope you can help me.
    Thanks a lot!!!

    Hi,
    First of all, I would say that securing the account dimension is something that can be done, but it is not really a best practice.
    That said, I would advise you to use the *IGNORE_SECURITY statement in the allocation logic.
    That could maybe help.
    Kind Regards,
    Patrick

  • Controlling the MDX constructed by Excel PivotTables

    Running SQL Profiler while refreshing an Excel PivotTable that runs for several minutes revealed how poorly the MDX that Excel constructs performs. Can someone explain why the MDX is this poor? Using an example against the Adventure Works database,
    the following query is used to populate a PivotTable.
    SELECT {
    [Measures].[Internet Sales Amount],
    [Measures].[Internet Order Quantity]
    } DIMENSION PROPERTIES
    PARENT_UNIQUE_NAME,
    HIERARCHY_UNIQUE_NAME
    ON COLUMNS ,
    NON EMPTY
    CrossJoin(
    CrossJoin(
    CrossJoin(
    CrossJoin(
    Hierarchize({DrilldownLevel({[Customer].[Customer].[All Customers]},,,INCLUDE_CALC_MEMBERS)}),
    Hierarchize({DrilldownLevel({[Customer].[Country].[All Customers]},,,INCLUDE_CALC_MEMBERS)})
    ), Hierarchize({DrilldownLevel({[Customer].[State-Province].[All Customers]},,,INCLUDE_CALC_MEMBERS)})
    ), Hierarchize({DrilldownLevel({[Customer].[City].[All Customers]},,,INCLUDE_CALC_MEMBERS)})
    ), Hierarchize({DrilldownLevel({[Customer].[Postal Code].[All Customers]},,,INCLUDE_CALC_MEMBERS)})
    ) DIMENSION PROPERTIES
    PARENT_UNIQUE_NAME,
    HIERARCHY_UNIQUE_NAME,
    [Customer].[State-Province].[State-Province].[Country],
    [Customer].[Postal Code].[Postal Code].[City],
    [Customer].[Customer].[Customer].[Address],
    [Customer].[Customer].[Customer].[Birth Date],
    [Customer].[Customer].[Customer].[Commute Distance],
    [Customer].[Customer].[Customer].[Date of First Purchase],
    [Customer].[Customer].[Customer].[Education],
    [Customer].[Customer].[Customer].[Email Address],
    [Customer].[Customer].[Customer].[Gender],
    [Customer].[Customer].[Customer].[Home Owner],
    [Customer].[Customer].[Customer].[Marital Status],
    [Customer].[Customer].[Customer].[Number of Cars Owned],
    [Customer].[Customer].[Customer].[Number of Children At Home],
    [Customer].[Customer].[Customer].[Occupation],
    [Customer].[Customer].[Customer].[Phone],
    [Customer].[Customer].[Customer].[Postal Code],
    [Customer].[Customer].[Customer].[Total Children],
    [Customer].[Customer].[Customer].[Yearly Income],
    [Customer].[City].[City].[State-Province]
    ON ROWS
    FROM [Adventure Works]
    CELL PROPERTIES
    VALUE,
    FORMAT_STRING,
    LANGUAGE,
    BACK_COLOR,
    FORE_COLOR,
    FONT_FLAGS
    If on the Display tab of the PivotTable Options dialog box, the Show Properties in Tooltips and Show calculated members from OLAP server are unchecked, the query is simplified but still extremely slow.
    SELECT {
    [Measures].[Internet Sales Amount],
    [Measures].[Internet Order Quantity]
    } DIMENSION PROPERTIES
    PARENT_UNIQUE_NAME,
    HIERARCHY_UNIQUE_NAME
    ON COLUMNS ,
    NON EMPTY
    CrossJoin(
    CrossJoin(
    CrossJoin(
    CrossJoin(
    Hierarchize({DrilldownLevel({[Customer].[Customer].[All Customers]})}),
    Hierarchize({DrilldownLevel({[Customer].[Country].[All Customers]})})
    ), Hierarchize({DrilldownLevel({[Customer].[State-Province].[All Customers]})})
    ), Hierarchize({DrilldownLevel({[Customer].[City].[All Customers]})})
    ), Hierarchize({DrilldownLevel({[Customer].[Postal Code].[All Customers]})})
    ) DIMENSION PROPERTIES
    PARENT_UNIQUE_NAME,
    HIERARCHY_UNIQUE_NAME
    ON ROWS
    FROM [Adventure Works]
    CELL PROPERTIES
    VALUE,
    FORMAT_STRING,
    LANGUAGE,
    BACK_COLOR,
    FORE_COLOR,
    FONT_FLAGS
    But that's not the true performance killer. Why is the Hierarchize(DrillDownLevel(......)) construct used? It's crazy stupid. The following query is well over 10 fold faster and it's not even all that good.
    SELECT {
    [Measures].[Internet Sales Amount],
    [Measures].[Internet Order Quantity]
    } DIMENSION PROPERTIES
    PARENT_UNIQUE_NAME,
    HIERARCHY_UNIQUE_NAME
    ON COLUMNS ,
    NON EMPTY
    CrossJoin(
    CrossJoin(
    CrossJoin(
    CrossJoin(
    {[Customer].[Customer].[All Customers].Children},
    {[Customer].[Country].[All Customers].Children}
    ), {[Customer].[State-Province].[All Customers].Children}
    ), {[Customer].[City].[All Customers].Children}
    ), {[Customer].[Postal Code].[All Customers].Children}
    ) DIMENSION PROPERTIES
    PARENT_UNIQUE_NAME,
    HIERARCHY_UNIQUE_NAME
    ON ROWS
    FROM [Adventure Works]
    CELL PROPERTIES
    VALUE,
    FORMAT_STRING,
    LANGUAGE,
    BACK_COLOR,
    FORE_COLOR,
    FONT_FLAGS
    Martin
    http://martinmason.wordpress.com

    Thanks to everyone for responding. And yes, Muthukumaran, the actual query I'm using has user hierarchies defined, and the attributes associated with a given dimension are adjacent to each other. If the attribute hierarchies are positioned in the order that
    users originally had the PivotTable laid out, the query takes at least 10 times longer. And Christian, I've been following your blog for a long time, since the currency conversion posts of an eon ago, so I'd seen the reference posted above. As this report is being
    delivered in Excel Services, using VBA isn't an option.
    While Excel 2013 has some tremendous enhancements for querying a multidimensional cube (we finally have a GUI to construct calculated members and don't have to rely on OLAP PivotTable Extensions for this functionality), these MDX construction problems
    persist from version to version, and it's been nearly impossible to propose a pure Microsoft BI solution to a prospective client. Yes, we now have Power View, but honestly, put Power View up against Tableau? You're kidding, right? Plus, the
    whole Microsoft BI story from the get-go has been empowering information workers with the tools they're familiar with. Seems to me like the entire BI team at Microsoft has its priorities out of whack. Rather than building the next Power Whatever, why don't
    you give us the ability to use QueryTables with custom MDX and wire them together with Slicers? Then, maybe then, you'll have a usable BI frontend. Muthukumaran, can you take that to the BI team?
    Fifteen years and Microsoft still has yet to produce a decent BI frontend. Crazy.
    Martin
    http://martinmason.wordpress.com

  • BPC Script Logic - XDIM_FILTER results in error

    I am writing a Script Logic in BPC and I need it to include all records where the DESTINATION dimension member has the Dimension Property of PLANDEST = "Y"
    This approach works:
    *SELECT(%DestinationMbrs%,"[ID]","Destination","[PlanDest] = 'Y'")
    *XDIM_MEMBERSET DESTINATION = %DestinationMbrs%
    This approach does not work:
    *XDIM_MEMBERSET DESTINATION = <ALL>
    *XDIM_FILTER DESTINATION=[DESTINATION].PROPERTIES("PlanDest")="Y"
    It results in the error message at runtime:
    Error in step 1 of QueryCubeAndDebug: -2147467259 Query (1,156) The PLANDEST dimension attribute was not found.
    The reason I would like to use the second approach is that the first approach relies on the SELECT statement.  The documentation on the SELECT statement says:
    The *SELECT statement is executed at the time the logic is validated, and the expanded result is
    written in the LGX file. This means that if the related dimension is modified, it may be necessary
    to re-validate the rules.
    So if I change the DESTINATION dimension members PLANDEST flags, I need to re-validate the script logic.  That is a maintenance nightmare and a problem waiting to happen.
    How do I solve this so that the dimension attribute is evaluated at runtime, not at logic validation time?

    As Yuan said, if you are using MDX logic and want to use a property, you should check InApp in the 'manage property' menu of the admin console.
    Usually, InApp should not be selected, for better MDX query performance (SAP recommendation).
    But here are two cases where you should select it:
    1. MDX logic in the logic script.
    2. Dimension formula.
    I hope this makes it clear for all.
    James Lim
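    Putting the two answers together, the runtime variant from the question is a sketch like this (assuming the PLANDEST property has been flagged InApp in the admin console; otherwise the "dimension attribute was not found" error returns):

    ```
    *XDIM_MEMBERSET DESTINATION = <ALL>
    *XDIM_FILTER DESTINATION = [DESTINATION].PROPERTIES("PLANDEST") = "Y"
    ```

    Unlike *SELECT, this filter is evaluated when the logic runs, so changing PLANDEST flags on DESTINATION members should not require re-validating the script.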

  • Poor MDX performance on F4 master data lookup

    Hi,
    I've posted this to this forum as it didn't get much help in the BW 7.0 forum. I'm thinking it was too MDX oriented to get any help there. Hopefully someone has some ideas.
    We have upgraded our BW system to 7.0 EHP1 SP6 from BW 3.5. There is substantial use of SAP BusinessObjects Enterprise XI 3.1 (BOXI) and also significant use of navigational attributes. Everything works fine in 3.5 and we have worked through a number of performance problems in BW 7.0. We are using BOXI 3.1 SP1 but have tested with SP2 and it generates the same MDX. We do however have all the latest MDX-related notes, including the composite note 1142664.
    We have a number of "fat" queries that act as universes for BOXI, and it is when BOXI sends an MDX statement that includes certain crossjoins with navigational attributes that things fall apart. This is an example of one that runs in about a minute in 3.5:
    SELECT { [Measures]. [494GFZKQ2EHOMQEPILFPU9QMV], [Measures].[494GFZSELD3E5CY5OFI24BPCN], [Measures].[494GG07RNAAT6M1203MQOFMS7], [Measures]. [494GG0N4P7I87V3YBRRF8JK7R] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0MAT_SALES__ZPRODCAT]. [LEVEL01].MEMBERS, EXCEPT( { [0MAT_SALES__ZASS_GRP]. [LEVEL01].MEMBERS } , { { [0MAT_SALES__ZASS_GRP].[M5], [0MAT_SALES__ZASS_GRP].[M6] } } ) ), EXCEPT( { [0SALES_OFF]. [LEVEL01].MEMBERS } , { { [0SALES_OFF].[#] } } ) ), [0SALES_OFF__ZPLNTAREA].[LEVEL01].MEMBERS ), [0SALES_OFF__ZPLNTREGN]. [LEVEL01].MEMBERS ), [ZMFIFWEEK].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES MEMBER_UNIQUE_NAME, MEMBER_NAME, MEMBER_CAPTION ON ROWS FROM [ZMSD01/ZMSD01_QBO_Q0010]
    However in 7.0 there appear to be some master data lookups that are killing performance before we even get to the BW queries. Note that in RSRT terms this is prior to even getting the popup screen with the "display aggregate".
    They were taking 700 seconds but now take about 150 seconds after an index was created on the ODS /BIC/AZOSDOR0300. From what I can see, the navigational attributes require BW to ask "what are the valid SIDs for SALES_OFF in this multiprovider". The odd thing is that BW 3.5 does no such query. It just hits the fact tables directly.
    SELECT "SID" , "SALES_OFF" FROM ( SELECT "S0000"."SID","P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BI0/D0PCA_C021" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDBL018" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR028" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR038" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR058" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR081" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . 
"SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDPAY016" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "P0000"."SALES_OFF" IN ( SELECT "O"."SALES_OFF" AS "KEY" FROM "/BIC/AZOSDOR0300" "O" ) ) ORDER BY "SALES_OFF" ASC
    I had assumed this had something to do with BOXI - but I don't think this is an MDX-specific problem, even though it's hard to test in RSRT as it's a query navigation. I also assumed it might be something to do with the F4 master data lookup, but that's not the case, because of course this "fat" query doesn't have a selection screen, just a small initial view and a large number of free characteristics. Still, I set the characteristic settings to do lookups on master data values only, and that made no difference. Nonetheless you can see in the MDXTEST trace the event 6001: F4: Read Data. Curiously this is an extra one that sits between event 40011: MDX Initialization and event 40010: MDX Execution.
    I've tuned this query as much as I can from the Oracle perspective and checked the indexes and statistics. I have also checked that Oracle is perfectly tuned and parameterized for 10.2.0.4 with the May 2010 patchset for AIX. But this query returns an estimated 56 million rows and runs an expensive UNION join on them - so no surprise that it's slow. As a point of interest, changing it from UNION to UNION ALL cuts the time to 30 seconds. I don't think that helps me though, other than confirming that it is the sort which is expensive on 56m records.
    Thinking that the UNORDER MDX statement might make a difference, I changed the MDX to the following but that didn't make any difference either.
    SELECT { [Measures].[494GFZKQ2EHOMQEPILFPU9QMV], [Measures].[494GFZSELD3E5CY5OFI24BPCN], [Measures].[494GG07RNAAT6M1203MQOFMS7], [Measures].[494GG0N4P7I87V3YBRRF8JK7R] } ON COLUMNS ,
    NON EMPTY UNORDER( CROSSJOIN(
      UNORDER( CROSSJOIN(
        UNORDER( CROSSJOIN(
          UNORDER( CROSSJOIN(
            UNORDER( CROSSJOIN(
              [0MAT_SALES__ZPRODCAT].[LEVEL01].MEMBERS, EXCEPT(
                { [0MAT_SALES__ZASS_GRP].[LEVEL01].MEMBERS } , { { [0MAT_SALES__ZASS_GRP].[M5], [0MAT_SALES__ZASS_GRP].[M6] } }
            ) ) ), EXCEPT(
              { [0SALES_OFF].[LEVEL01].MEMBERS } , { { [0SALES_OFF].[#] } }
          ) ) ), [0SALES_OFF__ZPLNTAREA].[LEVEL01].MEMBERS
        ) ), [0SALES_OFF__ZPLNTREGN].[LEVEL01].MEMBERS
      ) ), [ZMFIFWEEK].[LEVEL01].MEMBERS
    ) )
    DIMENSION PROPERTIES MEMBER_UNIQUE_NAME, MEMBER_NAME, MEMBER_CAPTION ON ROWS FROM [ZMSD01/ZMSD01_QBO_Q0010]
    Does anyone know why BW 7.0 behaves differently in this respect and what I can do to resolve the problem? It is very difficult to make any changes to the universe or BEx query because there are thousands of WebI queries written on top of them, and the regression test would be very expensive.
    Regards,
    John

    Hi John,
    couple of comments:
    - first of all, you posted this in the wrong forum; it belongs in the BW forum
    - MDX enhancements in regard to BusinessObjects are part of BW 7.01 SP05 - not just BW 7.0
    I would suggest you post it in the BW forum.
    Ingo

  • ASO MDX member formula and performance

    Hi,
    I am doing some testing on MDX formulas and performance. I found a performance issue but I cannot understand why a report is taking such a long time.
    The situation is:
    I create a report or a MDX query with:
    6 dimensions in row and 1 dimension in column
    rows:
    Period - Filtered using a member
    Year - Filtered using a member
    Relationship Manager - Filtered using a member
    Report Type - Filtered using a member
    Local Relationship Manager - 4400 members level 0
    Global Relationship Manager - 10400 members level 0
    Column:
    Account dimension, only a member
    The member selected for Report Type (RM.Local) has a formula.
    My Report Type dimension has 10 members; there is one member where I store data, called RM.Input.
    My first test was:
    When RM.Local's formula is [RM.Input], the report runs in 1 second.
    When RM.Local's formula is ([RM.Input],[MTD]), where MTD is a level-0 stored member in my View dimension, the report runs in 20 minutes. I was not expecting such bad performance when I am only pointing at ([RM.Input],[MTD]).
    Do you consider this time reasonable when I am using this formula?
    The mdx report is:
    With
    set [_Local Relationship Manager3] as 'Descendants([All Local Relationship Managers], 2)' /* level 0 members */
    set [_Global Relationship Manager4] as '[Global Relationship Manager].Generations(4).members' /* level 0 members */
    set [_Period0] as '{[Period].[Oct]}'
    set [_Relationship Manager4] as '{[Relationship Manager].[Dummy1]}'
    set [_Report Type0] as '{[Report Type].[RM.Local]}'
    set [_Year2] as '{[Year].[FY-2013]}'
    select
    { [Account].[Expenses]
    } on columns,
    NON EMPTY {crossjoin({[_Local Relationship Manager3]},crossjoin({[_Global Relationship Manager4]},crossjoin({[_Period0]},crossjoin({[_Relationship Manager4]},crossjoin({[_Report Type0]},{[_Year2]})))))} properties MEMBER_NAME, GEN_NUMBER, [Global Relationship Manager].[MEMBER_UNIQUE_NAME], [Global Relationship Manager].[Memnor], [Local Relationship Manager].[MEMBER_UNIQUE_NAME], [Local Relationship Manager].[Memnor], [Relationship Manager].[MEMBER_UNIQUE_NAME], [Relationship Manager].[Memnor], [Period].[Default], [Report Type].[Default], [Year].[MEMBER_UNIQUE_NAME], [Year].[Memnor] on rows
    from [DICISRM.DICISRM]

    OK, try this one.
    But here you have to change the MDX formula every month.
    Year
    --FY2009
    --FY2010
    --FY2011
    --FY2012
    Period
    --TotalYear
    ----Qtr1
    -------Jan
    -------Feb
    -------Mar
    Let's say your CurrentYear is FY2011 and your current month is March; then your MDX will be:
    case when contains([Year].CurrentMember,MemberRange([FY2009],[FY2010])) and contains([Period].CurrentMember,MemberRange([Jan],[Feb]))
    Then
    B
    else
    C
    end
    For the next month you just have to make a change in the MemberRange, i.e. replace Feb with Mar:
    case when contains([Year].CurrentMember,MemberRange([FY2009],[FY2010])) and contains([Period].CurrentMember,MemberRange([Jan],[Mar]))
    Then
    B
    else
    C
    end
    I tested it and it's working fine.
    I think this will solve your problem, but there might be a more elegant solution out there.
    Regards,
    RSG
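    One possible way to avoid the monthly edit (a sketch only; it assumes a server substitution variable, here called &CurrMonth, that an administrator updates each month, and that substitution variables are permitted in your ASO member formulas):

    ```mdx
    CASE WHEN Contains([Year].CurrentMember, MemberRange([FY2009], [FY2010]))
          AND Contains([Period].CurrentMember, MemberRange([Jan], [&CurrMonth]))
    THEN B
    ELSE C
    END
    ```

    With this, rolling the month forward is a one-line change to the substitution variable rather than an edit and re-deploy of the member formula.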

  • VAL_FIELD selection to determine RSDRI or MDX query: performance tuning

    According to one of the How-To Guides, I am working on performance tuning. One of the tips is to try to query base members by using BAS(xxx) in the expansion pane of a BPC report.
    I did so and found an interesting issue in one of the COPA reports.
    With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
    I checked that DIRECT_INCOME has three members: GROSS_PROFIT, SGA, REV_OTHER. None of them has any formulas.
    Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT),BAS(SGA),BAS(REV_OTHER), and I got RSDRI queries again.
    So in summary:
    BAS(PARENT) =>MDX query.
    BAS(CHILD1)=>RSDRI query.
    BAS(CHILD2)=>RSDRI query.
    BAS(CHILD3)=>RSDRI query.
    BAS(CHILD1),BAS(CHILD2),BAS(CHILD3)=>RSDRI query
    I know VAL_FIELD is a SAP-reserved name for BPC dimensions. My question is: why does BAS(PARENT) generate an MDX query?
    Interestingly, I can repeat this behavior in my system. My intention is to always get an RSDRI query.
    George

    Ok - it turns out that Crystal Reports disregards BEx Query variables when put in the Default Values section of the filter selection. 
    I had mine there, and even though CR prompted me for the variables AND the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
    I should have paid more attention to the Info message I got in the BEx Query Designer. It specifically states that the "Variable located in Default Values will be ignored in the MDX Access".
    After moving the variables to the Characteristic Restrictions, my report worked as expected. The slow response time is still an issue, but at least it's not compounded by trying to retrieve all records in the cube when I'm expecting fewer than 2k.
    Hope this helps someone else

  • How to enable "SelectAll " in MultiSelect MDX filter in Performance Point Services 2010

    Hi,
    I have a multi-select MDX filter in PerformancePoint Services 2010. I need all the values of the filter to be selected by default, i.e., when the page is loaded for the first time, all the values in the multi-select filter should be selected.
    Is there a property for this? Can it be achieved some other way?
    Kindly help.
    Regards,
    Mani

    At least I haven't managed to alter the display value. It uses the member name directly.
    The only thing that comes to mind is creating a user hierarchy, so that countries and states can be shown as a tree in the filter.

  • MDX Code - Performance for Calculated Measure

    Hi
    I have very little experience with MDX, but I was provided with the code below to create a calculated OLAP measure, and it appears to be the reason the performance of my report is so poor. I'm hoping someone can help me write something a lot more efficient.
    sum(tail(nonemptycrossjoin(Descendants([Reporting Date Hierarchies].[YWD].currentmember,[Reporting Date Hierarchies].[YWD].[rep_dt_1])),1),[Measures].[outstandSUM])
    The code essentially looks at daily data. In my report I have the time hierarchy YWD in rows and Months (a non-time-hierarchy dimension) in columns; it is a business requirement to provide the report as such. At the initial level the yearly figures represent the end-of-month position for the given year; at week level, the end-of-week positions for month and year.
    Could someone suggest alternative, more performance-friendly code?
    Thanks in advance

    Hi Noobiemoobie,
    In your query
    sum(tail(nonemptycrossjoin(Descendants([Reporting Date Hierarchies].[YWD].currentmember,[Reporting Date Hierarchies].[YWD].[rep_dt_1])),1),[Measures].[outstandSUM])
    There is only one set inside the NonEmptyCrossJoin function, so why use NonEmptyCrossJoin at all? If you crossjoin large sets (e.g., sets that contain more than 100 items each), you can end up with a result set that contains many thousands of items, enough to seriously impair performance. For the details, please see:
    http://sqlmag.com/data-access/cross-join-performance
    Please try removing the NonEmptyCrossJoin function and check whether the issue persists. Besides, here are some links about performance tuning for your reference.
    http://www.bidn.com/blogs/DustinRyan/bidn-blog/2636/top-3-simplest-ways-to-improve-your-mdx-query
    http://blogs.msdn.com/b/azazr/archive/2008/05/01/ways-of-improving-mdx-performance-and-improvements-with-mdx-in-katmai-sql-2008.aspx
    Regards,
    Charlie Liao
    TechNet Community Support
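
    Building on Charlie's point: with only one input set, the crossjoin can be dropped entirely. A possible rewrite (a sketch, assuming an Analysis Services 2005+ cube where the NonEmpty function is available) keeps only the last descendant that actually has data for the measure:

    Sum(
      Tail(
        NonEmpty(
          Descendants(
            [Reporting Date Hierarchies].[YWD].CurrentMember,
            [Reporting Date Hierarchies].[YWD].[rep_dt_1]
          ),
          [Measures].[outstandSUM]
        ),
        1
      ),
      [Measures].[outstandSUM]
    )

    NonEmpty is generally friendlier to the block-computation engine than NonEmptyCrossJoin, but as always, compare before and after with SQL Server Profiler on your own cube.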

  • Essbase MDX Performance

    Hello all,
    I am developing reports with OBIEE 10.1.3.4 sitting on top of Essbase, and I have been running into issues lately. The MDX generated by OBIEE is causing performance problems, especially in the reports where we use the Rank function. I am aware of the workarounds out there (Christian's blogs) that get around this by filtering directly in the MDX in the case of the Rank function. But I believe there is a patch available which improves MDX performance as a whole, so I just wanted to know if anyone out there has had success with any of the patches related to MDX performance, and whether they do some sort of tuning to the MDX query.
    Thanks
    Prash
    Edited by: user10283076 on Apr 20, 2009 3:20 PM
    Edited by: user10283076 on Apr 20, 2009 3:58 PM

    Hi Prash,
    nope, nothing yet. 10.1.3.4.1 is just around the corner, but from what I hear, it's not the "quantum leap" needed in the OBIEE/Essbase integration. That's definitely only going to hit us with OBIEE 11.
    Influencing the MDX generated by OBIEE isn't an easy task, unless you go and replace every column with an EVALUATE function, and that's beside the point of OBIEE.
    One alternative you may want to think of: use BIP to create your reports by writing pure MDX and then displaying the BIP reports on the dashboards.
    Cheers,
    Christi@n
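
    For reference, the EVALUATE route Christi@n mentions pushes a native Essbase function through OBIEE as a database function. A rough sketch only; the generation and column names below are placeholders for your own subject area, and the exact call should be checked against the OBIEE 10g documentation:

    EVALUATE('Rank(%1.dimension.CurrentMember, %2.dimension.Members)',
             "Market"."Gen3,Market", "Market"."Gen3,Market")

    This lets Essbase compute the rank natively instead of OBIEE post-processing the full result set, which is where most of the Rank-related slowness tends to come from.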

  • MDX formulae reduces application performance

    Hi,
    I have a few MDX formulae in my Account dimension, which is built as per our reporting requirements. Unfortunately, having these formulae greatly affects performance: it takes 7-8 minutes to generate a report when the formulae are present.
    Can anyone tell me whether there is any way to improve the performance of the application without removing the MDX formulae?
    Thanks
    Sharath

    OK, let me explain like this:
    Net Income is the highest level, made up of the children and grandchildren below.
    Member        HLevel  Parent      Description
    EAT           H5      Net Income  Earnings After Tax
    Tax           H4      EAT
    EBT           H4      EAT         Earnings Before Tax
    Depreciation  H3      EBT
    Interest      H3      EBT
    EBIDT         H3      EBT         Earnings Before Interest, Depreciation and Tax
    GP            H2      EBIDT       Gross Profit
    REVENUE       H1      GP
    COST          H1      GP
    In the above structure, if EBIDT is a calculated member then I should add a formula to both EBT and EAT as per the MDX rule, even though I could have this done through roll-ups. Otherwise EBT and EAT will just return the value of EBIDT and ignore Interest, Depreciation and Tax.
    Without the formulas it takes 25-30 sec., and with the formulas it takes 500-600 sec. to get the numbers from the report.
    Edited by: Raghavendra Sharath on Mar 20, 2009 6:03 PM
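
    To make the constraint concrete: once EBIDT carries a formula, its ancestors stop aggregating naturally and need explicit formulas of their own. A sketch of the member formulas implied by the outline above (member names as in the post; the signs assume Interest, Depreciation and Tax are stored as positive amounts, which may differ in your outline):

    /* EBT must restate its children once EBIDT is calculated */
    [EBIDT] - [Interest] - [Depreciation]

    /* and EAT in turn */
    [EBT] - [Tax]

    Every such dynamic formula is re-evaluated at retrieval time, which is where the 25-30 sec. vs. 500-600 sec. gap comes from; where the design allows it, one common mitigation is to compute these members during the database calculation instead of dynamically.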
