Stored Measure shouldn't aggregate

I have a stored measure (lbs sold) that should remain a static value in my model. I don't want the value to aggregate or change as I roll up the model. Right now lbs sold starts out accurate at the lower level, but as I roll up, it sums. How do I keep it from doing this? I have set the value to "Exclude member from consolidation", and it is a stored measure loaded in, not dynamically calculated. Thanks.

I'm not sure how AWXML names objects and partitions in this case. If you execute the following query:
select OBJNAME, PARTNAME, dbms_lob.getlength(awlob)/1024/1024
from aw$poc
where gen# = 0
you will get the "large" storage (in MB) from the AW. See whether any of those values look familiar. It's possible we are missing some name tags on the rows.
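If it helps to see which objects dominate, a hedged variant of the same query (assuming the same AW$POC layout as above) totals the LOB storage per object:
select objname, sum(dbms_lob.getlength(awlob))/1024/1024 as mb
from aw$poc
where gen# = 0
group by objname
order by mb desc;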
Jim

Similar Messages

  • Multiple fact tables, aggregation and model problems

    Hi,
    I am developing an OLAP application with two dimensions (product, location) and a few measures (sales_qty, sales_value, sale_price, cost_price, promotion_price, average_market_price, etc.). I also have a BI Beans crosstab to explore the data.
    I'm having the following problems:
    - The measures are in different fact tables.
    When using cwm2_olap_table_map.Map_FactTbl_Measure, I can only map the measures from the first fact table; the others return "exact fetch returns more than requested number of rows".
    - The 'price_' measures shouldn't aggregate to higher levels of the dimension hierarchies. I have changed the default aggregation plan, but the crosstab still shows data at the higher levels. Is there any way to show N/A at levels I don't want aggregated?
    - How can I add a calculated field that is the result of a complex set of business rules (with IF/THEN/ELSE statements and calls to existing Oracle functions) and precalculate it during batch?
    Thanks,
    Ricardo

    Keith, thanks for the quick answer!
    Some questions regarding your comments:
    1) The measures are in different fact tables
    - My application will show all the measures in the same report. My question is: will performance be affected by creating separate cubes rather than one cube? I believe that if I don't use an AW it shouldn't be, but I will have an AW. Performance is the main reason I'm using Oracle OLAP: I need fast access to measures in different fact tables in one report. Those measures will be used in complex calculated fields, and probably in 'what if' analysis.
    2) When using cwm2_olap_table_map.Map_FactTbl_Measure, I can only map the measures from the first fact table; the others return "exact fetch returns more than requested number of rows".
    Here is the complete script I am using to create the cube:
    execute cwm2_olap_cube.Create_Cube('OLAP','CUBE_REL_3','CUBE_REL_3','MY_CUBE','MY_CUBE');
    execute cwm2_olap_cube.add_dimension_to_cube('OLAP', 'CUBE_REL_3','OLAP', 'DIM_STORE');
    execute cwm2_olap_cube.add_dimension_to_cube('OLAP', 'CUBE_REL_3','OLAP', 'DIM_ITEM');
    -- MEASURES - FACT TABLE 1 (F_SALES)
    execute cwm2_olap_measure.create_measure('OLAP', 'CUBE_REL_3','SALES_MARGIN', 'Sales Margin','Sales Margin', 'Sales Margin');
    execute cwm2_olap_measure.create_measure('OLAP', 'CUBE_REL_3','SALES_QTY','Sales Quantity','Sales Quantity','Sales Quantity');
    execute cwm2_olap_measure.create_measure('OLAP', 'CUBE_REL_3','SALES_VALUE', 'Sales Value','Sales Value', 'Sales Value');
    -- MEASURES - FACT TABLE 2 (F_PVP_MIN_MAX)
    execute cwm2_olap_measure.create_measure('OLAP', 'CUBE_REL_3','PVP_MIN','pvp min','pvp min','pvp min');
    execute cwm2_olap_measure.create_measure('OLAP', 'CUBE_REL_3','PVP_MAX', 'pvp max','pvp max', 'pvp max');
    -- FACT TABLE 1 MAPPINGS
    execute cwm2_olap_table_map.Map_FactTbl_LevelKey('OLAP','CUBE_REL_3','OLAP','f_sales','LOWESTLEVEL','DIM:OLAP.DIM_STORE/HIER:STORE/LVL:L0/COL:COD_STORE;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER1/LVL:L0/COL:COD_ITEM;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER2/LVL:L0/COL:COD_ITEM;');
    execute cwm2_olap_table_map.Map_FactTbl_Measure('OLAP', 'CUBE_REL_3','SALES_MARGIN','OLAP','f_sales','SALES_MARGIN', 'DIM:OLAP.DIM_STORE/HIER:STORE/LVL:L0/COL:COD_STORE;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER1/LVL:L0/COL:COD_ITEM;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER2/LVL:L0/COL:COD_ITEM;');
    execute cwm2_olap_table_map.Map_FactTbl_Measure('OLAP', 'CUBE_REL_3','SALES_QTY', 'OLAP', 'f_sales', 'SALES_QTY','DIM:OLAP.DIM_STORE/HIER:STORE/LVL:L0/COL:COD_STORE;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER1/LVL:L0/COL:COD_ITEM;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER2/LVL:L0/COL:COD_ITEM;');
    execute cwm2_olap_table_map.Map_FactTbl_Measure('OLAP', 'CUBE_REL_3','SALES_VALUE', 'OLAP', 'f_sales', 'SALES_VALUE','DIM:OLAP.DIM_STORE/HIER:STORE/LVL:L0/COL:COD_STORE;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER1/LVL:L0/COL:COD_ITEM;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER2/LVL:L0/COL:COD_ITEM;');
    -- FACT TABLE 2 MAPPINGS
    execute cwm2_olap_table_map.Map_FactTbl_LevelKey('OLAP','CUBE_REL_3','OLAP','f_pvp_min_max','LOWESTLEVEL','DIM:OLAP.DIM_STORE/HIER:STORE_HIER1/LVL:L1/COL:COD_STORE;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER1/LVL:L4/COL:COD_ITEM;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER2/LVL:L4/COL:COD_ITEM;');
    execute cwm2_olap_table_map.Map_FactTbl_Measure('OLAP', 'CUBE_REL_3','PVP_MIN','OLAP','f_pvp_min_max','PVP_MIN', 'DIM:OLAP.DIM_STORE/HIER:STORE_HIER1/LVL:L1/COL:COD_STORE;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER1/LVL:L4/COL:COD_ITEM;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER2/LVL:L4/COL:COD_ITEM;');
    execute cwm2_olap_table_map.Map_FactTbl_Measure('OLAP', 'CUBE_REL_3','PVP_MAX', 'OLAP', 'f_pvp_min_max', 'PVP_MAX','DIM:OLAP.DIM_STORE/HIER:STORE_HIER1/LVL:L1/COL:COD_STORE;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER1/LVL:L4/COL:COD_ITEM;DIM:OLAP.DIM_ITEM/HIER:ITEM_HIER2/LVL:L4/COL:COD_ITEM;');
    -- CUBE VALIDATE
    execute CWM2_OLAP_VALIDATE.Validate_Cube('OLAP','CUBE_REL_3');
    The error occurs in the cwm2_olap_table_map.Map_FactTbl_Measure command for the first measure of the second fact table. Am I doing something wrong?
    Regarding issues 3) and 4), I will follow your suggestions. I'll get back to you on this later.
    Once again, thanks for the help; it has been most helpful.
    Ricardo Sá

  • Methods available for storing Custom Measures

    Just a general question.
    Using 11g. I have a complex OLAP DML program that calculates values using existing stored measures within the cube, and I would like to store the end results of the values calculated during the run of my program. What methods are available?
    Should my program be run as part of the DBMS_CUBE package, or should it be part of a maintenance script? I appreciate any info on this.

    It would probably be useful to understand what the calculation needs to do, but in general you can embed this as David describes or you can do it directly using OLAP DML. Working off David's example:
    exec dbms_cube.build('UNITS_CUBE USING (SET UNITS_CUBE.STORED_MEASURE = UNITS_CUBE.CALC_MEASURE)')
    The direct OLAP DML method would look something like this:
    SET units_cube_store_measure = units_cube_calc_measure
    Or, let's say I want to assign data to sales_cube.sales with the expression sales_cube.sales = sales_cube.quantity_sold * price_cube.unit_price. This might look like:
    SET sales_cube_sales = sales_cube_quantity_sold * price_cube_unit_price
    Note that these expressions refer to the OLAP DML objects (that is, the physical objects) rather than the API level objects.
    If you want to constrain the values being set, use the LIMIT command. E.g.,
    LIMIT time TO time_levelrel 'MONTH'
    LIMIT product TO product_levelrel 'ITEM'
    LIMIT geography TO geography_levelrel 'CUSTOMER'
    SET sales_cube_sales = sales_cube_quantity_sold * price_cube_unit_price
    David also notes that you need to pay attention to looping. The example above will "loop dense", that is, loop over the base dimensions. Since data is usually sparse, that's not going to be as efficient as it could be. So, introduce the ACROSS clause and loop over the composite dimension of the sales_cube. Assuming the cube is partitioned (almost all cubes should be partitioned), the program now looks like this:
    LIMIT time TO time_levelrel 'MONTH'
    LIMIT product TO product_levelrel 'ITEM'
    LIMIT geography TO geography_levelrel 'CUSTOMER'
    SET sales_cube_sales = sales_cube_quantity_sold * price_cube_unit_price ACROSS sales_cube_prt_template
    The ACROSS clause causes the SET command to loop only over the cells of sales_cube where data exists (for any stored measure).
    Finally, don't forget to save your work with UPDATE and COMMIT commands. E.g.,
    LIMIT time TO time_levelrel 'MONTH'
    LIMIT product TO product_levelrel 'ITEM'
    LIMIT geography TO geography_levelrel 'CUSTOMER'
    SET sales_cube_sales = sales_cube_quantity_sold * price_cube_unit_price ACROSS sales_cube_prt_template
    UPDATE
    COMMIT
    David also mentions that dbms_cube.build will run jobs in parallel (assuming the cube is partitioned). That's really nice. You can parallelize this in OLAP DML. See http://oracleolap.blogspot.com/2010/03/parallel-execution-of-olap-dml.html for an example.
    So, which method should you use? It depends on what you are trying to do. If it's easy to do with dbms_cube.build, I would probably do that. But, if you need more power and/or control, the OLAP DML method might be best.
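    For completeness, here is a hedged sketch of the dbms_cube.build route with parallel execution; the cube and measure names follow David's example, and the degree of parallelism is just an illustration, not a recommendation:
    BEGIN
      dbms_cube.build(
        script      => 'UNITS_CUBE USING (SET UNITS_CUBE.STORED_MEASURE = UNITS_CUBE.CALC_MEASURE)',
        parallelism => 4);  -- assumed degree; parallelism only helps if the cube is partitioned
    END;
    /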

  • How to exclude certain values in a measure.

    Hi,
    I have fact data something like this:
    ID   MyValue   MySetValue
    1    200       1
    2    300       1
    3    400       0
    4    500       0
    5    600       1
    Now I want to create a measure MyValue which would be SUM(MyValue). This I can do using SUM aggregation.
    But I also want a measure which is SUM(MyValue) but excludes rows for which MySetValue is 0.
    So the first measure should return 2000 and the second measure should return 1100.
    I know this can be done by adding a new column in the view or in the DSV which does the filtering. But is there a way of doing the same in a calculated member?
    Thanks.
    liquidloop[at]live[dot]co[dot]uk

    The best way is to create a calculated column in your DSV (or view) and then define a stored measure based on that calculated column. When business rules apply at the grain of the fact table, the best approach is to include that logic at the fact table. There are ways to do it in the Calculations tab of the Cube Editor, but that's almost never the right way to go. The method Charles uses above is a devastatingly bad approach: an aggregation function over a Filter clause is possible, but when the SSAS engine sees a statement like that it should really raise an error rather than process it. (See point #1 in the following blog: http://sqlblog.com/blogs/mosha/archive/2008/10/22/optimizing-mdx-aggregation-functions.aspx) The real issue with Charles's approach in your case is that a Filter statement applies to the current context in aggregate and WILL NOT apply to each row at the source. So it's not only inefficient, it's also incorrect.
    To really approach the problem this way, you would need to create a dimension based on the MySetValue column. In your small example that dimension would have a single attribute hierarchy with two members, but in a real-life situation that column could, and often would, be a non-discrete function. Given that you have only two possible values, and only the single attribute hierarchy with those two members, you could define the new measure as follows:
    CREATE MEMBER CurrentCube.[Measures].[Modified MyValue]
    AS IIf(
        [MySetValue].[MySetValue].CurrentMember IS [MySetValue].[MySetValue].&[0],
        NULL,
        [Measures].[MyValue]
    ),
    FORMAT_STRING = "#,##0";
    However, the slice ([MySetValue].[MySetValue].[All], [Measures].[Modified MyValue]) would still be exactly equal to [Measures].[MyValue]. You could fix that using a SCOPE statement:
    SCOPE(
        [Measures].[Modified MyValue],
        [MySetValue].[MySetValue].[All]
    );
        THIS = (
            [MySetValue].[MySetValue].&[1],
            [Measures].[MyValue]
        );
    END SCOPE;
    So as you can see, just do it the right way to begin with: create a column in your DSV and create a stored measure from that calculated column.
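    For reference, a minimal sketch of that view-level approach, assuming a fact table named dbo.FactMyTable with the columns shown in the question:
    -- Hypothetical view for the DSV: NULL out MyValue when MySetValue = 0,
    -- so a plain SUM stored measure returns 1100 for the sample data above.
    CREATE VIEW dbo.vFactMyTable AS
    SELECT ID,
           MyValue,
           MySetValue,
           CASE WHEN MySetValue = 0 THEN NULL ELSE MyValue END AS MyValueFiltered
    FROM dbo.FactMyTable;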
    Martin
    Martin Mason Wordpress Blog

  • Best Way To Handle Non Additive Measures

    I am developing a cube looking at total reservations created against a daily allocation by customer. So let's say I am looking at these measures at the lowest level and see location X for customer Y with an allocation of 100 and total reservations made of 50. Everything here is OK. My issue comes when I roll up to look at just the location: it sums the allocation, but I need this value to remain static at all levels. Is there a way to set an Accounts measure to never aggregate? I have tried a few different settings, such as NEVER SHARE and setting the member aggregation properties to <NONE> in my OLAP model, and it continues to aggregate at all levels. I have also tried adding this as a dimension, but because the value is numeric, and because I have a few additional allocation measures that can have the same values, I have issues with duplicates. As additional info, I am building this using EIS. It's entirely possible that I am approaching this the wrong way, so any feedback would be appreciated. I can provide more detail if needed.
    Thanks,
    Bob

    Why don't you put the Account that stores the allocated amount in its own special hierarchy? This hierarchy might have a Gen2 parent called "Allocate" with a label-only tag and a series of ~-tagged members below it. Give it a goofy name so that there can be no question of the Account's purpose, e.g., Reservations - To Be Allocated.
    Your post doesn't indicate what tool you're using for input, but have a separate sheet/form/schedule for the input of the amount to be allocated, have the user enter that amount, save it, and have a calc/HBR launch on save that does the allocation.
    Then your second view of the data (form, report, etc.) doesn't include that Account and no-one's the wiser. You haven't lost original input data and since the forecaster looks at the "real" Accounts hierarchy except when inputting data to be allocated, he'll see the spread numbers only.
    The only thing I might add to this approach is a series of dedicated Location members that receive the allocated number but that's really a design preference more than anything.
    Regards,
    Cameron Lackpour

  • Migrating an aggregate to new HA pair in cDOT cluster

    We currently have a FAS8020 8.3 cDOT cluster in our DR site that is used for dev/QA storage as well as for SnapVault destinations from our production site. We are in the process of migrating from a FAS3250 at that site that is currently running 8.2 7-Mode. Currently, the SnapVault destinations are stored in a SATA aggregate attached to the FAS8020. When all migration from the FAS3250 is complete, we intend to reimage it to cDOT, pull it into the existing cluster, and dedicate it to our SnapVault/SnapMirror backups. We then want to physically relocate the SATA aggregate currently attached to the FAS8020 over to the FAS3250 while keeping the volumes intact, similar to the aggregate import procedure that was available in 7-Mode. Is this possible, and is there a documented procedure for doing it?

    I've researched the same concept myself for similar reasons, specifically to "merge" two independent cDOT clusters. It is simply not possible to do aggregate moves by physical shelf relocation anymore. The underlying reason is the metadata. In 7-Mode, an aggregate and its volumes are fully self-described by the WAFL filesystem on the disks. In cDOT, they are not: the cluster database, a copy of which is maintained on every node in the cluster, contains a ton of reference pointers about what is where, which SVM things belong to, other relationship data, etc. No mechanism or utility exists to map or merge generic 7-Mode volumes into a cluster, building up the cDOT metadata and ownership as you go. I'm told by those with inside knowledge that cluster-to-cluster merge is something the Data ONTAP developers consider a magic target if it could ever be automated mainstream, but 7-Mode to cluster mode isn't even on the radar. Given the existing data migration tools, and the fact that 7-Mode is now officially EOL, physical aggregate migration between 7-Mode and cDOT isn't in the cards.

  • Different aggregation operator for two measures in single cube

    Hi,
    I have three measures, age_month, age_year and pages, for which the aggregation operators should be as follows:
    for age_month and age_year
    time - max
    account - sum
    sales org - sum
    for pages
    time - sum
    account - sum
    sales org - sum
    I am creating a MOLAP cube in OWB and deploying it to AWM. I can create the above structure in OWB, but when I deploy it to AWM I see the aggregation operator as sum for all dimensions and all measures. Of course I can change the aggregation operator at the cube level, but that changes all underlying measures too.
    Also, in the cube's XML I can see the operator for the two measures is max, but the UI shows sum. After a load, reports also calculate sum instead of max along the time dimension.
    Any help would be highly appreciated.
    Thanks
    Brijesh

    If you have an existing cube (already defined and with aggregation set up), then modifying the aggregation behavior, such as changing the order of dimensions or the aggregation operators, is not very robust (it is likely to fail because internal objects cannot sync up with the modified definition). It is always better to take a backup of the cube definition and drop/rebuild a fresh cube with the new dimension order and aggregation operators.
    How can you have a compressed cube and also set the aggregation operator per dimension? I was under the impression that a compressed cube not only restricts all measures to the same aggregation properties (defined at the cube level) but also requires the cube-level aggregation to use a single operator (any one of sum, max or min) across all dimensions.
    Perhaps this additional restriction applies only if we intend to use the cube as a source for relational queries via cube-organized MVs.
    Another way to do this is given below:
    I'm assuming that when you say you require max along time and sum along the other dimensions, you mean that you want LAST (chronological) along time at the leaf level and SUM along the other dimensions. I'm also assuming that the time hierarchy has the levels DAY, MONTH, QUARTER, YEAR. Big assumption :) since finding the max along time requires the help of the fact/measure, whereas finding the last along time can be done from the dimension (and its metadata) alone.
    Define one cube, Cube1 ... structure: compressed composite, datatype: DECIMAL.
    Set aggregation: SUM along all dimensions.
    Create 3 stored measures (age_m_sto, age_y_sto and pages).
    You may want to set the description for age_m_sto/age_y_sto to "******** INTERNAL: DO NOT USE ******** Stored measure used for age_month/age_year".
    Create 2 calculated measures, age_month and age_year, defined as follows.
    OLAP expression to be given in AWM:
    <age_month>: OLAP_DML_EXPRESSION('cube1_age_m_sto(time if time_levelrel eq ''DAY'' then time else statlast(limit(time to bottomdescendants using time_parentrel time(time time))))', DECIMAL)
    <age_year>: OLAP_DML_EXPRESSION('cube1_age_y_sto(time if time_levelrel eq ''DAY'' then time else statlast(limit(time to bottomdescendants using time_parentrel time(time time))))', DECIMAL)
    NOTE: The calculated measure performs the LAST-along-time action using the stored measure. For every higher-level time dimension member, it reaches along the dimension to the last leaf-level member and reads the stored measure for that lowest-level time member.
    Map and maintain the cube.
    From the SQL cube view, use only the columns corresponding to the stored measure pages and the calculated measures age_month and age_year (ignore the columns corresponding to age_m_sto and age_y_sto).
    HTH
    Shankar

  • Webi - Multiply row values and aggregate

    Hi,
    I am trying to do some report-level calculation in which I need to multiply two measures and then aggregate the values.
    Here is an example:
    Region   City   Unit Price   Qty Sold   Rev
    A        C1     10           2          20
    A        C2     20           3          60
    C        C2     30           4          120
    Total                                   200
    The table above has Unit Price and Qty Sold for each region and city.
    I calculated Rev as Unit Price * Qty Sold, and also got the sum of Rev, 200.
    But when I try to show Rev at the region level I don't get the right figures:
    Region   What I need   What I get
    A        80            150
    C        120           120
    Total    200           270
    For Region A I should get 80 ((10*2) + (20*3)), not 150 (30*5).
    Can someone provide suggestions to get this to work?
    I've tried using some contexts, but no luck so far.

    Use the formula below for revenue:
    Sum(([Unit Price]*[Qty Sold]) ForEach ([Region];[City]))
    Make sure you have both the Region and City dimensions in the WebI query.
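    The underlying arithmetic, sketched in SQL for clarity (the table name sales_fact is an assumption):
    -- Correct: multiply per row, then aggregate to Region (gives 80 for A).
    SELECT Region, SUM(Unit_Price * Qty_Sold) AS Rev
    FROM sales_fact
    GROUP BY Region;
    -- Aggregating first would compute SUM(Unit_Price) * SUM(Qty_Sold) = 30 * 5 = 150 for A, which is wrong.
    The ForEach context in the formula forces the same row-by-row evaluation before the sum.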

  • Relation between Measuring Point & Classification

    In transaction IK03, after entering the measuring point, press Enter. Then choose Goto --> Classification.
    On this screen there are two fields: characteristic description and value.
    I want the table or function module that gives the relationship between this value and the measuring point.
    Thanks in advance.

    Hi Yogesh,
    Measurement documents are stored in table IMRG. IMPTT is the table storing measuring points. You can start with these tables.
    regards,
    Vinoth
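    A hedged sketch of how the two tables relate (POINT is the join key per the answer above; verify any further field names in SE11):
    -- Hypothetical query: measurement documents for one measuring point.
    SELECT m.point, d.mdocm
    FROM imptt m
    JOIN imrg d ON d.point = m.point
    WHERE m.point = :measuring_point;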

  • Mapping: source column (for the measure) disappears after pressing Apply

    I am using AWM 11.2.0.2.0B and Oracle 11.2.
    I have successfully mapped my dimensions. Then I defined a stored measure, say DISTANCE, and I want to map it to a column in my database. When I click the "Apply" button, the mapping disappears! And I cannot find any error messages. I have tried mapping another column to it (defining a different measure); still the mapping disappears after applying.
    The measure column is NUMBER(22) in my database, so I chose Data Type: NUMBER, Precision: 22, Scale: 0 in my cube (Storage section), and I checked "Use compression", so the measure inherits the cube's data type. I also tried INTEGER as the data type, with no success. I first suspected that it expects a different data type to be mapped, but that seems not to be the problem. Every column I put there as my source column disappears after clicking Apply!
    Any idea what the problem is, or where I should find a log?

    Hi Veeresh,
    As I understand it, this error happens when the OLE DB Source component cannot find the corresponding source column in the database table. A possible reason is that the column name in the external database table has changed, or the column has been removed.
    To verify this, go to the Advanced Editor for the OLE DB Source, Input and Output Properties tab, and compare the external columns with the output columns. To fix it, refresh the data or recreate the source.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Are there standard templates or schemas for cube/dimension/measure?

    Hi,
    Since I need to create many dimensions, cubes and measures, I want to know whether there are standard templates or schemas for them, so that I can build an XML file and import it into the data warehouse.
    Thanks for your help; I'm totally new to OLAP.

    There are no standard templates. If you use AWM, it is easy to create objects.
    The other "programmatic" option is to use DBMS_CUBE.IMPORT_XML and provide your own XML to create:
    (1) dimensions, attributes, hierarchies and all the mappings
    (2) cubes, stored measures and all the mappings
    (3) calculated measures
    Create your AW and some dimensions and a cube inside it using AWM.
    Then take the XML of each dimension and each cube and look at it in an editor (like Notepad).
    You will get some idea of which XML tags are used, and then (if you like) you can create new dimensions and cubes by writing your own XML.
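    As a hedged illustration of the programmatic route (the directory object and file name are assumptions, not part of any standard), the import call might look like:
    -- Hypothetical: import dimension/cube definitions from an XML template file.
    -- XML_DIR is an assumed Oracle directory object; my_aw.xml an assumed file name.
    BEGIN
      DBMS_CUBE.IMPORT_XML('XML_DIR', 'my_aw.xml');
    END;
    /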
    Search this forum for the keyword "XML" (category: OLAP, date range: ALL) and sort the results by date; you will see a lot of postings.

  • Many To Many relationship, Distinct Aggregate issue

    Hi, 
    I need some help please...
    OK, so I'm implementing a GL cube, and I have a bunch of accounts which can be aggregated to different levels of a parent-child hierarchy. Because I'm using a parent-child hierarchy, SSAS does a distinct "sum" against the fact table, and so my totals get a bit messed up.
    I've tried looking for a way to turn off this feature, but couldn't find one.
    The alternative, which I'm also stuck on, is to create a member which goes to the child level for each total and re-aggregates ALL the values that are there.
    I seem to have created a member in an MDX query that works perfectly... yay! BUT when I put it into a calculated member, it goes pear-shaped, and I can't understand how it arrives at the number.
    Please shout if my explanation is a bit sketchy.
    Here is the code.
    The MDX query:
    WITH MEMBER [Measures].CurrentBalanceActualDecendants
    AS SUM(
        EXCEPT(
            DESCENDANTS([Dimension Report Detail].[Finance Report Hierachy].CurrentMember, 2, BEFORE),
            [Dimension Report Detail].[Finance Report Hierachy].CurrentMember
        ),
        [Measures].[Current month Actual]
    )
    MEMBER [Measures].[BalanceTestc]
    AS IIF(
        [Measures].CurrentBalanceActualDecendants = 0,
        [Measures].[Current month Actual],
        [Measures].CurrentBalanceActualDecendants
    )
    SELECT {[Measures].[Current month Actual],
            [Measures].[BalanceTestc]}
    ON 0,
    (
        DESCENDANTS([Dimension Report Detail].[Finance Report Hierachy].&[4], 3, BEFORE)
        --[Dimension Report Detail].[Finance Report Hierachy].Allmembers
    ) ON 1
    FROM [opfinanceGLTransaction]
    WHERE {[Dimension Snapshot].[Date Value].&[2014-06-30T00:00:00],
    [Dimension Snapshot].[Date Value].&[2014-05-31T00:00:00],
    [Dimension Snapshot].[Date Value].&[2014-04-30T00:00:00],
    [Dimension Snapshot].[Date Value].&[2014-03-31T00:00:00],
    [Dimension Snapshot].[Date Value].&[2014-02-28T00:00:00],
    [Dimension Snapshot].[Date Value].&[2014-01-31T00:00:00]}
    The calculated measures:
    CREATE MEMBER CURRENTCUBE.[Measures].[Current Month Actual]
    AS IIF(
        [Measures].CurrentmonthActualDecendants = 0,
        [Measures].[Balance],
        [Measures].CurrentmonthActualDecendants
    ),
    VISIBLE = 1, ASSOCIATED_MEASURE_GROUP = 'Fact GL Transaction';
    CREATE MEMBER CURRENTCUBE.[Measures].CurrentmonthActualDecendants
    AS AGGREGATE(
        EXCEPT(
            DESCENDANTS([Dimension Report Detail].[Finance Report Hierachy].CurrentMember, 2, BEFORE),
            [Dimension Report Detail].[Finance Report Hierachy].CurrentMember
        ),
        [Measures].[Balance]
    ),
    VISIBLE = 1, ASSOCIATED_MEASURE_GROUP = 'Fact GL Transaction';

    Have you seen the "Multiple Parent-Child Hierarchies" design pattern in this paper:
    http://www.sqlbi.com/articles/many2many/
    I'm wondering whether you really need such MDX.
    Marco Russo - sqlbi.com

  • Modify example "Frequency Measurement.vi" for cFP to read two channels

    I am using the cFP-CTR-502 to measure frequency on one channel using a modified version of the built-in example "Frequency Measurement.vi", found under Hardware Input and Output > FieldPoint > Advanced. How do I modify this example further to incorporate additional frequency-measurement channels?

    AEICincy,
    Modifying the example for multiple frequency measurements shouldn't be too difficult. If you wish to use the same gating pulse, the only things you need to do are set up a new channel in MAX (the same way you set up the first channel, as per step 3 of the instructions), set the counter channel on the new front-panel controls, and add code to read and reset the counter (which can be copied and pasted in the False case of the code). I have attached an example of a VI with these changes.
    Devin K
    Systems Engineering - RTT & HIL
    Attachments:
    Frequency Measurement.vi (106 KB)

  • Aggregate - Efficient

    Hi There,
    I am trying to figure out how to measure whether my aggregate is good for a particular query.
    I have already activated BW statistics. In ST03N, I ran a particular query (purposely without activating the aggregate). I can see Records Selected is 15,231 and Records Transferred is 6,319, so Select/Transfer = 2.4.
    Then I ran RSRT and got "found aggregate", so I created the aggregate, filled it, etc.
    I ran the query again. Back in ST03N, the totals selected and transferred are the same.
    Does this mean the aggregate is not right? In RSRT debug mode I can see the system does pick up the aggregate I created.
    Have I missed something here?
    Regards

    Hi,
    Aggregate valuation:
    The "+"/"-" signs are the valuation of the aggregate's design and usage (for example, -3 is a poor valuation). "++" means its compression is good and it is accessed a lot; in effect, performance is good, and if you check its compression ratio it should be good. "--" means the compression ratio is not so good and access is also low. The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation.
    "-----" means the aggregate is just overhead and can potentially be deleted; "+++++" means it is potentially very useful.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    See Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15).
    Thanks,
    JituK

  • Cube Solve Time when using MAX Aggregation Operator

    Hello,
    We have created a cube to implement the distinct count measure we need.
    The cube contains only one measure (COUNT) and uses the MAX operator to aggregate across all dimensions except the one we want to count (which uses the SUM operator). We have set the precompute percent to 60% for the bottom partition and 0% for the top partition. The cube is compressed.
    The problem is that the SOLVE step for a partition during a COMPLETE cube build seems to take a very long time and uses huge amounts of TEMPORARY tablespace.
    We have successfully created another cube on the same dataset which uses the SUM operator across all dimensions.
    That cube build completed in a reasonable amount of time, even though it had 5 stored measures and 80% aggregation for the top partition.
    Is this behaviour expected when using the MAX operator?
    Thank you,
    Vicky

    Thank you, David.
    As you said, we are using mixed operators because we are doing a distinct count.
    We will try setting the precompute percent to 35%, although I'm a bit worried about query performance in that case.
    Neelesh, I think Atomic Refresh was set to TRUE during the last refresh, but the cube was the only object in the build script.
    No other cubes or dimensions were maintained in the same build, so I don't think it could have affected the use of TEMP tablespace.
    Generally we don't use Atomic Refresh.
    Thank you,
    Vicky
