Weighted aggregation

I have a cube. Most of the measures within this cube will be aggregated by a sum, but one measure needs to be aggregated by a weighted sum. My issue is that I am not sure where to get the "based on" value from for this option: should I have a measure mapped to a column in my fact table, or should I use an attribute in a dimension? Also, can this method be used across all dimensions or just one?
Basically I have a value, and the weighted sum should be based on the days in the month (month is the leaf level for the time dimension). I have a column in the time dimension with this value that could be mapped as an attribute if suitable.
I have looked around the net for documentation or examples to work from but have not found anything, hence coming here. I have also tried out a few different things with this aggregation method, but the results are not what I would expect.

You can set the BASED ON expression to a measure in a cube, an attribute in a dimension, or anything derived from these. Unfortunately AWM only supports direct measure mappings, but there are two workarounds if you need to reference an attribute.
The first is to use AWM to set the BASED ON expression to a measure -- any measure will do. Then export the cube to XML and look for the ConsistentSolve section. If you are taking the average over dimension TIME and you chose the measure ANY_MEASURE in ANY_CUBE, then it could look something like this.
   <ConsistentSolve>
      <![CDATA[SOLVE
  AVG(WEIGHTBY ANY_CUBE.ANY_MEASURE) OVER "TIME",
  .. other aggregations
)]]>
    </ConsistentSolve>
All you need to do is change the part that says "ANY_CUBE.ANY_MEASURE" to the attribute you want, e.g. "TIME"."NUM_DAYS":
   <ConsistentSolve>
      <![CDATA[SOLVE
  AVG(WEIGHTBY "TIME"."NUM_DAYS") OVER "TIME",
  .. other aggregations
)]]>
    </ConsistentSolve>
You can then recreate the cube from this XML.
As an alternative, you can create a calculated measure, say WEIGHT_CALC_MEASURE, and specify the arbitrary weight expression there. Then just base your aggregation on the calculated measure.
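To make the weighting concrete, here is a minimal sketch (plain Python, not Oracle OLAP syntax) of what a day-weighted rollup of months computes; the month values and day counts are made-up sample data:
   # Weight each month's value by the number of days in the month.
   months = [
       ("JAN", 100.0, 31),   # (month, measure value, days in month = weight)
       ("FEB", 200.0, 28),
       ("MAR", 150.0, 31),
   ]
   weighted_sum = sum(value * days for _, value, days in months)
   weighted_avg = weighted_sum / sum(days for _, _, days in months)
   print(weighted_sum)            # 13350.0
   print(round(weighted_avg, 2))  # 148.33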

Similar Messages

  • Workspace manager vs. shadow tables

    Hi,
    I have the requirement to track any changes (insert/update/delete) on db tables.
    At the end the user should be able to view the change history of records in the GUI.
    The two possible methods are (in my opinion):
    a) workspace manger
    b) manage shadow tables manually (with triggers)
    Has anyone experience with workspace manager for this use case?
    What are the pros and cons of the two methods?
    Database is 10gR2
    regards
    Peter


  • Analytic Workspace Manager vs Warehouse Builder

    When is it best to use Analytic Workspace Manager over Warehouse Builder to create OLAP? Please advise.

    We are using OWB to create OLAP because you have your metadata properly defined in the design repository of OWB from where you can deploy to different databases and schemas. We are also using OWB to create tables and other relational objects instead of using SQL Developer or Toad to do so.
    Nevertheless there are some restrictions when using OWB: You cannot create programs with OWB (e.g. for limiting access to certain objects), not all aggregation operators are supported (e.g. the weighted aggregation operators like WSUM are not supported by OWB), you cannot create models, ...
    If you run into these restrictions, you can write "after-deployment scripts": you deploy your dimensions and cubes from OWB and let the scripts do what you could not model in OWB.
    Hope this helps!

  • Shared Members in Custom can have different aggregation weight?

    Hi guys,
    I have created a Flow dimension to track the cash flow movements. Under TotalFlows, I have the different movements (OpBalance, CloBalance, Variation, Gain, Loss), all with aggregation weight of 1. But I have to create an additional structure (sibling of TotalFlows), called TotalFlows2, with shared members (OpBalance, CloBalance...) but with aggregation weight of zero.
    Can I use the shared members with a different aggregation weight? Or should I rename them (for example, TF2_OpBalance)?
    Please, advise.
    Thanks!
    Jai

    Absolutely: use the shared members and set the aggregation weight to zero in your duplicate structure; this is the key benefit of custom dimensions.
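    As a quick illustration of what the aggregation weight does, a minimal sketch (plain Python, hypothetical member values) of the two rollups:
    # Weight 1 lets a child contribute to its parent; weight 0 keeps the shared
    # member visible under the parent but excluded from the parent's total.
    members = {"OpBalance": 1000.0, "CloBalance": 1200.0, "Variation": 200.0,
               "Gain": 300.0, "Loss": -100.0}
    def rollup(values, weight):
        return sum(v * weight for v in values.values())
    print(rollup(members, 1))   # TotalFlows  -> 2600.0
    print(rollup(members, 0))   # TotalFlows2 -> 0.0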

  • 'Average Weighted with Calendar Days' in Exception Aggregation

    Dear specialists,
    I have created a query with a lot of calculated keyfigures.
    One of my calculated keyfigures shows percentage values.
    When I select a calendar day, my report shows correct results
    for the percentage values.
    But when I execute the report without selecting a calendar day,
    I get strange results like 3,21 % instead of 97,49 %.
    There is a new functionality with version 7.0, where we can set
    Aggregation properties like 'Average Weighted with Calendar Days' .
    But when I select this with Ref. characteristic 'calendar day',
    my report does not show any value or any result.
    Are there any other properties to set in Aggregation or Calculations?
    Please give me more information about this issue.
    regards Osman Akuzun

    Hi,
    When you define an exception aggregation on the number of working days, there might be a reference characteristic defined in your BEx query.
    For example, take a scenario like the one below. Your BW data is as follows:
    0Employee     StockIssued
    XXX            50
    XXX            10
    YYY            20
    YYY            30
    YYY            40
    ZZZ            50
    ZZZ           100
    ZZZ            30
    Suppose the first row (XXX with 50) falls on a non-working day and all the remaining rows fall on working days. Then the exception aggregation applied using AV2 on the characteristic 0Employee will be calculated in your report as below:
    For XXX it will be 10/1 = 10 (because there is only 1 working day for employee XXX)
    For YYY it will be 90/3 = 30 (because all 3 are working days for employee YYY)
    For ZZZ it will be 180/3 = 60 (because all 3 are working days for employee ZZZ)
    This is how the exception aggregation works. Try to find a similar example in your system and relate it along the same lines explained here.
    Regards
    Sunil
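    Following the arithmetic in the reply above, a minimal sketch (plain Python) of the working-day weighted average; the working-day flags are the ones assumed in the example:
    # Sketch of exception aggregation "average weighted with working days"
    # using the sample rows above: (employee, stock issued, is_working_day).
    rows = [
        ("XXX", 50, False),  # the one non-working day assumed in the example
        ("XXX", 10, True),
        ("YYY", 20, True),
        ("YYY", 30, True),
        ("YYY", 40, True),
        ("ZZZ", 50, True),
        ("ZZZ", 100, True),
        ("ZZZ", 30, True),
    ]
    totals = {}
    for emp, value, working in rows:
        if working:  # only working days enter the numerator and denominator
            total, days = totals.get(emp, (0, 0))
            totals[emp] = (total + value, days + 1)
    for emp, (total, days) in totals.items():
        print(emp, total / days)  # XXX 10.0, YYY 30.0, ZZZ 60.0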

  • Aggregation Behaviour in BEX Analyzer

    Hi Experts,
    Using an InfoSet, I combine information from two InfoProviders.
    One InfoProvider contains the key figure "amount" while the other contains the key figure "factor".
    Within a query based on my InfoSet I multiply "amount" by "factor". This works fine at the smallest granularity.
    The problem appears when I start aggregating in BEx.
    Now the cumulated "amount" is multiplied by the cumulated "factor". The results are wrong.
    What can I do?
    Exception aggregation is, in my point of view, not an answer, since "amount" has to be weighted with "factor".
    BR,
    Thorsten

    Hi Thorsten,
    To stop the cumulated "amount" from being multiplied by the cumulated "factor", you have options on the calculated key figure in the query.
    Right-click the calculated key figure used for the multiplication -> Properties; at the bottom of the properties there is an option "Before Aggregation". Check this option.
    Hope this helps
    Regards
    Daya Sagar
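    To see why the "before aggregation" setting matters here, a minimal sketch (plain Python, hypothetical rows) of the two calculation orders:
    # "Before aggregation": multiply amount by factor per record, then sum.
    # "After aggregation": sum both columns first, then multiply the totals.
    rows = [
        {"amount": 100.0, "factor": 0.5},
        {"amount": 200.0, "factor": 0.8},
    ]
    before_aggregation = sum(r["amount"] * r["factor"] for r in rows)                    # 210.0
    after_aggregation = sum(r["amount"] for r in rows) * sum(r["factor"] for r in rows)  # 390.0
    print(before_aggregation, after_aggregation)  # only the per-record result is the weighted figure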

  • Weighted Average in Pivot View

    Hi,
    Can you please throw some light on the below scenario based on weighted average:
    For Ex:
    Column1---Column2-----Column3
    R1-----------Line Item1--10
    R1-----------Line Item2--20
    R2-----------Line Item3--30
    R2-----------Line Item4--40
    Now I am trying to achieve a weighted average based on grouping by Column1.
    10+20=30==>30/2=15
    30+40=70==>70/2=35
    Before raising this thread, I had a look at the URL below:
    http://siebel.ittoolbox.com/groups/technical-functional/siebel-analytics-l/weighted-average-for-fact-column-values-1242947
    Can you please let me know how we can achieve this in the pivot view (OBIEE 10.1.3.4.1)?
    Your help is highly appreciated.
    Thanks in Advance
    Siva

    Hi,
    Try:
    AVG(measure_col by column1)
    Also try setting the Aggregation Rule (Totals Row) in the column formula to Average,
    and try setting the aggregation rule for the column in the pivot table to Average.
    Hope this helped/answered.
    Regards
    MuRam
    Edited by: MuRam on Jun 11, 2012 4:46 AM
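    For reference, the totals asked for in the question are plain per-group averages; a minimal sketch (plain Python, using the sample rows from the question):
    # Group Column3 by Column1 and average within each group.
    rows = [("R1", 10), ("R1", 20), ("R2", 30), ("R2", 40)]
    groups = {}
    for key, value in rows:
        groups.setdefault(key, []).append(value)
    for key, values in groups.items():
        print(key, sum(values) / len(values))  # R1 15.0, R2 35.0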

  • Weighted average calculation in query

    Hello
    I'm having some issues calculating a weighted average...
    imagine the next scenario
    LINE | FORMAT | TIME | WORKERS
    10    | 000123    | 10   | 5
    10    | 000123    | 350  | 2
    10    | 000123    | 75   | 1
    From this I need the weighted average of the number of workers, considering the time they were working.
    So, to calculate that, I have to do for example:
    (time * workers) / total time
    In numbers:
    ((10*5) + (350*2) + (75*1)) / (10 + 350 + 75)
    ≈ 1.89 workers for that line, for that format and for all that time
    I am able to do that formula inside BEx Analyzer; however, when the user drills down in the report or sorts the key figures in another order, the result is not correct.
    any suggestions?
    Best regards

    Hi Ricardo
    What we did in a similar case was use two formulas. The first one was the calculation (time * workers) with exception aggregation (summation).
    The second formula was the previous formula divided by time.
    The trick is which characteristic to use for the aggregation in the first formula; I think you should use the most detailed one.
    Then I believe the two formulas will correctly calculate the weighted average for any selections/analysis that you do.
    Regards
    Yiannis
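    The two-formula approach boils down to a ratio of sums; a minimal sketch (plain Python, with the rows from the question):
    # Weighted average workers = sum(time * workers) / sum(time).
    rows = [  # (line, format, time, workers)
        (10, "000123", 10, 5),
        (10, "000123", 350, 2),
        (10, "000123", 75, 1),
    ]
    weighted = sum(t * w for _, _, t, w in rows)  # 10*5 + 350*2 + 75*1 = 825
    total_time = sum(t for _, _, t, _ in rows)    # 10 + 350 + 75 = 435
    print(round(weighted / total_time, 2))        # 1.9 workers on average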

  • Parent value allocation considering aggregation operator

    Hello all Essbase gurus,
    I would appreciate an idea how to solve this allocation requirement.
    Outline
    (Important: in this example, the consolidation property is set to "(+) Addition" for two members and "(-) Subtraction" for the next three members.):
    Account
         Total cost (~)
              Account1 (+)
              Account2 (+)
              Account3 (-)
              Account4 (-)
              Account5 (-)
    Users can enter or import values into Lev0 and Lev1 members. Let's say the user entered values and expects 100 in the "Total cost" member and 20 in "Account1".
    Account
         Total cost (~)     100
              Account1 (+)     20
              Account2 (+)
              Account3 (-)
              Account4 (-)
              Account5 (-)
    I need to prepare a calc script to allocate the rest of the value (80) to the members with missing values, taking the consolidation sign/operator into account. The weight is the number of members with (+) and with (-).
    And this is my problem: I don't know how to get the consolidation property value into a calc script.
    Expected result in this case:
    Account
         Total cost (~)     100
              Account1 (+)     20
              Account2 (+)     -40
              Account3 (-)     -40
              Account4 (-)     -40
              Account5 (-)     -40
    Is it possible to solve it using any available Essbase technology (ESSCMD, MaxL, MDX, Java functions, etc.)?
    Number of accounts can vary based on parent member. Accounts can be shared in alternate hierarchies.
    A similar question is "Allocate an amount to #Missing values only", but that assumes the same aggregation operator for all members.
    Best regards
    Vladislav

    I tested this in Sample.Basic, but I don't think it's a good idea.
    1. Set "weight" as a stored member with this formula:
    if(@islev(measures,0))
        if(actual == #missing)
            1;
        else
            0;
        endif;
    endif;
    2. Add this hierarchy to Measures and input actual data: P1 = 100 and C1 = 20.
    P1 (+) 100
    C1(+) 20
    C2(+)
    C3(-)
    C4(-)
    C5(-)
    3. Test it:
    /* calculate the weight flags for the children of P1 */
    FIX(JAN, @CHILD(P1), "New York", "100-10")
       weight;
    ENDFIX
    /* aggregation based on outline */
    FIX(JAN, weight, "New York", "100-10")
       P1;
    ENDFIX
    /* allocate the remainder to the #Missing children */
    VAR temp = 0, changeParent = 0;
    FIX(JAN, @CHILD(P1), "New York", "100-10")
       ACTUAL (
          IF(ACTUAL == #MISSING)
             IF(changeParent == 0 OR temp == 0)
                temp = @PARENTVAL(measures) - @SUM(@SIBLINGS(@CURRMBR(measures))); /* this sum will return the wrong result too */
             ENDIF;
             ACTUAL = temp / weight->@PARENT(measures);
             changeParent = @PARENTVAL(measures) - @SUM(@SIBLINGS(@CURRMBR(measures)));
          ENDIF;
       );
    ENDFIX
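    Setting the calc script mechanics aside, the allocation arithmetic itself can be sketched as follows (plain Python, not Essbase syntax; the signs follow the consolidation operators in the outline from the question):
    # Spread the remainder of the entered total over the #Missing members,
    # honouring each member's +/- consolidation operator.
    total = 100.0
    children = {                  # member -> (consolidation sign, entered value)
        "Account1": (+1, 20.0),
        "Account2": (+1, None),
        "Account3": (-1, None),
        "Account4": (-1, None),
        "Account5": (-1, None),
    }
    entered = sum(sign * val for sign, val in children.values() if val is not None)
    missing_signs = [sign for sign, val in children.values() if val is None]
    # Every missing member gets the same value v; solving sum(sign_i * v) =
    # total - entered gives v = remainder / sum of the missing members' signs.
    v = (total - entered) / sum(missing_signs)    # 80 / (1 - 3) = -40
    result = {m: (val if val is not None else v) for m, (sign, val) in children.items()}
    print(result)   # Account2..Account5 become -40, so the outline rollup is 100 again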

  • POC aggregated ok in CNE5 on WBSe level 1, but not used in RA calculation

    Hi all,
    I have a WBS structure with, let's say, 2 levels of WBS elements: one WBSe at level 1 and 2 at level 2.
    I want to:
    - calculate the POCs at WBSe level 2, then aggregate them at level 1, using weighted WBSe;
    - run the RA calculation for all WBSe.
    The first part works OK with measurement methods on the WBSe and weights in CJ20N on WBSe level 2: transaction CNE1 to calculate the POC, then transaction CNE5: I get the correct POC at level 2 and the aggregated POC at level 1.
    But when I run the results analysis calculation (KKAJ) for the whole project, the POC is displayed correctly on WBSe level 2, but the aggregated POC is not used on WBSe level 1, and therefore the RA results are wrong at level 1: I always get 0 in the POC for the level 1 WBSe, or sometimes 100 if I play with the OKG3 parameters.
    The only clue I have is that, in table COSR used to store the POC, on the SKF for Result-Analysis usage, I get 3 lines :
    - one with value type P1, with 0 on all the line
    - one with value type P2, with the correct aggregated POC value, Business transaction PEV1
    - one exactly the same as above, but with negative values and business transaction PEV2.
    I don't know if this is the reason why the aggregated POC is not used in the RA calculation.
    Thank you all for any help.
    Patrick

    Did a fresh install and still having the same problem. Giving up now. Mod can now close this thread

  • Different aggregation operators for the measures in a compressed cube

    I am using OWB 10gR2 to create a cube and its dimensions (deployed into a 10gR2 database). Since the cube has 11 dimensions, I set all dimensions to sparse and the cube to compressed. The cube has 4 measures; two of them have SUM as the aggregation operator for the TIME dimension, the other two should have AVERAGE (or FIRST). I have SUM for all other dimensions.
    After loading data into the cube for the first time, I realized that the aggregation for the TIME dimension was not always (although sometimes) correct. It was really strange, because the aggregated values were either correct (for SUM and for AVERAGE) or seemed to be "near" the correct result (like an average of 145.279 and 145.281 coming out as 145.282 instead of 145.280, or 122+44+16 coming out as 180 instead of 182). For all other dimensions the aggregation was OK.
    Now I have the following questions:
    1. Is it possible to have different aggregations for different measures in the same COMPRESSED cube?
    2. Is it possible to have the AVERAGE or FIRST aggregation operator for measures in a COMPRESSED cube?
    For a 10gR1 database the answer would be NO, but for a 10gR2 database I do not know. I could not find the answer, neither in the Oracle documentation nor anywhere else.

    What I found in an Oracle presentation is that in 10gR2 the compressed cube enhancements support all aggregation methods except the weighted methods (first, last, minimum, maximum and so on). It is from September 2005, so maybe something has changed since then.
    Regarding your question about the results, I think it is caused by the fact that calculations are made on doubles and then there is compression, so maybe precision is lost a little bit :(. I am really curious whether it is because of numeric (precision loss) issues.

  • Applying a weight measure

    Hello,
    I'm writing a program in Java in which an article is matched against the top 10 topic-related entries in Google using similarity algorithms.
    Now let us assume that the original article is matched against the first one and the similarity is 0.98 (nearly identical). Against article 2 we get 0.86, against article 3 we get 0.92 and so on...
    For this program, I decided to also consider the PageRank (PR) score for each of the 10 Google entries. So for example the PR for the first entry is 8, the PR for the second article is 6 and so on...
    My question is how to apply a weight, derived from the PR score, to the similarity score. For example, for the first match I could say 0.98 * 8 = 7.84, but this score doesn't make much sense. I need the PR score to serve as a weight on the similarity score, but my idea of just multiplying the two is not very useful.
    Can anyone provide suggestions on how I can do this, please?
    Thanks!

    Hi qqsu,
    Yes this is just an initial thing, it has many issues and problems that need to be investigated further.
    You're right about the pagerank issue, but the thing is that the 11th topic would probably be less related to the topic than the first ten. And also we have to keep in mind that Google not only considers keywords, but also inlinks from authoritative sources. So a high pagerank score, although not necessarily meaning it is the best article, gives a rough indication of the content. And also we would have an aggregation from 10 different locations, not just one, so the ranking should hopefully be more accurate.
    However, I found a solution to the weight problem by using logarithms for the pagerank scores.
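    One way to read that: dampen the PageRank with a logarithm so it scales the similarity instead of dominating it. A minimal sketch (plain Python; the exact weighting formula is an illustrative assumption, not the poster's code):
    import math
    # A normalised log of the PageRank acts as a weight in (0, 1] on the similarity.
    matches = [  # (similarity, pagerank) for the top Google entries
        (0.98, 8),
        (0.86, 6),
        (0.92, 4),
    ]
    def weighted_score(similarity, pagerank, max_pagerank=10):
        weight = math.log1p(pagerank) / math.log1p(max_pagerank)
        return similarity * weight
    for sim, pr in matches:
        print(sim, pr, round(weighted_score(sim, pr), 3))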

  • Weighted average total displaying as average

    Dear BI Guru's,
    In one of my reports, the weighted average total value displays incorrect values. For example:
    Clinker Value   Clinker Rate
    (Rs in lacs)        (Rs/MT)
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    0.00
    342.00
    1.21
    342.00
    0.72
    342.00
    0.80
    342.00
    2.73
    19.59
    The Clinker Rate (Rs/MT) total should display 342 instead of 19.59. I applied exception aggregation (Total) to the clinker value with reference characteristic plant; the same exception aggregation is applied to the net quantity key figure.
    Clinker rate calculation = Clinker value / Net quantity.
    Thanks in advance for any replies.
    regards
    Ramesh G

    What is the line-item InfoObject in your data that is causing the given list of records?
    Try TOTAL with the line-item InfoObject as the reference characteristic, not plant.
    Try constant selection on your key figure to display 342.

  • Weight factors in a many-to-many relationship with bridge table

    Hi, I have the same N:N relationship schema as in this link:
    http://www.rittmanmead.com/2008/08/28/the-mystery-of-obiee-bridge-tables/
    In my bridge table I have a weight factor for every (admission, diagnosis) pair. If I aggregate and query these columns in Answers:
    DIAGNOSIS | ADMISSIONS_COSTS
    every single diagnosis gets the sum of the WHOLE Admission_cost it refers to, not its contribution to it (for example 0.30 as the weight factor). The result is an ADMISSION_COSTS sum larger than the ADMISSION_COSTS sum at the lowest level of detail, because the same cost is summed many times.
    How can I use my weight factor to calculate the right contribution of each diagnosis to its admission? In the BI Administration Tool I tried to build a calculated logical column based on a physical column, but in the expression builder I can only select the ADMISSION_COST measure physical column, and it doesn't let me pick the weight factor from the bridge table.
    Thanks in advance!
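    What the weight factor is meant to do is scale each admission's cost by the diagnosis's share before summing. A minimal sketch (plain Python with hypothetical rows, not OBIEE repository syntax):
    # Each bridge row carries (admission, diagnosis, weight factor); the diagnosis
    # total should sum weight * admission_cost, not the full admission_cost.
    admission_cost = {"A1": 1000.0, "A2": 500.0}
    bridge = [
        ("A1", "D1", 0.70),
        ("A1", "D2", 0.30),
        ("A2", "D1", 1.00),
    ]
    naive, weighted = {}, {}
    for adm, diag, w in bridge:
        naive[diag] = naive.get(diag, 0.0) + admission_cost[adm]           # double counts
        weighted[diag] = weighted.get(diag, 0.0) + w * admission_cost[adm]
    print(naive)     # {'D1': 1500.0, 'D2': 1000.0} -> total 2500, more than was spent
    print(weighted)  # {'D1': 1200.0, 'D2': 300.0}  -> total 1500, matches the admissions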

    I'm developing a CS degree project with 2 professors, Matteo Golfarelli and Stefano Rizzi, who have developed the Dimensional Fact Model for data warehouses and wrote many books about it.
    They followed the Kimball theory about N:N relationships and used its bridge table concept, so when I told them that this definition exists in OBIEE they were very happy.
    But that happiness stopped when I said that bridge tables only connect fact tables to dimension tables, and that to create N:N relationships between levels at higher aggregation we should use logical joins, as you said in your blog. I need to extract metadata concepts from the UDML export language, and for N:N I can do that only with bridge table analysis; I can't extract and identify an N:N level relationship from a multiple-join schema as in your blog... this is the limit of your solution for our project, only this!
    PS: sorry for my English, I'm Italian!
    thanks for the replies!

  • Before  and After Aggregation

    Hi Gurus,
    Where can we find the options "Before Aggregation" and "After Aggregation"? What is the basic use of these aggregations? Why should we not use the option called "Before Aggregation" in a MultiProvider?
    Could any one please let me know the above..
    Thanks!
    James

    Hi James,
    Aggregation
    This function is only available for formulas and calculated key figures.
    You can make settings for aggregation and calculation time (for the detail level of the calculated key figure or formula) here. By default, the data is first aggregated to the display level; the formula is then calculated (= standard aggregation). The exception aggregation setting allows the formula to be calculated before aggregation using a reference characteristic, and to be then aggregated with the exception aggregation.
    You can select the following settings in the Exception Aggregation field.
    ● Use Standard Aggregation: You use this setting to specify that aggregation is to take place before calculating the formula. Therefore, you do not use exception aggregation.
    ● Sum
    ● Maximum
    ● Minimum
    ● Exception If More Than One Record Occurs
    ● Exception If More Than One Value Occurs
    ● Exception If More Than One Value <> 0 Occurs
    ● Average of All Values
    ● Average of All Values <> 0
    ● Average Weighted with Calendar Days
    ● Average Weighted with Working Days
    You can specify the ID of the factory calendar in Customizing. For more information, see the SAP Reference IMG → SAP Customizing Implementation Guide → SAP NetWeaver → Business Intelligence → Settings for Reporting and Analysis → General Settings for Reporting and Analysis → Set F4 Help and Hierarchies for Time Characteristics/OLAP Settings.
    ● Counter for All Detailed Values
    ● Counter for All Detailed Values That Are Not Zero, Null, or Error
    ● First Value
    ● Last Value
    ● Standard Deviation
    ● Variance
    If you use exception aggregation, you must select a characteristic from the Reference Characteristic field, using which the system can calculate the formula before aggregation. In the Reference Characteristic field, you can select from all characteristics that are available in the InfoProvider.
    For more information about the setting for exception aggregation, see the documentation for InfoObject maintenance under Tab Page: Aggregation.
    Calculate After Aggregation: This field is only displayed for calculated key figures; it is not displayed for formulas. It is used for display and stipulates that the formula of the calculated key figure is calculated after aggregation. If you use calculated key figures that you defined in SAP BW 3.5, you can use this field to specify whether the formula is to be calculated before or after aggregation.
    Calculating before aggregation results in poor performance because the database reads the data at the most detailed level and the formula is calculated for every record. In formula calculations, often the single record information for just one or two specific characteristics is required; the remaining data from the InfoProvider can be aggregated.
    We recommend that you control the calculation level using an exception aggregation with the corresponding reference characteristic. If you require multiple exception aggregations or reference characteristics, you can nest formulas and calculated key figures in one another and specify an exception aggregation for each formula or calculated key figure.
    For more information, see:
    Examples of Exception Aggregation Last Value (LAS) and Average (AV0)
    Tab Page: Aggregation
    Hope it Helps
    Srini
