OLAPTRAIN - Measure Aggregation Problem

Hi all,
I have a problem and I'm asking for your help.
I've installed the OLAPTRAIN schema and correctly generated the repository file through the AWM plugin.
Once the repository was uploaded, I tried to run an analysis with BIEE.
I chose Region as the dimension and Sales as the measure, but here I have the problem.
Here is an example of result:
Region   Sales
Africa     123
           456
           789
           123
Europe     565
           575
           342
As you can see, Region is correct but Sales is not aggregated correctly, while in AWM the data are all correct.
I've also tried choosing only dimensions, and those are correct.
Example:
Region   Product    Channel
Europe   Computer   Direct
         Camera     Direct
                    Indirect
So I think the problem is with the Sales fact table.
Could you help me find the problem?
Application versions:
Oracle BIEE 11.1.1.5.0
Administration Tool 11.1.1.5.0
AWM patch 11.2.0.2.0B
Thank you all very much in advance.

Could it be a version compatibility problem?
Edited by: 896514 on 15-nov-2011 15.59

Similar Messages

  • Nested Aggregation Problem

    I am using OBI 11g.
    Nested aggregation is not supported in some forms in the BI Server (RPD), but it appears to be possible by putting the second aggregation rule in an Answers column formula or a pivot view column. However, I cannot get this to work. It looks like it can even be done in the RPD with aggregation based on dimensions, as long as there is a standard aggregation function on the outside of the expression.
    The biggest problem with any of the above techniques is the BI Server does not push the outer aggregation rule to the DB engine (the generated SQL).
    In my case, consider a Referral Fact with Customer Dim and Referral Dim. I need to get # of Referrals per customer, filter that with a case statement to "bin" 1 Referral and >1 Referral, and then get # of Customers in each bin. So the first measure aggregation looks like:
    Other: COUNT (DISTINCT "Referral Key")
    Customer: "SUM( CASE WHEN "Referral Key" = 1 THEN 1 ELSE 0 END )"
    Or the logical measure just has the COUNT DISTINCT aggregation rule and an Answers column has the CASE statement with a SUM aggregation rule. Or use CASE WHEN "Referral Key" = 1 THEN "Customer Key" END and use COUNT DISTINCT instead of SUM.
    All these appear to return correct results, but they all perform the outer aggregation in the BI Server or Pivot engine instead of pushing to the generated SQL (DB engine).
    I can't find any problem in the DB Features. We are using SQL Server 2010.
    Thanks in advance for help.
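    Stripped of OBIEE, the binning requirement in the question is just two aggregation passes. Here is a minimal Python sketch with made-up fact rows and bin labels (nothing here comes from an actual RPD):

```python
from collections import Counter

# Hypothetical referral fact rows: (customer_key, referral_key)
facts = [
    (1, 101), (1, 102),           # customer 1: 2 distinct referrals
    (2, 103),                     # customer 2: 1 referral
    (3, 104), (3, 104), (3, 105)  # customer 3: 2 distinct referrals
]

# Inner aggregation: COUNT(DISTINCT referral_key) per customer
referrals_per_customer = {
    cust: len({ref for c, ref in facts if c == cust})
    for cust in {c for c, _ in facts}
}

# Outer aggregation: bin customers into "1 referral" vs ">1 referral",
# then count customers per bin
bins = Counter(
    "1 referral" if n == 1 else ">1 referral"
    for n in referrals_per_customer.values()
)
print(bins)  # Counter({'>1 referral': 2, '1 referral': 1})
```

    This is the semantics the poster wants pushed into the generated SQL; in a database it would be a GROUP BY per customer wrapped in a second GROUP BY over the CASE expression.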

    Hi AL,
    here is my requirement; this is the output I have been asked to produce.
    I have key figures KF1, KF2 and Total KF,
    and three characteristics: dist, inch, load.
    dist  inch  load  KF1  KF2  Total KF
     5    A     0     10    0   10
     5    A     1     20   20   20+10=30
    10    B     0     50    0   50
    12    C     1     60   60   60
    13    D     2     70   70   70
    14    E     0     80    0   80
    15    E     1     20   20   20+80=100
    15    E     2     30   30   30+100=130
    KF1 is the initial volume, coming directly from the file. Based on this key figure I have to calculate KF2 and Total KF.
    In order to calculate KF2 and Total KF I have some conditions, mentioned below:
    KF2 ---> if load = 0 then KF2 = 0, else if load > 0 then KF2 = KF1;
    Total KF ---> if load = 0 then Total KF = KF2 + KF1, else if load > 0 then Total KF = KF2 + the previous Total KF (a running total, as in the table above).
    How do I achieve this dynamic summation? Do I have to do nested exception aggregation based on the above three characteristics? What are the open options? Please help me.
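    If I read the sample table correctly, Total KF is a running total of KF1 that restarts with each new value of inch. A small Python sketch of that inferred rule (the reset condition is my assumption from the sample rows, not something stated explicitly in the post):

```python
# Rows of (dist, inch, load, KF1), taken from the table above
rows = [
    (5,  "A", 0, 10),
    (5,  "A", 1, 20),
    (10, "B", 0, 50),
    (12, "C", 1, 60),
    (13, "D", 2, 70),
    (14, "E", 0, 80),
    (15, "E", 1, 20),
    (15, "E", 2, 30),
]

result = []
prev_inch, running = None, 0
for dist, inch, load, kf1 in rows:
    kf2 = 0 if load == 0 else kf1         # KF2 rule from the post
    if inch != prev_inch:
        running = 0                        # assumed: total restarts per inch value
    total = (kf1 if load == 0 else kf2) + running
    running, prev_inch = total, inch
    result.append((dist, inch, load, kf1, kf2, total))

for r in result:
    print(r)
```

    Running this reproduces the Total KF column of the sample (10, 30, 50, 60, 70, 80, 100, 130), which suggests the requirement is a group-wise cumulative sum rather than plain aggregation.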

  • Measure Aggregated at Logical Level causes poor performance

    OBIEE 11g
    We've recently implemented a new measure which involves aggregating a Period expenditure figure up to the Fiscal Year level.
    I've duplicated the existing Period_Expenditure, renamed it to FY_Expenditure, and then changed the logical level for the Time dimension to Fiscal Year.
    I get the figures I expected, however the OBI SQL generated wasn't what I expected.
    We now have the main SQL as before, but it's now left outer joined to a second SQL statement that does the aggregation, which is pretty expensive in terms of logical I/O. Elapsed time goes from 2 seconds to 20 seconds.
    I would have expected the BI server to be clever enough to apply analytics to solve this problem - has anyone else had similar issues when using aggregates like this?
    Thanks in advance,
    Matt

    I'm going to answer my own question here as I've managed to build a simplified solution, it may help others.
    Basically, the analysis this is running for is:
    select measures, FY_Measure
    from fact_table
    where period = 'MMYYYY';
    Because we don't have the full set of data within this SQL to provide coverage for FY_Measure (i.e. we're at period-level granularity), OBIEE is smart enough to create a second query to get the data required for the year.
    If we change the analysis to be:
    select measures, FY_Measure
    from fact_table
    where FY = '2012';
    It will quite happily create analytics to satisfy the measure - fairly straightforward really.
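    The analytic rewrite Matt is describing can be sketched outside OBIEE. This is a toy example run through SQLite (table and column names are made up, and SQLite 3.25+ is assumed for window-function support); the point is that one pass with SUM() OVER (PARTITION BY fy) replaces the join to a second aggregate query:

```python
import sqlite3

# Toy fact table: period-level expenditure; the FY measure should be the
# fiscal-year total repeated on every period row (all names are invented)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact_table (fy TEXT, period TEXT, expenditure REAL)")
con.executemany(
    "INSERT INTO fact_table VALUES (?, ?, ?)",
    [("2012", "012012", 10.0), ("2012", "022012", 20.0), ("2011", "122011", 5.0)],
)

# Single pass with an analytic function instead of a left outer join
# to a second aggregating query
rows = con.execute("""
    SELECT period, expenditure,
           SUM(expenditure) OVER (PARTITION BY fy) AS fy_expenditure
    FROM fact_table
    WHERE fy = '2012'
    ORDER BY period
""").fetchall()
print(rows)  # [('012012', 10.0, 30.0), ('022012', 20.0, 30.0)]
```

    Note this works here because the WHERE clause keeps the whole fiscal year in scope, which is exactly the condition Matt identified: filter at FY grain and the analytic form is valid; filter at period grain and a second query is genuinely needed.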

  • First time to build OLAP Cube - aggregation problem

    I am new to OLAP technology and I want to build my first OLAP cube. I am using the SCOTT schema, doing:
    1- build a dimension on the "DEPT" table with one level, deptno.
    2- build a cube with one measure, sal, on the "EMP" table.
    Table Emp
    DEPTNO   SAL
    20        800
    30       1600
    30       1250
    20       2975
    30       1250
    30       2850
    10       2450
    20       3000
    10       5000
    30       1500
    20       1100
    30        950
    20       3000
    10       1300
    Table DEPT
    DEPTNO   DNAME
    10       ACCOUNTING
    20       RESEARCH
    30       SALES
    40       OPERATIONS
    when I use the maintain wizard and then view the data, the sum of salary is not accurate. It looks like:
    sum
    all depts     5550
    accounting    1300
    research      3000
    sales         1250
    operations       0
    The values should be:
    sum
    all depts    29025
    accounting    8750
    research     10875
    sales         9400
    operations       0
    Why are the aggregated values for the departments not accurate?

    Problem is visible in your below table.
    Table Emp
    DEPTNO SAL
    20 800
    30 1600
    30 1250
    20 2975
    30 1250
    30 2850
    10 2450
    20 3000
    10 5000
    30 1500
    20 1100
    30 950
    20 3000
    10 1300
    There are multiple rows per deptno with different sal values. In OLAP, when you load such records, the last record wins. If you look closely, you will see that the value shown is the last value loaded.
    To resolve this, group by deptno (in EMP) and sum(sal). Load those records and you will see the correct result.
    Hope this helps.
    Thanks,
    Brijesh

  • Aggregation problem in Query design

    Hi folks,
    I have the following situation:
    - Characteristics as line items, shown as a hierarchy.
    - Key figures and formulas in columns.
    I want the formulas to be calculated on the basis of a combination of two of the characteristics (the bottom two in the lines hierarchy) and then aggregated.
    Exception aggregation allows me to choose "total" based on one characteristic, but not on a combination of two.
    One solution would be, of course, to create a new characteristic in the cube that is a combination of the two existing characteristics and then use it for exception aggregation. This would be a bit complicated though, as the two characteristics are in fact navigational attributes of characteristics that appear in the data source and the cube.
    Therefore, I'd prefer a solution within the scope of the Query Designer rather than cube design.
    Any ideas?
    PS: The formulas show correct values at the level of the bottom-most characteristic in the lines hierarchy if I choose that characteristic for exception aggregation. The results are not correct one level up from there, as I cannot get the system to sum the results up along the hierarchy. Instead, they are calculated on the aggregated values of the key figures used in the formula.
    Apparently the "show lines as hierarchy" function really only affects the display of the lines, not the calculations.

    Hi John,
    I understood your problem. The solution is easy:
    Make formula 1 with exception aggregation "total" and reference characteristic char 1.
    Then make another formula 2 on formula 1, with exception aggregation "total" and reference characteristic char 2.
    In the report, show formula 2 and hide formula 1.
    Best Wishes,
    Mayank
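    Mayank's two-step trick can be illustrated outside BW. In this hypothetical Python sketch, a formula is evaluated at the granular (char 1, char 2) level and then totaled in two stages; the last lines show how this differs from applying the formula to already-aggregated key figures, which is the wrong result the original poster was fighting:

```python
# Hypothetical cells keyed by (char1, char2), each with two key figures
cells = {
    ("a1", "b1"): (2, 3),
    ("a2", "b1"): (4, 5),
    ("a1", "b2"): (1, 7),
}

# Formula evaluated at the (char1, char2) level, e.g. KF1 * KF2
formula = {k: kf1 * kf2 for k, (kf1, kf2) in cells.items()}

# Stage 1: "total" exception aggregation over char1
by_char2 = {}
for (c1, c2), v in formula.items():
    by_char2[c2] = by_char2.get(c2, 0) + v

# Stage 2: "total" exception aggregation over char2
grand_total = sum(by_char2.values())

# Computing the formula on pre-aggregated key figures gives a different
# (wrong) number -- the trap this thread is about
naive = (sum(kf1 for kf1, _ in cells.values())
         * sum(kf2 for _, kf2 in cells.values()))

print(grand_total)  # 33  (6 + 20 + 7)
print(naive)        # 105 (7 * 15)
```

    Hiding formula 1 and showing formula 2, as Mayank suggests, is the BEx equivalent of showing only `grand_total`.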

  • CostCenter Group Aggregation problem. High Priority Issue.

    Hi Experts,
    I have a very critical issue in a report and need help.
    I have a requirement in an overhead cost report to combine COPA data and cost center (overhead cost) data. I have a problem with cost center group aggregation in the report.
    I have two restricted key figures and want to derive the third key figure's values. Below is an example of the data. In my report I just want to display the key figures (KF-A, B, C); I don't want to display the cost center group / cost center number. My report output will be one line. But if I remove the cost center group, I see different values for key figure C, and if I display the cost center, it comes with different values. I expect the value in column C to be 1400, but it comes out as 3600.
    Please guide me on how to design the key figures so that I get the value 1400 in key figure C without showing the cost center in the report.
    Cost Center Group R00048
    Cost center   KF-A   KF-B   KF-C
    10113         10     10      100
    10114         20     20      400
    10115         30     30      900
    Total         60     60     1400 (instead, it comes out as 3600)
    Appreciate the response.
    Thanks,
    Venkata

    Hi,
    Check the display properties of the key figure in the Query Designer / at InfoObject level.
    Set the property accordingly.
    I hope it will help.
    Thanks,
    S

  • Read from measurement file problem

    Hi all,
    I am using the "Read From Measurement File" VI that is built into LV8. I am reading in a .lvm file consisting of 2 columns. I cannot seem to figure out how to index each column. When tested with a single-column data file, I am able to use the Index Array VI to access the data successfully. The problem occurs when I have 2 columns. To work around it, I split my initial data file into two single-column data files, but I would prefer not to do this. Is there any way to avoid it?
    When I read the 2D file into an array I can only index the data located in the 1st row 1st col and the data in the last row 2nd col.
    The array size function results in the value "2" yet each column has 201 entries.

    Attachments:
    Write to LVM.vi ‏128 KB
    To read LVM.vi ‏70 KB
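    The usual fix in LabVIEW is to index the 2-D array by column (or transpose it first) rather than treating it as 1-D; Index Array on a 2-D array with only the row index wired returns a whole row, and Array Size on the wrong dimension can report the column count instead of the row count. The same idea in a small Python sketch with made-up numbers:

```python
# Two-column data, as read from a .lvm-style file: one inner list per row
rows = [
    [0.0, 1.5],
    [0.1, 1.7],
    [0.2, 1.9],
]

# Indexing rows[i] gives an entire row of length 2, which is why a size
# check on the wrong dimension can come back as "2" instead of 201
col0 = [r[0] for r in rows]   # first column
col1 = [r[1] for r in rows]   # second column

# Equivalently, transpose once and index columns directly
cols = list(zip(*rows))
print(cols[1])  # (1.5, 1.9, ...) -- the second column as one sequence
```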

  • Measurement & Automation Problem I think?

    Hello, I have two separate computers running the same software; one has VB6 installed.
    I have a LabVIEW program and a VB6 program that do pretty much the same thing.
    I set up a task in M&A with 2 temperatures. The voltages are the problem.
    They run fine in VB6 with a voltage of around 0.25 on one computer.
    When I switch the USB cable of a DAQPad-6015 to the other computer, the voltages change to 0.14 and 0.09 inside M&A and LabVIEW.
    I am also taking 2 pressures and flows in the same task, which have equivalent voltages between computers.
    What's happening?
    Thanks.
    The funny thing is that if I unplug the USB cable and then plug it back in, the correct voltage is displayed in M&A for a split second and then drops to the erroneous voltage.
    Message Edited by j_es on 08-22-2006 02:51 PM
    Attachments:
    B3Controller5.vi ‏807 KB

    Hi Jes,
    I just want to make sure I understand the situation:  You have a single DAQPad-6015 for USB with two different cables, and you are connecting it to different computers (only one at a time, back and forth).  You have a VB6 program on one computer and a LabVIEW program on the other computer, each performing the same tasks.  This task is comprised of 2 AI Temperature channels, 2 AI Pressure channels, and 1 AI Flow channel. When reading from the VB6 computer with cable #1, you read .25V on temperature channel 1.  When reading from the LabVIEW computer with cable #2, you read .14V on temperature channel 1.  Is that correct?
    I would suggest reading from Measurement & Automation explorer on both computers from the DAQPad-6015 with cable #1, then doing the same thing (reading from Measurement & Automation explorer on both computers from the DAQPad-6015), but this time with cable #2.  This will eliminate the variable of the software application being used.  If both cables are working properly, you should see the same readings on each computer.  If they are different, it sounds like one of the cables could be faulty. 
    Also note that if you are taking temperature readings, slight variations in temperature will cause changes in the voltage read in.  So if these computers are in different rooms, the difference in reading could be due to the difference in temperature between rooms. 
    Please verify that I understand your setup and give that a try.  Let me know if you are still having trouble...
    Regards,
    Nicholas B, Applications Engineer, National Instruments

  • Aggregation problem

    Hi experts.
    I've got 3 tables A, B and C.
    Columns in A: "Konto", "Klient", "Saldo"
    Columns in B: "Konto", "Klient", "Rach", "Saldo"
    Columns in C: "Konto", "Rach", "Kwota"
    Physical diagram A -<-B ->- C
    Business model diagram A -<- B ->- C
    Logical table source: A for table A, B for table B and C for table C.
    I add logical table source C to table B, and add one column "Suma Kwota" with aggregation rule: sum.
    Now when I create a report with the columns
    C."Konto", B."Suma Kwota"
    everything works OK, but when I add a column from table A or B, for example
    B."Klient", C."Konto", B."Suma Kwota"
    then the column "Suma Kwota" returns null values for the whole report.
    Why do I get null values?
    Next I added table B to the logical table source of table C, so I have tables C and B in LTS C using an inner join. Now in the report the column B."Suma Kwota" is not null but returns bad values (too high). I joined tables B and C through the columns "Konto" and "Rach".
    In table C I have distinct values for the column "Klient", but not in table B. So, for example, where I have one row for a "Klient" in C, I can have 10 rows for that same "Klient" in table B. So when I have one row in table C and 10 rows in B, "Suma Kwota" is 10 times bigger.
    How can I avoid this?
    I can use the non-aggregated column "Kwota" from table C and sum it in the presentation service, but I want to know why this is not working with a measure.
    Edited by: Ultrecht on 2009-07-30 14:15

    Hi, thanks for the quick response. I have no time to check this today, but I can post here the query that BI sends to the database.
    When BI returned null values:
    select distinct D1.c1 as c1,
    D1.c2 as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    cast(NULL as DOUBLE PRECISION ) as c5
    from
    (select distinct T135268.ODDZIAL as c1,
    T135352.KLIENT as c2,
    T135352.KONTO as c3,
    T135352.RACH as c4
    from
    TABLE_C T135352,
    TABLE_B T135268
    where ( T135268.KLIENT = T135352.KLIENT and T135268.KONTO = T135352.KONTO and T135268.RACH = T135352.RACH )
    ) D1
    When BI returned bad values (too high).
    select distinct D1.c2 as c1,
    D1.c3 as c2,
    D1.c4 as c3,
    D1.c5 as c4,
    D1.c1 as c5
    from
    (select sum(T135352.KWOTA) as c1,
    T135268.ODDZIAL as c2,
    T135352.KLIENT as c3,
    T135352.KONTO as c4,
    T135352.RACH as c5
    from
    TABLE_B T135268,
    TABLE_C T135352
    where ( T135268.KLIENT = T135352.KLIENT and T135268.KONTO = T135352.KONTO and T135268.RACH = T135352.RACH )
    group by T135268.ODDZIAL, T135352.KLIENT, T135352.KONTO, T135352.RACH
    ) D1
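    The inflated second result is a classic fan trap: C's measure is repeated once per matching B row before the SUM runs. A small Python sketch with made-up keys shows the effect and the usual fix (aggregate the measure at its own grain before joining):

```python
# Fan trap sketch: B joins C on (Konto, Rach) with several B rows per key,
# so summing Kwota after the join multiplies it by the number of B matches
b_rows = [  # (konto, rach, klient) -- the same Klient repeated, as in table B
    ("K1", "R1", "CL1"),
    ("K1", "R1", "CL1"),
    ("K1", "R1", "CL1"),
]
c_rows = [("K1", "R1", 100)]  # (konto, rach, kwota) -- distinct rows in table C

joined = [(kw, kl) for (ck, cr, kw) in c_rows
          for (bk, br, kl) in b_rows if (bk, br) == (ck, cr)]
print(sum(kw for kw, _ in joined))  # 300 -- Kwota counted once per B row

# Aggregating C before the join keeps the measure correct
kwota_by_key = {}
for ck, cr, kw in c_rows:
    kwota_by_key[(ck, cr)] = kwota_by_key.get((ck, cr), 0) + kw
print(sum(kwota_by_key.values()))   # 100
```

    This matches the second generated query above: the GROUP BY happens after the B-to-C join, so each Kwota is counted as many times as its key appears in B.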

  • Unit of measure language problem

    Hi experts,
    I'm selecting matnr and meins from MARA. In SE11 I see meins in my language (DB), but when I execute my program I see the unit of measure as 'ST' (Stück).
    Could you help me with how to get the unit of measure in a certain language?

    Wenonah, that won't work, since the meins in MARA cannot be found in T006.
    But I finally found a function module which solves my problem:
    CONVERSION_EXIT_CUNIT_OUTPUT
    Thanks everybody for trying to help me; I appreciate it.

  • SQL: Aggregating Problem

    Hello,
    I have the following problem aggregating values in one of our tables:
    for each day there is a row which contains the change relative to the previous day. We now need a view containing, for each date of the year, a subtotal from the beginning of the year. Is there an aggregate function with this functionality? Is it possible to solve this without using PL/SQL?
    Example:
    create table example (datum date, diff number);
    Rows in the table:
    insert into example values (to_date('01-01-2005','dd-mm-yyyy'), -1);
    insert into example values (to_date('02-01-2005','dd-mm-yyyy'), 1);
    insert into example values (to_date('03-01-2005','dd-mm-yyyy'), 2);
    insert into example values (to_date('04-01-2005','dd-mm-yyyy'), -1);
    We need following result:
    '01-01-2005', -1 --(sum from 01-01-2005 to 01-01-2005)
    '02-01-2005', 0 --(sum from 01-01-2005 to 02-01-2005)
    '03-01-2005', 2 --(sum from 01-01-2005 to 03-01-2005)
    '04-01-2005', 1 --(sum from 01-01-2005 to 04-01-2005)
    Thanks for your help
    Philipp
    Message was edited by:
    [email protected]

    The windowing clause may not be required, but depending on how the OP feels about multiple months or years, something like this may be more accurate.
    DATUM             DIFF
    01-JAN-2005         -1
    02-JAN-2005          1
    03-JAN-2005          2
    04-JAN-2005         -1
    29-DEC-2004          1
    30-DEC-2004          2
    31-DEC-2004          3
    SQL> SELECT datum, SUM (diff) OVER (ORDER BY datum) diff
      2    FROM example;
    DATUM             DIFF
    29-DEC-2004          1
    30-DEC-2004          3
    31-DEC-2004          6
    01-JAN-2005          5
    02-JAN-2005          6
    03-JAN-2005          8
    04-JAN-2005          7
    SQL> SELECT datum, SUM (diff) OVER (PARTITION BY TRUNC(datum,'YEAR')
      2                                 ORDER BY datum) diff
      3  FROM example;
    DATUM             DIFF
    29-DEC-2004          1
    30-DEC-2004          3
    31-DEC-2004          6
    01-JAN-2005         -1
    02-JAN-2005          0
    03-JAN-2005          2
    04-JAN-2005          1
    John

  • Aggregator Problems

    I have three SWFs/projects that are daisy-chained and also put into the Aggregator. The aggregated file is here.
    When it gets to the end of one SWF and starts the next, instead of showing the next SWF correctly, it plays the audio only for a few seconds while showing a white screen, and then stalls out. At that point, the controls for the movie no longer work.
    This behavior is the same whether the files/projects are daisy-chained or not. It also happened when I used a button to launch the next file.
    If I refresh the browser when it is stuck, it starts working again, but the TOC controls no longer work: you can click on a TOC entry and nothing happens.
    I routinely clear the browser and Flash caches when I test.
    I am struggling to get the project titles to save correctly and have tried most of the "save as" tricks suggested here. I don't know if that is part of the problem, but I suspect not, because the file starts playing correctly again when refreshed. I also deleted the Adobe Captivate Course Companion widget in the SWF that stalls out (the only plugin), but that didn't help.
    Been working all day on this one, would love some fresh ideas. Thank you!

    When publishing projects that will be aggregated, don't try to daisy chain them. Aggregator will look after that.  Just set the Project End option in Preferences to Stop Project.  After each SWF is aggregated, as soon as one project finishes playing, the next one will start.
    Another tip I find useful: Don't have any objects on the very first slide in each SWF.  Just make it blank and about 1 second long.  Start your animation or audio etc on the second or third slide in the movie.  If you want a fade-in effect, use the first blank slide as a fade in slide via the Project Start option, not the slide's own fade-in transition.  The project fade in seems cleaner to me.

  • Ragged Hierarchy - aggregation problem

    I built a dimension with a ragged hierarchy as posted in http://oracleolap.blogspot.com/2008/01/olap-workshop-4-managing-different.html
    in the "Skip, Ragged and Ragged-Skip Level Hierarchies" section.
    I use the SCOTT schema for the test.
    1- build dimension EMP with 4 levels using this data
    ==> these data come from the relation between the EMPNO and MGR columns of the EMP table
    LVL1_CODE, LVL1_DESC, LVL2_CODE, LVL2_DESC, LVL3_CODE, LVL3_DESC, LVL4_CODE, LVL4_DESC
    LVL1_CODE, LVL1_DESC, LVL2_CODE, LVL2_DESC, LVL3_CODE, LVL3_DESC, LVL4_CODE, LVL4_DESC
    7839, KING
    7839, KING, 7566, JONES
    7839, KING, 7566, JONES, 7788, SCOTT
    7839, KING, 7566, JONES, 7788, SCOTT, 7876, ADAMS
    7839, KING, 7566, JONES, 7902, FORD
    7839, KING, 7566, JONES, 7902, FORD, 7369, SMITH
    7839, KING, 7698, BLAKE
    7839, KING, 7698, BLAKE, 7499, ALLEN
    7839, KING, 7698, BLAKE, 7521, WARD
    7839, KING, 7698, BLAKE, 7654, MARTIN
    7839, KING, 7698, BLAKE, 7844, TURNER
    7839, KING, 7698, BLAKE, 7900, JAMES
    7839, KING, 7782, CLARK
    7839, KING, 7782, CLARK, 7934, MILLER
    2- build the salary cube using this data
    EMPNO    SAL
    7369      800
    7499     1600
    7521     1250
    7566     2975
    7654     1250
    7698     2850
    7782     2450
    7788     3000
    7839     5000
    7844     1500
    7876     1100
    7900      950
    7902     3000
    7934     1300
    The total salary at the top of the hierarchy, "KING", is 9,750, while the correct value should be 29,025.
    I notice that, for any node in the hierarchy that has children, the salary sum is the summation of its children only, without the node's own value.
    So what is the problem?

    EMPNO SAL
    7369 800
    7499 1600
    7521 1250
    7566 2975
    7654 1250
    7698 2850
    7782 2450
    7788 3000
    7839 5000
    7844 1500
    7876 1100
    7900 950
    7902 3000
    7934 1300
    I can see the above data, and it looks like you are loading some values at a higher level, i.e. for emp no 7566. In a DWH you load data at the leaf level; the OLAP engine does the aggregation (solve) and stores data at the higher levels. What you are seeing is correct in the sense that any node's value equals the sum of its children's values.
    Thanks,
    Brijesh
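    The discrepancy can be made concrete with the posted SCOTT data: a rollup that includes each node's own value gives the expected 29,025, while a children-only rollup reproduces the 9,750 the poster saw. A Python sketch, with the tree rebuilt from the posted EMPNO/MGR relationships:

```python
# Salaries from the posted cube data, and manager -> employee edges
sal = {7369: 800, 7499: 1600, 7521: 1250, 7566: 2975, 7654: 1250,
       7698: 2850, 7782: 2450, 7788: 3000, 7839: 5000, 7844: 1500,
       7876: 1100, 7900: 950, 7902: 3000, 7934: 1300}
children = {7839: [7566, 7698, 7782],
            7566: [7788, 7902], 7788: [7876], 7902: [7369],
            7698: [7499, 7521, 7654, 7844, 7900],
            7782: [7934]}

def subtree_total(emp):
    """Correct rollup: the node's own salary plus its children's totals."""
    return sal[emp] + sum(subtree_total(c) for c in children.get(emp, []))

def observed(emp):
    """What the cube showed: a parent's value is the sum of its children only."""
    kids = children.get(emp, [])
    return sal[emp] if not kids else sum(observed(c) for c in kids)

print(subtree_total(7839))  # 29025 -- the expected total for KING
print(observed(7839))       # 9750  -- the figure the poster saw
```

    In other words, the non-leaf salaries (KING, JONES, BLAKE, CLARK, SCOTT, FORD) are dropped by the children-only rollup; a common remedy in ragged hierarchies is to give each manager a leaf-level self-child so their own fact value participates in the aggregation.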

  • Streaming data and writing data to measurement file problem

     Hi everyone,
    I found something wrong with my code, but I don't know exactly what it is. First, I have a program for acquiring 15 analog signals (NI PXI-6259). After acquiring the data, I use a network stream to stream it from my NI PXI-8186 to the host PC; these steps are in the target VI. After streaming the data to the host PC,
    I try to write it to a spreadsheet using the Express "Write To Measurement File" VI.
    Problems: 1. After I run the host VI and target VI, the number available to read is always 0. After some time, the data displayed on the chart indicator on the host VI stops, the number available to read starts counting, and the time indicated on the X axis of the chart doesn't grow.
                         2. "Write To Measurement File" generates a file that records the data counting from 0 to 99, starting over and over again.
    Please help; I really have no idea what causes each problem.
    Attachments:
    target - single rate.vi ‏83 KB
    Host UI.vi ‏36 KB

    Dear Crossrulz
    Thank you for your prompt reply. I have already removed the input to "Samples Per Channel" on the DAQmx Timing VI, but I still haven't rearranged the
    channel inputs; I will do it, but I want to keep it this way for now. Now the data acquisition looks okay, but I have found a new problem.
            After running the program for about 5 to 6 minutes, it stops acquiring data, or
    sometimes the target (NI PXI-8186) reboots itself, displaying the messages "Reboot due to system error" and "System state: Safe Mode (System error)",
    and on the host PC a window pops up displaying
    "Waiting for the target (NI-PXI8186-2F0a597C) to respond" / "Stop Waiting and Disconnect".
    I didn't run the host VI, just the target VI. Is this problem caused by the program or the hardware? Please help.
    Attachments:
    Host UI.vi ‏141 KB
    target - single rate.vi ‏83 KB

  • OLAPTRAIN schema installation problem

    Hi,
    I am trying to create the OLAPTRAIN schema, but it hangs when importing the dump file. I am not sure how long the import should take. I am running the script from Toad. After 20 minutes nothing happens; it looks like Toad has hung, and I am not sure whether it is still running in the background. I have already installed AWM and created the connection successfully.
    Can you please guide me?
    Thanks,

