Aggregation challenge

Hi experts,
I have a table with the following columns and sample values:
year, month, registration_id, user, profile, task_group, task, department, number of registrations (= fact)
2009, jan, 22002, andy, reception, tax, tax registered, ABC, 15
2009, jan, 22002, andy, reception, tax, tax other tasks, ABC, 15
2009, jan, 22002, andy, reception, holiday tasks, holiday registered, XYZ, 15
2009, mar, 54543, tom, mail, social security, new number registered, XYZ, 32
2009, mar, 54543, tom, mail, tax, tax registered, ABC, 32
I was wondering how I could achieve the following totals:
All totals within the same registration_id should be the same as 'number of registrations':
for registration_id 22002 all totals should be 15 (e.g. total number of registrations for task group = 15)
for registration_id 54543 all totals should be 32
totals across the registration_ids should be 15 + 32 = 47
Any idea how to achieve this?
I do have the option to change the physical source tables if necessary; if you have an idea of what the source should look like to achieve the above, please share your thoughts with me.
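One way to get the totals described above in plain SQL, sketched here with assumed table and column names (a table called registrations with a number_of_registrations column), since the fact repeats on every row of a registration_id:
-- Hypothetical names for illustration only.
-- Collapse each registration_id to a single value first, then sum,
-- so the repeated fact is not double-counted.
SELECT SUM(reg_total) AS total_registrations        -- 15 + 32 = 47
FROM  (SELECT registration_id,
              MAX(number_of_registrations) AS reg_total
       FROM   registrations
       GROUP  BY registration_id)
For totals by another dimension (task_group, department, ...), include that column in both the inner GROUP BY and an outer GROUP BY; each registration_id then still contributes its single value per group instead of one value per row, which keeps every total within a registration_id equal to its 'number of registrations'.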
Thanks for your help
Regards
Andy

Hi,
How many dimensions do you have in this example?
What is the aggregation function for number of registrations?
At what level of your dimensions is the fact table?
What is the meaning of your fact (number/count of registrations for one user in one department in one month...)?
Maybe you can use a different aggregation function depending on the dimension, e.g.:
sum() for time
sum() for registration_ids
and count() or distinct count() for the others
Maybe if you answer these questions the solution will come...
Regards
Nicolae
Edited by: Nicolae Ancuta on 22.02.2010 15:26

Similar Messages

  • Getting top 9, aggregating the rest - challenge with SUM

    Dear,
    I'd like to get the top 9 customers and aggregate the remaining ones into a single number.
    My query is:
    select k.kundenid, k.KDNR, k.FIRMA_1, sum(a.auftrag_total)
    from auftrag a, kunden k
    where a.kunde_id=k.kundenid and
    TO_CHAR(a.DATUM,'YYYY')=TO_CHAR(sysdate,'YYYY')
    group by k.kundenid, k.KDNR, k.FIRMA_1
    order by 4 desc
    I have tried DENSE_RANK(), but it gets complicated to combine it with SUM().
    Can someone help?
    Kind regards..........Lorenz

    I seem to have hit a bug in XE on Windows XP. The following query should work, but the 10th record only includes the 10th-ranked total, not the total of ranks 10 through 16.
    If I change the WITH clause to a CREATE TABLE, the query works. Also, if I just use the second half of the UNION ALL, I get the correct results. The combination of WITH and UNION/UNION ALL seems to fail.
    create table auftrag(kunde_id number, kdnr varchar2(20), datum date, auftrag_total number);
    insert into auftrag (select 1  kunde_id, 'cust1'  kdnr, sysdate datum, 1200 auftrag_total from dual);
    insert into auftrag (select 1  kunde_id, 'cust1'  kdnr, sysdate datum, 600  auftrag_total from dual);
    insert into auftrag (select 2  kunde_id, 'cust2'  kdnr, sysdate datum, 1600 auftrag_total from dual);
    insert into auftrag (select 2  kunde_id, 'cust2'  kdnr, sysdate datum, 700  auftrag_total from dual);
    insert into auftrag (select 3  kunde_id, 'cust3'  kdnr, sysdate datum, 500  auftrag_total from dual);
    insert into auftrag (select 3  kunde_id, 'cust3'  kdnr, sysdate datum, 1300 auftrag_total from dual);
    insert into auftrag (select 4  kunde_id, 'cust4'  kdnr, sysdate datum, 200  auftrag_total from dual);
    insert into auftrag (select 5  kunde_id, 'cust5'  kdnr, sysdate datum, 1500 auftrag_total from dual);
    insert into auftrag (select 6  kunde_id, 'cust6'  kdnr, sysdate datum, 800  auftrag_total from dual);
    insert into auftrag (select 7  kunde_id, 'cust7'  kdnr, sysdate datum, 1400 auftrag_total from dual);
    insert into auftrag (select 8  kunde_id, 'cust8'  kdnr, sysdate datum, 300  auftrag_total from dual);
    insert into auftrag (select 9  kunde_id, 'cust9'  kdnr, sysdate datum, 2000 auftrag_total from dual);
    insert into auftrag (select 10 kunde_id, 'cust10' kdnr, sysdate datum, 1900 auftrag_total from dual);
    insert into auftrag (select 11 kunde_id, 'cust11' kdnr, sysdate datum, 400  auftrag_total from dual);
    insert into auftrag (select 12 kunde_id, 'cust12' kdnr, sysdate datum, 1800 auftrag_total from dual);
    insert into auftrag (select 13 kunde_id, 'cust13' kdnr, sysdate datum, 900  auftrag_total from dual);
    insert into auftrag (select 14 kunde_id, 'cust14' kdnr, sysdate datum, 1000 auftrag_total from dual);
    insert into auftrag (select 15 kunde_id, 'cust15' kdnr, sysdate datum, 1700 auftrag_total from dual);
    insert into auftrag (select 16 kunde_id, 'cust16' kdnr, sysdate datum, 100  auftrag_total from dual);
    create table kunden(kundenid number, firma_1 varchar2(20));  
    insert into kunden (select 1  kundenid, 'desc1'  firma_1 from dual);
    insert into kunden (select 2  kundenid, 'desc2'  firma_1 from dual);
    insert into kunden (select 3  kundenid, 'desc3'  firma_1 from dual);
    insert into kunden (select 4  kundenid, 'desc4'  firma_1 from dual);
    insert into kunden (select 5  kundenid, 'desc5'  firma_1 from dual);
    insert into kunden (select 6  kundenid, 'desc6'  firma_1 from dual);
    insert into kunden (select 7  kundenid, 'desc7'  firma_1 from dual);
    insert into kunden (select 8  kundenid, 'desc8'  firma_1 from dual);
    insert into kunden (select 9  kundenid, 'desc9'  firma_1 from dual);
    insert into kunden (select 10 kundenid, 'desc10' firma_1 from dual);
    insert into kunden (select 11 kundenid, 'desc11' firma_1 from dual);
    insert into kunden (select 12 kundenid, 'desc12' firma_1 from dual);
    insert into kunden (select 13 kundenid, 'desc13' firma_1 from dual);
    insert into kunden (select 14 kundenid, 'desc14' firma_1 from dual);
    insert into kunden (select 15 kundenid, 'desc15' firma_1 from dual);
    insert into kunden (select 16 kundenid, 'desc16' firma_1 from dual);
    commit;
    with t_rank as (
       select to_char(kunde_id) kunde_id, kdnr, firma_1, sum(auftrag_total) sum_at,
          row_number() over (order by sum(auftrag_total) desc) rn
       from auftrag, kunden
       where kunde_id = kundenid
       and trunc(datum,'y') = trunc(sysdate,'y')
       group by kunde_id, kdnr, firma_1)     
    select *
    from t_rank
    where rn <= 9
    union all
    select 'Rest', 'Rest', 'Rest', sum(sum_at), 10
    from t_rank
    where rn > 9
    order by 5
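    For what it's worth, a workaround sketch that avoids the WITH + UNION ALL combination entirely: keep the same t_rank definition as above and fold everything below rank 9 into a 'Rest' row in a single outer aggregation (same tables and columns as the test case):
    with t_rank as (
       select to_char(kunde_id) kunde_id, kdnr, firma_1, sum(auftrag_total) sum_at,
          row_number() over (order by sum(auftrag_total) desc) rn
       from auftrag, kunden
       where kunde_id = kundenid
       and trunc(datum,'y') = trunc(sysdate,'y')
       group by kunde_id, kdnr, firma_1)
    select case when rn <= 9 then kunde_id else 'Rest' end kunde_id,
           case when rn <= 9 then kdnr     else 'Rest' end kdnr,
           case when rn <= 9 then firma_1  else 'Rest' end firma_1,
           sum(sum_at) sum_at,
           least(min(rn), 10) rn
    from t_rank
    group by case when rn <= 9 then kunde_id else 'Rest' end,
             case when rn <= 9 then kdnr     else 'Rest' end,
             case when rn <= 9 then firma_1  else 'Rest' end
    order by 5
    Rows ranked 1 to 9 each form their own group and everything else collapses into the 'Rest' row with rn = 10, which should match the output of the original UNION ALL version.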

  • Is it possible to load data at aggregated level directly

    Hi All,
    My question may sound vague, but I would still like to clarify this.
    Is it possible to load data at some level higher than the leaf level? The reason for asking this question is that we are facing severe performance issues while loading the cube.
    We are trying to reduce the number of members, which in turn can reduce the loading time. In this attempt to reduce the number of members, the client said that there are 3 million leaf members (out of 4.3 million total members) which they do not care to see on reports, but they do want the total value to be correct. That is, the dimension would be loaded with only 1.3 million (4.3 - 3) members, yet the totals would still be correct (including the 3 million leaf members).
    Is it possible to have two loads, one at the leaf level only and the other at the parent and higher levels?
    DB - 10.2.0.4
    Also, I want to know when we use an allocmap and how it works.
    Regards,
    Brijesh
    Edited by: BGaur on Feb 13, 2009 3:33 PM

    Hi Carey,
    Thanks for your response.
    I worked on your suggestion and was able to load data at a higher level for a value-based hierarchy. But I ran into another problem while doing this.
    I am describing this as level-based, but while loading we are converting it to a value-based hierarchy.
    We have the following levels in the customer dimension, which has two hierarchies:
    hier1
    lvl1
    lvl2
    lvl3
    lvl4
    lvl5
    prnt
    leaf
    hier2
    level1
    level2
    level3
    level4
    prnt
    leaf
    So the prnt and leaf levels are common to both hierarchies, but we were facing multipath issues in the second hierarchy and worked around them by concatenating the level name.
    In an attempt to decrease the number of members in this dimension (currently 4.3 million), the business suggested that there is a kind of customer for which they do not want to see data at the leaf and prnt levels; instead, level4 or lvl5 should carry the total for those members. This way we would not need to load those members, and they make up 2.4 million out of the 4.3 million.
    To implement the above I did the following:
    1. Created six hierarchies.
    one to have members from level4 till level1
    second to have members from lvl5 till lvl1
    third to have all members of hier1
    fourth to have all members of hier2
    fifth will have leaf and prnt of hier1
    sixth will have leaf and prnt of hier2
    In the fifth and sixth hierarchies, leaf is common but prnt differs due to the concatenation.
    In the cube I am selecting the first, second, fifth and sixth hierarchies for aggregation; the third and fourth hierarchies will be used for viewing the data.
    The fact will have data corresponding to the leaf level, level4 and lvl5.
    The challenge I am facing is that if there is a fact value at the leaf level loaded through the relational fact, but no value loaded for lvl5 or level4, I do not see the correct total: the leaf value aggregates up to the prnt level, there is no value at level4 or lvl5, and the gap propagates up to lvl1 and level1.
    I have tried to be as clear as possible, but if there is any confusion please update the thread. I understand that the approach is vague, but I am not seeing any other way.
    Thanks
    Brijesh

  • BT Infinity 2, taking up the challenge.

    I have decided to take up the challenge of trying to fight the software and hardware used in the cabinet DLM, which persists in lowering my profile and governing the throughput.
    I own everything connected on my side of the OR master socket, so the way I see it there is nothing to stop me from building circuits at my end that operate the same way as the DLM at the cab [they say that knowledge can be dangerous].
    And it will be fun building the hardware and programming the software.
    It might not work, but it will be fun trying.

    tellboy,
    First I have to do a lot of research into the nature and full circuit components of a DLM system; this will take a long time.
    But as you can see from the text below, it will definitely be a fun trip.
    DYNAMIC LINE MANAGEMENT - A management device for use in an access network including a plurality of data connections between end user devices and an aggregation transceiver device where the connections are aggregated for onward connection through the access network, the access network storing in association with each data connection a Dynamic Line Management, DLM, profile which specifies a set of values for a plurality of parameters associated with the respective data connection, together with a stability level specifying a desired level of stability for the data connection. The device includes means for receiving monitoring data specifying the stability of each respective data connection over a predetermined period of time; means for selecting a DLM profile to be applied to the connection in dependence on both the monitoring data and the stored stability level associated with the data connection; and means for requesting an OSS system of the access network to apply the selected profile to the data connection. The DLM profile selection means disregards any resynchronizations or errors estimated to have occurred as a result of an area wide event such as a thunderstorm. It performs this estimation by detecting a large number of retrains and or errors occurring within a predetermined short test period.

  • BPEL Challenges

    Hi,
    I'm considering Oracle's BPEL for a strategically important SOA initiative. I would like to know if BPEL and/or Oracle's engine can tackle the following challenges:
    1. A business process which receives a large SOAP payload and potentially returns something larger, ~300 MB. Are there any alternatives for making the data move across faster? Chunking?
    2. Receiving an FTP file stream initiated by an async callback? What thread would the stream be on (i.e. would it block the initiating web-service call)?
    3. A client invokes a web service, which in turn invokes a BPEL script. The BPEL process makes calls out to multiple other web services. How are the returns of the multiple calls aggregated into one SOAP message back to the client?
    Thanks

    Jamiel,
    Regarding 1),
    the use case was a BPEL process invoking a catalog service asking for metadata about satellite images. The response can vary between 500 KB and 20 MB.
    The large size of the response messages introduces a few challenges:
    a) first, it was stretching the limits of our BPEL console, so we had to do a bunch of things to load large variables into the console asynchronously.
    b) second, if the process is asynchronous, the BPEL server needs to save that variable to its state repository (Oracle DB) as a BLOB.
    c) third, we usually zip the content of the state before we save it in the state repository. Zipping 20 MB can take 10-15 s and is very CPU intensive.
    As a result, sizing the infrastructure (DB server and BPEL PM servers) ended up being a key step in the success of the project. The lesson learned here is that this is something you want to think about very early in the process, because it can have an important impact on your design. For example, in the same project, in some use cases the service actually pushes the content to a shared FTP server and then only passes the URL of that resource to the BPEL PM server. This is a good solution if the BPEL process does not need to look into the content, because it can in turn just pass the URL to the application that initiated the process.
    Regarding 2
    Let me try to understand the use case:
    the client initiates the BPEL flow;
    the BPEL flow executes an invoke activity in which it opens an HTTP connection to the service. Do you mean that at that point the service leverages the client socket to FTP the result back to the BPEL flow?
    BPEL PM uses WSIF as a binding framework, so once we better understand how you would like to see this work, we could potentially collaborate on writing a special binding to achieve the use case.
    Also, streaming and binary attachments are things that we are in the process of adding for one of our existing customers, so depending on your timeframe, if you end up doing a small proof of concept, we might be able to give you early access to those features and leverage your use case to hammer out the design.
    Regarding 3,
    I am afraid it is not 100% automatic:
    the <flowN> activity gives you the ability to call N instances of the same service in parallel (where N is defined at design time or runtime);
    then you have to use an assign activity with a merge XPath expression to get each response element and combine them into an XML list.
    I hope this addresses your questions. This sounds like an interesting use case. If you end up doing a proof of concept, let us know and we can exchange ideas on how to plan for it.
    Edwin

  • IR: aggregation result in report footer

    Hello!
    This is a quote from "Beginning Oracle Application Express 4.2", page 174 (about aggregation in an IR):
    "The results are displayed at the end of the report."
    Is there a simple method to print the aggregation results in the report footer on each page?

    I asked - and I answered.
    To do this I created an on-demand process that calculates the aggregation results (using APEX_IR_PKG) and call it with an Ajax request in the "After Refresh" event of the IR.
    There is a detailed walkthrough with a sample at http://devsonia.ru/2013/11/14/oracle-apex-aggregation-in-interactive-report-on-each-page-en/.

  • How can I set up link aggregation right?

    I have a Enterprise T5220 server, running Solaris 10 that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3) and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently to where I can now barely get over 1 Gb/s. As far as I know, no patches were applied to the server and no changes were made to the switch that it's connected to (Nortel Passport 8600 Series) and the total amount of backup data sent to the server has stayed fairly constant. I have tried setting up the aggregation multiple times and in multiple ways to no avail. (LACP enabled/disabled, different policies, etc.) I've also tried using different ports on the server and switch to rule out any faulty port problems. Our networking guys assure me that the aggregation is set up correctly on the switch side but I can get more details if needed.
    In order to attempt to better troubleshoot the problem, I run one of several network speed tools (nttcp, nepim, & iperf) as the "server" on the T5220, and I set up a spare X2100 as a "client". Both the server and client are connected to the same switch. The first set of tests with all three tools yields roughly 600 Mb/s. This seems a bit low to me, I seem to remember getting 700+ Mb/s prior to this "issue". When I run a second set of tests from two separate "client" X2100 servers, coming in on two different Gig ports on the T5220, each port also does ~600 Mb/s. I have also tried using crossover cables and I only get maybe a 50-75 Mb/s increase. After Googling Solaris network optimizations, I found that if I double tcp_max_buf to 2097152, and set tcp_xmit_hiwat & tcp_recv_hiwat to 524288, it bumps up the speed of a single Gig port to ~920 Mb/s. That's more like it!
    Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between
    the two ports.
    Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!
    Regards,
    sundy
    Output of several commands on the T5220:
    uname -a:
    SunOS oitbus1 5.10 Generic_137111-07 sun4v sparc SUNW,SPARC-Enterprise-T5220
    ifconfig -a (IP and broadcast hidden for security):
    lo0: flags=2001000849 mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    aggr1: flags=1000843 mtu 1500 index 2
    inet x.x.x.x netmask ffffff00 broadcast x.x.x.x
    ether 0:14:4f:ec:bc:1e
    dladm show-dev:
    e1000g0 link: unknown speed: 0 Mbps duplex: half
    e1000g1 link: unknown speed: 0 Mbps duplex: half
    e1000g2 link: up speed: 1000 Mbps duplex: full
    e1000g3 link: up speed: 1000 Mbps duplex: full
    dladm show-link:
    e1000g0 type: non-vlan mtu: 1500 device: e1000g0
    e1000g1 type: non-vlan mtu: 1500 device: e1000g1
    e1000g2 type: non-vlan mtu: 1500 device: e1000g2
    e1000g3 type: non-vlan mtu: 1500 device: e1000g3
    aggr1 type: non-vlan mtu: 1500 aggregation: key 1
    dladm show-aggr:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) device address speed
    duplex link state
    e1000g2 0:14:4f:ec:bc:1e 1000 Mbps full up attached
    e1000g3 1000 Mbps full up attached
    dladm show-aggr -L:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) LACP mode: active LACP timer: short
    device activity timeout aggregatable sync coll dist defaulted expired
    e1000g2 active short yes yes yes yes no no
    e1000g3 active short yes yes yes yes no no
    dladm show-aggr -s:
    key: 1 ipackets rbytes opackets obytes %ipkts %opkts
    Total 464982722061215050501612388529872161440848661
    e1000g2 30677028844072327428231142100939796617960694 66.0 59.5
    e1000g3 15821243372049177622000967520476 64822888149 34.0 40.5

    sundy.liu wrote:
    Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between
    the two ports.
    Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!
    If you're only running a single stream, that's all you'll see. Teaming/aggregating doesn't make one stream go faster.
    If you ran two streams simultaneously, then you should see a difference between a single 1G interface and an aggregate of two 1G interfaces.
    Darren

  • Resetting Aggregated Cleared document

    Hi All,
    Does anyone have any idea how I can reset an aggregated posting document which has been cleared?
    I have tried doing this with the standard Tcode iueedpplotaalc4 by providing the aggregated payment document, but it only allows me to reverse the payment.
    Thanks
    Satyajeet

    Hi,
    you may use program REDEREG_ETHI_REV.
    Best regards
    Harald

  • Can anyone tell me how I can move pictures that I've cloned to a different folder, without them staying aggregated? They all come together in the other folder and I don't want that - thanks

    Can anyone tell me how I can move pictures that I've cloned to a different folder, without them staying aggregated? They all come together in the other folder and I don't want that… thanks

    There's more to it than that.
    Folders in Aperture do not hold Images.  They hold Projects and Albums.  You cannot put an Image in a Folder without putting it in a Project or an Album inside that Folder.
    The relationship between Projects and Images is special:  every Image must be in a Project, and can be in only one Project.
    Images can be in as many Albums as you want.  Putting an Image in an Album does not move it from the Project that holds it.
    You can make as many Versions from a Master as you want.
    What you want to do may appear simple to you, but it still must adhere to how Aperture works.  I still can't tell exactly what you are trying to do (specifically: Images don't live in Folders; moving an Image from a Folder is nonsensical).
    It can be very confusing (and frustrating) to get going with Aperture -- but it does work, and can be enormously helpful.  If you haven't, take a look at the video tutorials on Apple's Aperture support site.
    I feel as though I haven't helped you much -- but we need to be using the same names for interface items in order to get anything done -- and my sense is that you still haven't learned the names of the parts.

  • Setting aggregation content for logical level in 11g

    Hi Guys,
    I am working with horizontal and vertical federation in OBIEE 11g with multiple data sources; in my case they are Essbase and an RDBMS.
    1) Pulled the columns and dragged them into the concerned table.
    2) The related hierarchies have been defined.
    3) When I go to one of the LTSs and try to set the logical level aggregation, I am not able to see the corresponding level columns, nor do I get the Get Levels option to fetch them. Where am I going wrong?
    When I try to join a fact by pulling it onto the fact, I can see the levels in the Content tab, but when I try to define the levels and check it, it gives me the error "There are no levels matching the BI algorithm".
    Any answers would be appreciated.
    TIA,
    KK
    Edited by: Kranthi.K on Sep 5, 2011 2:52 AM

    It is autocreated, I didn't customize it. I am dropping the RDBMS table onto the Essbase cube dimension table and I am not getting the RDBMS content levels that should be defined in the LTS of the table; the RDBMS table has a level-based hierarchy, but still no success.
    Any more ideas?
    UPDATED POST
    Deepak, it was not helpful as I have gone through that document before. I am trying all scenarios to figure out where exactly it is going wrong.
    If I don't find the way, I will let you know what I am trying to do so you can help me out.
    UPDATED POST-2
    Any more pointers from the experts?
    Edited by: Kranthi.K on Sep 6, 2011 7:01 AM

  • Data in the Cube not getting aggregated

    Hi Friends
    We have Cube 1 and Cube 2.
    The data flow is represented below:
    R/3 DataSource>Cube1>Cube2
    In Cube 1, data is stored by calendar day.
    Cube 2 has calendar week.
    In the transformations of Cube 1 and Cube 2, Calday of Cube 1 is mapped to Calweek of Cube 2.
    In Cube 2, when I upload data from Cube 1, the key figure values are not getting summed.
    EXAMPLE: Data in Cube 1
    MatNo CustNo qty calday
    10001  xyz     100  01.01.2010
    10001  xyz      100  02.01.2010
    10001  xyz      100   03.01.2010
    10001  xyz     100  04.01.2010
    10001  xyz      100  05.01.2010
    10001  xyz      100   06.01.2010
    10001  xyz      100   07.01.2010
    Data in Cube 2:
    MatNo CustNo qty calweek
    10001  xyz     100  01.2010
    10001  xyz      100  01.2010
    10001  xyz      100   01.2010
    10001  xyz     100   01.2010
    10001  xyz      100   01.2010
    10001  xyz      100   01.2010
    10001  xyz      100   01.2010
    But Expected Output Should be:
    MatNo CustNo qty calweek
    10001  xyz     700  01.2010
    How can I achieve this?
    I checked in the transformations that all key figures are maintained with aggregation Summation.
    regards
    Preetam

    Just now I performed a consistency check for the cube:
    I am getting the following warnings:
    Time characteristic 0CALWEEK value 200915 does not fit with time char 0CALMONTH val 0
    Consistency of time dimension of InfoCube &1
    Description
    This test checks whether or not the time characteristics of the InfoCube used in the time dimension are consistent. The consistency of time characteristics is extremely important for non-cumulative Cubes and partitioned InfoCubes.
    Values that do not fit together in the time dimension of an InfoCube result in incorrect results for non-cumulative cubes and InfoCubes that are partitioned according to time characteristics.
    For InfoCubes that have been partitioned according to time characteristics, conditions for the partitioning characteristic are derived from restrictions for the time characteristic.
    Errors
    When an error arises the InfoCube is marked as a Cube with an non-consistent time dimension. This has the following consequences:
    The derivation of conditions for partitioning criteria is deactivated on account of the non-fitting time characteristics. This usually has a negative effect on performance.
    When the InfoCube contains non-cumulatives, the system generates a warning for each query indicating that the displayed data may be incorrect.
    Repair Options
    Caution
    No action is required if the InfoCube does not contain non-cumulatives or is not partitioned.
    If the Infocube is partitioned, an action is only required if the read performance has gotten worse.
    You cannot automatically repair the entries of the time dimension table. However, you are able to delete entries that are no longer in use from the time dimension table.
    The system displays whether the incorrect dimension entries are still being used in the fact table.
    If these entries are no longer being used, you can carry out an automatic repair. In this case, all time dimension entries not being used in the fact table are removed.
    After the repair, the system checks whether or not the dimension is correct. If the time dimension is correct again, the InfoCube is marked as an InfoCube with a correct time dimension once again.
    If the entries are still being used, use transaction Listcube to check which data packages are affected.  You may be able to delete the data packages and then use the repair to remove the time dimension entries no longer being used. You can then reload the deleted data packages. Otherwise the InfoCube has to be built again.

  • Difference between  aggregation and calculation tab in BEx Query Designer

    HI,
    I am using BEx Query Designer for my report. For the key figures in the column area I selected one numeric key figure, and in the Properties tab I found an Aggregation tab and a Calculation tab.
    I need to sum up the total values for that particular column. When I used the Calculation tab I was able to sum all the values for the column, so what is the use of the Aggregation tab?
    I am not able to use the Aggregation tab; it is shown with disabled (greyed-out) fields...
    Can anyone tell me what the exact difference between these tabs is, and when we need to use which tab?
    With Regards,
    Thanesh Kumar.

    Hi Thanesh Kumar,
    I moved this thread from forum Data Warehousing to Business Explorer since it is a query related question (as SDN moderator).
    I could explain to you the difference between these two tabs.
    For "calculation" tab, it changes the display of result and does not change the calculation logic.
    It means that, if this key figure is used further in formula, still the original number (without "calculation" tab setting)  is used for further formula calculation.
    For "aggregation" tab, it changes the real calculation logic.
    The system takes the setting as the aggregation rule for records.
    The most common aggregation rule is of course summation. If you set to e.g. Average here, the system does the
    Average instead of summation when aggregating records. And the Average value will be taken for calculation
    in further formulas or other calculations.
    For "aggregation" tab, you could only use it for CKF (calculated key figure) or formula and you could not use it for
    a basic key figure. That should be the reason why you see it greyed-out.
    Regards,
    Patricia

  • Member Formula: IF ... ELSE do outline aggregation

    Hi experts,
    How do I write a formula for a parent entity member like this:
    IF (@ISMBR("Account member"))
    do something
    ELSE
    do default outline aggregation from its descendants
    ENDIF
    Because I only want the "do something" part to execute for some account members. If there is no ELSE statement, the formula overrides the default outline aggregation. The problem is that I cannot find any function that manually performs the default aggregation.
    Please ask if my question is not clear.
    Many thanks!

    Huy Van
    I tried to replicate it in Sample Basic. I loaded sample data and below is the result:
         Cola     Actual
         East     East     East     East     New York     New York     New York     New York
         Sales     Margin     Profit     Measures     Sales     Margin     Profit     Measures
    Jan     1812     1213     837     837     678     407     262     262
    I've a script where I've fixed on East (a parent member of Market):
    FIX(East, Actual, "100-10")
    Jan(
    IF(@ISMBR(Sales))
    100;
    ENDIF)
    ENDFIX
    Below are the results after running the script:
         Cola     Actual
         East     East     East     East     New York     New York     New York     New York
         Sales     Margin     Profit     Measures     Sales     Margin     Profit     Measures
    Jan     100     -499     -875     -875     678     407     262     262
    I don't see anything else changing (only Sales of East changed).
    Now, since you are writing to a parent member, the aggregation from that parent's descendants will overwrite what your script just populated.
    Regards
    Celvin
    http://www.orahyplabs.com
    Please mark the responses as helpful/correct if applicable

  • Challenges while Upgrading from OBIEE 10g to 11g

    Hi Gurus,
    This is Kiran again. This time I am back with upgrade issues. We have a client which is currently on OBIEE 10g. All components are already configured in OBIEE 10g, with the RPD, the catalog, and all security levels implemented for them.
    Now the challenge is to upgrade everything we have configured in OBIEE 10g. I have seen a lot of material online and understood how to upgrade from OBIEE 10g to OBIEE 11g by simply running the ua.bat file. But I am unable to understand how that process upgrades the security settings and environments.
    Can anyone help me out by listing what I need to take care of before deciding to upgrade from OBIEE 10g to 11g?
    Also, please clarify whether we will face any challenges in the upgrade process. If yes, what kind of challenges will we face and what steps do we need to take to resolve them?
    What kind of architecture do we need to follow?
    How will the Scheduler get upgraded?
    Please help me out with your real experiences (not just links to websites). Your response will be highly appreciated and it will help a lot of OBIEE professionals who are in the process of upgrading.
    Thanks in Advance to all OBIEE GURUS.
    Kiran

    Hi Valli,
    There is really great information available at this link. But I would like to know whether anyone has faced any issues while upgrading from OBIEE 10g to 11g. Please do share a few issues with us...
    Edited by: 949144 on Nov 27, 2012 9:44 AM

  • Aggregating Slowly Changing Dimension

    Hi All:
    I have a problem with a whole lot of changes in the dimension values (SCD) and need to create a view or stored procedure.
    Two tables within the Oracle DB are joined:
    Tbl1: Store Summary, consisting of Store ID, SUM(Sales Qty)
    Tbl2 (view): Store View, which consists of Store ID, Name, Store_Latest_ID
    Join relationship: Store_summary.Store_ID = Store_View.Store_ID
    If I pull up the report, it gives me this info:
    Ex:
    Store ID: Name, Sales_Qty , Store_Latest_ID
    121, Kansas, $1200, 1101
    1101, Dallas, $1400, 1200
    1200, Irvine, $ 1800, Null
    141, Gering, $500, 1462
    1462, Scott, $1500, Null
    1346,Calif,$1500,0
    There is no effective date within the store view, but one can be added if needed.
    Constraints on the output:
    1) If the Store Latest ID = 0, that means the store ID hasn't been shifted (ex: Store ID = 1346).
    2) If the Store Latest ID = 'XXXX', then that replaces the old Store ID and subsequent records are added to the DB under the new Store ID (ex: 121 to 1101, 1101 to 1200, 141 to 1462).
    3) Output needed: everything rolled up to the new Store ID irrespective of the number of records; in the view or stored procedure, whenever there is a Store Latest ID it should be assigned to the Store ID (ex: the final Store Latest ID record for each chain of changing Store ID values), and if the Store Latest ID is 0 the record is unchanged.
    I need the output to look like:
    Store ID: Name, Sales_Qty , Store_Latest_ID
    1200,Irvine,$4400,Null
    1462,Scott,$2000,Null
    1346,Calif,$1500,Null or 0
    The Query I wrote for the view creation:
    Select ss.Store_ID, ss.Sales_Qty, 0 as Store_Latest_ID
    From Store_Summary ss, Store_Details sd
    Where ss.Store_ID=sd.Store_ID and sd.Store_Latest_ID is null
    union
    Select sd.Store_Latest_ID, ss.Sales_Qty, null
    From Store_Summary ss, Store_Details sd
    Where ss.Store_ID=sd.Store_Latest_ID and sd.Store_Latest_ID is not null
    Joining the created view to Store Summary ended up giving the aggregation values without rolling up; the Store IDs which have no latest ID end up with a value of 0 and the ss quantity aggregated, and if a store ID changes more than twice it does not aggregate the ss quantity to the latest ID, nor does it give the store name of the latest store ID.
    I need help to create a view or stored procedure.
    Please let me know if you have any questions. Thanks.
    Any suggestions would be gratefully received.
    Thanks
    Vamsi
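    For what it's worth, a minimal sketch of rolling each chain of Store_Latest_ID values up to its final store with an Oracle hierarchical query; it reuses the table and column names from the query above, and assumes Store_Details also carries the store Name, one Store_Summary row per store, and no cycles in the chains:
    -- Start from the stores that have no newer store (Store_Latest_ID NULL or 0),
    -- walk back through the older stores that point at them, and sum everything
    -- under the root (= latest) store of each chain.
    select connect_by_root sd.store_id          as latest_store_id,
           max(decode(level, 1, sd.name))       as store_name,    -- name of the latest store
           sum(ss.sales_qty)                    as sales_qty
    from   store_details sd
           left join store_summary ss on ss.store_id = sd.store_id
    start with nvl(sd.store_latest_id, 0) = 0                     -- chain ends here
    connect by prior sd.store_id = sd.store_latest_id             -- attach the older stores
    group  by connect_by_root sd.store_id
    For the sample rows this yields 1200/Irvine/4400, 1462/Scott/2000 and 1346/Calif/1500, and the statement can be wrapped in a view as requested.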

    Hi
    Please see the following example
    ID- Name -Dependants
    100 - Tom - 5
    101 - Rick -2
    102 - Sunil -2
    See the above contents...assume the ID represents employee ID and the dependants include parents, spouse and kids....
    After some time, the number of dependants may increase, but no one is sure exactly when it will increase; assume, for example, that a single employee gets married and the dependants increase.
    So the attributes of the employee have a chance of changing slowly over time.
    This kind of dimension is called a slowly changing dimension.
    Regards
    N Ganesh
