Aggregation in BW 3.x

Friends,
I have four characteristics, say A, B, C, D. Characteristic D is not needed in the report.
I have three key figures, say E, F, G.
Dividing E by F gives G.
This E/F calculation depends entirely on characteristic D, which I don't want to show in the report.
Example:
A   B   C   D   E  F
A1  B1  C1  D1  5  1
A1  B1  C1  D1  5  1
A1  B1  C1  D2  3  1
A1  B1  C1  D2  3  1
A1  B1  C1  D3  7  1
The report generated with characteristic D looks like this (the requirement is that D should not be present):
A   B   C   D   E  F
A1  B1  C1  D1  5  1
A1  B1  C1  D2  3  1
A1  B1  C1  D3  7  1
Hence the final result would be:
A1  B1  C1  4.6
But the actual result should be 15 (A1  B1  C1  15).
The 4.6 is calculated as 23/5, i.e. total E divided by total F across all rows.
It should instead be 10/2 + 6/2 + 7/1 = 15, i.e. E/F calculated per value of D and then summed.
I tried all the exception aggregations.
This requirement is on BW 3.x. It behaves very well on 7.x, where the goal was achieved, but on 3.x no luck. If D is included in the report, the aggregation behaves correctly.
Can anybody help?
Regards
Raju Saravanan

Hi,
If you are reporting from a DSO, you can use the field 0ROWNUM for this. Create a calculated key figure for E, say calcE, with the formula:
calcE = E / 0ROWNUM
If you are reporting from a cube, add a new field such as Zrowcount to the cube, map it to the constant 1 in the transformation, and then use the same formula: E / Zrowcount.
That should solve your problem.
Regards
Githen
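
For clarity, here is a minimal SQL sketch of the difference between the two calculations (the table name facts is hypothetical). The counter trick above emulates the second query: build the ratio per value of D first, then sum those ratios.

-- Ratio of totals: what the report does once D is removed -> 23/5 = 4.6
SELECT a, b, c, SUM(e) / SUM(f) AS g
FROM facts
GROUP BY a, b, c;

-- Sum of per-D ratios: the expected result -> 5 + 3 + 7 = 15
SELECT a, b, c, SUM(g_d) AS g
FROM (SELECT a, b, c, d, SUM(e) / SUM(f) AS g_d
      FROM facts
      GROUP BY a, b, c, d) per_d
GROUP BY a, b, c;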

Similar Messages

  • IR: aggregation result in report footer

    Hello!
    This is a quote from "Beginning Oracle Application Express 4.2", page 174 (about aggregation in IR):
    "The results are displayed at the end of the report."
    Is there a simple method to print the aggregation results in the report footer on each page?

    I asked, and I answered it myself.
    To do this I created an On-Demand process that calculates the aggregation results (using APEX_IR_PKG) and made an AJAX request to it in the "After Refresh" event of the IR.
    There is a detailed manual with a sample at http://devsonia.ru/2013/11/14/oracle-apex-aggregation-in-interactive-report-on-each-page-en/.

  • How can I set up link aggregation correctly?

    I have an Enterprise T5220 server running Solaris 10 that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3), and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently such that I can now barely get over 1 Gb/s. As far as I know, no patches were applied to the server and no changes were made to the switch it's connected to (Nortel Passport 8600 Series), and the total amount of backup data sent to the server has stayed fairly constant. I have tried setting up the aggregation multiple times and in multiple ways to no avail (LACP enabled/disabled, different policies, etc.). I've also tried using different ports on the server and switch to rule out any faulty port problems. Our networking guys assure me that the aggregation is set up correctly on the switch side, but I can get more details if needed.
    In order to attempt to better troubleshoot the problem, I run one of several network speed tools (nttcp, nepim, & iperf) as the "server" on the T5220, and I set up a spare X2100 as a "client". Both the server and client are connected to the same switch. The first set of tests with all three tools yields roughly 600 Mb/s. This seems a bit low to me, I seem to remember getting 700+ Mb/s prior to this "issue". When I run a second set of tests from two separate "client" X2100 servers, coming in on two different Gig ports on the T5220, each port also does ~600 Mb/s. I have also tried using crossover cables and I only get maybe a 50-75 Mb/s increase. After Googling Solaris network optimizations, I found that if I double tcp_max_buf to 2097152, and set tcp_xmit_hiwat & tcp_recv_hiwat to 524288, it bumps up the speed of a single Gig port to ~920 Mb/s. That's more like it!
    Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between
    the two ports.
    Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!
    Regards,
    sundy
    Output of several commands on the T5220:
    uname -a:
    SunOS oitbus1 5.10 Generic_137111-07 sun4v sparc SUNW,SPARC-Enterprise-T5220
    ifconfig -a (IP and broadcast hidden for security):
    lo0: flags=2001000849 mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    aggr1: flags=1000843 mtu 1500 index 2
    inet x.x.x.x netmask ffffff00 broadcast x.x.x.x
    ether 0:14:4f:ec:bc:1e
    dladm show-dev:
    e1000g0 link: unknown speed: 0 Mbps duplex: half
    e1000g1 link: unknown speed: 0 Mbps duplex: half
    e1000g2 link: up speed: 1000 Mbps duplex: full
    e1000g3 link: up speed: 1000 Mbps duplex: full
    dladm show-link:
    e1000g0 type: non-vlan mtu: 1500 device: e1000g0
    e1000g1 type: non-vlan mtu: 1500 device: e1000g1
    e1000g2 type: non-vlan mtu: 1500 device: e1000g2
    e1000g3 type: non-vlan mtu: 1500 device: e1000g3
    aggr1 type: non-vlan mtu: 1500 aggregation: key 1
    dladm show-aggr:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) device address speed
    duplex link state
    e1000g2 0:14:4f:ec:bc:1e 1000 Mbps full up attached
    e1000g3 1000 Mbps full up attached
    dladm show-aggr -L:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) LACP mode: active LACP timer: short
    device activity timeout aggregatable sync coll dist defaulted expired
    e1000g2 active short yes yes yes yes no no
    e1000g3 active short yes yes yes yes no no
    dladm show-aggr -s:
    key: 1 ipackets rbytes opackets obytes %ipkts %opkts
    Total 464982722061215050501612388529872161440848661
    e1000g2 30677028844072327428231142100939796617960694 66.0 59.5
    e1000g3 15821243372049177622000967520476 64822888149 34.0 40.5

    sundy.liu wrote:
    Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between
    the two ports.
    Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!

    If you're only running a single stream, that's all you'll see. Teaming/aggregating doesn't make one stream go faster.
    If you ran two streams simultaneously, then you should see a difference between a single 1G interface and an aggregate of two 1G interfaces.
    Darren

  • Resetting Aggregated Cleared document

    Hi All,
    Does anyone have any idea how to reset an aggregated posting document which has been cleared?
    I have tried doing this with the standard transaction iueedpplotaalc4 by providing the aggregated payment document, but it only allows me to reverse the payment.
    Thanks
    Satyajeet

    Hi,
    you may use program REDEREG_ETHI_REV.
    Best regards
    Harald

  • Can anyone tell me how I can move pictures that I've cloned to a different folder without them staying aggregated? They all come together to the other folder and I don't want that - thanks


    There's more to it than that.
    Folders in Aperture do not hold Images.  They hold Projects and Albums.  You cannot put an Image in a Folder without putting it in a Project or an Album inside that Folder.
    The relationship between Projects and Images is special:  every Image must be in a Project, and can be in only one Project.
    Images can be in as many Albums as you want.  Putting an Image in an Album does not move it out of the Project that holds it.
    You can make as many Versions from a Master as you want.
    What you want to do may appear simple to you, but it still must adhere to how Aperture works.  I still can't tell exactly what you are trying to do (specifically: Images don't live in Folders, so moving an Image out of a Folder is nonsensical).
    It can be very confusing (and frustrating) to get going with Aperture -- but it does work, and can be enormously helpful.  If you haven't, take a look at the video tutorials on Apple's Aperture support site.
    I feel as though I haven't helped you much -- but we need to be using the same names for interface items in order to get anything done -- and my sense is that you still haven't learned the names of the parts.

  • Setting aggregation content for logical level in 11g

    Hi Guys,
    When working with horizontal and vertical federation in OBIEE 11g with multiple data sources (in my case Essbase and an RDBMS), I did the following:
    1) Pulled the columns and dragged them into the concerned table.
    2) Defined the related hierarchies.
    3) When I go to one of the LTSs and try to set the logical level aggregation, I can see neither the corresponding level columns nor the "Get Levels" option. Where am I going wrong?
    When I try to join a fact by pulling it onto the fact table, I can see the levels in the Content tab, but when I try to define the levels and check them, it gives me the error "There are no levels matching the BI algorithm".
    Any answers would be appreciated.
    TIA,
    KK
    Edited by: Kranthi.K on Sep 5, 2011 2:52 AM

    It is autocreated, I didn't customize it. I am dropping the RDBMS table onto the Essbase cube dimension table, and I am not getting the RDBMS content levels that should be defined in the LTS of the table. The RDBMS table has a level-based hierarchy, but still no success.
    Any more ideas?
    UPDATED POST
    Deepak, it was not helpful, as I had gone through that document before. I am trying all scenarios to figure out where exactly it is going wrong.
    If I don't find the path, I will let you know what I am trying to do so you can help me out.
    UPDATED POST 2
    Any more pointers from the experts?
    Edited by: Kranthi.K on Sep 6, 2011 7:01 AM

  • Data in the Cube not getting aggregated

    Hi Friends
    We have Cube 1 and Cube 2.
    The data flow is:
    R/3 DataSource -> Cube1 -> Cube2
    In Cube1, data is stored by calendar day.
    Cube2 uses the calendar week.
    In the transformation between Cube1 and Cube2, Calday of Cube1 is mapped to Calweek of Cube2.
    When I load data from Cube1 into Cube2, the key figure values are not getting summed.
    EXAMPLE: Data in Cube 1
    MatNo  CustNo  Qty  Calday
    10001  xyz     100  01.01.2010
    10001  xyz     100  02.01.2010
    10001  xyz     100  03.01.2010
    10001  xyz     100  04.01.2010
    10001  xyz     100  05.01.2010
    10001  xyz     100  06.01.2010
    10001  xyz     100  07.01.2010
    Data in Cube 2:
    MatNo  CustNo  Qty  Calweek
    10001  xyz     100  01.2010
    10001  xyz     100  01.2010
    10001  xyz     100  01.2010
    10001  xyz     100  01.2010
    10001  xyz     100  01.2010
    10001  xyz     100  01.2010
    10001  xyz     100  01.2010
    But the expected output is:
    MatNo  CustNo  Qty  Calweek
    10001  xyz     700  01.2010
    How do I achieve this?
    I checked the transformations; all key figures are maintained with the aggregation "Summation".
    regards
    Preetam
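
    For reference, the expected week-level figure is simply the day rows grouped by week; in SQL terms (illustrative only, the table name cube1 is hypothetical):

    -- Roll the daily quantities up to the week: 7 x 100 -> 700
    SELECT matno, custno, SUM(qty) AS qty, calweek
    FROM cube1
    GROUP BY matno, custno, calweek;

    A query on Cube2 would normally aggregate the seven rows the same way, provided all other characteristics in the cube (not just Calweek) are identical across them.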

    Just now I performed a consistency check for the cube, and I am getting the following warning:
    Time characteristic 0CALWEEK value 200915 does not fit with time char 0CALMONTH val 0
    Consistency of time dimension of InfoCube &1
    Description
    This test checks whether or not the time characteristics of the InfoCube used in the time dimension are consistent. The consistency of time characteristics is extremely important for non-cumulative Cubes and partitioned InfoCubes.
    Values that do not fit together in the time dimension of an InfoCube result in incorrect results for non-cumulative cubes and InfoCubes that are partitioned according to time characteristics.
    For InfoCubes that have been partitioned according to time characteristics, conditions for the partitioning characteristic are derived from restrictions for the time characteristic.
    Errors
    When an error arises, the InfoCube is marked as a cube with an inconsistent time dimension. This has the following consequences:
    The derivation of conditions for partitioning criteria is deactivated on account of the non-fitting time characteristics. This usually has a negative effect on performance.
    When the InfoCube contains non-cumulatives, the system generates a warning for each query indicating that the displayed data may be incorrect.
    Repair Options
    Caution
    No action is required if the InfoCube does not contain non-cumulatives or is not partitioned.
    If the InfoCube is partitioned, an action is only required if the read performance has deteriorated.
    You cannot automatically repair the entries of the time dimension table. However, you are able to delete entries that are no longer in use from the time dimension table.
    The system displays whether the incorrect dimension entries are still being used in the fact table.
    If these entries are no longer being used, you can carry out an automatic repair. In this case, all time dimension entries not being used in the fact table are removed.
    After the repair, the system checks whether or not the dimension is correct. If the time dimension is correct again, the InfoCube is marked as an InfoCube with a correct time dimension once again.
    If the entries are still being used, use transaction Listcube to check which data packages are affected.  You may be able to delete the data packages and then use the repair to remove the time dimension entries no longer being used. You can then reload the deleted data packages. Otherwise the InfoCube has to be built again.

  • Difference between  aggregation and calculation tab in BEx Query Designer

    Hi,
    I am using BEx Query Designer for my report. For the key figures in the column area I selected one numeric key figure, and in its properties I found an Aggregation tab and a Calculation tab.
    I need to sum up the total values for that particular column. Using the Calculation tab I was able to sum all the values of the column, so what is the use of the Aggregation tab?
    I am not able to use the Aggregation tab; its fields are shown as hidden.
    Can anyone tell me the exact difference between these two tabs, and when to use which?
    With Regards,
    Thanesh Kumar.

    Hi Thanesh Kumar,
    I moved this thread from the Data Warehousing forum to Business Explorer since it is a query-related question (as SDN moderator).
    I can explain the difference between these two tabs.
    The "Calculation" tab changes only the display of the result; it does not change the calculation logic. That means that if this key figure is used further in a formula, the original number (without the "Calculation" tab setting) is still what is used in the formula calculation.
    The "Aggregation" tab changes the real calculation logic: the system takes the setting as the aggregation rule for records. The most common aggregation rule is, of course, summation. If you set it to e.g. Average, the system averages instead of sums when aggregating records, and the averaged value is what is taken into further formulas and other calculations.
    The "Aggregation" tab can only be used for a CKF (calculated key figure) or a formula; you cannot use it for a basic key figure. That should be the reason why you see it greyed out.
    Regards,
    Patricia
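
    As a loose SQL analogy (illustrative only; the sales table and its columns are hypothetical): the Aggregation tab chooses the aggregate itself, and that aggregate is what any further formula consumes, whereas the Calculation tab only changes what is displayed.

    -- Aggregation rule "Summation" vs "Average": a downstream formula
    -- sees genuinely different numbers depending on the rule chosen.
    SELECT product,
           SUM(price)        AS price_sum,   -- rule: summation
           AVG(price)        AS price_avg,   -- rule: average
           AVG(price) * 1.19 AS gross_avg    -- further formula uses the aggregate
    FROM sales
    GROUP BY product;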

  • Member Formula: IF ... ELSE do outline aggregation

    Hi experts,
    How do I write a formula for a parent entity member like this:
    IF (@ISMBR("Account member"))
    do something
    ELSE
    do the default outline aggregation from its descendants
    ENDIF
    I want the "do something" to execute only for certain account members. Without an ELSE branch, the formula overrides the default outline aggregation, and the problem is that I cannot find any function that manually performs the default aggregation.
    Please ask if my question is not clear.
    Many thanks!

    Huy Van
    I tried to replicate it in Sample Basic. I loaded the sample data, and below is the result:

    Cola  Actual
          East                               New York
          Sales  Margin  Profit  Measures   Sales  Margin  Profit  Measures
    Jan   1812   1213    837     837        678    407     262     262

    I have a script where I've fixed on East (a parent member of Market):

    FIX(East, Actual, "100-10")
    Jan(
    IF(@ISMBR(Sales))
    100;
    ENDIF)
    ENDFIX

    Below are the results after running the script:

    Cola  Actual
          East                               New York
          Sales  Margin  Profit  Measures   Sales  Margin  Profit  Measures
    Jan   100    -499    -875    -875       678    407     262     262

    I don't see anything else change (only Sales of East is changing).
    Now, since you are writing to a parent member, the aggregation from the parent's descendants will overwrite what your script just populated.
    Regards
    Celvin
    http://www.orahyplabs.com
    Please mark the responses as helpful/correct if applicable

  • Aggregating Slowly Changing Dimension

    Hi All:
    I have a problem with a whole lot of changes in the dimension values (a slowly changing dimension) and need to create a view or stored procedure.
    Two tables within the Oracle db are joined:
    Tbl1: Store_Summary, consisting of Store ID and SUM(Sales Qty)
    Tbl2 (view): Store_View, which consists of Store ID, Name and Store_Latest_ID
    Join relationship: Store_Summary.Store_ID = Store_View.Store_ID
    If I pull up the report, it gives me this info:
    Ex:
    Store ID: Name, Sales_Qty , Store_Latest_ID
    121, Kansas, $1200, 1101
    1101, Dallas, $1400, 1200
    1200, Irvine, $ 1800, Null
    141, Gering, $500, 1462
    1462, Scott, $1500, Null
    1346,Calif,$1500,0
    There is no effective date within the store view, but can be added if requested.
    Constraints on the output:
    1) If Store_Latest_ID = 0, the store ID hasn't been shifted (e.g. Store ID = 1346).
    2) If Store_Latest_ID = 'XXXX', that value replaces the old Store ID, and subsequent records are added to the db under the new Store ID (e.g. 121 to 1101, 1101 to 1200, 141 to 1462).
    3) Output needed: everything rolled up to the new Store ID irrespective of the number of changes, i.e. whenever there is a Store_Latest_ID, follow the chain to the final (most recent) Store ID; if Store_Latest_ID is 0, the record stays unchanged.
    I need the output to look like
    Store ID: Name, Sales_Qty , Store_Latest_ID
    1200,Irvine,$4400,Null
    1462,Scott,$2000,Null
    1346,Calif,$1500,Null or 0
    The Query I wrote for the view creation:
    Select ss.Store_ID, ss.Sales_Qty, 0 as Store_Latest_ID
    From Store_Summary ss, Store_Details sd
    Where ss.Store_ID=sd.Store_ID and sd.Store_Latest_ID is null
    union
    Select sd.Store_Latest_ID, ss.Sales_Qty, null
    From Store_Summary ss, Store_Details sd
    Where ss.Store_ID=sd.Store_Latest_ID and sd.Store_Latest_ID is not null
    Placing a join from the created view to Store_Summary gave me the aggregated values without rolling up. The store IDs which have no latest ID end up with the value 0 and the quantity aggregated, and when a store ID has changed more than twice the quantity is not aggregated up to the latest ID, nor do I get the store name of the latest store ID.
    I need help to create a view or stored procedure
    Please let me know if you have any questions, Thanks.
    Any suggestions would be greatly appreciated.
    Thanks
    Vamsi

    Hi
    Please see the following example:
    ID - Name - Dependants
    100 - Tom - 5
    101 - Rick - 2
    102 - Sunil - 2
    In the above contents, assume the ID represents an employee ID and the dependants include parents, spouse and kids.
    Over time the number of dependants may increase, but no one is sure exactly when; for example, a single employee gets married and the number of dependants goes up.
    So the attributes of the employee have a slow chance of changing over time.
    These kinds of dimensions are called slowly changing dimensions.
    Regards
    N Ganesh
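
    Coming back to the rollup itself, here is a minimal Oracle SQL sketch of one way to do it (assuming the table and view names from the question, and that every Store_Latest_ID chain ends in NULL or 0):

    -- Resolve each store to its final successor by walking the
    -- Store_Latest_ID chain backwards from the terminal stores.
    WITH resolved AS (
      SELECT Store_ID,
             CONNECT_BY_ROOT Store_ID AS final_store_id
      FROM Store_View
      START WITH Store_Latest_ID IS NULL OR Store_Latest_ID = 0
      CONNECT BY Store_Latest_ID = PRIOR Store_ID
    )
    SELECT r.final_store_id  AS store_id,
           sv.Name           AS name,
           SUM(ss.Sales_Qty) AS sales_qty
    FROM Store_Summary ss
    JOIN resolved r ON r.Store_ID = ss.Store_ID
    JOIN Store_View sv ON sv.Store_ID = r.final_store_id
    GROUP BY r.final_store_id, sv.Name;

    For the sample data this yields 1200 / Irvine / $4400, 1462 / Scott / $2000 and 1346 / Calif / $1500, matching the requested output.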

  • InfoSet in SAP BI 7.10 and Key figure aggregation

    Hi SAP gurus,
    I am new to the SAP BI area, and I have my first problem.
    I want to create a report for the profit of goods.
    The cost of goods sold (COGS) is constant for each material for one month.
    The formula is: profit of goods = sales turnover - COGS of month * sales amount.
    I have defined a time-dependent InfoObject in BW with the attribute COGS.
    I have two InfoSources: an InfoCube with transactional sales data from R/3, and the material COGS master data, loaded from a CSV file into the InfoObject each month.
    The InfoProvider for the report is an InfoSet (transactional cube plus the COGS InfoObject).
    My problems are:
    1) When I create an InfoSet, SAP BW automatically generates new technical names for all characteristics and key figures; the first part of each technical name is an alias for the InfoCube or InfoObject in the InfoSet.
    2) The new InfoSet technical names erased my aggregation reference characteristic (= calmonth).
    3) In the report, the key figure COGS is aggregated across customer sales and customers; that means the value of COGS is no longer constant once it is aggregated according to the customer sales order.
    Thanks a lot for your support
    Solomon Kassaye
    Munich, Germany

    Solomon, find below some code for the start routine; change the fields and edit the code to suit your exact structure and requirements, but the logic is all there.
    4) Create a start routine on the transformation from the sales DSO to the Profit of Goods InfoCube.
    Use a lookup on the COG DSO to populate the monthly COG field of each record in the source package.
    ** Global declaration
    TYPES: BEGIN OF I_S_COG,
             /BIC/GOODS_NUMBER TYPE /BIC/A<DSO Table name>-/BIC/GOODS_NUMBER,
             /BIC/GOODS_NAME   TYPE /BIC/A<DSO Table name>-/BIC/GOODS_NAME,
             /BIC/COG          TYPE /BIC/A<DSO Table name>-/BIC/COG,
             /BIC/PERIOD       TYPE /BIC/A<DSO Table name>-/BIC/PERIOD,
           END OF I_S_COG.
    DATA: I_T_COG TYPE STANDARD TABLE OF I_S_COG,
          WA_COG  LIKE LINE OF I_T_COG.
    * Local declaration
    DATA: TEMP TYPE _ty_t_SC_1.               " copy of the source package
    FIELD-SYMBOLS: <source_fields> TYPE _ty_s_SC_1.
    TEMP[] = SOURCE_PACKAGE[].
    * Fetch the COG records for all goods numbers in this package
    SELECT /BIC/GOODS_NUMBER /BIC/GOODS_NAME /BIC/COG /BIC/PERIOD
      FROM /BIC/A<DSO Table name>
      INTO CORRESPONDING FIELDS OF TABLE I_T_COG
      FOR ALL ENTRIES IN TEMP
      WHERE /BIC/GOODS_NUMBER = TEMP-/BIC/GOODS_NUMBER.
    SORT I_T_COG BY /BIC/GOODS_NUMBER /BIC/PERIOD.
    * Copy the COG of the matching goods number and period into each record
    LOOP AT SOURCE_PACKAGE ASSIGNING <source_fields>.
      READ TABLE I_T_COG INTO WA_COG
           WITH KEY /BIC/GOODS_NUMBER = <source_fields>-/BIC/GOODS_NUMBER
                    /BIC/PERIOD       = <source_fields>-/BIC/PERIOD
           BINARY SEARCH.
      IF SY-SUBRC = 0.
        <source_fields>-/BIC/COG = WA_COG-/BIC/COG.
      ENDIF.
    ENDLOOP.
    5) Create an end routine which calculates the profit using the formula and updates the result set with the value in the profit column.
    Given your requirement for the profit calculation
    profit of goods = sales turnover - COG of month * sales amount
    you can write a simple end routine yourself (replace the placeholder field names with the ones from your target structure):
    * Local declaration
    FIELD-SYMBOLS: <result_fields> TYPE _ty_s_TG_1.
    LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
      " profit = sales turnover - monthly COG * sales amount
      <result_fields>-profit = <result_fields>-turnover
                             - <result_fields>-cog * <result_fields>-amount.
    ENDLOOP.
    As the above start and end routines enhance your sales DSO data, the fields for customer number and sales order should already be in your DSO for drilldown.
    Let me know how you get on.

  • Help with Aggregation Summation into DSO

    Hi, I have a question about key figure aggregation (summation) in the transformation rules into a DSO from 2LIS_11_VAITM.
    We had an old order with an order quantity of 600 pcs. A recent request came in changing it to 400. After the delta, our order quantity was -200. The rule is Summation, and I figured it should work like 600 + (-600) + 400 = 400, but that is not what happened. It is almost as if the rule considered the original order quantity to be 0, so that when the -600 and +400 delta records came in, they summed to -200. Does it have anything to do with the change log only keeping the last 30 days?
    Can anyone tell me what is wrong here?

    Kennet:
        Could you please provide more details? For example:
    - Is the problem (differences on the Key Figure values) at the DSO level or at the Cube level?
    - Does your DataSource version have DSO capability? (please refer to SAP Note 440416 - "BW OLTP: Correction report for change of delta process").
    - If your DataSource supports "ABR" extraction, Does the Data on the PSA looks ok? (After / Before and Reverse images).
    - Have you enhanced the DataSource to include custom fields? If so, does the ABAP routine use the SORT command?
    - Do you update the DSO with the 2LIS_11_VAITM DataSource only, or does another DataSource send data to the same DSO?
    - Have you considered changing the Rules to "Overwrite" instead of "Summation"?
    - What fields are included as part your DSO Key?
    - Do you have the ROCANCEL field mapped to 0STORNO / 0RECORDMODE InfoObjects?
    Regards,
    Francisco Milán.
    Edited by: Francisco Milan on Jul 1, 2010 9:13 AM
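
    For reference, the arithmetic in the question only works out if the original +600 image is actually present in the DSO: 600 + (-600) + 400 = 400. If the DSO received only the delta pair for the change, the summation gives (-600) + 400 = -200, which is exactly the value observed.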

  • Aggregation script is taking long time - need help on optimization

    Hi All,
    Currently we are building a BSO solution (version 11.1.2.2) for a customer, and we are facing a performance issue when aggregating the database. The most common activity in the solution is to generate data for different scenarios from Actual and Budget (Actual vs Budget difference data in one scenario), to be used mainly for reporting.
    We are aggregating the data to the top level using the AGG command for the sparse dimensions. While doing this, we found that it creates a lot of page files, thereby filling up the available space on the drive (to the tune of 70 GB). Moreover, it takes a long time to aggregate. The numbers of stored members are as follows:
    Dimension - Type - Stored member (Total members)
    Account - Dense- 1597 (1845)
    Period - Dense - 13 (19)
    Year - Sparse - 11 (12)
    Version - Sparse - 2 (2)
    CV - Sparse- 5 (6)
    Scenario - Sparse - 94 (102)
    EV - Sparse - 120 (122)
    FC - Sparse- 118 (121)
    CP - Sparse - 1887 (2049)
    M1 - Sparse - 4873 (4874)
    Entity - Sparse - 12020 (32349) - Includes two alternate hierarchies for rolling up the data
    The other properties are as follows:
    Index Cache - 152000
    Data File Cache - 32768
    Data cache - 153600
    ACR = 0.65
    We are using Buffered I/O
    The level-0 data file is about 3 GB (2 years of Budget and 1 year 2 months of Actuals data).
    The customer is going to use SmartView to retrieve the data and has a Planning Plus license only, so we could not go for an ASO solution. We could not reduce the members of the huge sparse dimensions M1 and CP either. To improve the data retrieval time, we had to make the upper-level members stored, which resolved the retrieval issue.
    I am seeking for help on the following:
    1. How can we optimize the time taken? Currently each dimension takes about an hour to aggregate. CALC DIM takes even longer, hence we opted for AGG.
    2. Will changing the dense/sparse settings help our cause? The ACR is on the lower side. Please note that most calculations are on either the Period dimension or FC; there is no calculation on the Account dimension.
    3. Will changing a few non-level-0 members from stored to Dynamic Calc help? Will this slow down calculations in the cube?
    4. What would be the best dimension order for performance in this cube?
    I appreciate your help in this regard.
    Regards,
    Sukhamoy

    Please provide the following information:
    1) Block size and other statistics
    2) Aggregation script
    >>Index Cache - 152000
    >>Data File Cache - 32768
    >>Data cache - 153600
    Try these settings:
    Index Cache - 1120000
    Data cache - 3153600

  • Keyfigure aggregation problem in BEX

    Dear gurus,
    I am creating a vendor performance report. In the report I have an SLA key figure derived from the formula "Goods Issued / PO Requested Qty".
    The output of my report is as follows:
    Vendor     |     SLA   |   Goods Issued  |  PO Qty Requested

    Dear gurus,
    I am creating a vendor performance report. In the report I have an SLA key figure derived from the formula "Goods Issued / PO Requested Qty".
    The output of my report is as follows (drilled down by vendor level):
    Site....Vendor......SLA.......Goods Issue....PO Qty
    101.....90203.......100%......800 CV..........800 CV
    101.....90202........80%......160 CV..........200 CV
    102.....90201........50%........20 CV............40 CV
    102.....90199........33%........30 CV..........100 CV
    Result............... (   A    )
    I would like to know what the output for result row A should be.
    I have tried several methods of aggregation, but none of them gives a relevant output.
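
    For reference, if the SLA formula is calculated after aggregation (the default for a formula), the result row A shows the ratio of the totals: (800 + 160 + 20 + 30) / (800 + 200 + 40 + 100) = 1010 / 1140, roughly 88.6%. If what is wanted instead is the average of the per-vendor SLAs, (100% + 80% + 50% + 33%) / 4, roughly 65.8%, the formula needs an exception aggregation such as Average.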

  • Aggregator suddenly doesn't play all of the projects after updates

    I finished an aggregator project, checked it, etc.  It was working fine.  Then I downloaded some Adobe updates (Flash, Shockwave, Flash Player and Captivate 5), next thing I know, people are telling me that the project no longer works.  I open it up, test it, and the files from the 3rd on won't start.  They can be accessed from the Table of Contents, but they don't play. The playbar either doesn't move, or jumps back after a second. Nothing was changed in the files, I checked for pauses, Preferences options, nothing is set to pause/restrict playback.  Also, the 3rd file's skin shows options that were unchecked.  You can't move the playbar or skip ahead, but you can go back or forward to the other "frozen" files using the TOC.
    Another weird thing is that sometimes when I reopen the browser, it starts from the 1st frozen file and the TOC doesn't work at all.  The swfs play fine when tested individually.  I ended up having to make separate .exe files and linking them.  The problem with this is that we need the htmls, and when they are programmed to open another project at the end, the original project doesn't close.  Because of the nature of our work, we need one file to move on to the next without creating the clutter of multiple open browser windows.
    If anyone has encountered this, I would really appreciate some help. I have checked/changed internet settings and trusted-site settings, and restored the system to before the updates, but nothing.  I even had a co-worker republish the files from his non-updated computer and it still doesn't work.

    You said you updated Flash player.  What major and minor version number did you update to?
    Were these projects originally created in Captivate 5 or an earlier version (e.g. Cp4)?
    When the files don't play at all, what do you see?  Is it a black screen?

  • PO Qty is getting Aggregated.

    Hi All,
    I have a requirement where I need the invoice accounting doc no, GR accounting doc no, GR qty, IR qty, PO no and PO qty.
    I have created one cube which is updated from the FI ODS.
    For the PO no and PO qty I have written a routine in the update rules which reads from the PO ODS.
    My problem is that while updating the PO quantity from the PO ODS to the cube, the PO qty appears on every accounting document number (WE and RE), and at report level it gets aggregated:
    PO Number | Item | Accounting Doc No | Type | PO Qty
    2520555   | 10   | 45465454          | WE   | 100
    2520555   | 10   | 43546546          | RE   | 100
    2520555   | 20   | 465464            | RE   | 200
    This is how it comes out, which should not happen: the PO qty for this PO is 300, but it aggregates to 400 in the report.
    Kindly provide me with the logic for handling this.

    Hi All,
    We have created a Z program that fetches the PO qty only once across all the accounting document numbers of a PO, and it works fine. We copied the same code into the update rules for the PO qty, but there it does not work: it returns zeros for all the POs.
    Please find the code below and kindly let me know how to correct it for the update rules.
    I am doing a lookup for the PO qty.
    Code in the Z program:
    DATA: BEGIN OF itab OCCURS 0,
            Doc_Num   TYPE /BI0/OIOI_EBELN,
            AC_DOC_NO TYPE /BI0/OIAC_DOC_NO,
            Item_Num  TYPE /BI0/OIOI_EBELP,
            qty       TYPE /BI0/OIORDER_QUAN,
          END OF itab.
    DATA: Temp  TYPE /BI0/OIOI_EBELN,
          Temp1 TYPE /BI0/OIOI_EBELP.
    SELECT DISTINCT OI_EBELN AC_DOC_NO OI_EBELP ORDER_QUAN
      FROM /BIC/AYSPND_O300
      INTO TABLE itab
      WHERE /BIC/YSPNDIND = 'IT'.
    * The previous-row comparison below only works if the table is
    * sorted by PO number and item, not by item alone.
    SORT itab BY Doc_Num Item_Num.
    LOOP AT itab.
      IF Temp <> itab-Doc_Num OR Temp1 <> itab-Item_Num.
        " First accounting document of this PO item: output the quantity
        WRITE: / itab-Doc_Num, itab-AC_DOC_NO, itab-Item_Num, itab-qty.
      ELSE.
        " Repeated PO item: suppress the quantity to avoid double counting
        WRITE: / itab-Doc_Num, itab-AC_DOC_NO, itab-Item_Num, '0'.
      ENDIF.
      Temp  = itab-Doc_Num.
      Temp1 = itab-Item_Num.
    ENDLOOP.
    CLEAR itab.
    Thanks
