Performance of cube

Hi experts,
We have created the data flow up to the cubes.
How can I maintain the performance of my objects, for example by creating aggregates?
Could anyone please explain how to create aggregates for the cubes?
Regards,
Nishuv.

Hi Nishuv,
Well, there are various ways to enhance the performance of a cube, but first of all you should make sure your design is sound. Then decide what you are planning to enhance: the loading performance or the query performance.
Aggregates, BIA and caching are a few ways of improving query performance. Please search the forum; there are a number of threads floating around on these topics.
To get you started, here are a few:
http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
Aggregates:
http://help.sap.com/saphelp_nw70/helpdata/en/44/70f4bb1ffb591ae10000000a1553f7/frameset.htm
Filling Aggregates:
http://help.sap.com/saphelp_nw70/helpdata/en/4f/187d3bce09c874e10000000a11402f/frameset.htm
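Just to picture what an aggregate does: it is essentially a pre-summarized copy of the cube data, grouped by fewer characteristics, so queries that only need those characteristics read far fewer records. A minimal Python sketch, with invented fact rows and characteristic names:
from collections import defaultdict
# Invented fact rows: (customer, material, calmonth, revenue)
fact_rows = [
    ("C1", "M1", "2008-01", 100.0),
    ("C1", "M2", "2008-01", 250.0),
    ("C2", "M1", "2008-02", 300.0),
    ("C1", "M1", "2008-02", 150.0),
]
# "Aggregate" on (customer, calmonth): material is dropped, key figures summed.
aggregate = defaultdict(float)
for customer, material, calmonth, revenue in fact_rows:
    aggregate[(customer, calmonth)] += revenue
# A query that only needs customer/month totals can read the much smaller
# aggregate instead of scanning every fact row.
print(aggregate[("C1", "2008-01")])   # -> 350.0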
Hope this helps.
Regards
Dennis

Similar Messages

  • Need help troubleshooting poor performance loading cubes

    I need ideas on how to troubleshoot performance issues we are having when loading our InfoCube. There are eight InfoPackages running in parallel to update the cube. Each InfoPackage can execute three data packages at a time. The load performance is erratic. For example, if an InfoPackage needs five data packages to load the data, data package 1 is sometimes the last one to complete. Sometimes the slow performance is in the Update Rules processing and other times it is on the insert into the fact table.
    Sometimes there are no performance problems and the load completes in 20 mins.  Other times, the loads complete in 1.5+ hours.
    Does anyone know how to tell which server a data package was executed on?  Can someone tell me any transactions to use to monitor the loads while they are running to help pinpoint what the bottleneck is?
    Thanks.
    Regards,
    Ryan

    Some suggestions:
    1. Collect BW statistics for all the cubes. Go to RSA1, navigate to the cube, and on the toolbar choose Tools - BW Statistics. Check the boxes to collect both OLAP and WHM.
    2. Activate all the technical content cubes, reports and relevant objects. You will find them if you search for 0BWTC* in the Business Content.
    3. Start loading data to the technical content cubes.
    4. There are a few reports delivered on these statistics cubes; run them and you will get some ideas.
    5. Try scheduling the loads sequentially instead of in parallel.
    Ravi Thothadri

  • Performance tuning in cubes in Analytic Workspace Manager 10g

    Hi,
    Can anyone tell me or suggest how I should improve the performance of cube maintenance in Analytic Workspace Manager?

    Generate statspack/AWR reports.
    How to make a tuning request:
    https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003

  • Bad reporting performance after compressing infocubes

    Hi,
    as I learned, we should compress requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E-fact table to further increase reporting performance. So far all theory...
    After getting complaints about worse reporting performance we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query to each cube and tested the performance with it (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes and one branch.
    With this selection on each cube, I expected that cube D would be fastest, since we only have one (small) partition with relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some db parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. after compressing the cubes I refreshed the statistics in the infocube administration.
    2. cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and do a retest at the end of this week.
    4. the loaded data comes from 10 months. The records are nearly equally distributed over these 10 months.
    5. partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years - the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. no. of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year but roughly 8 months.
    6. since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that: it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query and the mentioned times are average times over all runs - and the average shows the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut

  • X4500 with xf86-xorg-intel, extreme low performance

    Hi there!
    I got a T400 with Intel X4500 onboard and installed the xorg intel module from testing. Everything is fine so far: decent KWin performance, the desktop cube rotates smoothly. But when I try to play Warsow or Quake 3 I get around 5 fps. I think Q2 and Q3 should be easily playable on that graphics card. Any ideas how to improve graphics power?
    I already use INTEL_BATCH=1
    thx
    issue

    Vamp898 wrote:
    INTEL_BATCH=1 does improve my performance by about double
    what the hell does INTEL_BATCH do?
    This is most puzzling. First off, this does not work for me on  i915. Nor should it. Essentially, INTEL_BATCH=1 should do nothing, since the code for this has been removed:
    https://bugs.launchpad.net/xserver-xorg … comments/7
    https://bugs.launchpad.net/ubuntu/+sour … comments/8
    I also did not find a mention of it on bugs.freedesktop.org or intel-gfx mailing list. If anything, this is an obsolete hack.
    How did you measure performance? Please try for example openarena (http://dri.freedesktop.org/wiki/Benchmarking).

  • Please guide me to develop cube for HR Analysis

    Hi All,
    My Client wants to build a cube for their HR Analysis. They have data in Peoplesoft and Kronos systems.
    I have never worked on any cubes for HR and I need some guidance regarding that.
    If somebody has done any such implementation, please forward me a document or presentation which can be used as a base: [email protected]
    Please guide me about what kind of analysis we can perform using cubes and what it looks like in general. Any suggestion will be highly appreciated.
    Thanks,
    Vikash
    Please direct me to the correct forum if I am in the wrong place.

    My guess is you would want to be in the Essbase or Planning forums. But it sounds more like you need to bring in a consultant who understands Essbase and HR reporting. The cube design will depend on the data and on the reporting and analysis needs.

  • Oracle OLAP cube build question

    Hello,
    I am trying to build a reasonably large cube (around 100 million rows from the underlying relational fact table). I am using Oracle 10g Release 2. The cube has 7 dimensions, the largest of which is TIME (6 years of data with day as the lowest level). The cube build never finishes.
    Apparently it collapses while doing "Auto Solve". I'm assuming this means calculating the aggregations for the upper levels of the hierarchy (although this is not mentioned in any of the documentation I have).
    I have two questions related to this:
    1. Is there a way to keep these aggregations from being performed at cube build time on dimensions with a value-based hierarchy? I already have the one dimension designated as level-based unchecked in the "Summarize To" tab in AW Manager (the TIME dimension).
    2. Are there any other tips that might help me get this cube built?
    Here is the log from the olapsys.xml_load_log table:
    RECORD_ID LOG_DATE AW XML_MESSAGE
    1. 09-MAR-06 SYS.AWXML 08:18:51 Started Build(Refresh) of APSHELL Analytic Workspace.
    2. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Attached AW APSHELL in RW Mode.
    3. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Started Loading Dimensions.
    4. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members.
    5. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    6. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ACCOUNT.DIMENSION. Added: 0. No Longer Present: 0.
    7. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    8. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for CATEGORY.DIMENSION. Added: 0. No Longer Present: 0.
    9. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for DATASRC.DIMENSION (3 out of 9 Dimensions).
    10. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for DATASRC.DIMENSION. Added: 0. No Longer Present: 0.
    11. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ENTITY.DIMENSION (4 out of 9 Dimensions).
    12. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ENTITY.DIMENSION. Added: 0. No Longer Present: 0.
    13. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    14. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INPT_CURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    15. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INTCO.DIMENSION (6 out of 9 Dimensions).
    16. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INTCO.DIMENSION. Added: 0. No Longer Present: 0.
    17. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RATE.DIMENSION (7 out of 9 Dimensions).
    18. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RATE.DIMENSION. Added: 0. No Longer Present: 0.
    19. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    20. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RPTCURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    21. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for TIME.DIMENSION (9 out of 9 Dimensions).
    22. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Members for TIME.DIMENSION. Added: 0. No Longer Present: 0.
    23. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Dimension Members.
    24. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies.
    25. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    26. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Hierarchies for ACCOUNT.DIMENSION. 1 hierarchy(s) ACCOUNT_HIERARCHY Processed.
    27. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    28. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for CATEGORY.DIMENSION. 1 hierarchy(s) CATEGORY_HIERARCHY Processed.
    29. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for DATASRC.DIMENSION (3 out of 9 Dimensions).
    30. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for DATASRC.DIMENSION. 1 hierarchy(s) DATASRC_HIER Processed.
    31. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for ENTITY.DIMENSION (4 out of 9 Dimensions).
    32. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for ENTITY.DIMENSION. 2 hierarchy(s) ENTITY_HIERARCHY1, ENTITY_HIERARCHY2 Processed.
    34. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INPT_CURRENCY.DIMENSION. No hierarchy(s) Processed.
    36. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INTCO.DIMENSION. 1 hierarchy(s) INTCO_HIERARCHY Processed.
    37. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RATE.DIMENSION (7 out of 9 Dimensions).
    38. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RATE.DIMENSION. No hierarchy(s) Processed.
    39. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    40. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RPTCURRENCY.DIMENSION. No hierarchy(s) Processed.
    41. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for TIME.DIMENSION (9 out of 9 Dimensions).
    42. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for TIME.DIMENSION. 2 hierarchy(s) CALENDAR, FISCAL_CALENDAR Processed.
    43. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies.
    44. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes.
    45. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    46. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ACCOUNT.DIMENSION. 6 attribute(s) ACCTYPE, CALC, FORMAT, LONG_DESCRIPTION, RATETYPE, SCALING Processed.
    47. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    48. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for CATEGORY.DIMENSION. 2 attribute(s) CALC, LONG_DESCRIPTION Processed.
    49. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for DATASRC.DIMENSION (3 out of 9 Dimensions).
    50. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for DATASRC.DIMENSION. 3 attribute(s) CURRENCY, INTCO, LONG_DESCRIPTION Processed.
    51. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ENTITY.DIMENSION (4 out of 9 Dimensions).
    52. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ENTITY.DIMENSION. 3 attribute(s) CALC, CURRENCY, LONG_DESCRIPTION Processed.
    53. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    54. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INPT_CURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    55. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INTCO.DIMENSION (6 out of 9 Dimensions).
    56. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INTCO.DIMENSION. 2 attribute(s) ENTITY, LONG_DESCRIPTION Processed.
    57. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for RATE.DIMENSION (7 out of 9 Dimensions).
    58. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RATE.DIMENSION. 1 attribute(s) LONG_DESCRIPTION Processed.
    59. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    60. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RPTCURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    61. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for TIME.DIMENSION (9 out of 9 Dimensions).
    62. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes for TIME.DIMENSION. 3 attribute(s) END_DATE, LONG_DESCRIPTION, TIME_SPAN Processed.
    63. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes.
    64. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Dimensions.
    65. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Started Updating Partitions.
    66. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Updating Partitions.
    67. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Loading Measures.
    68. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE.
    69. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Finished Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE. Processed 100000001 Records. Rejected 0 Records.
    70. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Started Auto Solve for Measures: SIGNEDDATA from Cube FINANCE.CUBE.

    Hi, I've taken a few minutes to do a quick analysis. I just saw in your post that this isn't "real data", but some type of sample. Here is what I'm seeing. First off, this is the strangest dataset I've ever seen. With the exception of TIME, DATASOURCE, and RPTCURRENCY, every single other dimension is nearly 100% dense. Quite truthfully, in a cube of this many dimensions, I have never seen data be 100% dense like this (usually with this many dimensions it's more around .01% dense at most, usually even lower than that). Is it possible that the way you generated the test data would have caused this to happen?
    If so, I would strongly encourage you to go back to your "real" data and run the same queries and post results. I think that "real" data will produce a much different profile than what we're seeing here.
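    For what it's worth, this is roughly how such a density figure can be estimated - a minimal Python sketch, with invented record and member counts standing in for the real ones:
    from math import prod
    # Invented example numbers - replace with the real loaded record count
    # and the number of members in each of the cube's dimensions.
    loaded_records = 100_000_000
    members_per_dim = [2192, 400, 300, 400, 10, 3, 2]   # one entry per dimension
    # Density = actual records / all possible dimension-member combinations.
    possible_cells = prod(members_per_dim)
    density = loaded_records / possible_cells
    print(f"density: {density:.4%} of all possible cells are populated")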
    If you really do want to try and aggregate this dataset, I'd do the following:
    1. Drop any dimension that doesn't add analytic value
    Report currency is an obvious choice for this - if every record has exactly the same value, then it adds no additional information (but increases the size of the data)
    Also, data source falls into the same category. However, I'd add one more question / comment with data source - even if all 3 values DID show up in the data, does knowing the data source provide any analytical capabilities? I.e. would a business person make a different decision based on whether the data is coming from system A vs. system B vs. system C?
    2. Make sure all remaining dimensions except TIME are DENSE, not sparse. I'd probably define the cube with this order:
    Account...........dense
    Entity..............dense
    IntCo...............dense
    Category.........dense
    Time...............sparse
    3. Since time is level based (and sparse), I'd set it to only aggregate at the day and month levels (i.e. let quarter and year be calculated on the fly)
    4. Are there really no "levels" in the dimensions like Entity? Usually companies define those with very rigid hierarchies (assuming this means legal entity)
    Good luck with loading this cube. Please let us know how "real" this data is. I suspect with that many dimensions that the "real" data will be VERY sparse, not dense like this sample is, in which case some of the sparsity handling functionality would make a huge benefit for you. As is, with the data being nearly 100% dense, turning on sparsity for any dimension other than TIME probably kills your performance.
    Let us know what you think!
    Thanks,
    Scott

  • Storage Parameter Settings for Cube

    Hi, thanks a lot for such wonderful suggestions. We have 25 million records going to the cube, so for high load performance on the cube can you check and help me with the database storage parameter settings? Can you tell if:
    Fact Table
    1. The data class for the fact tables is DFACT; should I change it to any other?
    2. The size category is 4 (>150 MB); should I change it to any other?
    Dimension Table
    1. The data class is DDIM; should I change it to any other?
    2. The size category is 0 (<500 KB); should I change it to any other?
    Help me with this.
    Thanks
    Poonam Roy

    Hi,
    I don't think we need any extra settings on the cube if you want to go with the Accelerator.
    I am not able to give good info on it, because I have not yet worked on it. But if you spend some time going through all the mentioned links, you definitely will not have any doubts about it.
    Also refer to the help link:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/a8/48c0417951d117e10000000a155106/frameset.htm
    If you do not have the chance to use accelerators, then I suggest you think about data marts (a number of cubes with the same structure and the same DataSource) which physically hold different ranges of data, and then use a MultiProvider on top of them.
    Also use the cache for the reports built on these cubes, and use precalculation to warm up the cache.
    Also check note 1025307.
    With rgds,
    Anil Kumar Sharma .P

  • Cube Validation - with Direct I/O

    Hi Team,
    Could you provide information on the following:
    Can we perform the cube validation with Direct I/O?
    What will be the impact w.r.t. memory?
    Thanks,
    SK.

    Can we perform the cube validation with Direct I/O?
    ^^^Why would you think you could not?
    What will be the impact w.r.t. memory?
    ^^^Is this a question on a test? Try it. I would imagine that if the db wasn't loaded, it would load it. That would use memory. As it validated, I would guess that it would fill the data file cache. That would use memory. I honestly don't know, but if I were super afraid that I would eat all the memory on my server when I ran a cube validation, I would try it on a very small database and see what happens.
    Sorry to be snippy, but for goodness' sake, this is such an easy thing to try and test. You can get the memory information directly from the server or via EAS, and of course MaxL will tell you whether you can or cannot validate a Direct I/O database.
    Regards,
    Cameron Lackpour
    P.S. Also, while I hardly expect to be rewarded after the above, you might think about marking your (15 as of the writing of this post) questions as answered if indeed they are and assigning points to those who have helped you.
    P.P.S. 811829, I'm sorry if I am getting snippy, but I have noticed a series of open-ended questions on this board that, while the sign of someone who wants to learn stuff, also suggests that you are not trying these things out yourself. Go on and try, Essbase won't bite. :)

  • SmartView 11.1.2 - Essbase Error 1020043 - You don't have sufficient access

    We have accounting users that have a SmartView Excel file to retrieve data from a BSO cube. This Excel file is set up as a shared file. Sometimes the sheet refreshes ok and sometimes the user gets this error - Error 1020043 - You do not have sufficient access to perform a lock on this database. The file doesn't necessarily have to be in use by more than 1 person when a user receives this error message.
    Any idea what is causing this message?
    Thanks.
    Terri Taylor

    This may occur if a user with privileges lower than Application Manager tries to manipulate the cube. The cube might also be locked by other operations such as restructuring. You need to check what other activities are being performed on the cube at the times when you receive this error.

  • PCM 10 Line Item details (BOM model)

    Hi there,
    What is the purpose of Line Item Details? Has anyone already used this dimension and can explain what it is for?
    Regards.

    Line Item Details are exactly that: details of your line items.
    e.g. You decided to spread all salary costs by the driver nr. of FTE's to your Activities.
    But the salary cost has multiple components: base salary, bonus, pension, etc.
    Now, LineItemDetails allows you to keep that detail out of the main PCM cube. This limits the number of driver assignments you have to maintain, and is also good for performance (smaller cube!).
    However, you lose the traceability (CostObjectValues can only be viewed by LineItemValue, not by LineItemDetailValue). But then again, it's not allocated differently, so you can always look up the components of salary in a different report (on the LineItemDetailValue table).
    You need a LineItemValue rule to push the detail from the LineItemDetailValue table to LineItemValue in order for this to work, e.g.:
    Function CellValue
         RestrictDimension("Line Items","Salary")
         CellValue = LineItemDetailValue(,,,"Salary")
    End Function

  • Difference between classic GL and New GL

    We are implementing BCS 6.0 based on the old GL (classic GL). Suppose in the future we migrate to the New GL; what is the impact on BCS?
    Do we need to re-implement BCS?
    Thank you very much in advance.
    Anna

    Hi Anna,
    The following is an architecture of BCS implementations I use:
    - ERP (or other source systems)
    |
    - BW layer with
    -- cube for extracting data from ERP
    -- intermediate cube
    -- BCS totals cube
    |
    - SEM-BCS system.
    In the update rules of the intermediate cube I perform the data transformation (adding analytical characteristics, transforming the key figure model (debit - credit - balance) into the account model (item - movement type), mapping accounts (from the flat chart of accounts in ERP to the multidimensional one in BCS), etc.).
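    As a purely hypothetical illustration of that key figure model to account model transformation (field names and movement type codes are invented, not the actual BCS ones), one source record with separate debit/credit/balance key figures becomes several records with a movement type and a single value:
    # One source record in the key figure model (invented layout).
    source = {"account": "0001000", "debit": 800.0, "credit": 300.0, "balance": 500.0}
    def to_account_model(rec):
        # One output row per key figure, distinguished by a movement type
        # characteristic instead of separate key figure columns.
        movement_types = {"debit": "100", "credit": "200", "balance": "300"}  # invented codes
        return [
            {"item": rec["account"], "movement_type": mt, "value": rec[kf]}
            for kf, mt in movement_types.items()
        ]
    for row in to_account_model(source):
        print(row)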
    This cube will be used for reading data through the data stream in BCS. For better performance, this cube is made as similar to the BCS totals cube as possible.
    So, if you have the same architecture, you'll just modify the extractors and, probably, the structure of the 1st cube and, probably, the update rules in the 2nd one (the intermediate cube).
    All the rest (including ALL settings in SEM-BCS) is going to be left intact.
    If the architecture is different, everything will depend on it.
    Hope this helps.

  • Will Oracle OLAP handle our case(s).

    We are about to build a cube to roughly handle the following dimensions and facts:
    15 dimensions ranging from a couple of members to 40,000+ members.
    A fact table holding 200,000,000+ rows
    So my question is: does anybody have a sense of whether OLAP has a chance of handling this data? We are pretty certain that the data will be sparse.
    A second item relates to whether Oracle OLAP cubes give us the ability to compute what we refer to as "Industry" data. We serve a number of companies and we compute metrics that apply to their data. In order to allow these companies to see how they are doing against the other companies, we provide the metrics for every other company; these metrics are considered Industry. So my question is: do OLAP cubes have any structure or mechanism that allows us to compute these metrics within the same cube, or do we have to create a separate cube to hold the Industry metrics?
    Thanks,
    Thomas

    Thomas,
    I cannot advise you for or against based on the small amount of information I have. I will not deny that at 15 dimensions you are at the upper limit of what you can achieve using the current (11.1) OLAP technology, but I have seen cubes of this size built and queried, so I know it is possible.
    The answer would depend on many things: hardware, query tools, expectations for query and build performance, and whether you have a viable alternative technology (which will determine how hard you will work to get past problems). It even depends on your project time frames, since release 11.2 is currently in beta and will, we hope, handle greater volumes of data than 11.1 or 10g.
    One important factor is how you partition your cube. At what level do you load the data (e.g. DAY or WEEK)? What is your partition level (e.g. MONTH or QUARTER)? A partition that loads, say, 10 million rows, is going to be much easier to build than a partition with 50 million rows. To decide this you need to know where your users will typically issue queries, since queries that cross partitions (e.g. ask for data at the YEAR level but are partitioned by WEEK) are significantly slower than those that do not cross partitions (e.g. ask for data at WEEK when you are partitioned by MONTH).
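    To get a rough feel for those partition sizes, here is a small Python sketch; the 200 million rows come from this thread, but the five years of evenly distributed day-level data are purely an assumption for illustration:
    # Invented spread: 200 million fact rows over 5 years of day-level data.
    total_rows = 200_000_000
    rows_per_day = total_rows / (5 * 365)
    for grain, days_in_grain in [("MONTH", 30), ("QUARTER", 91), ("YEAR", 365)]:
        print(f"partitioned by {grain}: ~{rows_per_day * days_in_grain:,.0f} rows per partition")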
    Cube-based MVs can offer advantages for cubes of this size even if you define a single, 15-dimensional, cube. One nice trick is to only aggregate the cube up to the partition level. Suppose, for example, that you load data at the DAY level and partition by QUARTER. Then you would make QUARTER be the top level in your dimension instead of YEAR or ALL_YEARS. The trick is to make YEAR be an ordinary attribute of QUARTER so that it appears in the GROUP BY clause of the cube MV. Queries that ask for YEAR will still rewrite against the cube, but the QUARTERS will be summed up to YEAR using standard SQL. The result will generally be faster than asking the cube to do the same calculation. This technique allows you to lower your partition level (so that there are fewer rows per partition) without sacrificing on query performance.
    Cube-based MVs also allow you to raise the lowest level of any dimension (to PRODUCT_TYPE instead of PRODUCT_SKU say). Queries that go against the upper levels (PRODUCT_TYPE and above) will rewrite against the cube, and those that go against the detail level (PRODUCT_SKU) will go direct to the base tables. This compromise is worth making if your users only rarely query against the detail level and are willing to accept slower response times when they do. It can reduce the size of the cube considerably and is definitely worth considering.
    David

  • Backend design

    Hi,
    The scenario we have is that we have backend design done in two ways.
    Let me explain both the structures completely.
    1. We have 2 layers. In the 1st layer we have 3 ODS objects with full update and then, on top of them, 3 InfoCubes with delta update from the underlying ODS objects. The data from the cubes in the lower layer then goes to the 3 cubes in the upper layer, and a MultiProvider on which reporting is done is built on these upper-layer cubes.
    2. Here we also have 2 layers. In the lower layer we have 3 ODS objects with full update, then another layer with 3 ODS objects with delta update, and these 3 ODS objects feed the 3 cubes in the upper layer. The rest remains the same.
    In short, we have cubes with delta update in the first structure and ODS objects with delta update in the second structure.
    Can someone please explain which of these is better and why?
    Please reply.
    Regards,
    Suchitra

    Hi,
    As per your scenario, we are using the cube as the reporting layer (via the MultiProvider) in the first case and the ODS as the reporting layer in the second case. Now the difference between the two cases can be categorized in 2 ways:
    1.Architecture Wise
    2.Reporting Wise.
    Architecture Wise:
    we will have the following diffrences
    One major difference is the manner of data storage. In an ODS, data is stored in flat tables; by flat we mean ordinary transparent tables. A cube, on the other hand, is composed of multiple tables arranged in a star schema joined by SIDs. The purpose is to do multi-dimensional reporting.
    Another difference: in an ODS you can update an existing record given its key. In cubes there is no such thing; a cube will accept duplicate records and, during reporting, sum the key figures up. There is no editing of previous record contents, only adding. With an ODS, the procedure is UPDATE IF EXISTING (based on the table key), otherwise ADD RECORD.
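    If it helps, here is a tiny Python sketch of that difference, with an invented record layout: the ODS behaves like a keyed table that overwrites, while the cube behaves like an append-only list whose key figures are summed at reporting time:
    from collections import defaultdict
    loads = [
        {"order": "4711", "item": "10", "qty": 5},
        {"order": "4711", "item": "10", "qty": 8},   # later image of the same document item
    ]
    # ODS-style: the key (order, item) identifies the record, so the second
    # load overwrites the first and the latest value wins.
    ods = {}
    for rec in loads:
        ods[(rec["order"], rec["item"])] = rec["qty"]
    print(ods[("4711", "10")])     # -> 8
    # Cube-style: every load simply adds records; reporting sums the key figure.
    totals = defaultdict(int)
    for rec in loads:
        totals[(rec["order"], rec["item"])] += rec["qty"]
    print(totals[("4711", "10")])  # -> 13 (added up, not overwritten)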
    Reporting Wise:
    Basically you use ODS objects to store the data at document/item/schedule-line level, whereas in the cube you will have only more aggregated data (by material, customer, ...). So you can do your reporting on the already aggregated data and, if necessary, do detailed reporting on the ODS object. Additionally, ODS objects will provide you a delta in case your DataSource doesn't deliver one: just use overwrite mode for all characteristics and key figures in the update rules and the ODS will take care of the rest.
    InfoCubes are multi-dimensional objects that contain a fact table and dimension tables, whereas an ODS is not a multi-dimensional object; it has no fact or dimension tables and consists of flat transparent tables.
    In InfoCubes there are characteristics and key figures, while in an ODS there are key fields and data fields; non-key characteristics can be kept in the data fields.
    Sometimes we need detailed reports, which we can get through the ODS. ODS objects are used to store data in granular form, i.e. at a higher level of detail. The data in the InfoCube is in aggregated form.
    From a reporting point of view, the ODS is used for operational reporting whereas InfoCubes are used for multidimensional reporting.
    ODS objects are used to merge data from one or more InfoSources; InfoCubes do not have that facility.
    The default update type for an ODS object is overwrite; for an InfoCube it is addition. ODS objects are used to implement delta in BW: data is loaded into the ODS object as new records, or existing records are updated in the change log or overwritten in the active data table using 0RECORDMODE.
    You cannot load data using the IDoc transfer method into an ODS, but you can into an InfoCube.
    You cannot create aggregates on an ODS, and you cannot create InfoSets on an InfoCube.
    ODS objects can be used:
    when you want to use the overwrite facility, i.e. if you want to overwrite non-key characteristics and key figures;
    if you want detailed reports;
    if you want to merge data from two or more InfoSources;
    and they allow you to drill down from an InfoCube to the ODS through the RRI interface.
    To conclude: reporting-performance-wise, cubes are better compared to ODS objects.
    Based on your requirements you can build your model, with the various advantages and disadvantages mentioned above.
    Note: all this information is available in various threads; if you had searched thoroughly you would have found it.
    With Regards,
    Prafulla Singh.

  • BW BCS cube(0bcs_vc10 ) Report huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0bcs_vc10 virtual cube.
    Some of the queries are taking 15 to 20 minutes to execute the report.
    This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. If anyone has faced a similar problem, please advise how you tackled this issue and give the detailed analysis approach you used to resolve it.
    The current service pack levels we are using are:
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping me much with the performance. Reports are still taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
    Regards,
    Chris
