Cube question

Hi Experts,
I have installed a cube from Business Content and loaded data into it, but I discovered that some of the objects it needs were not transferred along with the cube. How do I install the cube from Business Content together with all its objects, i.e. transfer rules, update rules, InfoSources, queries, etc.?
Also, if I reinstall the cube from Business Content, will I lose the data that I have already loaded? Thanks

Hi,
You can use the grouping 'In Data Flow Before' for InfoSources etc. and 'In Data Flow Afterwards' for queries etc.
In RSA1 -> Business Content, set 'Collection Mode' to 'Collect Manually', click 'Source System Assignment' and make sure your source system (R/3) is selected. Then, with the cube transferred to the right frame, choose grouping 'In Data Flow Before' and execute. You will get the objects feeding the InfoCube; expand the tree to check them. Click Install -> Install.
Do the same with 'In Data Flow Afterwards' to get the queries etc.
You probably won't lose data, and this can be done provided you made no changes to the InfoCube.
Hope this helps.

Similar Messages

  • Multiple cubes questions

    Dear BW expert,
    I have a question about multiple cubes. What are the advantages and disadvantages of using multiple cubes for reporting performance?
    I have the following scenarios:
    Cube2005, Cube2006, ...... Cube2xxx (the total keeps growing)
    vs.
    Cube1, ..... Cube12 (12 cubes in total), based on fiscal period. If I use this model, what about when I want a report for a specific year? In that case I have to access all 12 cubes every time. If I build a MultiProvider on top of these for each year, is that possible?
    Pros & cons?
    Any suggestions are greatly appreciated and points will be awarded!
    Weidong

    Hi there,
    Thanks for your reply!
    My case is that we have a huge volume of data for each year, so we decided to build a cube per year, but that means creating a lot of cubes in advance and makes maintenance harder. Then we came up with the idea of using a cube per fiscal period/month, so we have a fixed number of cubes (12); on top of the 12 cubes we build a MultiProvider per year, and then a MultiProvider on top of the per-year MultiProviders - is that possible?
    The main reason is to keep the data size of each cube/ODS small. Does anyone have experience with cubes holding such large data volumes?
    Any comments? Thanks in advance!
    Weidong

  • Star Schema/Cube Question

    I am fairly new to OLAP and cubes, yet have the task of creating one from a star schema.
    The schema has 2 fact tables, whereas most examples I see online have 1 fact table. Should this be 2 cubes? Can it be 1? Any information would be helpful... thanks!

    Assuming the fact tables have related dimensions, then I would recommend one Analytic Workspace with multiple cubes inside that AW. That would allow them to share one or more dimensions. There is an example of this in the Oracle By Example lesson: http://www.oracle.com/technology/obe/obe10gdb/bidw/awm/awm.htm

  • Cube Question: slot loading mechanism slips so how to clean?

    I carefully took the combo drive out of the machine, took its cover off, and carefully figured out how the mechanism works for pulling in discs and ejecting them. I found the 5-inch-long bar with the long "skinny in the middle" rubber cover. The rubber cover slips, and it looks like it is supposed to by design, so I left that alone. I cleaned the rubber cover spotlessly and it is still in good condition, but after reassembly the discs still slip.
    I figure the problem is maybe with the tension device that pushes the disc against the rubber cover but don't know.
    Anyone fix this or know how to?

    Michael,
    Have you tried other forums specific to the cube?
    http://www.cubeowner.com/forums/index.php?showtopic=12874
    http://discussions.apple.com/category.jspa?categoryID=107
    http://www.welovemacs.com/mosepapog4cu1.html
    Jim

  • Using my Mac mini with my old Cube - questions

    Hey, all-
    It was my hope to use my new mini along with my old Cube (the Cube to be an additional hard drive more than anything else), but I can't seem to figure out how to make them work together. The FireWire cable I have connecting the two isn't letting the mini recognize the Cube for whatever reason. And I can't see how both of them can be used in Target Disk Mode, since they connect to my Apple flat screen with different connectors.
    Hopefully, this is an easy fix somehow - I'm willing to look like a newb just as long as I can make this work.
    Any help is appreciated, thanks in advance!
    Cheers,
    Kevin

    I wondered if I hard-lined my Cube to it, it would be recognized on the desktop, yes? Or is the Target method the only way I would be able to "see" the Cube?
    When connected together with an ethernet cable, you would then need to ensure that the cube was set to allow 'sharing' (in the Sharing preference pane) and give the Cube a share name and then on the mini, in the Finder, go to Network in the Go menu, or Connect to Server in the Go menu and type in the share name given to the Cube. It may ask you for an ID and password that is valid on the Cube, but once entered, will mount the Cube's drive on the mini's desktop.
    Firewire connection is going to be a bit faster than ethernet, though for anything but large files you probably won't notice much, but the advantage of using ethernet is that you don't need to do anything to the Cube but start it normally.

  • Re: MSI Cubi questions

    you can reach MSI tech here: >>How to contact MSI.<<

    Hi all!
    I have read about common issues that ALL GS60 2QE laptops have (as I read on forums and on different sites).
    So my question is: does the new laptop "GS60 2QE Ghost Pro-606" have the same issues? This is critical for me.
    Can anyone help?
    Ki...

  • Essbase Cube question

    Users want to know the last time the cube was run. Do you know of any easier way other than checking the Essbase and EIS logs? Is there any MaxL or Essbase command that can be used for this purpose? Thanks, ande

    You can go to 'Database' -> 'Information' -> 'Modifications' in the database menu and see when the cube was last loaded, last calculated, and whether there were any outline changes. Is that what you are looking for, or do you want to know when a user last logged in?

  • Unable to query a cube after moving it between servers

    Hello everyone,
    I am having a problem when moving between the given source cube and my backup cube, which I point clients to while I process the original cube: the problem concerns querying the cube as USER versus as ADMIN. When switching from the ADMIN account to an impersonated user, I cannot perform multiple selection when querying the cube. However, on another server I can retrieve data and perform multiple selection at the sub-category level. The cube was copied from the server where I could drill into it to the server that has problems with multiple selection in impersonation mode. The main difference between the servers is the update level: the problematic server is at build 10.50.4000.0 (SP2), while the server without the problem is at 10.50.2789.0. Both are 2008 R2 servers. I think this involves different authorizations somewhere. Does anyone have another direction or an idea of what to do, so that my clients can keep querying at the same level without restrictions? Thanks, Doron

    Thanks a lot!
    The only difference is the move from one server to the other; basically it is the same cube for the same customer, which every weekend is switched over to a backup on another server. So far this has not been a problem: at the start of each week all customers move to the backup cubes until we process the new cubes, and then current customers move back.
    The test was to open two Visual Studio 2008 windows and try to slice the data at the sub-category level on the same cube on server A and on server B (the cube was duplicated using a backup & restore process, and we also tried a CREATE script). On server A I can impersonate a random customer, perform the slice, and get the data.
    With the copy of the cube on server B, we get the data only when we connect to the cube as ADMIN; the build version of server B is 10.50.4000.0.
    It is important to note that the same attempt run on a third server C, with identical characteristics and the same build version as server A (10.50.2789.0), again allowed drilling with multiple selection at the sub-category level while impersonating.
    In light of all the above tests, we assume there is a different permission level or server setting involved, rather than anything in the structure and definitions of the cube itself, since the third server, used as a control, had no problem even though it reproduced the case exactly.
    It is also important to note that the customers query the cube through an additional server configured with Panorama; when we looked back we realized that the weekly data queries were slowly disappearing for the customers who had moved to the backup cubes.
    Throughout the whole process no error was raised; whenever we did not use multiple selection, data was returned in every case and on every server.

  • 0FI-CA module, cubes 0FC_C08 & 0FC_C09 data flow - questions about filling tables

    Hello all,
    We are currently working on BI7.0 and ERP6.0. I am trying to understand the data flow for these two cubes 0FC_C08 and 0FC_C09.
    When I look at the data flow in BW, I see there is an InfoSource 0FC_BP_ITEMS, which feeds DSO 0FC_DS05, which in turn feeds DSOs 0FC_DS06 (Open Items) and 0FC_DS07 (Cleared Items), which feed 0FC_C08 (Open Items) and 0FC_C09 (Cleared Items).
    0FC_BP_ITEMS -> 0FC_DS05 -> 0FC_DS06 and 0FC_DS07 -> 0FC_C08 and 0FC_C09...
    Now what I am looking for is what do these two datasources feed to ?
    0FC_CI_01    FICA Cleared Items for Interval
    0FC_OP_01  FI-CA Open Items at Key Date
    Also, I have another question:
    1. Run tcode FPBW, which fills table DFKKOPBW; then you will see data in RSA6 for datasource 0FC_OP_01.
    2. Run tcode FPCIBW, which fills table DFKKCIBW; then you will see data in RSA6 for datasource 0FC_CI_01.
    My question is: do we have to do this on a periodic basis, or do we have to do it every time before we run the InfoPackage to load data into BW? What key dates or date intervals can we use for those two tcodes?
    Anyone who has worked on this, please share some ideas.
    Thanks all for your help in advance.
    Kiran
    Edited by: Kiran Mehendale on May 16, 2008 4:40 PM

    0FC_CI_01 FICA Cleared Items for Interval
    -- This DataSource feeds InfoCube 0PSCD_C01
    and
    0FC_OP_01 FI-CA Open Items at Key Date
    -- This DataSource feeds InfoCube 0PSCD_C02
    http://help.sap.com/saphelp_nw70/helpdata/EN/41/3d30113dfe6410e10000000a114b54/frameset.htm
    From this link you will be able to check out the data sources as well as any information on the infocubes.
    hope this helps.
    Matt
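    As a quick sanity check after running FPBW / FPCIBW, you can simply count the rows in the extraction tables mentioned in the question before starting the InfoPackage. A minimal sketch, assuming the standard table names given above:
    -- Rows written by FPBW (open items at key date)
    SELECT COUNT(*) FROM DFKKOPBW;
    -- Rows written by FPCIBW (cleared items for interval)
    SELECT COUNT(*) FROM DFKKCIBW;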

  • Questions regarding aggregates on cubes

    Can someone please answer the following questions.
    1. How do I check whether someone is rebuilding aggregates on a cube?
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    3. What does it mean when someone switches off an aggregate, basically what is the difference (conceptually/time consumption)between:
                            A. activating an aggregate?
                            B. switching off/on an aggregate?
                            C. rebuilding an aggregate?
    4. When a user complains that a query is running slow, do we build an aggregate based on the chars in rows & free chars in that query, OR is there anything else we need to include?
    5. Does database statistics in the 'MANAGE' tab of a cube only show statistics, or does it do anything to improve the load/query performance on the cube?
    Regards,
    Srinivas.

    1. How do I check whether someone is rebuilding aggregates on a cube?
    If your aggregate status is in red and you are filling up the aggregate - it is an initial fill of the aggregate and filling up would mean loading the data from the cube into the aggregate in full.
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    Rebuilding of an aggregate is to reload the data into the aggregate from the cube once again.
    3. What does it mean when someone switches off an aggregate, basically what is the difference (conceptually/time consumption)between:
    A. activating an aggregate?
    This means recreating the data structures for the aggregate, i.e. dropping the data and reloading it.
    B. switching off/on an aggregate?
    Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; it is done based on the requests that have not yet been rolled up into the aggregate.
    C. rebuilding an aggregate?
    Reloading data into the aggregate
    4. When a user complains that a query is running slow, do we build an aggregate based on the chars in rows & free chars in that query, OR is there anything else we need to include?
    Run the query in RSRT and do an SQl view of the query and check the characteristics that are used in the query and then include the same into your aggregate.
    5. Does database statistics in the 'MANAGE' tab of a cube only show statistics, or does it do anything to improve the load/query performance on the cube?
    Stats being updated will improve the execution plans on the database. Making sure that stats are up to date leads to better execution plans and hence possibly better performance, but it cannot be taken for granted that refreshing stats will improve query performance.
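    If you do need to refresh statistics manually on the database side, it ultimately comes down to a DBMS_STATS call on the cube's fact table. A rough sketch, assuming an Oracle database, a hypothetical cube ZSALES and schema owner SAPR3 (normally you would just use the statistics option in the Manage screen or a process chain step):
    -- Gather fresh optimizer statistics on the F fact table of cube ZSALES
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SAPR3',
        tabname          => '/BIC/FZSALES',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);   -- refresh index statistics as well
    END;
    /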

  • Memory Question for G4 Cube

    In accordance with recommendations on a previous question here, I have ordered OS 10.4.6 from Apple to install on my Cube. The Apple rep said it would work with the 384 MB of RAM I now have in the computer. I think I will add more memory though, and the Apple rep advised I visit <www.crucial.com> to get it, as Apple no longer stocks memory for the Cube. The spec for memory in the Cube manual says to use "PC-100" synchronous DRAM (SDRAM). The crucial.com site recommended a 512MB module that it says is "PC-133" and implies it will work, guaranteed. Will it?
    It will be 2 weeks at least before Apple will ship the OS 10.4.6 package as it is on backorder. I know I will have some questions on installation when I get it.
    JTB

    PC133 RAM is usually less expensive than PC100. The cube has 3 slots, and the max you can install is 3 x 512 megs. If all three slots are already filled, you will have to remove one of the modules and replace it with the 512 you plan to buy. I would personally stick with PC100, particularly if the modules that will remain are PC100.
    I usually buy my RAM from Ramdirect. Their prices are competitive, and they have always stood behind what they sell - there is a lifetime guarantee. I will recommend them without reservation.
    http://www.ramdirect.com/vcom/index.php?cPath=1723
    (I have no relationship with the company, other than as a satisfied customer.)
    Installing the RAM is dead easy, but you might want to download a service manual.
    http://cubeowner.com/manuals/cubesvc_man02.pdf

  • Inventory Cube Loading - Questions....

    This is the process I intend to follow to load InfoCube 0IC_C03 and the questions therein. I have the "how to handle inventory scenarios" document, so please don't suggest I read that.
    1A) Delete Set-up tables for application 03
    1B) Lock Users
    1C) Flush LBWQ and delete entries in RSA7 related to application 03 (optional - only if needed)
    2A) Fill set up tables for 2LIS_03_BX. Do Full load into BW Inventory Cube with "generate initial status" in InfoPackage.
    2B) Compress request with a marker update
    3A) Fill set up table for 2LIS_03_BF with posting date 01/01/2006 to 12/31/2007 (Historical) and load to Inventory Cube with a full update
          QUESTION1: Does this need a marker update?
          QUESTION2: Is this a good strategy  - do movement documents that old get updated once posted?
          QUESTION3: Does the posting dates restriction depend on on how far back in history we want to go and look at stock values?
    3B) Fill set up table for 2LIS_03_BF with posting date 01/01/2008 to 9/9/9999 (Current) and load to Inventory Cube with a delta init with data transfer.
    3C) Compress load in 3B without a marker update
    4A) Fill set up table for 2LIS_03_UM  and load to Inventory Cube with a delta init
    4B) Compress load in 4A without a marker update
          QUESTION4: How should we select the posting date criteria? Do I need to load from 01/01/2006 to 9/9/9999 since that's the range used for BF?
    5) Start V3 update jobs via Job Control
    6) Initiate subsequent delta loads from BF and UM and compress with marker update
    QUESTION 5: Is the sequence of loading BX first, then BF and UM fixed? or can I do BF and then BX, UM
    QUESTION 6: Any tips on minimizing downtime in this particular scenario? Please don't suggest generic steps. If you can suggest something specific to this situation, that'd be great.
    I hope you can help with the 6 questions I asked above.
    Regards,
    Anita S.

    Hi Anita,
    Please find my answers below. I have worked enough with this scenario and hence feel that these would be worth considering for your scenario.
    3A) Fill set up table for 2LIS_03_BF with posting date 01/01/2006 to 12/31/2007 (Historical) and load to Inventory Cube with a full update
    QUESTION1: Does this need a marker update?
    In this step we don't need a marker update while compressing.
    QUESTION2: Is this a good strategy - do movement documents that old get updated once posted?
    I am not able to get the question quite clearly.
    QUESTION3: Does the posting dates restriction depend on on how far back in history we want to go and look at stock values?
    Yes. We need to start from the latest data and then go as far back as we want to see the stock values. This holds true when we are using non-cumulative key figures.
    4B) Compress load in 4A without a marker update
    QUESTION4: How should we select the posting date criteria? Do I need to load from 01/01/2006 to 9/9/9999 since that's the range used for BF?
    No need to provide any selection criteria for UM while loading to BW, as this would fetch the same data filled in the setup of revaluations. Unless you are looking for only part of the history, you can fill the setup for the company code list and bring the whole data set to BW with a single full load, as the data won't be as huge as for BF.
    6) Initiate subsequent delta loads from BF and UM and compress with marker update
    QUESTION 5: Is the sequence of loading BX first, then BF and UM fixed? or can I do BF and then BX, UM
    This is fixed in terms of compression with marker updates.
    QUESTION 6: Any tips on minimizing downtime in this particular scenario? Please don't suggest generic steps. If you can suggest something specific to this situation, that'd be great.
    Yes. Most of the time-consuming activity is filling the BF history; it is comparatively negligible for BX and UM.
    Either try multiple selective BF setup runs based on posting date, depending on the available background processes, or fill the setup for the open months during downtime and the rest of the history once you unlock the system for postings, the reason being that we don't expect any postings for the history.
    Feel free to ask any further questions about this.
    Naveen.A

  • Basic Questions on CUBE & DSO size

    Hi Experts
    I have a few basic questions with regard to cubes and DSOs.
    I have a cube & ODS live with historical data from 2000 to date.
    1) How can I check the number of data records available in the cube (2000 to date)?
    2) How can I check the number of data records available in the ODS (2000 to date)?
    AND
    3) When I check the data load requests in the cube (context menu Manage), in the Requests tab I can see
    Request ID, Available for Reporting... Transferred Records & Added Records.
    What is the difference between Transferred Records & Added Records in a request?
    Thanks in advance

    Hi,
    1 & 2. To check ODS/cube data, go to SE11/SE16 and give the name of the ODS table as /BIC/A(ODS name)
    and of the cube fact tables as /BI0/F(cube name) (before compression) and /BI0/E(cube name) (after compression).
    3. Transferred records --> the records loaded from the source.
    Added records --> the records actually added to the cube/ODS. This depends on the transfer/update rules of the cube and ODS and also on the key fields of the ODS, so the number of added records may be smaller or larger than the number of transferred records.
    For your information, the program SAP_INFOCUBE_DESIGNS displays how many rows each dimension table contains.
    Hope you understood ......
    Edited by: Ravi kanth on May 19, 2009 5:25 PM
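    Outside the GUI, the same counts can also be read straight from the underlying database tables. A minimal sketch, assuming a hypothetical custom cube ZSALES and DSO ZSALODS in the /BIC/ namespace (business content objects live under /BI0/ instead):
    -- Active data table of the ODS/DSO (standard active table ends in 00)
    SELECT COUNT(*) FROM "/BIC/AZSALODS00";
    -- InfoCube fact tables: F table (uncompressed requests) and E table (compressed)
    SELECT COUNT(*) FROM "/BIC/FZSALES";
    SELECT COUNT(*) FROM "/BIC/EZSALES";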

  • Oracle OLAP cube build question

    Hello,
    I am trying to build a reasonably large cube (around 100 million rows from the underlying relational fact table). I am using Oracle 10g Release 2. The cube has 7 dimensions, the largest of which is TIME (6 years of data with the lowest level being day). The cube build never finishes.
    Apparently it collapses while doing "Auto Solve". I'm assuming this means calculating the aggregations for the upper levels of the hierarchy (although this is not mentioned in any of the documentation I have).
    I have two questions related to this:
    1. Is there a way to keep these aggregations from being performed at cube build time on dimensions with a value-based hierarchy? I already have the one level-based dimension (TIME) unchecked in the "Summarize To" tab in AW Manager.
    2. Are there any other tips that might help me get this cube built?
    Here is the log from the olapsys.xml_load_log table:
    RECORD_ID LOG_DATE AW XML_MESSAGE
    1. 09-MAR-06 SYS.AWXML 08:18:51 Started Build(Refresh) of APSHELL Analytic Workspace.
    2. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Attached AW APSHELL in RW Mode.
    3. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Started Loading Dimensions.
    4. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members.
    5. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    6. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ACCOUNT.DIMENSION. Added: 0. No Longer Present: 0.
    7. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    8. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for CATEGORY.DIMENSION. Added: 0. No Longer Present: 0.
    9. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for DATASRC.DIMENSION (3 out of 9 Dimensions).
    10. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for DATASRC.DIMENSION. Added: 0. No Longer Present: 0.
    11. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ENTITY.DIMENSION (4 out of 9 Dimensions).
    12. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ENTITY.DIMENSION. Added: 0. No Longer Present: 0.
    13. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    14. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INPT_CURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    15. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INTCO.DIMENSION (6 out of 9 Dimensions).
    16. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INTCO.DIMENSION. Added: 0. No Longer Present: 0.
    17. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RATE.DIMENSION (7 out of 9 Dimensions).
    18. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RATE.DIMENSION. Added: 0. No Longer Present: 0.
    19. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    20. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RPTCURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    21. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for TIME.DIMENSION (9 out of 9 Dimensions).
    22. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Members for TIME.DIMENSION. Added: 0. No Longer Present: 0.
    23. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Dimension Members.
    24. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies.
    25. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    26. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Hierarchies for ACCOUNT.DIMENSION. 1 hierarchy(s) ACCOUNT_HIERARCHY Processed.
    27. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    28. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for CATEGORY.DIMENSION. 1 hierarchy(s) CATEGORY_HIERARCHY Processed.
    29. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for DATASRC.DIMENSION (3 out of 9 Dimensions).
    30. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for DATASRC.DIMENSION. 1 hierarchy(s) DATASRC_HIER Processed.
    31. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for ENTITY.DIMENSION (4 out of 9 Dimensions).
    32. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for ENTITY.DIMENSION. 2 hierarchy(s) ENTITY_HIERARCHY1, ENTITY_HIERARCHY2 Processed.
    34. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INPT_CURRENCY.DIMENSION. No hierarchy(s) Processed.
    36. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INTCO.DIMENSION. 1 hierarchy(s) INTCO_HIERARCHY Processed.
    37. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RATE.DIMENSION (7 out of 9 Dimensions).
    38. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RATE.DIMENSION. No hierarchy(s) Processed.
    39. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    40. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RPTCURRENCY.DIMENSION. No hierarchy(s) Processed.
    41. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for TIME.DIMENSION (9 out of 9 Dimensions).
    42. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for TIME.DIMENSION. 2 hierarchy(s) CALENDAR, FISCAL_CALENDAR Processed.
    43. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies.
    44. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes.
    45. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    46. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ACCOUNT.DIMENSION. 6 attribute(s) ACCTYPE, CALC, FORMAT, LONG_DESCRIPTION, RATETYPE, SCALING Processed.
    47. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    48. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for CATEGORY.DIMENSION. 2 attribute(s) CALC, LONG_DESCRIPTION Processed.
    49. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for DATASRC.DIMENSION (3 out of 9 Dimensions).
    50. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for DATASRC.DIMENSION. 3 attribute(s) CURRENCY, INTCO, LONG_DESCRIPTION Processed.
    51. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ENTITY.DIMENSION (4 out of 9 Dimensions).
    52. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ENTITY.DIMENSION. 3 attribute(s) CALC, CURRENCY, LONG_DESCRIPTION Processed.
    53. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    54. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INPT_CURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    55. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INTCO.DIMENSION (6 out of 9 Dimensions).
    56. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INTCO.DIMENSION. 2 attribute(s) ENTITY, LONG_DESCRIPTION Processed.
    57. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for RATE.DIMENSION (7 out of 9 Dimensions).
    58. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RATE.DIMENSION. 1 attribute(s) LONG_DESCRIPTION Processed.
    59. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    60. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RPTCURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    61. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for TIME.DIMENSION (9 out of 9 Dimensions).
    62. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes for TIME.DIMENSION. 3 attribute(s) END_DATE, LONG_DESCRIPTION, TIME_SPAN Processed.
    63. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes.
    64. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Dimensions.
    65. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Started Updating Partitions.
    66. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Updating Partitions.
    67. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Loading Measures.
    68. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE.
    69. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Finished Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE. Processed 100000001 Records. Rejected 0 Records.
    70. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Started Auto Solve for Measures: SIGNEDDATA from Cube FINANCE.CUBE.

    Hi, I've taken a few minutes to do a quick analysis. I just saw in your post that this isn't "real data", but some type of sample. Here is what I'm seeing. First off, this is the strangest dataset I've ever seen. With the exception of TIME, DATASOURCE, and RPTCURRENCY, every single other dimension is nearly 100% dense. Quite truthfully, in a cube with this many dimensions, I have never seen data be 100% dense like this (usually with this many dimensions it's more around 0.01% dense at most, often even lower than that). Is it possible that the way you generated the test data caused this to happen?
    If so, I would strongly encourage you to go back to your "real" data and run the same queries and post results. I think that "real" data will produce a much different profile than what we're seeing here.
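    For reference, this kind of density check can be run directly against the relational fact table. A rough sketch, assuming a hypothetical fact table FINANCE_FACT with one key column per dimension; compare the row count with the distinct counts per key:
    -- Rough density profile of the source fact table (hypothetical names).
    -- If fact_rows approaches the product of the distinct key counts,
    -- the data is nearly 100% dense; real-world data is usually far sparser.
    SELECT COUNT(*)                    AS fact_rows,
           COUNT(DISTINCT account_id)  AS accounts,
           COUNT(DISTINCT entity_id)   AS entities,
           COUNT(DISTINCT intco_id)    AS intcos,
           COUNT(DISTINCT category_id) AS categories,
           COUNT(DISTINCT time_id)     AS time_members
    FROM   finance_fact;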
    If you really do want to try and aggregate this dataset, I'd do the following:
    1. Drop any dimension that doesn't add analytic value
    Report currency is an obvious choice for this - if every record has exactly the same value, then it adds no additional information (but increases the size of the data)
    Also, data source falls into the same category. However, I'd add one more question / comment with data source - even if all 3 values DID show up in the data, does knowing the data source provide any analytical capabilities? I.e. would a business person make a different decision based on whether the data is coming from system A vs. system B vs. system C?
    2. Make sure all remaining dimensions except TIME are DENSE, not sparse. I'd probably define the cube with this order:
    Account...........dense
    Entity..............dense
    IntCo...............dense
    Category.........dense
    Time...............sparse
    3. Since time is level based (and sparse), I'd set it to only aggregate at the day and month levels (i.e. let quarter and year be calculated on the fly)
    4. Are there really no "levels" in the dimensions like Entity? Usually companies define those with very rigid hierarchies (assuming this means legal entity)
    Good luck with loading this cube. Please let us know how "real" this data is. I suspect with that many dimensions that the "real" data will be VERY sparse, not dense like this sample is, in which case some of the sparsity handling functionality would make a huge benefit for you. As is, with the data being nearly 100% dense, turning on sparsity for any dimension other than TIME probably kills your performance.
    Let us know what you think!
    Thanks,
    Scott

  • Question related to building cube

    Hi, we are at the stage of developing the cube based on our understanding of the report requirements.
    We have 7 dimensions in our cube. For some reports the data comes from all the dimensions, while for some reports (expenses) the source data comes through only 5 of the dimensions.
    My questions are:
    1) Is it required to maintain a separate cube for expenses?
    2) If we keep everything in the same cube, after loading data through the 5 dimensions, how does the aggregation happen for the other 2 dimensions?

    Hi,
    1. My understanding is: you have got a cube with 7 dimensions, and some sort of reporting is already happening on it as of now.
    2. You have another requirement, i.e. expenses, where your report needs data from only 5 dimensions, whereas your cube has 7 dimensions in total.
    3. If this is the scenario, then you need not create a new cube to cater to the needs of your expense requirement. A cube works on combinations, which are unique.
    Let me explain with an example.
    You have got 5 dimensions, but your expense data has source values only for the Time, Accounts and Type dimensions. You can load the data across all dimensions by adding an 'unknown' member to the other 2 non-required dimensions, and pull out a report by selecting only the 3 dimensions.
    Time
    -jan
    -feb
    -march
    Accounts
    -manpowercost
    Type
    -Expense
    -xyz.
    Region
    -A
    -B
    service offering
    -L
    -M
    How does it consolidate?
    Let's take a few combinations from the above outline:
    jan, manpowercost, expense, unknown, unknown, 100
    jan, manpowercost, xyz, A, L, 200
    ... etc.
    These 2 values will never overlap, as they are altogether different combinations; in the same way the aggregations will not be affected or produce dubious data.
    Hope it helps, please revert for further clarity.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsutlancy.com/
