Cube partition question

Dear All,
I have tried to partition a cube in my dev system (I deleted the data first) on 0CALMONTH. When I try to activate the cube, it fails, telling me it cannot activate the cube because 0CALMONTH is included in other cubes which contain data.
I tried to roll back my change (remove the partition) and, guess what, I get the same errors: I cannot activate my cube anymore!
So, I have 2 questions:
1 - Is my understanding correct that a cube can be partitioned if it has no data, but must not be linked to other cubes?
2 - How can I roll back my change?
Any help highly appreciated
Thanks
Ioan

Not really. I get message R7757 (not all objects could be activated) followed by a long list of Oracle warnings and errors. The errors read '... table XXXX cannot be activated ...' (I will not include the full log here, it is too long).
I already searched OSS and didn't find anything.
Thanks
Ioan

Similar Messages

  • Data Warehouse Partitioning question

    Hi All,
    I have a data warehousing partitioning question - I am defining partitions on a fact table in OWB and have range partitioning on a contract number field. Because I am still on 10gR2, I have to copy the contract number field into the fact table from its dimension in order to partition on it.
    The tables look like
    Contract_Dim (dimension_key, contract_no, ...)
    Contract_Fact(Contract_Dim, measure1,measure2, contract_no)
    So my question:
    When querying via reporting tools, my users are specifying contract_no conditions on the dimension object and joining into the contract_fact via the dimension_key->Contract_dim fields.
    I am assuming that the queries will not use partition pruning unless I put the contract_fact.contract_no into the query somehow. Is this true?
    If so, how can I 'hide' that additional step from my end-users? I want them to specify contract numbers on the dimension and have the query optimizer be smart enough to use partition pruning when running the query.
    I hope this makes sense.
    Thanks,
    Mike
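
    For illustration, a minimal Oracle sketch of the setup (table and column names from the post above; the partition bounds are hypothetical). With the filter directly on the fact table's partition key the optimizer can prune; with the filter only on the dimension and a join on dimension_key, 10gR2 generally cannot:

        -- Hypothetical range partitioning on the denormalized contract_no
        CREATE TABLE contract_fact (
          contract_dim NUMBER NOT NULL,  -- FK to contract_dim.dimension_key
          measure1     NUMBER,
          measure2     NUMBER,
          contract_no  NUMBER NOT NULL   -- copied in from the dimension
        )
        PARTITION BY RANGE (contract_no) (
          PARTITION p1000 VALUES LESS THAN (1000),
          PARTITION p2000 VALUES LESS THAN (2000),
          PARTITION pmax  VALUES LESS THAN (MAXVALUE)
        );

        -- Prunes: the filter is on the partition key of the fact table
        SELECT SUM(f.measure1)
        FROM   contract_fact f
        WHERE  f.contract_no = 1500;

        -- Generally does not prune on 10gR2: the filter is on the dimension
        -- and the join column (dimension_key) is not the partition key
        SELECT SUM(f.measure1)
        FROM   contract_dim d, contract_fact f
        WHERE  d.dimension_key = f.contract_dim
        AND    d.contract_no   = 1500;

    One possible way to hide the extra step from end users is a view that joins the two tables and exposes the fact table's contract_no, so that user filters land on the partition key.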

    I am about to start a partitioning program on my dimension / fact tables and was hoping to see some responses to this thread.
    I suggest that you partition the tables on the dimension key, not on an attribute. You could partition both the fact and dimension tables by the same rule. Hash partitions seem to make sense here, as opposed to range or list partitions.
    tck

  • LPAR - LOGICAL PARTITION QUESTION -

    Hello SDN Experts.
    LPAR (LOGICAL PARTITION QUESTION)
    Our current Production Environment is running in Distributed Installation on
    IBM System P5 570 servers, AIX ver 5.2; each node runs two applications: SAP ERP 2005 SR1 (ABAP + Java) and CSS (Customer Service System).
    Node One
    • SAP Application (Central Instance, Central Services)
    • Oracle 9i Instance for CSS Application.
    Node Two.
    • Oracle 10G Instance for SAP Application
    • CSS Application.
    To improve performance we are planning to create a new LPAR for SAP.
    According to the IBM HW partner, an LPAR is logically isolated with its own HW/SW resources (CPU, memory, disk, IP address, hostname, mount points)...
    Question:
    I have two possible solutions for copying the SAP instances (app + DB) to the new LPAR. Can I apply SCENARIO 2, which in my opinion is easier than SCENARIO 1?
    SCENARIO 1.
    In order to migrate the application and database instances to the new LPAR, do I need to follow the procedure explained in the guide:
    (*) System Copy for SAP Systems Based on SAP NetWeaver 2004s SR1 ABAP+Java Document version: 1.1 ‒ 08/18/2006
    SCENARIO 2.
    After creating all required file systems in AIX, copy the data of the application and database instances to their respective LPARs and change the IP addresses and hostnames in the parameter files according to the following SAP Notes:
    Note 8307 - Changing host name on R3 host
    Note 403708 - Changing an IP address
    Which scenario does SAP recommend in this case?
    Thanks for your comments.

    If your system is a combined ABAP + Java instance you can't change the hostname manually. The hostname is stored not only in the places listed in that note but in many more, partly in .properties files on the filesystem and partly in the database.
    Doing it manually may work, but since the process is not documented anywhere, and since it depends on the applications running on top of the J2EE instance, it is not supported.
    For ABAP + Java instances you must use the "sapinst way" to get support in case of problems.
    See note 757692 - Changing the hostname for J2EE Engine 6.40/7.0 installation
    Markus

  • Get cube partition details

    I want to get cube partition details by executing a SQL query or MDX. Can someone answer this?

    What we do is keep a control table that stores the details of the dimensions and measure-group partitions. We then use an SSIS package with a For Each loop that iterates through the table records and processes the dimensions/measure groups, using the Analysis Services Processing Task. The command is an XMLA script generated dynamically from the partition/dimension ID. At the start of a new period we also have a step that adds a new partition to the cube using the Analysis Services Execute DDL Task and adds its details to the control table.
    Visakh
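
    On the original question: if the server is SSAS 2008 or later, partition metadata can also be read with a DMV query (SQL-like syntax executed in an MDX window). A minimal sketch; the database, cube, measure group, and partition names are placeholders, and this particular rowset returns per-partition statistics rather than the full partition definition:

        SELECT *
        FROM SystemRestrictSchema($system.discover_partition_stat,
             DATABASE_NAME      = 'MyOlapDatabase',
             CUBE_NAME          = 'MyCube',
             MEASURE_GROUP_NAME = 'MyMeasureGroup',
             PARTITION_NAME     = 'MyPartition')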

  • View Cube Partitioned Data...

    Hi
    I partitioned my cube from Jan 2010 to Dec 2010. Now I want to see its data individually for each partition. Is that possible?
    Thanks...

    Check the links in this post:
    Display of data after cube partition
    Bottom line: partitions are logical features of the cube, not physical objects; they are not new tables.
    M.
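
    If you really need to look at the rows partition by partition, that is only possible at the database level rather than in BW itself. A hypothetical Oracle sketch (the fact table and partition names are generated by BW and differ per system):

        -- List the partitions of the cube's fact table
        SELECT partition_name, high_value, num_rows
        FROM   user_tab_partitions
        WHERE  table_name = '/BIC/EMYCUBE';

        -- Read one partition directly
        SELECT *
        FROM   "/BIC/EMYCUBE" PARTITION (MYCUBE_P201001);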

  • Cube partitioning

    Hi Gurus,
    I have more than 2 lakh (200,000) records in my InfoCube. Now, to optimize performance, I read about cube partitioning. Can anyone explain how to do cube partitioning, the steps involved, and on what basis we do cube partitioning?
    Thanks & Regards,
    Lakshmi Rajkumar.

    Hi Lakshmi,
    You can only partition on the 0FISCPER or 0CALMONTH InfoObjects. For partitioning, you need at least one of these InfoObjects in your InfoProvider.
    For more info. please check the below help file:
    http://help.sap.com/saphelp_nw70/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/frameset.htm
    Hope it helps.
    Regards,
    Raghu
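
    For background on what this setting does underneath: on Oracle, BW generates the cube's E fact table as a range-partitioned table on the SID column of the chosen time characteristic, with one partition per period plus catch-all partitions. A simplified, hypothetical sketch of the generated DDL (real column names and boundary values are produced by BW and differ per system):

        CREATE TABLE "/BIC/EMYCUBE" (
          KEY_MYCUBET   NUMBER(10) NOT NULL,   -- time dimension key
          SID_0CALMONTH NUMBER(10) NOT NULL,   -- partitioning column added by BW
          AMOUNT        NUMBER(17,2)
        )
        PARTITION BY RANGE (SID_0CALMONTH) (
          PARTITION p201001 VALUES LESS THAN (201002),
          PARTITION p201002 VALUES LESS THAN (201003),
          PARTITION pmax    VALUES LESS THAN (MAXVALUE)
        );

    Queries that restrict on the partitioning characteristic can then be served from only the matching partitions.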

  • I see 'enq: JI - contention' when building multiple cubes/partitions

    Version 11.2.0.3
    I can successfully build multiple partitions of a cube simultaneously by supplying the degree of parallelism that I want. I can also build multiple cubes and multiple partitions of multiple cubes by submitting separate jobs (one per cube) with parallelism set in the job (for number of partitions per job/cube).
    My goal was to refresh 2 cubes simultaneously, 2 partitions in parallel each, so that 4 partitions total were refreshing simultaneously. There were sufficient hardware resources (memory and processes) to do this. I tried to submit 2 jobs, one for each cube, with parallel 2 on each.
    What happens is that 3 partitions start loading, not 4. The smaller of the 2 cubes loads 2 partitions at a time, but the larger of the cubes starts loading only 1 partition and the other partition process waits with JI - contention.
    I understand that JI contention relates to one materialized view refresh blocking another refresh of the same MV. Yet simultaneous refresh of different partitions is supported for cube MVs.
    Because I see the large cube having the problem but not the smaller one, I wonder if adding more hash partitions to the AW$ (analytic workspace) table would allow more concurrent update processes. We have a high enough setting for processes and job_queue_processes, and enough available threads, etc.
    Will more hash subpartitions on the AW$ table allow for more concurrency for cube refreshes?

    It looks like the JI contention was coming from having multiple jobs submitted to update the SAME cube (albeit different partitions). Multiple jobs for different cubes (at most one job per cube) seem to avoid this issue. I thought there was only one job per cube, but that was not true.
    Still, if someone has some insight into creating more AW hash subpartitions, I'd like to hear it. I know how to do it, but I am not sure what the impact will be on load or solve times. I have read a few sources online indicating that it is a good idea to have as many subpartitions as logical cube partitions, and that it is a good idea to set the subpartition number to a power of two to ensure good balance.
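
    For anyone checking the same thing, the current hash subpartition layout of an AW$ table can be read from the Oracle dictionary before deciding to change it; a minimal sketch (the AW name is a placeholder):

        -- How many hash subpartitions does each partition of the AW table have?
        SELECT partition_name, COUNT(*) AS hash_subpartitions
        FROM   user_tab_subpartitions
        WHERE  table_name = 'AW$MYAW'
        GROUP  BY partition_name;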

  • Allowing parallel processing of cube partitions using OWB mapping

    Hi All,
    I am using an OWB mapping to load a MOLAP cube partitioned on the TIME dimension. I configured the OWB mapping by checking the 'Allow parallel processing' option with the number of parallel jobs set to 2, and then deployed the mapping. The data loaded by the mapping is spread across multiple partitions.
    The server has 4 CPU's and 6 GB RAM.
    But when I kick off the mapping, I can see only one partition being processed at a time in the XML_LOAD_LOG.
    If I process the same cube in AWM using parallel processing, I can see that multiple partitions are processed.
    Could you please suggest whether I missed any setting on the OWB side?
    Thanks
    Chakri

    Hi,
    I assigned the OLAP_DBA role to the user under which the OWB mapping runs, and the job started off.
    But it soon failed with the error below:
    ***Error Occured in __XML_MAIN_LOADER: Failed to Build(Refresh) XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace. In __XML_VAL_MEASMAPS: In __XML_VAL_MEASMAPS_VAR: Error Validating Measure Mappings. In __XML_FND_PRT_TO_LOAD: In __XML_SET_LOAD_STATUS: In ___XML_LOAD_TEMPPRG:
    Here is the log :
    Load ID     Record ID     AW     Date     Actual Time     Message Time     Message
    3973     13     SYS.AWXML     12/1/2008 8:26     8:12:51     8:26:51     ***Error Occured in __XML_MAIN_LOADER: Failed to Build(Refresh) XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace. In __XML_VAL_MEASMAPS: In __XML_VAL_MEASMAPS_VAR: Error Validating Measure Mappings. In __XML_FND_PRT_TO_LOAD: In __XML_SET_LOAD_STATUS: In ___XML_LOAD_TEMPPRG:
    3973     12     XPRO_OLAP_NON_AGG.OLAP_NON_AGG     12/1/2008 8:19     8:12:57     8:19:57     Attached AW XPRO_OLAP_NON_AGG.OLAP_NON_AGG in RW Mode.
    3973     11     SYS.AWXML     12/1/2008 8:19     8:12:56     8:19:56     Started Build(Refresh) of XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace.
    3973     1     XPRO_OLAP_NON_AGG.OLAP_NON_AGG     12/1/2008 8:19     8:12:55     8:19:55     Job# AWXML$_3973 to Build(Refresh) Analytic Workspace XPRO_OLAP_NON_AGG.OLAP_NON_AGG Submitted to the Queue.
    I am using AWM (10.2.0.3A with OLAP Patch A) and OWB (10.2.0.3).
    Can anyone suggest why the job failed this time?
    Regards
    Chakri
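
    (Aside: the role assignment mentioned at the top of this reply is a one-line grant in SQL; the user name is a placeholder.)

        GRANT OLAP_DBA TO owb_user;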

  • Oracle OLAP cube build question

    Hello,
    I am trying to build a reasonably large cube (around 100 million rows from the underlying relational fact table). I am using Oracle 10g Release 2. The cube has 7 dimensions, the largest of which is TIME (6 years of data with the lowest level being day). The cube build never finishes.
    Apparently it collapses while doing "Auto Solve". I'm assuming this means calculating the aggregations for the upper levels of the hierarchy (although this is not mentioned in any of the documentation I have).
    I have two questions related to this:
    1. Is there a way to keep these aggregations from being performed at cube build time on dimensions with a value-based hierarchy? I already have the one dimension designated as level-based (the TIME dimension) unchecked in the "Summarize To" tab in AW manager.
    2. Are there any other tips that might help me get this cube built?
    Here is the log from the olapsys.xml_load_log table:
    RECORD_ID LOG_DATE AW XML_MESSAGE
    1. 09-MAR-06 SYS.AWXML 08:18:51 Started Build(Refresh) of APSHELL Analytic Workspace.
    2. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Attached AW APSHELL in RW Mode.
    3. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Started Loading Dimensions.
    4. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members.
    5. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    6. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ACCOUNT.DIMENSION. Added: 0. No Longer Present: 0.
    7. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    8. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for CATEGORY.DIMENSION. Added: 0. No Longer Present: 0.
    9. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for DATASRC.DIMENSION (3 out of 9 Dimensions).
    10. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for DATASRC.DIMENSION. Added: 0. No Longer Present: 0.
    11. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ENTITY.DIMENSION (4 out of 9 Dimensions).
    12. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ENTITY.DIMENSION. Added: 0. No Longer Present: 0.
    13. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    14. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INPT_CURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    15. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INTCO.DIMENSION (6 out of 9 Dimensions).
    16. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INTCO.DIMENSION. Added: 0. No Longer Present: 0.
    17. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RATE.DIMENSION (7 out of 9 Dimensions).
    18. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RATE.DIMENSION. Added: 0. No Longer Present: 0.
    19. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    20. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RPTCURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    21. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for TIME.DIMENSION (9 out of 9 Dimensions).
    22. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Members for TIME.DIMENSION. Added: 0. No Longer Present: 0.
    23. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Dimension Members.
    24. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies.
    25. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    26. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Hierarchies for ACCOUNT.DIMENSION. 1 hierarchy(s) ACCOUNT_HIERARCHY Processed.
    27. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    28. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for CATEGORY.DIMENSION. 1 hierarchy(s) CATEGORY_HIERARCHY Processed.
    29. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for DATASRC.DIMENSION (3 out of 9 Dimensions).
    30. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for DATASRC.DIMENSION. 1 hierarchy(s) DATASRC_HIER Processed.
    31. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for ENTITY.DIMENSION (4 out of 9 Dimensions).
    32. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for ENTITY.DIMENSION. 2 hierarchy(s) ENTITY_HIERARCHY1, ENTITY_HIERARCHY2 Processed.
    34. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INPT_CURRENCY.DIMENSION. No hierarchy(s) Processed.
    36. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INTCO.DIMENSION. 1 hierarchy(s) INTCO_HIERARCHY Processed.
    37. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RATE.DIMENSION (7 out of 9 Dimensions).
    38. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RATE.DIMENSION. No hierarchy(s) Processed.
    39. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    40. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RPTCURRENCY.DIMENSION. No hierarchy(s) Processed.
    41. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for TIME.DIMENSION (9 out of 9 Dimensions).
    42. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for TIME.DIMENSION. 2 hierarchy(s) CALENDAR, FISCAL_CALENDAR Processed.
    43. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies.
    44. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes.
    45. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    46. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ACCOUNT.DIMENSION. 6 attribute(s) ACCTYPE, CALC, FORMAT, LONG_DESCRIPTION, RATETYPE, SCALING Processed.
    47. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    48. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for CATEGORY.DIMENSION. 2 attribute(s) CALC, LONG_DESCRIPTION Processed.
    49. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for DATASRC.DIMENSION (3 out of 9 Dimensions).
    50. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for DATASRC.DIMENSION. 3 attribute(s) CURRENCY, INTCO, LONG_DESCRIPTION Processed.
    51. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ENTITY.DIMENSION (4 out of 9 Dimensions).
    52. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ENTITY.DIMENSION. 3 attribute(s) CALC, CURRENCY, LONG_DESCRIPTION Processed.
    53. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    54. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INPT_CURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    55. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INTCO.DIMENSION (6 out of 9 Dimensions).
    56. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INTCO.DIMENSION. 2 attribute(s) ENTITY, LONG_DESCRIPTION Processed.
    57. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for RATE.DIMENSION (7 out of 9 Dimensions).
    58. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RATE.DIMENSION. 1 attribute(s) LONG_DESCRIPTION Processed.
    59. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    60. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RPTCURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    61. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for TIME.DIMENSION (9 out of 9 Dimensions).
    62. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes for TIME.DIMENSION. 3 attribute(s) END_DATE, LONG_DESCRIPTION, TIME_SPAN Processed.
    63. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes.
    64. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Dimensions.
    65. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Started Updating Partitions.
    66. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Updating Partitions.
    67. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Loading Measures.
    68. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE.
    69. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Finished Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE. Processed 100000001 Records. Rejected 0 Records.
    70. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Started Auto Solve for Measures: SIGNEDDATA from Cube FINANCE.CUBE.

    Hi, I've taken a few minutes to do a quick analysis. I just saw in your post that this isn't "real data", but some type of sample. Here is what I'm seeing. First off, this is the strangest dataset I've ever seen. With the exception of TIME, DATASOURCE, and RPTCURRENCY, every single other dimension is nearly 100% dense. Quite truthfully, in a cube with this many dimensions, I have never seen data be 100% dense like this (usually with this many dimensions it's more around 0.01% dense at most, usually even lower than that). Is it possible that the way you generated the test data caused this to happen?
    If so, I would strongly encourage you to go back to your "real" data and run the same queries and post results. I think that "real" data will produce a much different profile than what we're seeing here.
    If you really do want to try and aggregate this dataset, I'd do the following:
    1. Drop any dimension that doesn't add analytic value
    Report currency is an obvious choice for this - if every record has exactly the same value, then it adds no additional information (but increases the size of the data)
    Also, data source falls into the same category. However, I'd add one more question / comment with data source - even if all 3 values DID show up in the data, does knowing the data source provide any analytical capabilities? I.e. would a business person make a different decision based on whether the data is coming from system A vs. system B vs. system C?
    2. Make sure all remaining dimensions except TIME are DENSE, not sparse. I'd probably define the cube with this order:
    Account...........dense
    Entity..............dense
    IntCo...............dense
    Category.........dense
    Time...............sparse
    3. Since time is level based (and sparse), I'd set it to only aggregate at the day and month levels (i.e. let quarter and year be calculated on the fly)
    4. Are there really no "levels" in the dimensions like Entity? Usually companies define those with very rigid hierarchies (assuming this means legal entity)
    Good luck with loading this cube. Please let us know how "real" this data is. I suspect with that many dimensions that the "real" data will be VERY sparse, not dense like this sample is, in which case some of the sparsity handling functionality would make a huge benefit for you. As is, with the data being nearly 100% dense, turning on sparsity for any dimension other than TIME probably kills your performance.
    Let us know what you think!
    Thanks,
    Scott
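
    For readers who do not have the earlier posts from this thread, the kind of density profile Scott describes can be approximated directly against the relational fact table; a minimal sketch with hypothetical table and column names (density here is the row count divided by the product of the dimension cardinalities):

        SELECT COUNT(*)                   AS fact_rows,
               COUNT(DISTINCT account_id) AS accounts,
               COUNT(DISTINCT entity_id)  AS entities,
               COUNT(DISTINCT intco_id)   AS intcos,
               COUNT(*) / ( COUNT(DISTINCT account_id)
                          * COUNT(DISTINCT entity_id)
                          * COUNT(DISTINCT intco_id) ) AS density
        FROM   finance_fact;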

  • Benefits of Cube partitioning

    I am looking into options for repartitioning a large cube (100M records in the fact table).
    The partitioning will be based on 0CALMONTH; there are 8 years of history in the cube, so I will end up with 96 partitions.
    I found several help documents, sdn posts and a helpful note (1008833) but still have some questions left.
    Can someone explain what the actual benefit is of having partitions?
    I can understand that it is quicker to search a smaller table than a bigger one, but there will also be some overhead in finding out which partition(s) should be accessed for a specific query.
    Or is the read access only quicker if calmonth is provided on the selection screen (or via a user exit)?
    Does SAP use more work processes so it can search several partitions at the same time? That would impact the number of processes available for other usage.
    Any help understanding the concept of partitioning better would be greatly appreciated.
    I think I have found all the relevant help documents on how to implement repartitioning, so there is no need to reply with links to those.
    Thanks in advance,
    Jan.
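
    On Jan's question of whether reads are only quicker when CALMONTH is supplied: whether a given query prunes can be checked in the execution plan. A minimal Oracle sketch (fact table and column names are hypothetical); a plan step of PARTITION RANGE SINGLE or ITERATOR indicates pruning, while PARTITION RANGE ALL means every partition is scanned. The partition-selection overhead itself is a small lookup at optimization time and is negligible next to scanning unneeded partitions:

        EXPLAIN PLAN FOR
          SELECT SUM(amount)
          FROM   "/BIC/EMYCUBE"
          WHERE  sid_0calmonth = 200805;  -- restriction on the partitioning column

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);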

    Hi,
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    http://www.dmreview.com/issues/20051001/1038109-1.html
    http://www.sap.com/platform/netweaver/pdf/BWP_AR_IDC_BI_Accelerator.pdf
    BI Performance http://www.xtivia.com/downloads/Xtivia_BIT_Performance%20Audit.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10564d5c-cf00-2a10-7b87-c94e38267742
    Check these links.

  • Cube Partition and Date

    Hi,
    I have a cube with data from 2000 until now. The volume of data from 2000-2007 is very small,
    but for 2008 through May 2009 we have a huge amount.
    So we decided to partition the cube on Fiscal Year/Period into 19 partitions: 12 for 2008, 5 for Jan 2009 through May 2009, 1 for 2007 and earlier, and 1 for June 2009 onwards.
    Now my question is: how do we specify the date range?
    Can anyone please tell me?
    Regards

    Hi AS,
    I suggest you partition until October/December 2009, so that you won't have to repartition again immediately (reducing administrative effort).
    For your partitioning request.
    Fiscal year/period - 001/2008 to 005/2009.
    Maximum number of partitions - 19 (12 months of 2008 + 5 months of 2009 + 2 catch-all partitions for values before and after the range).
    Check Features subtopic in this link for further details
    [Partitioning example|http://help.sap.com/saphelp_nw70/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm]
    Hope it helps,
    Best regards,
    Sunmit.

  • Partitions questions V 11.1.1.3

    I am in the process of doing an end-of-year conversion. I have a master cube that is loaded by (3) slave cubes, and I am doing a level 0 dump to mirror the production cube with the Q&A cubes.
    My question is as follows:
    If I load the master cube with the level 0 data, will I have a conflict with the data coming from the partitions?
    Please advise.

    Hi Siva,
    I think these are questions you should ask in the Infrastructure space, since the people there have much more knowledge of installations. Additionally, I would suggest discussing these questions with a consultant who will have a broader understanding of your requirements.
    Regarding question number 1, generally speaking version 11.1.2.3 is considered to be an improved and more stable version of 11.2.2.2.
    Regarding question number 3, you need to provide more details such as the number of users per tool, the number of concurrent users, the volume of data per tool, requirements for High Availability or Disaster Recovery, etc.
    Regards,
    Thanos

  • Recovery Partition Questions

    I'm trying to install Windows 7 via Boot Camp, but Disk Utility cannot partition the drive because it is fragmented.
    The error message says to reformat the drive.
    I'm OK with doing this, but I have a few questions and want some clarity:
    1) I'm fully backing up my computer with Time Machine.
    2) I have Lion, so by restarting and holding Command+R I'll access the Recovery Partition?
    3) HERE IS MY MAIN QUESTION: if I select "Reinstall OS X", will it reformat my drive? This is my main issue, so I want to make sure it actually reformats the drive into a non-fragmented drive.
    4) I do not have a bootable OS X disk, but because this is Lion it will download and reinstall OS X over the internet, correct (through my Apple ID login)?
    5) To restore my backup, I wait until the new OS X has finished installing, restart, then access the Recovery Partition again and select "Restore From Time Machine Backup", correct?
    or do I:
    1) Back up with time machine,
    2) restart and hold " command+r"
    3) erase drive with disk utility,
    4) exit disk utility
    5) Select "reinstall OS X" which will reinstall Lion (i have a pre mountain lion comp)
    6) select "Restore From Time Machine Backup"
    Thanks -Ian

    1) Back up with time machine,
    2) restart and hold " command+r"
    3) erase Macintosh HD partition with disk utility,
    4) exit disk utility
    5) select "Restore From Time Machine Backup", without reinstalling
    6) Choose your last backup date.
    If you select "reinstall OS X",  you will use the Setup Assistant (the "Migration Assistant") at the first boot on the new system.

  • Simple UEFI GPT Dual boot with windows 8 boot partition question.

    Hi everyone,
    I think it's obvious from the question that I'm a newbie here (and from the location of the post), but I have read (several times):
    https://wiki.archlinux.org/index.php/UEFI
    https://wiki.archlinux.org/index.php/UEFI_Bootloaders
    and the incredibly helpful:
    https://wiki.archlinux.org/index.php/Beginner%27s_Guide
    along with many forum posts. Unfortunately this:
    https://wiki.archlinux.org/index.php/Wi … _Dual_Boot
    appears out of date and so I need to ask you fine people my question.
    If I want to dual boot Arch with Windows 8, my question is about the boot partition. I have an existing Windows EFI boot partition. Should I mount this partition at my "/mnt/boot/efi" folder and then copy the files to it when setting up rEFInd (my chosen bootloader from the wiki page; comments/suggestions are welcome), or should I set up a separate boot partition for my Arch installation? I assume from reading about rEFInd that the former is how I should do it, as this seems to be how rEFInd would be able to "see" my Windows bootloader.
    The reason I am double-checking and asking here is that I know Windows can be a temperamental beast that is very prone to not booting, so I don't want to mess with the Windows boot partition unduly.
    Thanks in advance guys, looking forward to getting my arch working!

    $esp = EFI System Partition?
    Also, OK, gummiboot. I'm glad I can mount the ESP as /boot (that was my original thought, but I reread the tutorial and wasn't sure). Just double-checking: it is the ESP created by Windows 8 that I mount?
    In addition, as I am slightly new to this, is there any tutorial that tells me how to set up gummiboot? I've looked here:
    http://freedesktop.org/wiki/Software/gummiboot
    but don't see anything in the way of detailed instructions.
    from your post: https://bbs.archlinux.org/viewtopic.php?id=159061
    I'm gonna guess it's something like this (please let me know if this is right):
    mount $esp /mnt/boot
    # then, inside the chroot:
    pacman -S gummiboot
    gummiboot            # prints a message saying gummiboot is not configured
    gummiboot install
    Is it something like that? Can anyone point me towards a manual?

  • Mac OS X, User Folder and Case-sensitivity (plus a Partitioning Question)

    Hello everybody.
    Today, I'd like to start a new thread regarding the configuration of OS X and the formatting of the drive with a HFS+ (case-sensitive, journaled) file system.
    There is a problem that has been tormenting me for quite some time now, and so far I haven't been able to find a solution.
    Here is the issue. When I bought my MacBook Pro (early 2011), I swapped the main drive for a hybrid SSD, repurposing the original 750GB drive as a Time Machine backup disk. As a consequence I had to do a fresh install of OS X. When it came time to format, I opted for:
    Creating two partitions: one for the OS and one for the user folder (I have only one admin user). Back then I was coming from a Windows environment, and having two partitions seemed to be the best option for me (less chance of user-data corruption in case something goes wrong on the OS partition).
    Chose a HFS+ (case-sensitive, journaled) file system. It seemed to be the more complete alternative.
    Everything was absolutely fine for a couple of months; then I encountered the first issue: Adobe products don't work on a case-sensitive fs.
    I managed to get Photoshop to work eventually (manually correcting folder names) and didn't care too much about the rest.
    In the meantime over a year has passed and I have kept the mentioned configuration, updating to Lion and ML. Recently another couple of issues have appeared: AutoCAD presents the same problem as Adobe, and there is an issue with the keyboard backlight control (from the forums it seems ML is not able to store any information in System Preferences regarding the backlight).
    Despite the fact that a case-sensitive fs seems to be the future solution chosen by Apple, it is still premature to have it as the OS fs unless you are a hard-core developer who doesn't care about the tons of programs that would be missing from your Applications folder.
    Having decided to move back to a case-insensitive fs, I wish to clarify a couple of doubts before proceeding:
    (Not related to the fs) Is it good practice to keep the user folder on a separate volume? Does it generate any issues, based on your experience?
    Is it possible to have the OS on a case-insensitive fs and the user folder on a case-sensitive one? Does the OS have an issue with that?
    The second point is the most critical, as my data are now on a case-sensitive volume. They mostly consist of documents, images and music, which should migrate to a case-insensitive volume seamlessly; however, I'm not 100% sure about what happened during the last year (i.e. whether any non-unique names were generated).
    Furthermore, I wish I could keep a case-sensitive volume, as I plan to be dealing with a Linux environment soon. If that could be the user volume, that would be amazing.
    I'd also like to ask for personal opinions on the advantages of a case-sensitive fs.
    I understand I asked lots of questions in a single post. I hope, however, that this thread can become a base for collecting some of the quite scattered topics related to case-sensitivity on the web.
    Best Regards,
    Alexander

    alexanderxc wrote:
    Linc, in the previous post you said there might be issues having user data on a separate partition.
    Which kind of issues are you thinking of? Have you ever encountered such problems?
    The biggest risk is that poorly written software will assume your home directory is at /Users/you and will fail (or worse) if it doesn't like what it finds there.
    I would really appreciate it if you could be more specific; don't worry about being too technical.
    There are two ways you can go about it.
    1) Set everything up normally as if you only had one partition. Create your user on that one partition. Then copy all the real user data to the 2nd partition. Using your admin account, make sure that all the permissions on the user folder on the 2nd partition are the same as on the original partition. Then use System Preferences > Users & Groups > your account > right/control-click > Advanced > Home directory, and change it to the home directory on the 2nd partition. Log in to that new account and make sure everything works. Then delete the user directory at /Users.
    2) An even more robust, old-school option is to create an /etc/fstab file and have your 2nd partition mounted at /Users. Then everything will function normally and your user home directories will all be at /Users. /Users will, however, be on a different volume.
