Cube Partition and Date

Hi,
I have a cube which has data from 2000 till now. The data from 2000-2007 is very small,
but for 2008 through May 2009 we have a huge volume of data.
So we decided to partition the cube on Fiscal Year/Period into 19 partitions (12 for 2008, 5 for January through May 2009, 1 for 2007 and earlier, and 1 for June 2009 onwards).
Now my question is: how should we specify the value range for the partitions?
Can anyone please tell me?
Regards

Hi AS,
I suggest you partition up to October or December 2009, so that you won't have to repartition again immediately (this reduces administrative effort).
For your partitioning request as stated:
Fiscal year/period - 001/2008 to 005/2009.
Maximum number of partitions - 19.
Check the Features subtopic in this link for further details:
[Partitioning example|http://help.sap.com/saphelp_nw70/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm]
Hope it helps,
Best regards,
Sunmit.

Similar Messages

  • Kernel issue reading drive partitions and data?

    Hey guys, I hope some of you can help with this.  I don't post problems often, but feel that I finally must do so in this case.  I've searched, and apologise if I've missed any related posts.  Now to the problem...
    Ever since the 2.6.31.x kernels have hit the repo, I've been having issues with certain partitions and external drives being mounted and the data on the drives being read.
    With all the 2.6.30.x kernels and earlier releases, I could mount my external USB drives and they would auto mount and the data read and show up in Nautilus or Dolphin almost instantly.  With all the new 2.6.31.x kernels, it takes a few seconds for the drives to auto mount, then about 40 seconds of hang time for the file manager window to open up showing all the files.  This happens with Nautilus, or Dolphin, depending on whether I'm on my Gnome or KDE box.  Reverting back to the previous kernel makes things work normally again.
    Out of curiosity, I tried the latest version of Ubuntu to see what would happen since it's using the newer kernel, but things are working fine again there, so it has to be a difference in the kernel they use, or there is something wrong with my configuration under Arch.
    I've already filed a bug report on this about a month or so back and was told that it's probably an upstream issue with the kernel:
    http://bugs.archlinux.org/task/16655
    Do any of you know what might be happening with this, or can you confirm the issue on your end?  I don't like Ubuntu at all, so won't be switching to that.  I really want to get my Arch boxes running properly again, but with the newer kernel.
    Thanks for any tips! 
    Last edited by ozar (2009-11-01 17:29:04)

    Since some profile options work fine and some don't, we suspect that the definition used to seed the data is incorrect. If so, how do we find out whether anything is missing in the definition, and are there any tools or methods to validate that the setup is correct?
    Any help is appreciated.
    Edited by: 922005 on 20-Mar-2012 11:35

  • Due to reinstallation of Mac OS the 2nd partition is not showing; how to recover the partition and data

    Due to reinstallation of Mac OS, the 2nd partition is not showing. How can I recover the partition and its data?

    If you have no backup, your only option is to perform incomplete recovery (point-in-time recovery) to the time just before the drop, export the table, then restore the database (for example, from a cold backup taken just before the incomplete recovery) and import the table. This obviously requires a full backup
    taken before the drop, which you don't seem to have, so the answer is "regrettably no."
    If you have a backup:
    Take a backup of your current DB, apply the previous day's backup, do a point-in-time recovery to get the table back, export the table, shut down and apply the latest backup, then import the table back.
    Regards,

  • Cube Performance and Data Explosion

    Hi Experts,
    One of our partners developed a data warehouse application, and the DW application has a performance issue:
    when a report queries the high-level dimensions, performance is okay, but when the query reaches the very detailed data in the cube, performance gets bad.
    The aggregations and the detail data are all stored in the cube, and the cube data explodes quite quickly, since some detailed transaction data needs to be queried and stored in the cube too.
    So, experts, do you have any good suggestions on this issue, or is there perhaps a better design for the cube? For example, in the DW the cube could store only the aggregations or summaries on coarse-grained data and measures, while fine-grained data can be fetched from the ODS.
    Another question: I googled architecture solutions for the above issue, and someone said that if the DW is designed as a hypercube there may be a data explosion issue, and that a multicube should be used instead. So I wonder whether a multicube can solve the data explosion issue, and how; and whether a multicube has better performance than a hypercube, or can also handle detail-data queries.
    Last question: do you have any experience with DW implementations on TB-scale data, and any good architecture suggestions using Oracle OLAP or Essbase for good performance?
    Thanks,
    Royal.
    Edited by: Royal on 2012-11-4 4:01 AM

    You have not asked any specific technical question. In my opinion, all Oracle data warehouses should use the Oracle OLAP option for the aggregation strategy. Significant improvements have been made in 11.2.0.2 (and later versions), and it has become much easier now to create and maintain dimensions/cubes. On the reporting side, OBIEE 11g now understands OLAP metadata. Other reporting tools can use the CUBE_TABLE views (a short illustrative query is shown after the links below).
    Here are some links that you may find useful.
    Comparing MVs and OLAP... Oracle White paper
    http://www.oracle.com/technetwork/database/bi-datawarehousing/comparison-aw-mv-11g-twp-130903.pdf
    Oracle OLAP Support page
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1107593.1
    Three demos by OLAP Development which explain how OLAP can help in a DW.
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_1/OLAP_Features_and_Use_Cases_1.html
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_2/OLAP_Features_and_Use_Cases_2.html
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_3/OLAP_Features_and_Use_Cases_3.html
    Main OLAP page at Oracle OTN site
    http://www.oracle.com/technetwork/database/options/olap/index.html
    Recommended Releases for Oracle OLAP
    http://www.oracle.com/technetwork/database/options/olap/olap-certification-092987.html
    Accelerating Data Warehouses using OLAP option
    http://www.oracle.com/technetwork/issue-archive/2008/08-may/o38olap-085800.html
    What's new in 11.2.0.2 database OLAP option
    http://docs.oracle.com/cd/E11882_01/olap.112/e17123/whatsnew.htm
    Oracle 11.2 OLAP Documentation (scroll down to OLAP section)
    http://www.oracle.com/pls/db112/portal.portal_db?selected=6&frame=#online_analytical_processing_%28olap%29
    Excel reporting from OLAP using Simba tool. This was developed in partnership with Oracle.
    http://www.simba.com/MDX-Provider-for-Oracle-OLAP.htm
    There is a good demo for Simba Excel tool at:
    http://www.simba.com/demos/MDX-Provider-for-Oracle-OLAP-web-demo.html
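    To illustrate the CUBE_TABLE views mentioned above: relational tools can query an 11g OLAP cube through the CUBE_TABLE table function (or views created over it). This is only a minimal sketch; the GLOBAL.UNITS_CUBE cube and the TIME dimension with its CALENDAR hierarchy are assumed example objects, not anything from this thread.
    -- Query a cube relationally via the CUBE_TABLE table function (assumed cube name).
    SELECT *
    FROM   TABLE(CUBE_TABLE('GLOBAL.UNITS_CUBE'))
    WHERE  ROWNUM <= 10;
    -- A dimension hierarchy can be queried the same way (assumed dimension/hierarchy).
    SELECT *
    FROM   TABLE(CUBE_TABLE('GLOBAL.TIME;CALENDAR'))
    WHERE  ROWNUM <= 10;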

  • I see 'enq: JI - contention' when building multiple cubes/partitions

    Version 11.2.0.3
    I can successfully build multiple partitions of a cube simultaneously by supplying the degree of parallelism that I want. I can also build multiple cubes and multiple partitions of multiple cubes by submitting separate jobs (one per cube) with parallelism set in the job (for number of partitions per job/cube).
    My goal was to refresh 2 cubes simultaneously, 2 partitions in parallel each, so that 4 partitions total were refreshing simultaneously. There were sufficient hardware resources (memory and processes) to do this. I tried to submit 2 jobs, one for each cube, with parallel 2 on each.
    What happens is that 3 partitions start loading, not 4. The smaller of the 2 cubes loads 2 partitions at a time, but the larger of the cubes starts loading only 1 partition and the other partition process waits with JI - contention.
    I understand that JI contention is related to one materialized view refresh blocking another refresh of the same MV. Yet simultaneous refresh of different partitions is supported for cube MVs.
    Because I see the large cube having the problem but not the smaller one, I wonder if adding more hash partitions to the AW$ (analytic workspace) table would allow more concurrent update processes. We have a high enough setting for processes and job_queue_processes, and enough available threads, etc.
    Will more hash subpartitions on the AW$ table allow for more concurrency for cube refreshes?

    It looks like the JI contention was coming from having multiple jobs submitted to update the SAME cube (albeit different partitions). Multiple jobs for different cubes (up to one job/cube each) seems to avoid this issue. I thought there was only one job per cube, but that was not true.
    Still, if someone has some insight into creating more AW hash subpartitions, I'd like to hear it. I know how to do it, but I am not sure what the impact will be on load or solve times. I have read a few sources online indicating that it is a good idea to have as many subpartitions as logical cube partitions, and that it is a good idea to set the subpartition number to a power of two to ensure good balance.
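    For reference, here is a hedged sketch of the two pieces discussed in this thread, written as Oracle SQL/PLSQL. The owner, cube and workspace names (OLAPUSER, UNITS_CUBE, MYAW) and the build script are assumptions for illustration only, not objects from this system.
    -- Sketch 1: submit a refresh of one cube with two partitions built in parallel,
    -- roughly how each per-cube job might call the build (names assumed).
    BEGIN
      DBMS_CUBE.BUILD(
        script               => 'OLAPUSER.UNITS_CUBE USING (LOAD, SOLVE)',
        method               => 'C',      -- complete refresh
        refresh_after_errors => FALSE,
        parallelism          => 2);
    END;
    /
    -- Sketch 2: check how many hash subpartitions the AW$ table currently has
    -- before deciding whether to rebuild the workspace with more of them.
    SELECT partition_name, COUNT(*) AS hash_subpartitions
    FROM   dba_tab_subpartitions
    WHERE  table_owner = 'OLAPUSER'
    AND    table_name  = 'AW$MYAW'
    GROUP  BY partition_name;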

  • Is there a way of partitioning the data in the cubes

    Hello BPC Experts,
    we are currently running an AppSet with 4 Applications, and two of these are getting really big.
    In BPC for MS there is a way to partition the data, as I saw in the how-to guides.
    In the NW version, BPC queries the MultiProvider. Is there a way to split the underlying basic cube into several (split by time or legal entity)?
    I think this would help to increase the speed a lot as data could be read in parallel.
    Help is very much appreciated.
    Daniel
    Edited by: Daniel Schäfer on Feb 12, 2010 2:16 PM

    Hi Daniel,
    The short answer to your question is that, no, there is not a way to manually partition the infocubes at the BW level. The longer answer comes in several parts:
    1. BW automatically partitions the underlying database tables for BPC cubes based on request ID, depending on the BW setting for the cube and the underlying database.
    2. BW InfoCubes are very different from MS SQL server cubes (ROLAP approach in BW vs. MOLAP approach usually used in Analysis Services cubes). This results in BW cubes being a lot smaller, reads and writes being highly parallel, and no need for a large rollup operation if the underlying data changes. In other words, you probably wouldn't gain much from semantic partitioning of the BW cubes underlying BPC, except possibly in query performance, and only then if you have very high data volumes (>100 million records).
    3. BWA is an option for very large cubes. It is expensive, but if you are talking 100s of millions of records you should probably consider it. It uses a completely different data model than ROLAP or MOLAP and it is highly partition-able, though this is transparent to the BW administrator.
    4. In some circumstances it is useful to partition BW cubes. In the BW world, this is usually called "semantic partitioning". For example, you might want to partition cubes by company, time, or category. In BW this is currently supported through manually creating several basic cubes under a multiprovider. In BPC, this approach is not supported. It is highly recommended to not change the BPC-generated Infocubes or Queries in any way.
    5. If you have determined that you really need to semantically partition to manage data volumes in BPC, the current best way is probably to have multiple BPC applications with identical dimensions. In other words, partition in the application layer instead of in the data layer.
    Hopefully that's helpful to you.
    Ethan

  • Data Recovery from Partitioned and formatted Bit Locker Encrypted Drive

    Recently, because of some issues installing Windows 7 from an installed Windows 8 OS, the installer reported that the disk is dynamic and Windows cannot be installed on it. So in the end, after struggling hard with no other solution, I partitioned and formatted my whole
    drive, so all data was gone, including the drive that was encrypted by BitLocker.
    For recovery I used many tools such as Ontrack EasyRecovery, GetDataBack, and Recover My Files Professional Edition, but I still could not recover my data from that drive. Then I found a suggestion to use CMD to decrypt my data first:
    http://technet.microsoft.com/en-us/library/ee523219(WS.10).aspx
    It showed that it successfully decrypted my data; at that moment my drives were in RAW format, except the one Windows is installed on, and in CMD I ran Chkdsk, which also found no problems. But the problem is that I still could not recover
    my data, so I formatted drive D and tried again to recover the data with the above software after decryption, still with no result.
    Now I need assistance with how I can recover my encrypted drive, as it was partitioned and also formatted, but it was decrypted as well, since I have its recovery key. Thanks.

    Hi,
    I am afraid that we cannot get the data back if the drive has been formatted, even if you use the
    BitLocker Repair Tool.
    You had better contact your local data recovery center to try to get the data back.
    Tracy Cai
    TechNet Community Support

  • What's the best way to partition my disk with Vista, Arch, and data?

    Hey everybody, I'm in a bit of a quandary here and I'd love a bit of help.
    I have a 320 GB hdd on my new laptop. I want to dual boot Windows Vista and Arch, with a shared partition for data in between. I have Windows Vista installed with 110 GB of unallocated space. (Windows, being a piece of crap, has gone and locked some stupid system files at the end of its partition, completely preventing me from shrinking it any further.)
    Vista has hogged two partitions for itself. One is C:, and I know what that's for; the other is D:, and I have no idea what lives on it. (In the off chance that anyone here knows, I'd actually like to find out what D: is doing there. It takes up 77 MB, 65 MB of which is empty, and is always "in use," and yet Vista's file manager says there's nothing in it.)
    ANYWAY: Vista's taken up two partitions, and I need at least two for Arch: / and swap. Ideally, I'd like to have one for /home as well, but I don't know if that's possible. I also really, really want to have a shared partition for music and documents and such for both OSes.
    I thought at first I could stick all of Arch into an extended partition, but I read here that they can't be booted from. It doesn't make any sense to put my data partition into an extended partition, and Windows won't work either. What should my partition scheme be, since I apparently am going to need to put 5 primary partitions on one disk?
    If you've gone this far, thanks for reading my wall of text. Any advice you can give would be deeply appreciated.
    UPDATE: Okay, after doing a bit more research, I've read in a couple of places that Linux can be installed to a logical partition as long as I make sure GRUB (installed, I assume, on /dev/sda) points to it. Can anyone confirm this?
    Last edited by wirenik (2008-11-29 07:27:46)

    wirenik,
    You can't have 5 primary partitions.  4 max, or 3 primary partitions, and a bunch of logical partitions grouped inside an extended partition.  Arch will gladly install into a logical partition.  (Yes, it will boot just fine too )
    If you currently have 2 partitions, you could create one more primary, then use the rest of the disk as an extended partition.  You will then be able to create as many logical partitions as you want (well, a bunch anyway).
    Something like this:
    /dev/sda1 (primary, Windows C)
    /dev/sda2 (primary, Windows D)
    /dev/sda3 (primary, Arch root)
    --- Extended partition ---
    /dev/sda5 (logical, Arch /home)
    /dev/sda6 (logical, swap)
    /dev/sda7 (logical, shared data)
    Last edited by peart (2008-11-29 07:26:07)

  • Loaded data amount into cube and data monitor amount

    Hi,
    when I load data into the cube, the inserted data amount in the administrator section shows 650,000 records. The monitor for that request shows a lot of data packages. When I sum the data packages, the total is about 700,000 records.
    Where is the difference coming from?
    Thanks!

    Hi,
    If it is a full load to the cube, all the transferred records are added to it, since data in a cube is not overwritten but always appended.
    If it is a delta load and you want to see why there is a difference between the records transferred and the records added to the cube,
    go to the Manage tab of the DSO, open the Contents tab, and click the Change Log button at the bottom. Check the number of entries in that table: those entries are the records added to the cube, since only these are new records; other records with the same key are already present in the cube.

  • Standard PS (WBS) Extractor and cube that contains data generated by CJR2

    Hi Experts,
    Can someone recommend a suitable standard extractor for us to extract plan data that was generated in R/3 PS, via transaction code CJR2, as well as a standard PS Cube that stores that data?
    The extractor and Cube should be able to capture characteristics such as, sender cost center, Activity Type, WBS element, Activity quantity, total costs, version, posting periods, fiscal year etc.
    I searched through those many extractors and cubes, but could not find one.
    Thanks.

    Hi Ehab,
    Go through this link; on page 7 you can see the PS cubes, ODS objects and standard queries.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c04e90b2-f31c-2a10-0595-8409b37914f3
    Thanks
    Sayed

  • Placing OS image and Data image on right partition

    Hi ,
    We are facing an issue placing the right image on the right partition.
    We have an OS image and a data image. We are formatting and partitioning the HDD as 40% (OSDisk) and 100% of the remaining space as (Data). Both partitions are active and selected as primary.
    Then, in the 'Install Operating System' step, I selected to place the image based on the variable 'OSDisk',
    and the data image is also selected to be placed based on the variable 'Data'.
    The TS finishes without any issues, but when I log in to the machine I can see that OSDisk has become the 'D' drive and the Data drive has become 'C', so the C drive is labelled 'Data' and the D drive is labelled 'OSDisk'.
    I want to place the OS on the C drive, and the Data drive should always be the 'D' drive.
    I have tried another method: this time I installed the OS with 'place based on stored variable', i.e. OSDisk, but installed the data image on the next available partition. With this placement the OS never boots after restart and the TS eventually fails.
    My aim is to have the 'C' drive contain the OS image and the 'D' drive contain the data image content, and not the other way around as is happening in the current test.
    Any suggestions will be highly appreciated. Thanks.
    Regards,

    Hi,
    Yes, it is Configuration Manager 2012 R2, but the image is actually not the source media install.wim. It is a custom image captured through MDT 2013. The image has 1 partition only.
    However, I tried the above solution but the drive letters are still not coming out correctly for OSDisk and the Data drive. Data is coming up as the 'C' drive and OSDisk as the 'D' drive.
    Regards,

  • Partition by Date and PK

    I am designing a new laboratory database.
    My primary data tables will have at least <tt>id (PK NUMBER)</tt> and <tt>created_on (DATE)</tt>. Also, for any two entries, the entry with a higher <tt>id</tt> will have a later <tt>created_on</tt> date.
    I plan to partition by <tt>created_on</tt> to increase performance on recently entered data. Since the columns increase together, the table would also be partitioned by <tt>id</tt>, implicitly. However, Oracle would not know about the implied partitioning by <tt>id</tt>, so it could not take advantage of it for table joins on <tt>id</tt>.
    Two questions:
    1. How do I enforce both columns increasing together?
    2. How can I take advantage of this implicit partitioning for table joins?

    1. How do I enforce both columns increasing together?
    With a trigger on insert.
    2. How can I take advantage of this implicit partitioning for table joins?
    Not to worry: the PK will always be a GLOBAL index and will fall (for sequential reads) into the correct partition of created_on, performance-wise.
    All other indexes you create LOCAL.
    :p
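    To make the reply above concrete, here is a minimal sketch in Oracle SQL. The table name lab_results, the sequence, and the first partition boundary are invented for illustration, and interval partitioning is only one possible way to get one partition per period.
    -- Range/interval partition by created_on; the primary key index stays GLOBAL
    -- (the default on a partitioned table), and other indexes are created LOCAL,
    -- as the reply suggests.
    CREATE TABLE lab_results (
      id          NUMBER NOT NULL,
      created_on  DATE   NOT NULL,
      result_val  NUMBER,
      CONSTRAINT lab_results_pk PRIMARY KEY (id)
    )
    PARTITION BY RANGE (created_on)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION p_initial VALUES LESS THAN (DATE '2010-01-01'));
    CREATE INDEX lab_results_created_ix ON lab_results (created_on) LOCAL;
    -- One way to keep id and created_on increasing together (question 1):
    -- assign both in the same insert trigger, so rows cannot be inserted out of order.
    CREATE SEQUENCE lab_results_seq;
    CREATE OR REPLACE TRIGGER lab_results_bi
    BEFORE INSERT ON lab_results
    FOR EACH ROW
    BEGIN
      :NEW.id         := lab_results_seq.NEXTVAL;
      :NEW.created_on := SYSDATE;
    END;
    /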

  • View Cube Partitioned Data...

    Hi
    I partitioned my cube from Jan 2010 to Dec 2010. Now I want to see its data individually for each partition. Is that possible?
    Thanks...

    check the links in this post.
    Display of data after cube partition
    Bottom line: partitions are logical features of the cube, not physical; they are not new tables.
    M.

  • Can we display Cube update time and date

    Hi All,
    Can we display the cube refresh time and date for users without going into the cube properties? Can we use a substitution variable for that?
    Another doubt:
    Will there be any system-defined default substitution variables in Essbase, or only user-defined variables? If there are, what are they and what are they for?
    Regards

    After a successful cube build, you can capture the timestamp from the system into a variable and set it as the note on the database.
    Suppose the MaxL script below is updatetime.mxl:
    /* $1 = user id, $2 = password, $3 = Essbase server */
    login $1 identified by $2 on $3 ;
    /* $4 = spool (log) file */
    spool on to $4 ;
    /* $5 = application, $6 = database, $7 = the timestamp text to store as the note */
    alter database $5.$6 set note $7 ;
    logout ;
    spool off ;
    exit ;
    you can call the script like
    essmsh updatetime.mxl <<userid>> <<pwd>> <<server>> <<spoolfile>> <<appname>> <<dbname>> <<updatetime>>

  • Cube Partition

    Hi,
    Can anybody please explain what criteria are used to specify the MAX. NO. OF PARTITIONS when partitioning a cube?

    Hi
    By using partitioning you can split up the whole dataset for an InfoCube into several smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, and also when deleting data from the InfoCube.
    Only certain database providers support this function (for example, ORACLE, INFORMIX). If you use a database that does not support this function, then this function is not provided by the BW system.
    Prerequisites
    You can only partition a dataset using one of the two partitioning criteria 'calendar month' (0CALMONTH) or 'fiscal year/period' (0FISCPER). At least one of the two InfoObjects must be contained in the InfoCube.
    If you want to partition an InfoCube using the fiscal year/period (0FISCPER) characteristic, you have to set the fiscal year variant characteristic to constant.
    See Partitioning InfoCubes using the Characteristic 0FISCPER
    Functions
    When activating the InfoCube, the fact table is created on the database with a number of partitions corresponding to the value range. You can set the value range yourself.
    You choose the partitioning criterion 0CALMONTH and determine the value range
    from      01.1998
    to   12.2003
    6 years * 12 months + 2 = 74 partitions are created (2 partitions for values that lie outside the range, meaning < 01.1998 or > 12.2003).
    You can also determine how many partitions are created as a maximum on the database for the fact table of the InfoCube.
    You choose the partitioning criterion 0CALMONTH and determine the value range
    from        01.1998
    to   12.2003
    You choose 30 as the maximum number of partitions.
    Resulting from the value range: 6 years * 12 calendar months + 2 marginal partitions (up to 01.1998, from 12.2003) = 74 single values.
    The system groups three months at a time together in a partition (meaning that a partition corresponds to exactly one quarter); in this way, 6 years * 4 partitions/year + 2 marginal partitions = 26 partitions are created on the database.
    The performance gain is only achieved for the partitioned InfoCube if the time dimension of the InfoCube is consistent. With partitioning via 0CALMONTH, this means that all values of the 0CAL* characteristics of a data record in the time dimension must fit together. (In the example in the SAP help, only one of three sample records is consistent.)
    Activities
    In the InfoCube maintenance choose Extras → Partitioning, and specify the value range. Where necessary, limit the maximum number of partitions. Note: you can only change the value range when the InfoCube does not contain any data.
