Automatic Partitioning in 10gR2

Hi
We have a very large table containing millions of rows in an Oracle 10gR2 database. To improve performance and manageability, we want an automatic partitioning scheme in which a new partition is added at the beginning of every month, so that new data goes into the newly created partition.
Help us.
Thanks in advance.

930037 wrote:
Hi
We have a very large table containing millions of rows in an Oracle 10gR2 database. To improve performance and manageability, we want an automatic partitioning scheme in which a new partition is added at the beginning of every month, so that new data goes into the newly created partition.
Help us.
Thanks in advance.
Hi,
Oracle 10g does not yet have an automatic partition-creation feature (interval partitioning arrived in 11g), but you can achieve the same effect with a scheduled job, as in these links and the sketch below:
http://ocpdba.wordpress.com/2009/10/12/automatic-partition-management-for-oracle-10g/
http://begintowrite.blogspot.in/2012/02/automatic-partition-management-for.html
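As an illustration of the approach those links describe, here is a minimal sketch assuming a range-partitioned table SALES_DATA with a DATE partition key SALE_DATE; all names, including the partition naming convention, are illustrative, not from the original post.
-- Adds next month's partition; meant to run once a month.
CREATE OR REPLACE PROCEDURE add_monthly_partition AS
  v_month     DATE := ADD_MONTHS(TRUNC(SYSDATE, 'MM'), 1);  -- first day of next month
  v_partition VARCHAR2(30) := 'P_' || TO_CHAR(v_month, 'YYYYMM');
BEGIN
  -- A monthly partition's upper bound is the first day of the month after it.
  EXECUTE IMMEDIATE
    'ALTER TABLE sales_data ADD PARTITION ' || v_partition ||
    ' VALUES LESS THAN (TO_DATE(''' ||
    TO_CHAR(ADD_MONTHS(v_month, 1), 'YYYY-MM-DD') || ''', ''YYYY-MM-DD''))';
END;
/

-- Schedule the procedure for the first day of every month.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'ADD_MONTHLY_PARTITION_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ADD_MONTHLY_PARTITION',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MONTHLY; BYMONTHDAY=1',
    enabled         => TRUE);
END;
/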
Regards
Hitesh
Edited by: hitgon on Apr 26, 2012 6:52 PM

Similar Messages

  • Automatic partition by list creation in 11gR2

    Hi all,
    Is automatic partition creation by list possible with Oracle 11gR2, or is it possible only by range?
    What I need is that whenever a new value (partitioned key) is inserted in a partitioned table, Oracle automatically creates a new partition based on that new value.
    Thanks

    Interval partitioning currently only supports range partitions, and must be based on a column of number or date type.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16541/part_admin001.htm#VLDBG1088
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16541/part_avail.htm#VLDBG1279
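    For illustration, a minimal sketch of the range-based interval partitioning that is available; the table and column names are assumed:
    -- 11g interval partitioning works on RANGE only, over a NUMBER or DATE column.
    CREATE TABLE orders (
      order_id   NUMBER,
      order_date DATE
    )
    PARTITION BY RANGE (order_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION p_first VALUES LESS THAN (DATE '2012-01-01'));
    -- Inserting a row from a later month creates that month's partition
    -- automatically; there is no equivalent for list partitioning.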

  • Automate partition creation

    Hi,
    Is there any example out there on how to automate the creation of time based partitions?
    As an example, I would like to create 1 partition for each quarter and, when a new quarter starts, have a new partition automatically added with the same attributes as the previous one.
    I would also like the oldest 4 partitions automatically deleted as soon as the total number of partitions reaches 13.
    Any thoughts or links?
    Thanks,
    Philippe

    Hi Philippe,
    The Project REAL Analysis Services Technical Drilldown discusses one implementation of such automation:
    http://www.microsoft.com/technet/prodtechnol/sql/2005/realastd.mspx
    >>
    Project REAL: Analysis Services Technical Drilldown
    By Dave Wickert, Microsoft Corporation
    SQL Server Technical Article
    Published: September 2005
    Appendix A: Automating Partition Creation
    The Project REAL design uses partitioning quite heavily. The production system has more than 220 extremely large partitions. The sample data uses over 125 partitions that are only tens of thousands of records per partition. The full production system has 180 to 200 million records per partition. With so many partitions, extensive typing was required to create each partition every time we generated a new schema.
    So, as the saying goes, “When the going gets rough, a programmer writes a program.”
    This appendix documents the BuildASPartition SQL Server 2005 Integration Services package that we created to automate the building of Analysis Services measure group partitions in SQL Server 2005 Analysis Services databases. This package synchronizes the relational partition scheme with the Analysis Services partition scheme. It loops through the relational database looking for a weekly fact table partition (by using a table naming convention). If a relational table is found, it looks to see if an Analysis Services measure group partition already exists (using the same naming convention). If not, it constructs and executes an XMLA script that creates it.
    >>

  • Dual boot Windows 7 partition help

    I want to dual boot Windows 7 & Arch, but I need help figuring out what partitions to make and where to put them, because it seems the automatic partitioner won't do the trick for me.
    I have two drives that I want to completely format for a fresh, clean install: a 60GB SSD and a 750GB hard drive. I want the end result to be that the SSD only has Windows 7 Pro x64 SP1 installed on it, and I will point 'My Documents', 'My Pictures', etc. to an NTFS partition on the hard disk drive (I know how to do this folder redirection). I don't want Arch to touch the SSD if possible, so that I can reformat the SSD separately if I ever just want to reinstall Windows; I only want Arch on the hard disk for that reason. I don't think I care which one handles the OS switching at boot (should I favor the Windows MBR or Syslinux? Please give advice.) And since I assume I should make the NTFS partition on the hard disk a primary partition, how do I split up Arch across the 3 remaining primary partitions, given that the auto-partitioner on the Arch boot CD uses 4?

    I'm no expert, but the way I would do it would be like this:
    1. Create partitions on the HDD for /, /boot, /home and swap, along with one (or more) for your Windows personal files
    2. Set the BIOS to boot from the SSD.
    3. Install Windows on the SSD.
    4. Right-click the "My Documents" folder, select "properties", then the "Location" tab and choose the new location for the folder.
    5. Install Arch on the HDD partitions, and allow the boot manager to install itself in the MBR of the SSD.
    Then the whole boot process will be on the SSD...  If you ever need to get rid of the Linux bootloader, you can overwrite it with a "clean" Windows one using bootrec.exe (see here: http://support.microsoft.com/kb/927392).
    Last edited by esuhl (2012-03-26 04:52:12)

  • Is there a way of partitioning the data in the cubes

    Hello BPC Experts,
    we are currently running an AppSet with 4 applications, and two of these are getting really big.
    In BPC for MS there is a way to partition the data, as I saw in the how-tos.
    In the NW version, BPC queries the MultiProvider. Is there a way to split the underlying basis cube into several cubes (split by time or legal entity)?
    I think this would help to increase the speed a lot as data could be read in parallel.
    Help is very much appreciated.
    Daniel
    Edited by: Daniel Schäfer on Feb 12, 2010 2:16 PM

    Hi Daniel,
    The short answer to your question is that, no, there is not a way to manually partition the infocubes at the BW level. The longer answer comes in several parts:
    1. BW automatically partitions the underlying database tables for BPC cubes based on request ID, depending on the BW setting for the cube and the underlying database.
    2. BW InfoCubes are very different from MS SQL server cubes (ROLAP approach in BW vs. MOLAP approach usually used in Analysis Services cubes). This results in BW cubes being a lot smaller, reads and writes being highly parallel, and no need for a large rollup operation if the underlying data changes. In other words, you probably wouldn't gain much from semantic partitioning of the BW cubes underlying BPC, except possibly in query performance, and only then if you have very high data volumes (>100 million records).
    3. BWA is an option for very large cubes. It is expensive, but if you are talking about hundreds of millions of records you should probably consider it. It uses a completely different data model than ROLAP or MOLAP and it is highly partitionable, though this is transparent to the BW administrator.
    4. In some circumstances it is useful to partition BW cubes. In the BW world, this is usually called "semantic partitioning". For example, you might want to partition cubes by company, time, or category. In BW this is currently supported through manually creating several basic cubes under a multiprovider. In BPC, this approach is not supported. It is highly recommended to not change the BPC-generated Infocubes or Queries in any way.
    5. If you have determined that you really need to semantically partition to manage data volumes in BPC, the current best way is probably to have multiple BPC applications with identical dimensions. In other words, partition in the application layer instead of in the data layer.
    Hopefully that's helpful to you.
    Ethan

  • Problems with partitioning and install Grub. Fresh install

    All,
    First post here. I appreciate any help you can offer.
    I am having some problems when installing Arch Linux.
    I am installing Arch on a brand new (3 days old) Toshiba SatelliteC655D-S5300 Laptop.
    Hot sheet can be found at http://cdgenp01.csd.toshiba.com/content … -S5300.pdf.
    I was initially installing from 2011.08.19 x86_64 Core CD but someone suggested using the latest version.
    Now I am installing from 2011.11.13 x86_64 CD burned at 4x (the slowest my burner can go).
    I am able to complete all steps up to installing GRUB, but it fails to install.
    During partitioning I receive a few errors and I believe this is contributing to the issue.
    At first I tried automatic partitioning with a 100 MB /boot, 1024 MB swap, 10,000 MB /, and the rest of the 320 GB for /home. Each partition is ext3 except /boot, which is ext2.
    During the automatic partitioning an error briefly occurred: /usr/lib/aif/core/libs/lib-blockdevices-filesystems.sh: line 355: !((partition_flag)): command not found.
    After speaking with a friend they suggested manually partitioning and using UUIDs instead.
    1) So far I have removed all partitions, rebooted.
    2) Partitioned using cfdisk: a bootable 100 MB partition, 1024 MB swap, 15,000 MB primary (/), 3000 MB logical (/var), and the remaining 300,949 MB logical (/home).
    3) Once I write the changes and quit I reboot.
    4) I go back into the installer and complete steps 1-3.
    5) Go to step 4 and then choose to manually configure block devices, filesystems, or mount points.
    6) I choose the option for uuid and hit ok.
    At this point 3 error messages appear at the bottom:
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'part,' : not a valid identifier
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'type,' : not a valid identifier
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'label,' : not a valid identifier
    (Screenshot: http://i.imgur.com/OHRKo.jpg)
    7) Next it prompts me to add the mount points for each partition set.
    8) I select the partition and the mount point; it asks me for a label and any additional opts for mkfs.ext3.
    9) I leave the label and opts field blank. After selecting ok to the opts field I get the same 3 errors as above:
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'part,' : not a valid identifier
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'type,' : not a valid identifier
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'label,' : not a valid identifier
    (Screenshot: http://i.imgur.com/QqkSP.jpg)
    I am able to successfully set a mount point and format each partition, but the same set of 3 errors occurs for each partition.
    10) Once I complete the formatting I proceed to step 8, install bootloader.
    It says Generating Grub device map.. This could take a while. Please be patient.
    I receive the following error on this screen: /usr/lib/aif/core/libs/lib-blockdevices-filesystems.sh: line 355: !((partition_flag)): command not found.
    (Screenshot: http://i.imgur.com/B5j4K.jpg)
    11) After the error displays it goes to the next screen: before installing GRUB you must review the config file, etc.
    12) I hit ok and then :q the config file. Is there a critical change in the config file that I'm missing?
    13) After closing the file I select the boot device where the GRUB bootloader will be installed. My only option is /dev/sda. I hit ok.
    Then I get the following 2 errors:
    /usr/lib/aif/core/libs/lib-blockdevices-filesystems.sh: line 355: !((partition_flag)): command not found
    /usr/lib/aif/core/libs/lib-blockdevices-filesystems.sh: line 355: !((partition_flag)): command not found
    (Screenshot: http://i.imgur.com/ol840.jpg)
    14) Error installing GRUB. See /dev/tty7 for output. Ok
    15) GRUB was NOT successfully installed. Ok
    I checked out TTY7.
    It shows the installer issuing the following commands in GRUB.
    1) device (hd0,) /dev/sda
         Error 12: Invalid device requested
    2) root (hd0,0)
         Filesystem type is ext2fs, partition type 0x83
    3) setup (hd0,)
    Checking if "/boot/grub/stage1" exists... no
    Checking if "/grub/stage1" exists... yes
    Checking if "/grub/stage2" exists... yes
    Checking if "/grub/e2fs_stage1_5" exists... yes
    Running "embed /grub/e2fs_stage1_5 (hd0,0)"... failed (this is not fatal)
    Running "embed /grub/e2fs_stage1_5 (hd0,0)"... failed (this is not fatal)
    Running "install /grub/stage1 (hd0,0) /grub/stage2 p /grub/menu.lst "... succeeded
    Done.
    4) quit
    I have tried rebooting from here and using the Arch CD to boot into the existing OS but it does not work.
    I tried grub-install /dev/sda
    I get Probing devices to check BIOS drives. This may take a long time.
    /dev/mapper../dm-0 does not have any corresponding BIOS drive.
    I have tried going into grub and issuing the same commands the install script did.
    Same errors.
    I'm afraid I don't have network access at the moment so I can't get a successful /arc/report-issues to run.
    I hope I've included enough information to start the troubleshooting.
    Let me know if I've missed anything!
    Thanks in advance,
    -Jason
    Last edited by username17 (2011-11-17 22:37:56)

    username17 wrote: I get Probing devices to check BIOS drives. This may take a long time.
    /dev/mapper../dm-0 does not have any corresponding BIOS drive.
    Your drive does not have an MBR to install grub to as it is a GPT disk - which is also not supported under the old GRUB.
    You need to create a small partition at the very beginning of the drive (8MB is plenty) and set the "bios_grub" flag on it - i.e. the "BIOS drive" your error refers to.
    You will then need to install the grub2-bios package following the chroot instructions on the grub2 wiki page here: https://wiki.archlinux.org/index.php/GRUB2#Installation
    ** Please note that I found the chroot mounts to be outdated - replace "/tmp/install" with "/mnt" **
    Your alternative solution is to boot a gparted liveCD and prepare your disk as MBR - this will (most likely) destroy all existing data on the disk.

  • INTERVAL in Range Partition?

    Hi All,
    In Teradata, we have one option named INTERVAL which helps to define the interval at which we want the range partition to work.
    In HANA, I see that we have to mention all the dates explicitly for each partition.
    Do you know if there is any option like INTERVAL in HANA?
    Regards,
    Krishna Tangudu

    Hi Krishna,
    currently (SPS 8) there is no such half-automatic partition creation functionality available for SAP HANA.
    - Lars
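    For reference, a minimal sketch of what the explicit range specification looks like in HANA; the table and column names are assumed, and the syntax should be verified against your SPS level:
    -- Every range must be declared up front; there is no INTERVAL-style automation.
    CREATE COLUMN TABLE sales (
      id        INTEGER,
      sale_date DATE
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION '2013-01-01' <= VALUES < '2014-01-01',
      PARTITION '2014-01-01' <= VALUES < '2015-01-01',
      PARTITION OTHERS  -- catch-all for values outside the declared ranges
    );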

  • Partitioning an active but uncompressed cube

    Hello Community,
    There are many posts on partitioning InfoCubes.  I promise I have read all of them before asking this question -->
    Partitioning for a fact table must be defined before you activate the InfoCube. It cannot be done afterwards.
    But what if the cube has not been compressed yet?
    My hope is that the ability to partition any time up until the first fact-table compression has come with newer BW releases.
    The reason I have this hope is that partitioning affects only the E fact table, and, of course, the E fact table remains empty until the first compression. (F fact tables are automatically partitioned by the request ID.)
    So why should it be impossible to define the partitioning while the E fact table is still empty?
    Another question: when working with the BW GUI, if it is indeed true that I must unload, partition, and reload the unpartitioned cubes before compressing them, what is lost or gained by skipping that process and simply compressing the unpartitioned cube, followed by partitioning with database tools?
    Thanks!
    Keith

    Hi Keith,
    Sorry, but you have to implement partitioning before the first data load - when the InfoCube is initially activated.
    With regard to your second question: you could indeed do this, but you will run into major issues if you need to activate the InfoCube again or do any maintenance on it. The activation from BI could result in the activation of the E and F fact tables with definitions differing from the modified DB tables (effectively losing the partitions underneath). I have not tested this, but it is a real possibility, so I would advise care.
    I hope this helps,
    Mike.

  • A confusing Partitioning question

    I am partitioning all of the tables in my database into 6 partitions, based on a "District ID." Each table has this column, but the company that is populating the database is only populating this field in one table (LOCATION) and leaving it null in all of the other tables. Can these other tables be partitioned on an update to this "District ID" field after all records have been inserted? Or can I have the database automatically partition these tables based on the foreign key relationship between the LOCATION.DISTRICT_ID field and the other tables' DISTRICT_ID field? Essentially saying: this LOCATION.DISTRICT_ID is in this partition, so new records go to the same partition. Sorry for the confusion.

    I am partitioning all of the tables in my database into 6 partitions, based on a "District ID." Each table has this column, but the company that is populating the database is only populating this field in one table (LOCATION) and leaving it null in all of the other tables. Can these other tables be partitioned on an update to this "District ID" field after all records have been inserted? Or can I have the database automatically partition these tables based on the foreign key relationship between the LOCATION.DISTRICT_ID field and the other tables' DISTRICT_ID field? Essentially saying: this LOCATION.DISTRICT_ID is in this partition, so new records go to the same partition. Sorry for the confusion.
    Yes, the partition key can be null, so you can create the 6 partitioned tables and they will be managed automatically by Oracle. When you insert the data into the child tables, the rows will go to a particular partition based on DISTRICT_ID; you need not write any trigger or update statement to manage this. Oracle will do it for you. The only thing is that if most of the values for DISTRICT_ID are null, you lose the advantage of partitioning, as most of the rows will end up in one particular partition.
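    As an illustration of that null-key behaviour, a minimal sketch of list partitioning; the table name, column, and district values are assumed, not from the original post:
    CREATE TABLE work_orders (
      order_id    NUMBER,
      district_id NUMBER
    )
    PARTITION BY LIST (district_id) (
      PARTITION p_d1    VALUES (1),
      PARTITION p_d2    VALUES (2),
      PARTITION p_d3    VALUES (3),
      PARTITION p_null  VALUES (NULL),    -- rows with a null key land here
      PARTITION p_other VALUES (DEFAULT)  -- any other district
    );
    -- If most rows have a null DISTRICT_ID, they all pile into P_NULL,
    -- which is exactly the skew the answer above warns about.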

  • Partition by hour

    Hello,
    Is there an automatic partitioning method that can have one partition for the most recent hour and another partition for all other times?
    Example:
    table
    ID NUMBER
    DD DATE
    partition p1 when DD between sysdate and sysdate-1/24
    partition p2 for other values
    tnx

    No, there isn't such a method.
    It is simply not possible to define a partition like your p1, because SYSDATE is not a constant value. Oracle would have to check every second whether any row had to be moved from p1 to p2, and that definitely doesn't make any sense.

  • Logical Vs physical partitions ?

    Hello BW Experts,
    What is the difference between logical and physical partitions? Is the partitioning that we do on a cube based on 0CALMONTH / 0FISCYEAR logical or physical?
    Suggestions appreciated.
    Thanks,
    Kalyan

    hi,
    in BW, physical partitioning is done at the database level and logical partitioning is done with a MultiProvider. Cube partitioning with 0CALMONTH is physical partitioning.
    Take a look:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    Database (or Physical) Partitioning
    -Database tables are cut into smaller chunks (partitions)
    -One logical database table
    -Transparent for the user
    -Available for the following database management systems:
    Range partitioning: Oracle, Informix, IBM DB2/390
    Hash partitioning: IBM DB2/UDB
    *Benefits
    -Parallel access to partitions
    -Smaller sets of data are read quickly
    -Fast deletion of partitions (DROP PARTITION instead of DELETE FROM ... WHERE)
    *Automatically Partitioned Database Tables (for Range Partitioning)
    -InfoCube F fact table: partitioned by request
    -PSA table: partitioned by request
    -ODS change log: similar to the PSA table
    *User-Defined Partitioning Criteria (for Range Partitioning)
    -InfoCube E fact table
    -Partition criteria: time characteristics like month or fiscal period
    -Note: both fact tables are extended by the SID of the chosen time characteristic
    MultiProvider (or Logical) Partitioning
    -Possible partitioning criteria: year, plan/actual, regions, business area
    -Parallel sub-queries are started automatically to the basic InfoCubes
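    To make the partition-deletion benefit above concrete, a short sketch with illustrative table and partition names:
    -- Removing one month from a range-partitioned table: a fast metadata operation.
    ALTER TABLE sales_fact DROP PARTITION p_200501;
    -- Versus deleting row by row from an unpartitioned table, which scans
    -- and logs every affected row:
    DELETE FROM sales_fact WHERE calmonth = '200501';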

  • Reference partitioning - thoughts

    Hi,
    At the moment we use range-hash partitioning of a large dimension table (dimensional-model warehouse) with 2 levels, range-partitioned on columns only available at the bottom level of the hierarchy: date and issue_id.
    The result is a partition with null values; I assume we would get a null partition in the large fact table too if it were partitioned by reference to the large dimension.
    The large fact table is similarly partitioned (date range-hash) with local bitmap indexes.
    It was suggested that we would get automatic partition-wise joins if we used reference partitioning.
    I would have thought we would get that with range-hash on both tables.
    Are there any disadvantages to reference partitioning?
    I know we can't use range-interval partitioning.
    Thanks

    >
    At the moment, the large dimension table and large fact table have the same partitioning strategy but are partitioned independently (range-hash);
    the range column is a DATE and the hash column is the surrogate key
    >
    As long as the 'hash column' is the SAME key value in both tables there is no problem. Obviously you can't hash on one column/value in one table and a different one in the other table.
    >
    With regard to null values: the dimension table has 3 levels (part of a dimensional-model data warehouse), i.e. the date on which the table is partitioned exists only at the lowest level of the dimension.
    >
    High or low doesn't matter and, as you ask in your other thread (Order of columns in a table - how important from a performance perspective), the column order generally doesn't matter.
    >
    By default in a dimensional-model data warehouse, this attribute is not populated at the higher levels, so it has a default null value in the dimension table for such records
    >
    Still not clear what you mean by this. The columns must be populated at some point or they wouldn't need to be in the table. Can you provide a small sample of data that shows what you mean?
    >
    The problem the performance team are attempting to solve is as follows:
    the two tables are joined on the sub-partition key; they have tried joining the two tables on the entire partition key, but then complain they don't get star transformation.
    >
    Which means that team isn't trying to 'solve' a problem at all. They are just trying to mechanically achieve a 'star transformation'.
    A full partition-wise join REQUIRES that the partitioning be on the join columns or you need to use reference partitioning. See the doc I provided the link for earlier:
    >
    Full Partition-Wise Joins
    A full partition-wise join divides a large join into smaller joins between a pair of partitions from the two joined tables. To use this feature, you must equipartition both tables on their join keys, or use reference partitioning.
    >
    They believe that by partitioning by reference, as opposed to independently, they will get a partition-wise join automatically.
    >
    They may. But you don't need to partition by reference to get partition-wise joins. And you don't need to get 'star transformation' to get the best performance.
    Static partition pruning will occur, if possible, whether a star transformation is done or not. It is dynamic pruning that is done AFTER a star transform. Again, you need to review all of the relevant sections of that doc. They cover most of this, with example code and example execution plans.
    >
    Dynamic Pruning with Star Transformation
    Statements that get transformed by the database using the star transformation result in dynamic pruning.
    >
    Also, there are some requirements before star transformation can even be considered. The main one is that it must be ENABLED; it is NOT enabled by default. Has your team enabled the use of the star transform?
    The database data warehousing guide discusses star queries and how to tune them:
    http://docs.oracle.com/cd/E11882_01/server.112/e25554/schemas.htm#CIHFGCEJ
    >
    Tuning Star Queries
    To get the best possible performance for star queries, it is important to follow some basic guidelines:
    A bitmap index should be built on each of the foreign key columns of the fact table or tables.
    The initialization parameter STAR_TRANSFORMATION_ENABLED should be set to TRUE. This enables an important optimizer feature for star-queries. It is set to FALSE by default for backward-compatibility.
    When a data warehouse satisfies these conditions, the majority of the star queries running in the data warehouse uses a query execution strategy known as the star transformation. The star transformation provides very efficient query performance for star queries.
    >
    And that doc section ALSO has example code and an example execution plan that shows the star transform being used.
    It also has some important info about how Oracle chooses to use a star transform, and a large list of restrictions where the transform is NOT supported.
    >
    How Oracle Chooses to Use Star Transformation
    The optimizer generates and saves the best plan it can produce without the transformation. If the transformation is enabled, the optimizer then tries to apply it to the query and, if applicable, generates the best plan using the transformed query. Based on a comparison of the cost estimates between the best plans for the two versions of the query, the optimizer then decides whether to use the best plan for the transformed or untransformed version.
    If the query requires accessing a large percentage of the rows in the fact table, it might be better to use a full table scan and not use the transformations. However, if the constraining predicates on the dimension tables are sufficiently selective that only a small portion of the fact table must be retrieved, the plan based on the transformation will probably be superior.
    Note that the optimizer generates a subquery for a dimension table only if it decides that it is reasonable to do so based on a number of criteria. There is no guarantee that subqueries will be generated for all dimension tables. The optimizer may also decide, based on the properties of the tables and the query, that the transformation does not merit being applied to a particular query. In this case the best regular plan will be used.
    Star Transformation Restrictions
    Star transformation is not supported for tables with any of the following characteristics:
    >
    Re reference partitioning
    >
    Also, this is a data warehouse star model, and it was mentioned to us that reference partitioning is not great with local indexes - the large fact table has several local bitmap indexes.
    Any thoughts on reference partitioning negatively impacting performance in this way, compared to a standalone partitioned table?
    >
    Reference partitioning is for those situations where your child table does NOT have a column that the parent table is being partitioned on. That is NOT your use case. Don't use reference partitioning unless your use case is appropriate.
    I suggest that you and your team thoroughly review all of the relevant sections of both the database data warehousing guide and the VLDB and partitioning guide.
    Then create a SIMPLE data model that only includes your partitioning keys and not all of the other columns. Experiment with that simple model with a small amount of data and run the traces and execution plans until you get the behaviour you think you are wanting.
    Then scale it up and test it. You cannot design it all ahead of time and expect it to work the way you want.
    You need to use an iterative approach. That starts by collecting all the relevant information about your data: how much data, how is it organized, how is it updated (batch or online), how is it queried. You already mention using hash subpartitioning but haven't posted ANYTHING that indicates you even need to use hash. So why has that decision already been made when you haven't even gotten past the basics yet?
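    For concreteness, a minimal sketch of the reference-partitioning option discussed above; all table and column names are illustrative, not from these posts:
    -- The dimension carries the range partitioning.
    CREATE TABLE dim_issue (
      issue_id   NUMBER PRIMARY KEY,
      issue_date DATE
    )
    PARTITION BY RANGE (issue_date) (
      PARTITION p_2011 VALUES LESS THAN (DATE '2012-01-01'),
      PARTITION p_2012 VALUES LESS THAN (DATE '2013-01-01')
    );
    -- The fact table inherits that partitioning through the foreign key,
    -- which reference partitioning requires to be NOT NULL - exactly why
    -- the null-key rows described earlier are a problem for this design.
    CREATE TABLE fact_sales (
      sale_id  NUMBER,
      issue_id NUMBER NOT NULL,
      amount   NUMBER,
      CONSTRAINT fk_fact_issue FOREIGN KEY (issue_id)
        REFERENCES dim_issue (issue_id)
    )
    PARTITION BY REFERENCE (fk_fact_issue);
    -- The alternative is simply to equipartition both tables on the join
    -- key, which also enables full partition-wise joins.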

  • Tool for table partitioning maintenance?

    Hi,
    Does anyone know of a tool for table partitioning maintenance?
    I have tested Oracle ILM; it gives me the scripts, but they are not executed automatically.
    I'm looking for something that helps me create new partitions automatically.
    Thanks.
    Carlos

    Which version?
    This is solved in 11g; you need not search for an outside tool. Oracle 11g has interval partitioning, which creates partitions automatically based on time: by date, by month, and so on.
    In older versions you can script the partition creation based on date or month.

  • Partitioned a cube

    Hi
    I have partitioned a cube on 0CALMONTH, and now I want to see the partitioned data in the cube. How is the data seen in the cube? Can we physically see the data as partitioned by 0CALMONTH, or is the partition only logical (not physically visible)? Is there a separate table to see the partitioned cube data?
    And why do we do partitioning?

    Hi Ramesh,
    Once the F table is compressed, data goes from the F table to the E table, and you can no longer see the data in the F table.
    You can control whether all the requests in the F fact table get compressed, or limit compression up to a certain specific request, a certain number of requests, or requests over xx days old. Generally, you never want to compress all requests, since you might need to back out a request, and once you have compressed a request you can't back it out.
    Good practice would be to verify the correctness of the data in a request before you compress it.
    When you partition, you do it for the InfoCube based on one of two characteristics, 0CALMONTH or 0FISCPER. The fact table is created on the database with a number of partitions corresponding to the value range, and you can set the value range yourself.
    Keep in mind when you partition in RSA1 that you are configuring the partitioning for the E fact table (the F fact table is already automatically partitioned by BW).
    The key point is that no data gets into the E fact table you configured for partitioning unless you run the cube compression/collapse!
    See the link below for more on partitioning:
    http://help.sap.com/saphelp_sem40bw/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm
    If you need any other information, let me know.
    Thanks and Regards,
    Pavan Kumar Gali.

  • Partitioning of multiprovider

    Hi experts,
    can we partition a MultiProvider? If so, what is the procedure, and how is logical partitioning associated with it? Can you please explain?
    regards
    rekha

    Hi Rekha,
    If I understood your question, here is my explanation. Partitioning can be done only on the cube; it is not possible for a MultiProvider, I believe. In BW, table partitioning is done in two ways:
    1) Range partitioning (IBM DB2/390, Informix, Oracle)
    2) Hash partitioning (IBM DB2/UDB). This feature is transparent to BW: all fact tables in an IBM UDB-based BW system are automatically partitioned, and the user does not have to configure anything.
    Normally fact tables are partitioned over a time characteristic, for example 0CALMONTH or 0FISCPER, because the DBMS requires the ranges when the fact table is created, so you need to know the future values of the partition column in advance. This is straightforward for 0CALMONTH or 0FISCPER.
    To partition the cube, go to the change mode of the cube, choose Extras in the menu, and you will find Partition; select the required time characteristic available in your cube.
    I hope the above information helps you
    Cheers
    Sreedhar
