BW Performance: 10 InfoCubes vs. Partitioning

We are working with reasonably large amounts of data: 40-60 million records will be extracted every month into an InfoCube.
Since performance is important, I got a tip to create 10 InfoCubes instead of one and then use a MultiProvider on top of these to decrease load and query execution times. To load data in parallel, 10 ODS objects are also required (but we have just one InfoSource).
I thought that partitioning could be used instead of creating 10 InfoCubes; is this correct?
Any tips are appreciated: should I create 10 InfoCubes and a MultiProvider, use partitioning on one InfoCube, or both?
Thank you.

Obviously the size of your server is a consideration, and your DB.  As Torsten mentions, not all DBs support range partitioning, which is required to physically partition an InfoCube.
Another thing to keep in mind is that you can only physically partition an InfoCube on 0FISCPER or 0CALMONTH, and your queries would need to restrict on the one you partitioned on in order to take advantage of physical partitioning.
I think Torsten's first question is key -
Would queries typically run against all ten InfoCubes, or would they be running against just a subset of them, or just one?
What are the 10 InfoCubes - different time periods, different Business Areas?
As an example - say you create 10 ICs, one for each of 10 Business Areas.  If your queries typically run against all 10, or most, of the Business Areas, then the parallel processing of an MP could help a lot.  If, however, the queries are usually run for just one Business Area, then the MP doesn't really buy you anything as far as query performance.
So you really need to understand how the data will be queried.
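To make the pruning point concrete, here is a hedged sketch in plain Oracle SQL of the kind of range-partitioned fact table BW generates when an InfoCube is partitioned on 0CALMONTH; all table and column names are illustrative stand-ins, not the real BW-generated objects.

```sql
-- Hypothetical sketch (Oracle syntax); names are illustrative only.
CREATE TABLE sales_fact (
  calmonth  NUMBER(6),       -- e.g. 200801 for Jan 2008
  bus_area  VARCHAR2(4),
  amount    NUMBER(17,2)
)
PARTITION BY RANGE (calmonth) (
  PARTITION p200801 VALUES LESS THAN (200802),
  PARTITION p200802 VALUES LESS THAN (200803),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);

-- Only a query that restricts on the partitioning column lets the
-- database prune; this one touches partition p200801 only:
SELECT bus_area, SUM(amount)
FROM   sales_fact
WHERE  calmonth = 200801
GROUP  BY bus_area;
```

A query restricting only on bus_area would still scan every partition, which is why the partitioning characteristic has to match the typical query filters.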

Similar Messages

  • InfoCube Design and Partitioning

    Hi All, when we design an InfoCube, i.e. when we decide which characteristics go in which dimension, is partitioning decided at this time as well? If so, please provide some details.
    Can somebody guide me on this? Thanks in advance.

    Hi,
    Both InfoCube design and partitioning will help in improving performance.
    Partitioning is decided based on the data to be stored in the InfoCube. For example, let's assume that the cube will hold 15 years of data. As you know, partitioning can be done only based on fiscal year/period or calendar month.
    If partitioning is done on fiscal year, we might decide to go for 15 partitions as we have 15 years of data. If we decide to go for 7 partitions, then roughly 2 years of data will be stored in each partition.  Likewise, for calendar months, we might go for 15*12 = 180 partitions.
    If your system is BW 3.5, the cube must be empty to do partitioning; if your system is BI 7, partitioning can be done even if data is present in the cube.
    Regards,
    Sekhar
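    As a hedged illustration of the granularity choice Sekhar describes (plain Oracle range-partitioning syntax with made-up names, not what BW generates internally), 15 years of data could map to one partition per fiscal year, or to a coarser grain of roughly 2 years per partition:

    ```sql
    -- Coarser grain: ~2 years per partition -> 7 partitions for 15 years.
    -- (One partition per year would simply list 15 single-year bounds.)
    CREATE TABLE fact_by_year (
      fiscyear NUMBER(4),
      amount   NUMBER(17,2)
    )
    PARTITION BY RANGE (fiscyear) (
      PARTITION p1 VALUES LESS THAN (1998),     -- 1996-1997
      PARTITION p2 VALUES LESS THAN (2000),     -- 1998-1999
      PARTITION p3 VALUES LESS THAN (2002),
      PARTITION p4 VALUES LESS THAN (2004),
      PARTITION p5 VALUES LESS THAN (2006),
      PARTITION p6 VALUES LESS THAN (2008),
      PARTITION p7 VALUES LESS THAN (MAXVALUE)  -- 2008 onward
    );
    ```

    Fewer, larger partitions mean less pruning benefit per query but less partition-maintenance overhead; the right grain depends on how queries restrict on the time characteristic.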

  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table and it seems that the cost-based optimizer is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
    Operation / Object Name                                    Rows  Bytes  Cost  PStart  PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE                        1         249
    UPDATE  SIG.SIG_QUA_IMG_LT
    NESTED LOOPS SEMI                                             1    266   249
    PARTITION RANGE ALL                                                             1      9
    TABLE ACCESS FULL  SIG.SIG_QUA_IMG_LT                         1    259     2    1      9
    VIEW  SYS.VW_NSO_1                                            1      7   247
    NESTED LOOPS                                                  1    739   247
    NESTED LOOPS                                                  1    677   247
    NESTED LOOPS                                                  1    412   246
    NESTED LOOPS                                                  1    114   244
    INDEX RANGE SCAN  WMSYS.MODIFIED_TABLES_PK                    1     62     2
    INDEX RANGE SCAN  SIG.QIM_PK                                  1     52   243
    TABLE ACCESS BY GLOBAL INDEX ROWID  SIG.SIG_QUA_IMG_LT        1    298     2    ROWID  ROW L
    INDEX RANGE SCAN  SIG.SIG_QUA_IMG_PKI$                        1            1
    INDEX RANGE SCAN  WMSYS.WM$NEXTVER_TABLE_NV_INDX              1    265     1
    INDEX UNIQUE SCAN  WMSYS.MODIFIED_TABLES_PK                   1     62
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
       SET z1.nextver =
              SYS.ltutil.subsversion
                 (z1.nextver,
                  SYS.ltutil.getcontainedverinrange (z1.nextver,
                                                     'SIG.SIG_QUA_IMG',
                                                     'NpCyPCX3dkOAHSuBMjGioQ==',
                                                     4574,
                                                     4575),
                  4574)
     WHERE z1.ROWID IN (
              (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
                          INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
                          INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
                      t2.ROWID
                 FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
                              UNIQUE VERSION
                         FROM wmsys.wm$modified_tables
                        WHERE table_name = 'SIG.SIG_QUA_IMG'
                          AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
                          AND VERSION > 4574
                          AND VERSION <= 4575) j1,
                      sig.sig_qua_img_lt t1,
                      sig.sig_qua_img_lt t2,
                      wmsys.wm$nextver_table j2,
                      (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
                              UNIQUE VERSION
                         FROM wmsys.wm$modified_tables
                        WHERE table_name = 'SIG.SIG_QUA_IMG'
                          AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
                          AND VERSION > 4574
                          AND VERSION <= 4575) j3
                WHERE t1.VERSION = j1.VERSION
                  AND t1.ima_id = t2.ima_id
                  AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
                  AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
                  AND t2.nextver != '-1'
                  AND t2.nextver = j2.next_vers
                  AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version-enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current statistics about the table.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben

  • Warehouse partitioning - performance of queries across multiple partitions?

    Hi,
    We are using Oracle 11.2.0.3 and have a large central fact table with several surrogate IDs which have bitmap indexes on them and have FKs looking at dimension tables, plus several measures:
    (PRODUCT_ID,
    CUSTOMER_ID,
    DAY_ID,
    TRANS_TYPE_ID,
    REGION_ID,
    QTY,
    VALUE)
    We have 2 distinct sets of queries users look to run for the most part: ones accessing all transactions for products regardless of the time those transactions happened (i.e. non-financial queries), about 70%; and queries determining what happened in a particular week, about 20% of queries.
    The table will eventually have approx. 4bn rows.
    We are considering adding an extra DATE column and range partitioning on it, to allow us to drop old partitions every year; however, this column wouldn't be joined to any other table.
    Then we are considering sub-partitioning by hash of PRODUCT_ID, which is the surrogate key for the product dimension.
    Thoughts on performance?
    Queries by their nature would hit several sub-partitions.
    Thoughts on query performance of queries which access several sub-partitions/partitions versus queries running against a single table?
    Any other thoughts on a partitioning strategy in our situation are much appreciated.
    Thanks

    >
    Thoughts on query performance of queries which access several sub-partitions/partitions versus queries running against a single table.
    >
    Queries that access multiple partitions can improve performance in two cases: 1) only a subset of the entire table is needed, and 2) the access is done in parallel.
    Even if 9 of 10 partitions are needed, that can still be better than scanning a single table containing all of the data. And when there is a logical partitioning key (transaction date) that matches typical query predicate conditions, you can get guaranteed benefits by limiting a query to only 1 (or a small number of) partitions, where an index on a single table might not get used at all.
    Conversely, if all table data is needed (perhaps there is no good partition key) and the parallel option is not available, then I wouldn't expect any performance benefit from partitioning over a single table.
    You don't mention if you have licensed the parallel option.
    >
    Any other thoughts on a partitioning strategy in our situation are much appreciated.
    >
    You provide some confusing information. On the one hand you say that 70% of your queries are
    >
    ones accessing all transactions for products regardless of the time those transactions happened
    >
    But then you add that you are
    >
    Considering adding an extra DATE column and range partitioning on it to allow us to drop old partitions every year
    >
    How can you drop old partitions every year if 70% of the queries need product data 'regardless of the time those transactions happened'?
    What is the actual 'datetime' requirement? And what is your definition of 'a particular week'? Does a week cross month and year boundaries? Does the requirement include MONTHLY, QUARTERLY or ANNUAL reporting?
    Those 'boundary' requirements (and the online/offline need) are critical inputs to the best partitioning strategy. A MONTHLY partitioning strategy means that for some weeks two partitions are needed. A WEEKLY partitioning strategy means that for some months two partitions are needed. Which queries are run more frequently, weekly or monthly?
    Why did you mention sub-partitioning? What benefit do you expect, or what problem are you trying to address? And why hash? Hash partitioning generally guarantees that ALL partitions will be needed for range-predicate queries, since Oracle can only prune hash partitions on equality predicates when it evaluates execution plans.
    The biggest performance benefit of partitioning is when the partition keys used have a high correspondence with the filter predicates used in the queries that you run.
    Conversely, the biggest management benefit of partitioning is when you can use interval partitioning to automate the creation of new partitions (and subpartitions, if used) based solely on the data.
    The other big consideration for partitioning, for both performance and management, is the use of global versus local indexes. With global indexes (e.g. a global primary key) you can't just drop a partition in isolation; the global primary key needs to be maintained by deleting the corresponding index entries.
    On the other hand, if your partition key includes the primary key column(s), then you can use a local index for the primary key. Then partition maintenance (drop, exchange) is very efficient.
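    The interval-partitioning and local-index points can be sketched as follows (Oracle 11g+ syntax; the table and index names are hypothetical):

    ```sql
    -- Interval partitioning auto-creates a new monthly partition as soon
    -- as a row arrives beyond the highest existing bound.
    CREATE TABLE sales_fact (
      trans_date  DATE   NOT NULL,
      product_id  NUMBER NOT NULL,
      qty         NUMBER
    )
    PARTITION BY RANGE (trans_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    ( PARTITION p0 VALUES LESS THAN (DATE '2012-01-01') );

    -- LOCAL index: each index partition covers exactly one table
    -- partition, so DROP PARTITION needs no global index maintenance.
    CREATE INDEX sales_fact_dt_prod_ix
      ON sales_fact (trans_date, product_id) LOCAL;
    ```

    With this layout, dropping a month of history is a dictionary operation rather than a bulk delete plus global-index cleanup.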

  • Linux performance for LVM vs independent partition on 10G XE?

    Hi groups,
    I am a programmer and have now been asked to evaluate using Oracle 10g XE to replace MySQL for some running web applications. I remember from old experience playing with Oracle 8i on Linux that the guideline said the Oracle datafiles should be placed on an independent partition. I am now testing XE with SUSE Enterprise 9, which uses LVM (logical volume management).
    So far I have not seen a guideline on whether 10g XE should use an independent partition. And the case is that SUSE uses LVM, so applications see multiple physical partitions as one logical volume. This is totally the opposite of what Oracle intended with an independent partition. Does the 4GB datafile limit make this not a concern?
    I am not a DBA, so I am hoping some gurus can give me some advice. Thanks!

    You can separate the redo logs, tablespaces etc in XE onto separate disks if you want. It's a manual exercise but there is nothing to stop you from doing it.
    But as Andrea mentioned, this is largely an exercise in futility given that XE is limited to 4 GB of data and a single CPU. If performance is an issue then you would probably be better off with Standard Edition or Standard Edition One, which gives you greater control and also includes advanced storage management techniques such as Automatic Storage Management.
    The default install of XE should be more than adequate for the majority of usages that XE is targeted at. With one proviso: the production release will use a defined flash recovery area (FRA) for some of the online redo logs, archived redo logs, and backupsets.
    We would strongly recommend that if you have more than 1 disk, the FRA should be placed on a separate disk from the one that holds the database.

  • Slow performance in windows after resizing partitions with bootcamp

    Windows XP Pro crashed on my iMac 6.1 (Intel Core 2 Duo @ 2.16GHz, 3 GB RAM), so I reinstalled Mac OS X Snow Leopard and created 2 equal partitions of 125GB each with Boot Camp. I formatted both partitions as NTFS instead of the previous configuration of 32GB FAT32 for Windows and 218GB for Mac OS X. I updated EVERYTHING with the latest updates and service packs (I installed Windows XP Pro with a CD that had Service Pack 2 on it, then installed Service Pack 3). I need to use Windows for AutoCAD but want the support and reliability of Apple.
    Now the Windows partition is running VERY slowly, whereas before the crash and changes I had no performance problems. I don't know what to do.


  • OSX has performed slower since a Win7 partition was created...

    Hi,
    Late last July (I think) I partitioned the 1TB HDD of the iMac below and installed Windows 7 onto the 100GB partition. Since early October of last year, I noticed that performance in OS X had dropped; it took longer to boot up, and especially to launch any applications soon after login. Since mid January the performance under OS X has appeared even worse. It takes very much longer to launch many applications, like Safari or Word, again especially soon after login. I had installed an application in the Win 7 partition on the 13th of Jan: Tencent QQ. This was so I could correspond with friends in China.
    It appears that ever since then, many tasks have taken longer to execute and complete. The strange thing is that, in comparison to the slowing down of OS X, Windows 7 has not taken any performance hit at all, even with only twenty percent left free on its allocated partition.
    Any advice and suggestions would be greatly appreciated.
    Thanks, LIb.
    Note: the Windows partition has been defragmented, and both OSes have read access to both partitions.

    Safe Boot is okay but doesn't do as good a job on the directory as fsck will; better yet, boot from the OS X DVD or another hard drive running 10.6.6.
    Apple's First Aid has not put Alsoft or MicroMat out of business, and they have their own more robust disk maintenance tools.
    For free or $29, a clone and erase/restore will optimize the Mac volume and OS X.
    There are always a few threads in the Boot Camp forum from people that have a slower OS X response after installing Windows. Your iMac (4-core, 8GB RAM) should be fast, and no slower after partitioning, unless there is some structural problem. It does sound like a problem with the boot and system caches, which should be deleted and recreated from 10.6.6; they are rebuilt fresh by a SuperDuper restore (it doesn't copy temp files and caches, for one).
    Maybe Boot Camp Assistant really is to blame as a poor partition manager. For me, Windows has always been on its own dedicated drives.

  • Infocubes with empty partitions

    Hi All,
    Can you suggest a way to find the cubes with empty partitions ?
    Program SAP_DROP_EMPTY_FPARTITIONS is not a suitable way to do that, since we have more than 1500 InfoCubes to be checked.
    Thanks in advance
    Berna

    Hi Venkatesh,
    Report RSORAVDV gives me results for the table name (/BIC/F*) and its partition count.
    What I need is the partitions with zero records. I could not find this data in the report result.
    I would have to get the list of tables with lots of partitions and run the report SAP_DROP_EMPTY_FPARTITIONS for them.
    And that is still a time-consuming activity (we have more than 200 tables with more than 100 partitions each).
    Any other suggestion ?
    Regards
    Berna

  • How to recover after performing an 'Erase' and then 'Partition' within Disk Utility of an External USB drive?

    Hi - I accidentally ran an 'Erase' and then 'Partition' within Disk Utility on an external USB drive (I meant to run it against another drive that was plugged in).  Does anyone have any suggestions on whether it is possible to recover from this?  It has all my family photos on it and my backup is a few months old :-(
    I tried running Stellar Phoenix Mac Data Recovery but it found what looks like a basic Mac folder structure (probably from the first erase).  I think what messed me up was that I ran the repartition, which formats the drive a 2nd time!  Any help is truly appreciated!
    thank you,
    -Steve

    Hello RalphR_MI,
    Take a look at this page, which lists several articles.  Please let me know if any of these help you restore your backup.
    Also, take a look at this article, as well as this one. 
    I hope this helps.  I'm not sure of a few things, so I'm sorry for sending you so much, but I'm confident it will give you something new to try.
    Please let me know if this helps.  Good luck!

  • Performance Impact with InfoCube Compression

    Hi,
    Is there any delivered content which gives a comparative analysis of performance before InfoCube compression and after it? If not then which is the best way to have such stats?
    Thank you,
    sam

    The BW Technical Content cubes/queries can tell you if a query is performing better at different points in time.  I like to always compare a volume of queries before and after, rather than look at a single execution.  As mentioned, ST03 can provide info, as can RSRT.
    Three major components of compression can aid performance:
    <u><b>Compression</b></u>
    The compression itself - how many rows do you end up with in the E fact table compared to what you had in the F fact table.  This all depends on the data - some cubes compress quite a bit, others not at all, e.g.:
    Let's say you have a cube with a time grain of Calendar Month.  You load transactions to it daily.  A particular combination of characteristic values on a transaction occurs every day, so after a month you have 30 transactions spread across 30 Requests in the F fact table.  Now you run compression - these 30 rows compress to just 1 row.  You have now reduced the volume of data in your cube to just about 3% of what it used to be.  Queries should run much faster in this case.  In real life, I doubt you would see a 30-to-1 reduction, but perhaps 2-to-1 or 3-to-1 is reasonable.  It all depends on your data and your model.
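    Conceptually (a hedged sketch, not BW's actual implementation), compression collapses the request dimension: rows that are identical except for the request ID are summed into a single E fact table row, roughly like:

    ```sql
    -- Hypothetical table/column names; BW's real F/E fact tables differ.
    INSERT INTO e_fact (calmonth, material, amount, quantity)
    SELECT calmonth, material, SUM(amount), SUM(quantity)
    FROM   f_fact                     -- 30 rows (one per daily request)
    GROUP  BY calmonth, material;     -- for the same key become 1 row
    ```

    The ratio of F-table rows to grouped E-table rows is exactly the compression ratio discussed above.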
    <b><u>Zero Elimination</u></b>
    Some R/3 applications generate transactions where all the key figures are 0, or generate transactions that offset each other, netting to 0.  Specifying Zero Elimination during compression will get rid of those records.
    <u><b>Partitioning</b></u>
    The E fact table can be partitioned on 0FISCPER or 0CALMONTH.  If you have queries that restrict on those characteristics, the DB can narrow in on just the partitions that hold the relevant data (this is usually referred to as partition pruning).  If a query only goes after 1 month of data from a cube that has 5 years of data, this can be a big benefit.

  • List Partition performance

    I'm attempting to address some performance issues by using list partitioning (10g running on Windows Server 2003). As an experiment I created two small tables (~1000 rows) of vehicle maintenance data, one with no partitions, the other partitioned by vehicle type:
    create table myschema.mynewtable
    tablespace mynewtablespace
    parallel ( degree default )
    nologging
    monitoring
    partition by list (vehicle_type) (
    partition TRUCK values ('truck'),
    partition CAR values ('car'),
    partition SUV values ('suv'),
    partition VAN values ('van')) as
    select * from myschema.myoldtable;
    Problem is that queries into the partitioned table take approx. twice as long to complete as compared to the non-partitioned table. Am I missing something here?
    I also get the following error when I view the table using Enterprise Manager:
    "Only range partitioned tables are supported in the current version"
    As I stated before I am using 10g on Windows Server 2003.
    Thanks in advance!

    Could you possibly compare the execution plans with and without the list partitioning? Are the stats recent? It looks like EM doesn't yet support list partitioning, since this feature is so new (sorry, I can't try this myself, I don't have EM currently).
    Daniel
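    One way to make that comparison (standard EXPLAIN PLAN / DBMS_XPLAN tooling; the predicate below is just a placeholder) is to explain the same statement against both tables and diff the plans:

    ```sql
    -- Plan for the partitioned table:
    EXPLAIN PLAN FOR
      SELECT * FROM myschema.mynewtable WHERE vehicle_type = 'truck';
    SELECT * FROM TABLE(dbms_xplan.display);

    -- Repeat for the non-partitioned table:
    EXPLAIN PLAN FOR
      SELECT * FROM myschema.myoldtable WHERE vehicle_type = 'truck';
    SELECT * FROM TABLE(dbms_xplan.display);

    -- And make sure the stats are current before comparing:
    EXEC dbms_stats.gather_table_stats('MYSCHEMA', 'MYNEWTABLE');
    ```

    With a single-value predicate on the list key, the partitioned plan's PSTART/PSTOP columns should show only one partition being accessed if pruning is working.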

  • Insert performance issue with Partitioned Table.....

    Hi All,
    I have a performance issue with a table which is partitioned. Without the table being partitioned,
    the insert ran in less time, but after partitioning it took more than double.
    1) The table was created initially without any partition, and the below insert took only 27 minutes.
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:27:35.20
    2) Now I re-created the table with partitioning (range, yearly - below) and the same insert took 59 minutes.
    Is there any way I can achieve better performance during inserts on this partitioned table?
    [ Similarly, I have another table with 50 million records; the insert took 10 hrs without partitioning,
    and with the table partitioned, it took 18 hours... ]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
    Open C1;
    Loop
    Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
    Forall I In 1..C_Rectype.Count
    Insert Into test
         (col1, col2, col3)
    Values
         (val1, val2, val3);
    V_Rec := V_Rec + Nvl(C_Rectype.Count,0);
    Commit;
    Exit When C_Rectype.Count = 0;
    C_Rectype.delete;
    End Loop;
    End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22
    Edited by: user520824 on Jul 16, 2010 9:16 AM

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand, you can save a little bit of processing by specifying the partition in the insert (which may not be a scalable long-term solution) - I'm not 100% sure you can do this on inserts but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert causes it to be ignored. Where it is effective, and should help you, is if you can do the insert in one statement - insert into/select from. If you are using the loop to avoid filling up undo/rollback, you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to, because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date, Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance, Sd_Balance,
         Sysdate, 'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
      And A.Vendor_Cd = B.Company_No;

    Edited by: riedelme on Jul 16, 2010 7:42 AM

  • How to know  Whether Partition is done for Cube or not

    Hi All,
    I need information regarding Partition of a cube.
    a) How do I know whether partitioning has been done or not for a cube?
    b) On what basis should we partition a cube?
    Thanks and Regards,
    C.V.

    There are several threads on partitioning, so it would be a good start for this question (as with any question) to search the BI forums on "Partitioning" and review those first.
    Some basic considerations on partitioning -
    - Your DB must support Range partitioning to permit partitioning your InfoCube. The option will be greyed out if it is not available.
    - InfoCube must be empty to be partitioned.
    - InfoCube can only be partitioned on 0FISCPER or 0CALMONTH.  You can define it so that you have a partition for each month/fiscper, or so that each partition will hold a few or several months of data.
    - Generally, you would not partition small cubes.
    - Through BW 3.5, aggregates automatically get partitioned the same way the InfoCube is partitioned, as long as the partitioning characteristic is in the aggregate.  In 2004s, I believe you have options as to whether an aggregate gets partitioned or not.
    - Partitioning may be done for query performance reasons and for data administration.  If the queries on the InfoCube regularly run with restrictions/filters on the partitioning characteristic (0FISCPER or 0CALMONTH), the database will "prune" any partitions that do not contain the FISCPER/CALMONTH value(s), so that it does not need to consider them.  E.g. most of your users only run a sales query for the current and previous month, but your cube contains 3 years of data.  By partitioning on 0CALMONTH (we'll assume 1 partition per month), the database will exclude all but the two partitions from consideration.  This could help query performance a lot, or maybe only a little, depending on a variety of other factors.
    Again - it is important that the queries restrict on the partitioning characteristic for partitioning to be of any value to query performance.  So don't partition on 0FISCPER if all the queries use 0CALMONTH for restrictions.
    The data administration reason you might partition is to improve selective deletion or archiving time.  These processes are capable of using a DB function to drop the partition, which quickly removes the data from the cube, rather than having to run a resource-intensive delete query.  This only happens if your deletion/archiving criteria are set to remove an entire partition of data.
    Again - review the other threads on the BI forums on partitioning.  Most questions you have will already have been asked and answered before; post again on SDN if there is something you still have a question about.
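    At the database level, the fast deletion path described above amounts to a partition drop instead of a row-by-row delete. A hedged sketch with hypothetical names (BW drives this itself; you would not normally issue it by hand):

    ```sql
    -- Row-by-row deletion: scans, logs and undoes every affected row.
    DELETE FROM e_fact WHERE calmonth = 200501;

    -- Partition drop: removes the same data as a dictionary operation,
    -- which is why deleting/archiving a whole partition's worth is fast.
    ALTER TABLE e_fact DROP PARTITION p200501;
    ```

    This is also why the speedup only materializes when the deletion criteria line up exactly with partition boundaries.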

  • Shadow tables that have been created via the new partitioning schema

    Hi,
    Complete Partitioning:
    In complete partitioning, the fact tables of the InfoCube are fully converted using shadow tables that have been created via the new partitioning schema.
    In the above explanation, what is meant by the shadow tables which perform the partitioning of an InfoCube?

    Hi
    Shadow tables have the namespace /BIC/4F<Name of InfoCube> or /BIC/4E<Name of InfoCube>.
    Complete Partitioning
    Complete Partitioning fully converts the fact tables of the InfoCube. The system creates shadow tables with the new partitioning schema and copies all of the data from the original tables into the shadow tables. As soon as the data is copied, the system creates indexes and the original table replaces the shadow table. After the system has successfully completed the partitioning request, both fact tables exist in the original state (shadow table), as well as in the modified state with the new partitioning schema (original table). You can manually delete the shadow tables after repartitioning has been successfully completed to free up the memory. Shadow tables have the namespace /BIC/4F<Name of InfoCube> or /BIC/4E<Name of InfoCube>.
    You can only use complete repartitioning for InfoCubes. A heterogeneous state is possible. For example, it is possible to have a partitioned InfoCube with non-partitioned aggregates. This does not have an adverse effect on functionality. You can automatically modify all of the active aggregates by reactivating them.
    Hope it helps and is clear

  • Partitioning

    Hi,
    Can anyone tell me how you go about partitioning a fact table in 3.5?
    Andy

    You can partition the data based on the time characteristics available in your InfoProvider. If you have 0CALMONTH, 0FISCPER, or 0CALWEEK in your InfoProvider then you can also partition based on these time characteristics.
    From a performance point of view, partitioning by 0CALDAY is not recommended, as it is going to create a huge number of partitions. If you have a date range of over 2 months in the report, then you are literally going to access about 60 partitions to get that data, which is not good. However, if you partition by year the data will be in one or two partitions depending on the year, and in 2 partitions if based on 0CALMONTH / 0FISCPER.
    It is always recommended to partition by 0CALMONTH / 0FISCPER. You need to have these characteristics in your cube if you want to partition your data based on them.
    To partition a cube:
    Just right-click on the InfoCube -> Change, or double-click on the InfoCube to go to InfoCube maintenance. From the menu choose Extras -> Partitioning.
    Partitioning:
    http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Partitioning InfoCubes using the Characteristic 0FISCPER:
    http://help.sap.com/saphelp_nw04/helpdata/en/0a/cd6e3a30aac013e10000000a114084/content.htm
    InfoCube partitioning
    Older SQL Server databases (prior to MS SQL Server 2005) do not support this partitioning feature.
    Oracle and Informix do support it; refer to OSS note 869407 for more details.
