Copy Cube, Fact table partitioned automatically.

Hello guys,
I have a problem: I made a copy of a cube to back up its data before redesigning the original cube.
The problem is that the fact table of my new cube is now partitioned into multiple partitions.
I'm on an Oracle database, and with program SAP_DROP_EMPTY_FPARTITIONS in SE38 I can see that in the Dev environment neither cube has partitions, but in the Prod system the original cube is not partitioned while the new cube (the copy of the original one) has a lot of partitions.
These partitions are making the loads from one cube to the other very slow: around 4 hours for 300,000 records, and it's because of the partitioning of the cube.
So I want to know what caused the fact table of the copied cube to be partitioned, and how I can solve this problem so that the data loads run more quickly.
THANKS in advance.

Did you try partitioning the cube by fiscper/calmonth?
This will surely help the system, as long as you keep the number of partitions low.
Cheers.
(BTW: if you are creating a backup copy/archive of a cube that doesn't require posting date in reporting, try changing the posting date to the end of the month; the aggregation level will then change, resulting in lower data volume.)
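A quick way to confirm what the copy actually looks like at the database level is to count the partitions of each fact table. A hedged Oracle sketch (assuming the standard /BIC/F<cube> fact table naming, run in the cubes' schema):

    -- Count partitions per BW fact table (names starting with /BIC/F)
    SELECT table_name, COUNT(*) AS partition_count
      FROM user_tab_partitions
     WHERE table_name LIKE '/BIC/F%'
     GROUP BY table_name
     ORDER BY partition_count DESC;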

Similar Messages

  • Regarding Fact Table Partitioning

    Hi Experts,
I need a small clarification:
How do you do fact table partitioning? In which scenario is it necessary?
Can somebody please explain with an example?
Thanks in advance,
    Janardhana Rao.

    And keep in mind - the main goal of partitioning the Cube is usually performance (although there are also some data administration benefits) which is achieved by partition "pruning". This "pruning" occurs only if your query has selection criteria/restrictions based on the partitioning characteristic.  What it does is restrict the amount of data that the query must examine.
That is - if you partition your cube on 0FISCPER, your queries should use 0FISCPER as a selection/restriction; e.g. if you query on a single 0FISCPER, the database is smart enough to prune (exclude) all the data in the other partitions.
If 0FISCPER or 0CALMONTH don't fit your needs, open a customer message with SAP. I saw a post suggesting that they have a procedure to permit partitioning the E fact table on some other characteristic, but I have not verified that myself (think I'll do that...)
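To make the pruning idea concrete, here is a hedged Oracle sketch (table, column, and partition names are made up; in BW the fact table partitioning itself is generated by the system):

    -- A fact table range-partitioned by fiscal period (e.g. 2005001 = 001/2005)
    CREATE TABLE sales_fact (
      fiscper NUMBER(7),
      amount  NUMBER
    )
    PARTITION BY RANGE (fiscper) (
      PARTITION p2005001 VALUES LESS THAN (2005002),
      PARTITION p2005002 VALUES LESS THAN (2005003),
      PARTITION pmax     VALUES LESS THAN (MAXVALUE)
    );

    -- Prunes: the optimizer touches only partition p2005001
    SELECT SUM(amount) FROM sales_fact WHERE fiscper = 2005001;

    -- Does NOT prune: wrapping the partitioning column in a function
    -- (the analogue of restricting on 0FISCYEAR instead of 0FISCPER)
    SELECT SUM(amount) FROM sales_fact
     WHERE SUBSTR(TO_CHAR(fiscper), 1, 4) = '2005';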

  • Sequential read on Cube fact table

I have a couple of queries on a particular cube that run forever. I rebuilt the indexes for that
cube's fact table and regenerated the queries; some of them started executing pretty quickly,
whereas the others are still very slow.
The result is similar when I use LISTCUBE.
When I check SM66, it shows that the query reads the cube fact table for a very, very long time.
Is there anything that can be done?
All suggestions welcome!!
The cube has no aggregates.

Hi oops,
Why don't you build aggregates on the cube? It will considerably reduce the query runtime. Refreshing cube statistics and creating indexes definitely help in loading and reading the cube, but if it has got a huge number of records then obviously the query runtime will be high, and you have to build aggregates for the cube. If you click on Business Intelligence in SDN, you'll find many documents on how to create an aggregate under the performance link. This should help you.
Regards,
Sriram

  • Modeling: Is Fact Table partitioning that important?

I read about how much partitioning a fact table can speed up a query. On an upcoming project with specifications for only 4 InfoCubes, how can we take advantage of partitioning the fact tables to speed up queries?
Can this be factored into the design now, or should we wait till queries on the cubes begin to perform poorly?
    Thanks.

Spend some time reviewing the links others have provided and search SDN.
    As mentioned, there are two types of partitioning generally talked about with SAP BW, logical and physical partitioning.
Logical partitioning - instead of having all your data in a single cube, you might break it into separate cubes, with each cube holding a specific year's data, e.g. you could have 5 sales cubes, one for each year 2001 thru 2005.
You would then create a MultiProvider that allows you to query all of them together.
A query that needs data from all 5 years would then automatically (you can control this) be split into 5 separate queries, one against each cube, running at the same time. The system automatically merges the results from the 5 queries into a single result set.
So it's easy to see when this could be a benefit. If your queries, however, are primarily run just for a single year, then you don't receive the benefit of the parallel processing. In non-Oracle DBs, splitting the data like this may still be a benefit by reducing the amount of rows in the fact table that must be read, but it does not provide as much value on an Oracle DB, since InfoCube queries use a star transformation.
Physical partitioning - I believe only Oracle and Informix currently support range partitioning. This is a separately licensed option in Oracle.
Physical partitioning allows you to split an InfoCube into smaller pieces. The pieces, or partitions, can only be created on 0FISCPER or 0CALMONTH for an InfoCube (ODSs can be partitioned, but require a DBA's involvement). The DB can then take advantage of this partitioning by "pruning" partitions during a query, e.g. a query that only needs data from June 2005.
The DB is smart enough to restrict the indices and data it will read to the June 2005 partition. This assumes your query restricts/filters on the partitioning characteristic. It can apply this pruning to a range of partitions as well, e.g. 0FISCPER 001/2005 thru 003/2005 would only look at the 3 partitions.
It is NOT smart enough, however, to figure out that if you restrict to 0FISCYEAR = 2005, it should only read 000/2005 thru 016/2005, since 0FISCYEAR is NOT the partitioning characteristic.
An InfoCube MUST be empty in order to physically partition it. At this time, there is no way to add additional partitions thru AWB, so you want to make sure that you create partitions out into the future for at least a couple of years.
    In summary, you need to figure out if you want to use physical or logical partitioning on the cube(s), or both, as they are not mutually exclusive.
    So you would need to know how the data will be queried, and the volume of data.  It would make little sense to partition cubes that will not be very large.

  • Experiences of Partitioning FACT tables

    Running BPC 7.0 SP3 for MS
We have two very large FACT tables (195 million records and 105 million records) and these are currently growing at a rate of 2m/5m records per month - we are running an incremental optimize twice per day.
It has been suggested that we consider partitioning the tables to improve performance, but I have not been able to find any users/customers with experience of doing this.
Specifically:
1. Does it improve performance?
2. What additional complexity does it add to regular maintenance?
3. Have there been any problems encountered implementing partitioned tables?
4. It would seem that partitioning based on time would make sense - historic data in one partition, current data in another. HOWEVER, many of our reports pull current year and prior year, so will this cause a reporting issue? Or degrade report performance?

I don't know if this is still an issue for you. You ask about FACT table partitioning specifically, but you need to be aware that it is possible to partition either the FACT tables or the fact table partition of the cube, or both. We have used (further) partitioning of the fact table partition in the cube with success, and it sounds as if this is what you are really asking about.
The impacts are on:
1. Processing time. A full optimize without Compress only processes the partitions that have changed, thereby reducing the run time where there is a lot of unchanged data. You mention that you run incremental updates twice daily; this is currently reprocessing the whole database. I would have expected the lite optimize to be more effective, supported by an overnight full optimize, if you have an overnight window. You can also run the lite optimize more frequently.
2. Query time. The filters defined in the partitions provide a more efficient path to data in the reporting processes than the defaults, which have the potential to scan large parts of the database.
Partitioning is not a panacea. You need to be specific about the areas of performance problem that you have and choose the performance improvement strategy to address them. Looking at the indexing of the database is also an area where you can improve performance significantly.
If you partition the cube, it is transparent to the usage of the application, from both the user and admin perspective. The greatest complexity comes in the definition of the partitions in the first place, but this is a normal DBA function. The trick is to ensure that the filter statements do not overlap (otherwise you might get a value duplicated in 2 partitions) and to define a catch-all partition to include anything not covered by the specific partitions. You should expect to revisit the partitioning from time to time. It is quite straightforward to repartition; you are not doing anything to the underlying data in the FACT tables.
Time is a common dimension to partition on, and you may partition at different levels of granularity for different periods, e.g. current year by quarter or month, prior and future years by year. This reflects where the most frequent updates will be. It is also possible to define partitions based on combinations of dimensions; we use category and time, so that current-year actuals have the most granular partitions and all historic years' budgets go into a single partition.

  • Copy from Fact: "Invalid selection passed"

Using OS5.0 SP2, has anyone received this error while running a standard copy-from-fact-table package - "invalid selection passed"? The odd thing is, the copy package will work with the exact same parameters.
I didn't find anything about this in the SAP notes.

Can you list out the source and destination settings you're using? I cannot remember the last time I ran that particular package, but I can try to run it with similar settings in ApShell.

  • How to combine multiple fact tables and dimensions in one worksheet?

    Hello Forum,
I am encountering a reporting problem when trying to create a worksheet that uses more than one cube/fact table with common dimensions. I used Oracle Warehouse Builder 10gR2 to design and deploy a pretty simple ROLAP data mart. We are using Discoverer Plus for OLAP as our reporting tool. We have 5 dimension tables in a star schema and 3 fact tables. When I create the worksheet, I bring in our sales measure from the sales item table, then Store_Name from my Stores dimension and Day from my Time dimension; everything looks good at this stage - we're just trying to get a sum of all sales for that store on that day. Then I bring in a measure from our advertising cost table, and a join window pops up asking which join to use. If I choose either the Store or the Time dimension, I get correct data for the first fact table (sales) and grossly incorrect data for the ad cost measure from the second fact table (advertising costs). Any help would be appreciated.

    You have encountered one of the key limitations of Discoverer... which I complained about to the Discoverer product manager at OpenWorld in 2001....
    Anyhow, to get around this, you are going to have to deal with it either in the database, (views, materialized views, tables), or within the admin tool by creating a custom folder.
Discoverer also calls this the "fan trap", but never really had a solution to the problem. [The solution only worked if you joined to one and only one dimension!]
    What you want (using Sales_Fact and Inventory_Fact as an example) is to join Sales to Time, Store, and Product, and save that result. Then join Inventory to Time, Store, and Product, save that result, then do a double outer join between the two intermediate temporary tables in order to calculate something useful like inventory turns by store and product line.
This is also known as "multipass SQL", and is supported by some (but not many) other tools.
    So, to accomplish this with Discoverer, you'll either need to create a view, or table, or materialized view that has already put Sales and Inventory into a single (virtual?) fact table. Alternatively you can write the SQL for how to do this linkage (don't forget to handle missing data), and use the Discoverer admin tool to create a custom folder that uses your SQL.
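A hedged sketch of such a view (all table and column names are made up; the real fact and dimension names would come from your mart):

    -- Aggregate each fact to the shared (store, day) grain first,
    -- then full outer join so neither measure is fanned out
    CREATE OR REPLACE VIEW sales_and_ad_cost AS
    SELECT COALESCE(s.store_key, a.store_key) AS store_key,
           COALESCE(s.day_key, a.day_key)     AS day_key,
           s.total_sales,
           a.total_ad_cost
      FROM (SELECT store_key, day_key, SUM(sales_amt) AS total_sales
              FROM sales_fact
             GROUP BY store_key, day_key) s
      FULL OUTER JOIN
           (SELECT store_key, day_key, SUM(ad_cost) AS total_ad_cost
              FROM ad_cost_fact
             GROUP BY store_key, day_key) a
        ON s.store_key = a.store_key
       AND s.day_key   = a.day_key;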
    Hope this helps!

  • Indexes on a fact table

    Hi All,
We are trying to build a data warehouse. The data marts will be accessed by a Cognos reporting layer. In the data marts we have around 9 dimension tables and 1 fact table. For each month we will have around 21-25 million records in the fact table. Out of the 9 dimensions, dim1 and dim2 have 21 million and 10 million records respectively. The other 7 dimensions are very small, less than 10k records each.
In the Cognos reports they are joining some dimension tables to the fact table to populate some reports, and these are taking around 5-6 min.
I have around 8 B-tree indexes on this fact table with all possible combinations of columns. I believe that this many indexes is not improving performance, so I decided to create an aggregated table with measures. But in Cognos there are some reports that pull detailed information from the fact table, and those are taking around 8 min.
Please advise as to what type of indexes can be created on fact tables.
I read that we can create bitmap join indexes, but the documentation says that they can include columns only from dimension tables and not fact tables. Should the indexed columns be keys in the dimension tables?
I have observed that the fact table is around 1.5 GB, but each index is around 1.9-2 GB. I was kind of surprised when I saw that figure. Does it imply that an index scan plus table lookup would take more time than a full table scan, and hence it is not using the indexes?
    Any help is greatly appreciated.
    Thanks
    Hari

    What sort of queries are you running? Do you have an example (with a query plan)?
    Are indexes even useful? Or are you accessing too much data to make indexes worthwhile?
    Are you licensed to use partitioning? If so, are your fact tables partitioned? Are the queries doing partition pruning?
    Are you using parallelism? If so, is parallel query actually being invoked?
    If creating aggregate tables is a potentially useful strategy, you would want to use materialized views with query rewrite.
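As a rough illustration of that last point, a hedged Oracle sketch (object names are hypothetical; query rewrite additionally requires the QUERY_REWRITE_ENABLED parameter and appropriate privileges):

    -- Pre-aggregate once; the optimizer can then rewrite matching
    -- queries to hit the MV instead of scanning the fact table
    CREATE MATERIALIZED VIEW fact_by_month_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      ENABLE QUERY REWRITE
    AS
    SELECT d.month_key,
           SUM(f.sales_amt) AS total_sales,
           COUNT(*)         AS row_cnt
      FROM fact_table f
      JOIN date_dim  d ON d.date_key = f.date_key
     GROUP BY d.month_key;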
    Justin

  • Join fact tables with different grain without upsetting measures?

Say I am a business analyst at a university, and I have two analytical cubes available to me: an Enrollment Cube and an Admissions Cube. The Enrollment Cube has a grain of Student and Semester, whereas the Admissions Cube has a grain of Student, Application and Semester (a potential student can have more than one application in a semester). Is there any way for the OLAP developer who makes these cubes to let me use both cubes' fact tables without messing up their corresponding measures, such that I can create analyses like the following:
County     Semester   Acceptance Rate (Admission Fact)   Retention Rate (Enrollment Fact)
County A   1          30%                                88%
County A   2          60%                                87%
County A   3          42%                                84%
County B   1          90%                                100%

One suggestion is to create a view that joins the 2 fact tables and then create a cube with measures mapped to the view for analysis purposes. However, this means the analysis would be using the measures of the new cube.
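A hedged sketch of what that view might look like (table, column, and measure names are all hypothetical; the actual rate calculations would depend on your model):

    -- Roll each fact up to the shared (student, semester) grain
    -- before joining, so neither cube's measures are double-counted
    CREATE VIEW admissions_enrollment_v AS
    SELECT a.student_key,
           a.semester_key,
           a.acceptance_rate,
           e.retention_rate
      FROM (SELECT student_key, semester_key,
                   AVG(accepted_flag) AS acceptance_rate
              FROM admissions_fact
             GROUP BY student_key, semester_key) a
      JOIN (SELECT student_key, semester_key,
                   AVG(retained_flag) AS retention_rate
              FROM enrollment_fact
             GROUP BY student_key, semester_key) e
        ON e.student_key  = a.student_key
       AND e.semester_key = a.semester_key;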

  • Error for the fact table while processing the cube - attribute key cannot be found when processing

Please help, as I am new to SSAS and this is an urgent requirement. This is a MOLAP cube, and below is the error that I am receiving when processing the cube. The cube is set to Process Full. Several similar errors pop up for various dimensions.
    "Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'Fact_Table', Column: 'ID', Value: '1'. The attribute is 'Id'. Errors in the OLAP storage engine: The attribute key was converted to an unknown member because
    the attribute key was not found. Attribute Id of Dimension: 17 - Ves - PoC Cont from Database: DB, Cube: IPNCube, Measure Group: iSrvy, Partition: Partition1, Record: 1."
    Thanks in advance.

    Thanks for the recommendations David.
It would be really great if you could clear up some of my doubts:
As far as I know, all the dimensions need to be processed first, and then the fact table is processed.
So if the IDs are not present in the dimension tables, they should not be present in the fact table either.
Here we found null values in the dimension table while the IDs were present in the fact table. What might be the reasons causing such a situation?
Also, how frequently does the cube need to be processed? Currently the ETL that processes the cube is scheduled as a SQL Server Agent job on an hourly basis every day.
Is there any possibility that the cube might be in a processing state while the SQL job for the next run gets executed, trying to access and process the cube while it is still processing?
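For what it's worth, the orphaned keys can usually be found with a query along these lines (a hedged T-SQL sketch; the dimension table name Dim_Ves is assumed, while Fact_Table and ID come from the error message):

    -- Fact rows whose ID has no matching dimension member
    SELECT f.ID, COUNT(*) AS orphan_rows
      FROM dbo.Fact_Table AS f
      LEFT JOIN dbo.Dim_Ves AS d
        ON d.Id = f.ID
     WHERE d.Id IS NULL
     GROUP BY f.ID;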

  • Distinct count for multiple fact tables in the same cube

    I'm fairly new to working with SSAS, but have been working with DW environments for many years.
I have a cube which has 4 fact tables. The central fact table is Encounter, and then I also have Visit, Procedure and Medication. Visit, Procedure and Medication all join to Encounter on Encounter Key. The relationships between Encounter and Procedure and between Encounter and Medication are both an optional 1-to-1. The relationship between Encounter and Visit is an optional 1-to-many.
Each of the fact tables joins to the Patient dimension on the Patient Key. The users are looking for a distinct count of patients in all 4 fact tables.
What is the best way to accomplish this so that my cube does not take all day to process? Please let me know if you need any more information about my cube in order to answer this.
    Thanks for the help,
    Andy

    Hi Andy,
Each distinct count measure causes an ORDER BY clause in the SELECT sent to the relational data source during processing. In SSAS 2005 or later, a new measure group is created for each distinct count measure (it's a strategy for improving performance).
    Besides, please take a look at the following distinct count optimization techniques:
    Create Customized Aggregations
    Define a Processing Plan
    Create Partitions of Equal Size
    Use Partitions Comprised of a Distinct Range of Integers
    Distribute the Hash of Your UserIDs
    Modulo Function
    Hash Function
    Choose a Partitioning Strategy
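For example, the modulo technique distributes the distinct count column across equally sized partitions with non-overlapping WHERE clauses, along these lines (a hedged sketch; table and column names are hypothetical):

    -- Four partition source queries with disjoint, equally sized ranges
    SELECT * FROM dbo.FactSurvey WHERE UserID % 4 = 0;  -- partition 1
    SELECT * FROM dbo.FactSurvey WHERE UserID % 4 = 1;  -- partition 2
    SELECT * FROM dbo.FactSurvey WHERE UserID % 4 = 2;  -- partition 3
    SELECT * FROM dbo.FactSurvey WHERE UserID % 4 = 3;  -- partition 4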
For more detailed information, please refer to the article below:
    Analysis Services Distinct Count Optimization:
    http://www.microsoft.com/en-us/download/details.aspx?id=891
    In addition, here is a good article about SSAS Best Practices for your reference:
    http://technet.microsoft.com/en-us/library/cc966525.aspx
    Hope this helps.
    Elvis Long
    TechNet Community Support

  • Fact table size of the cube

    BW Gods!
I need to selectively delete data from a cube which holds data from 2001-2006 - say I want to delete data only for 2001. It was suggested to me by pizzaman in one of our earlier threads that BW decides whether the data to be deleted is more than 10% of the cube size; if it is, it just copies the cube without the data to be deleted and renames the cube. So he suggested that I find out the fact table size and make sure that there is enough space in my fact tablespace for this operation.
My question is: how do I find out the fact table size of the cube?
How do I find out if there is enough space in the fact tablespace for the above-mentioned deletion operation to be executed?
I am not too sure whether I have put the question across correctly. Let me know.
Thank you all in advance,
    Ashwin

You want to make sure you include space for the F fact table and the E fact table, assuming you compress the InfoCube. If you use DB02 and use wildcards, you might be able to get all the size info in one shot.
Generally, the indices for an InfoCube don't occupy a lot of space, but they do take space.
The default InfoCube tablespace is PSAPFACTD, and the indices are in PSAPFACTI - not 100% sure that is universal across all the different DBs. Your shop could also have put them in different tablespaces altogether. It's probably worth a quick check with your DBAs - they might know that they have a lot of space available, so you don't need to take the time to run down the space usage info.
The table copy process will NOT delete the original fact tables unless/until it has successfully loaded the new copy.
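A hedged Oracle sketch of the size check (ZCUBE is a hypothetical cube technical name, so its fact tables follow the /BIC/F and /BIC/E naming; requires access to the DBA views):

    -- Size of the F and E fact tables in MB
    SELECT segment_name,
           ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
      FROM dba_segments
     WHERE segment_name IN ('/BIC/FZCUBE', '/BIC/EZCUBE')
     GROUP BY segment_name;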

  • Fact and dimension table partition

My team is implementing a new data warehouse. I would like to know when we should partition the fact and dimension tables: before the data comes in, or after?

Hi,
It is recommended to partition the fact table (where you will have huge data volumes). Automate the partitioning so that each day a new partition is created to hold the latest data (splitting the previous partition in two). Best practice is to partition on transaction timestamps, load the incremental data into an empty table (Table_IN), and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
Refer to the content below for detailed info.
Designing and Administrating Partitions in SQL Server 2012
A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against the available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker because only subsets of the data are used, meanwhile ensuring that the integrity of the database as a whole remains intact.
Tip
Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of the Database Options page in SSMS or by using the LOCK_ESCALATION option of the ALTER TABLE statement.
After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance, and moving historical data from the active portion of a table to a partition with less activity.
Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps:
1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
Leveraging the Create Partition Wizard to Create Table and Index Partitions
The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
Figure 3.13. Selecting a partitioning column.
The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA will map the rows of the tables being partitioned to the desired filegroups; either an existing partition scheme can be used or a new one created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroups to be used for each partition and then enter a range for the values of the partitions. The ranges and settings on the grid include the following:
Note
By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific date). The data types are based on dates.
Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
Enhancements to Partitioning in SQL Server 2012
SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition level and, if so, can be used to perform their function on a subset of data in the partitioned table.
Queries may also benefit from a query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available. Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause filters on customer name looking for 'System%', the query engine knows that it needs to read only partition three to answer the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
Administrating Data Using Partition Switching
Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered index needs to be rebuilt from time to time to reestablish its fill factor setting.)
Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered. You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH Transact-SQL statement. Both options enable you to ensure partitions are well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement does not actually move the data, a few prerequisites must be in place:
• Partitions must use the same column when switching between two partitions.
• The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes, index partitions, and indexed view partitions.
• The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
• The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for the ANSI_NULLS and QUOTED_IDENTIFIER properties. Clustered and nonclustered indexes must be identical. ROWGUID properties and XML schemas must match. Finally, settings for in-row data storage must also be the same.
• The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT NULL are supported, NOT NULL is strongly recommended.
Likewise, the ALTER TABLE...SWITCH statement will not work under certain circumstances:
• Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints are allowed).
• Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats. Triggers are allowed on tables but must not fire during the switch.
• Indexes on the source and target table must reside on the same partition as the tables themselves.
• Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
• Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning and extra work will be required to even make partition switching possible, let alone efficient.
Here's an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1. We then create a new, nonpartitioned table identical to the partitioned table, residing on the same filegroup. We finish up by switching the data from the partitioned table into the nonpartitioned table:

    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
        ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
        ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding window partition.
Example and Best Practices for Managing Sliding Window Partitions
Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we'd like to automatically group transactions into four partitions per year, basically containing one quarter of the year's data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
The answer to a scenario like the preceding one is called a sliding window partition, because we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
• With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
• With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
• Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
• RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
5. We can use SWITCH to push the new quarter's data into the spot of partition1.
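A hedged T-SQL sketch of steps 2 through 4 (the partition function name Date_Range_PartFn1, the boundary dates, and the staging table are all assumed for illustration):

    -- Step 2: split the empty rightmost partition to make room for a new quarter
    ALTER PARTITION SCHEME Date_Range_PartScheme1 NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION Date_Range_PartFn1() SPLIT RANGE ('2013-01-01');
    GO
    -- Step 3: switch the oldest populated partition out to an empty staging table
    ALTER TABLE TransactionHistory SWITCH PARTITION 4 TO TransactionHistory_Stage;
    GO
    -- Step 4: merge away the now-empty boundary at the trailing edge
    ALTER PARTITION FUNCTION Date_Range_PartFn1() MERGE RANGE ('2012-01-01');
    GO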
Tip
Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
Some best practices to consider for using a sliding window partition include the following:
• Load the newest data into a heap, and then add indexes after the load is finished. Delete the oldest data or, when working with very large data sets, drop the partition with the oldest data.
• Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition splits (when loading in new data) and merges (after unloading old data) do not cause data movement.
• Do not split or merge a partition already populated with data, because this can cause severe locking and explosive log growth.
• Create the load staging table in the same filegroup as the partition you are loading.
• Create the unload staging table in the same filegroup as the partition you are deleting.
• Don't load a partition until its range boundary is met. For example, don't create and load a partition meant to hold data that is one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
• Unload one partition at a time.
• The ALTER TABLE...SWITCH statement issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • SSAS .abf file without partitions of fact table

    Hi All,
I am trying to take a cube .abf file from the Dev environment. This cube contains partitions by month on one of the fact tables, and each of our environments has a different number of months.
My problem is that when I take the .abf file from the DEV backup, it contains only 4 months of data, so it has only 4 partitions. But SIT contains 8 months of data, and those months do not show up even after a full cube process. I came to know that we must take a fresh .abf file without processing. How can I take it?
Note: I cannot create partitions from SSMS.
    Thanks in advance

I am trying to take a cube .abf file from the Dev environment. This cube contains partitions by month on one of the fact tables, and each of our environments has a different number of months.
My problem is that when I take the .abf file from the DEV backup, it contains only 4 months of data, so it has only 4 partitions. But SIT contains 8 months of data, and those months do not show up even after a full cube process.
    Hi shrSan,
If we back up a cube that contains 4 partitions for the measure group, it will always have 4 partitions for that measure group when we restore the cube on a new OLAP server. If we need to filter a fact table into more partitions, we have to create the partitions manually on the new OLAP server.
    For more information, please see:
    Filtering a Fact Table for Multiple Partitions:
    http://technet.microsoft.com/en-us/library/ms175325(v=sql.105).aspx
    Partitions (Analysis Services - Multidimensional Data):
    http://technet.microsoft.com/en-us/library/ms175688.aspx
    If I have something misunderstood, please point out and elaborate your issue with more detail.
    Regards,
    Elvis Long
    TechNet Community Support

  • Can't load data to cubes - same error for all DTPs  "exception in substep write to fact table"

    Hi Experts,
    This initial DTP error is:
    Data package processing terminated Message no. RSBK229
    Error while updating to target <cubename> (type INFOCUBE) Message no. RSBK241
    If I drill down to the specific package in the process monitor the error is:  "exception in substep write to fact table" .
    No more details for why the error is being generated.
SAP BW 7.30, service pack 8.
I have run the debugger without success; an ABAPer also ran debug mode to get more details on the error, without success.
We checked the termination point and the ABAP stack.
    No error stack is available
    No short dumps
    I have reactivated the DTP
    The data looks fine
    The server crashed the day before the DTPs started to fail
    One cube is standard content and the other is a Z version of the same cube.
    Please let me know if anyone has resolved this issue before and what steps were necessary.

    Hello Russell,
We ran into a similar situation and later found that it was a data issue only.
We tried to enable the error stack, but as the data issue was caused by a lookup bringing in invalid data, it was not captured in the error stack and threw a short dump with the message you mentioned.
So my suggestion: put a breakpoint in any lookups and check the data in the internal tables to verify everything is as expected.
    Hope this helps.
    Thanks,
    Venkata Naresh
