Dimension Tables deleted
Hi BW Experts,
Recently we loaded SD data. There are 6 DataSources; we got 6 requests in the Manage screen, each with transferred/added records. There are about 8 lakh records (initialization).
When I check the contents, I am not able to see the data in the InfoCube.
We found that the dimension table data was deleted, although we are still able to see the fact table data.
Can anyone tell me the solution?
Regards
Anjali
Hi,
right-click on the cube and select Delete Data. It will ask whether you want to delete data from the fact table only, or from both the fact table and the dimension tables.
Select the second option, i.e. "fact table and dimension tables": it deletes all data from the fact table, then drops and recreates the dimension tables.
Then you can do the data load again.
Regards,
Purvang
Similar Messages
-
One Dimension Table Deletion issue
Dear Guru's
A query runs for a very long time in the production system.
In our production system we have a cube in which one dimension (e.g. /BIC/PC_17) shows 38% when I run the report SAP_INFOCUBE_DESIGNS. Because of this query performance issue, I am thinking of deleting data from SE11 for this dimension table only. What will happen to the fact table?
Since the dimension table is referenced by the fact table, is it correct to delete the entries of this dimension table for performance reasons?
Please advise; I am very confused, and this is a production system, so please give only well-founded answers, no guesses.
Also, supposing this dimension 17 is not used in any query: does it make sense to delete entries from it for query performance, since it still exists in the cube?
Thanks in adv,
Dev
Hi,
you should not do that, under any circumstances. There are ways to speed up a query; use them: e.g. build aggregates, run RSRV to check the cube and possibly have unused dimension entries deleted, compress the cube, and so on.
regards
Siggi -
Delete Fact and Dimension Tables
Hi all,
A small question: when deleting data from an InfoCube, it asks whether to delete just the fact table or both the fact and dimension tables.
What is the difference between the two?
Thank you.
Hi Visu,
The dimension table contains references to characteristic values by way of SIDs and DIM IDs, whereas the fact table contains the numbers (the key figures). If you delete just the fact table, the numbers go away but the DIM IDs remain intact, so a reload is faster. But if you are convinced that this data is not correct either, or you want a clean slate to start the reload again, then delete both the fact and dimension tables. When you reload, the DIM IDs will be created again.
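The faster-reload point can be illustrated with a minimal sketch (hypothetical Python standing in for the BW internals; MAT1/CUST1 are made-up SID values):

```python
# A minimal sketch (hypothetical, not actual BW code) of the point above:
# if the dimension table survives a fact-table-only deletion, a reload can
# reuse the existing DIM IDs instead of generating them again.

dim_table = {}      # SID combination -> DIM ID
next_dim_id = [1]   # mutable counter

def get_dim_id(sid_combo):
    """Return the DIM ID for a SID combination, creating a new one if needed."""
    if sid_combo not in dim_table:
        dim_table[sid_combo] = next_dim_id[0]
        next_dim_id[0] += 1
    return dim_table[sid_combo]

fact_table = []

def load(rows):
    """Load (SID combination, key figure) rows into the fact table."""
    for sid_combo, key_figure in rows:
        fact_table.append((get_dim_id(sid_combo), key_figure))

load([(("MAT1", "CUST1"), 100), (("MAT1", "CUST2"), 200)])

# "Delete fact table only": the numbers go away, the DIM IDs stay.
fact_table.clear()
ids_before = dict(dim_table)

# Reload: no new DIM IDs have to be generated, so the load is faster.
load([(("MAT1", "CUST1"), 100), (("MAT1", "CUST2"), 200)])
assert dim_table == ids_before
```

Deleting both tables would correspond to clearing `dim_table` as well, after which the reload rebuilds every DIM ID from scratch.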
Hope this helps... -
Deletion of Dimension Table data in Process Chain Step
Hi,
I am trying to delete the InfoCube contents using a process chain. This step successfully deletes the fact table contents, but not the dimension table contents.
I need your help to automate the deletion of the dimension table contents.
FYI--
We are using BW 3.0 B.
Thanks in Advance.
Vardhan.
Hi,
unfortunately it is not possible. I had the same issue in the past, and I also had confirmation from SAP.
You can use RSRV to delete unused values, either manually or via a program.
The process chain step only allows deleting the fact table.
regards
Mike -
Deleting MultiCube deleted all dimension tables of underlying BaseCube
Hello,
I realized that the MultiCube wasn't what I needed and decided to delete it.
Normally, deleting a MultiCube isn't a problem and has no influence on the InfoCubes that belong to it. But now we had a bad surprise: deleting the MultiCube stopped with error messages which said that some tables couldn't be deleted.
When I examined the table names, I saw that BW had tried to remove all dimension tables of one of the physical InfoCubes belonging to the MultiCube.
I guessed that the warning wouldn't lead to data loss, but when I tried to check data directly from the physical cube I got a new error message: one of the dimension tables is unknown. Then I tried to check all dimension tables of the cube. They are all lost.
I checked the tables in SE11 and saw that they aren't tables any more but structures.
Any idea what could have happened in this case?
A multi-cube is a union of basic cubes. You can cross check tables RSDCUBEMULTI or RSDICMULTIIOBJ.
The multi-cube itself does not contain any data; rather, data reside in the basic cubes. To a user, the multi-cube is similar to a basic cube.
You may have a look at the following tables to check the data:
/BIC/F*: InfoCube F fact table
/BIC/E*: InfoCube E fact table (compressed records)
/BIC/D*: InfoCube dimension tables -
[39008] Logical dimension table has a source that does not join to any fact
Dear reader,
After deleting a fact table from my physical layer and from my business model, I'm getting an error: [39008] Logical dimension table TABLE X has a source TABLE X that does not join to any fact source. I do have another fact table in the same physical model and in the same business model which is joined to TABLE X in both the physical and the business model.
I cannot figure out why I'm getting this error; even after deleting all joins and rebuilding them, it persists. When I look into the Joins Manager, the joins exist in both the physical and the logical model, but the consistency check still warns me about [39008]. When I ignore the warning, go to Answers, and try to show TABLE X (the dimension, not the fact), it gives me the following error:
Odbc driver returned an error (SQLExecDirectW).
Error Details
Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: TABLE X.column X Please fix the metadata consistency warnings. (HY000)
SQL Issued: SELECT TABLE X.column X saw_0 FROM subject area ORDER BY saw_0
There is one "special" thing about this join: it is a complex join in the physical layer, because I need a BETWEEN on dates and a less-than-or-equal on the current date, like this example: dim.date between fact.date_begin and fact.date_end and dim.date <= current_date. In the business model I've got another complex join.
Any help is kindly appreciated!
Hi,
Have you specified the content level of the fact table and mapped it to the dimension in question? Ideally this should be done by default, since one of the main features of Oracle BI is its ability to determine which source to use, and specifying the content level is one of the main ways to achieve this.
Another quick thing you might try is creating a dimension (hierarchy) in case one is not already present. I had a similar issue a few days back and the warning was miraculously gone after doing this.
Regards -
Dimension Table populating data
Hi
I am in the process of creating a data mart with a star schema.
The star schema has been defined with the fact and dimension tables and the primary and foreign keys.
I have written the script for one of the dimensions and would like to know: when the job runs on a daily basis, should it truncate the table every day and rebuild the dimension, or should it only add new records? If it should only add new records, how is this done?
I assume that the fact table job runs once a day and only new data is added to it?
Thanks
It will depend on the volume of your dimensions. In most of our projects we do not truncate: we update only changed rows, detected via a fingerprint (to make the comparison faster than column by column), and insert new rows (SCD1). For SCD2 we apply a similar approach for updates and inserts, with expirations done in batch (one UPDATE for all applicable rows at the end of the package/ETL).
If your dimension is very large, you can consider truncating all data, or deleting only the modified rows (based on the business key) and reloading them, but you have to be careful to maintain the same surrogate keys referenced by your existing facts.
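The fingerprint approach can be sketched roughly like this (hypothetical Python; real packages usually compute a hash column in the ETL tool and apply changes with a MERGE statement):

```python
import hashlib

def fingerprint(row: dict) -> str:
    """Hash the tracked attribute values so change detection is one comparison
    instead of column-by-column."""
    payload = "|".join(str(row[k]) for k in sorted(row) if k != "business_key")
    return hashlib.md5(payload.encode()).hexdigest()

def scd1_merge(dimension: dict, incoming: list) -> dict:
    """SCD1 merge: insert new business keys, overwrite rows whose fingerprint
    changed. `dimension` maps business_key -> (surrogate_key, fingerprint, row);
    surrogate keys are never reassigned, so existing facts stay valid."""
    next_sk = max((sk for sk, _, _ in dimension.values()), default=0) + 1
    for row in incoming:
        bk, fp = row["business_key"], fingerprint(row)
        if bk not in dimension:            # brand-new member: insert
            dimension[bk] = (next_sk, fp, row)
            next_sk += 1
        elif dimension[bk][1] != fp:       # changed: update in place,
            sk = dimension[bk][0]          # keeping the surrogate key
            dimension[bk] = (sk, fp, row)
    return dimension
```

For SCD2 the `elif` branch would instead expire the old row (set an end date) and insert a new one with a fresh surrogate key.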
HTH,
Please, mark this post as Answer if this helps you to solve your question/problem.
Alan Koo | "Microsoft Business Intelligence and more..."
http://www.alankoo.com -
Dimension key 16 missing in dimension table /BIC/DZPP_CP1P
Hi all,
I have a problem with an InfoCube, ZPP_CP1. I am not able to delete or load any data. It was working fine until some time back.
Below is the outcome of running an RSRV check on this cube. I tried to run the error correction in RSRV, but to no avail.
Please help.
Dimension key 16 missing in dimension table /BIC/DZPP_CP1P
Message no. RSRV018
Diagnosis
The dimension key 16 that appears as field KEY_ZPP_CP1P in the fact table, does not appear as a value of the DIMID field in the dimensions table /BIC/DZPP_CP1P.
There are 17580 fact records that use the dimension key 16.
The facts belonging to dimension key 16 are therefore no longer connected to the master data of the characteristic in dimension.
Note that errors that are reported for the package dimension are not serious (they are thus shown as warnings (yellow) and not errors (red)). When deleting transaction data requests, it can happen that the associated entries in the package dimension have already been deleted. As a result, the system terminates when deleting what can be a very large number of fact records. At the moment, we are working on a correction which will delete such data that remains after deletion of the request. Under no circumstances should you do this manually. Also note that data for request 0 cannot generally be deleted.
The test investigates whether all the facts are zero. If this is the case, the system is able to remove the inconsistency by deleting these fact records. If the error cannot be removed, the only way to re-establish a consistent status is to reconstruct the InfoCube. It may be possible for SAP to correct the inconsistency, for which you should create an error message.
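A rough sketch of this check's logic (hypothetical data structures, not the actual RSRV implementation): orphaned fact rows may only be deleted automatically when all of their key figures are zero.

```python
# Sketch of the RSRV-style consistency check described above: find fact rows
# whose dimension key is missing from the dimension table, and repair (delete
# them) only if every key figure on those rows is zero.
def check_and_repair(fact_rows, dim_ids):
    orphans = [r for r in fact_rows if r["dimid"] not in dim_ids]
    if all(kf == 0 for r in orphans for kf in r["keyfigures"]):
        # Safe to repair: drop the all-zero orphaned records.
        return [r for r in fact_rows if r["dimid"] in dim_ids], True
    # Orphans carry real values: the InfoCube must be reconstructed instead.
    return fact_rows, False
```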
Procedure
This inconsistency can occur if you use methods other than those found in BW to delete data from the SAP BW tables (for example, maintaining tables manually, using your own coding, or database tools).
Hi Ansel,
There have been no changes in the cube. I am getting this problem in my QA server, so I retransported the cube from Dev to QA, but that did not help.
Any other ideas??
Regards,
Adarsh -
Urgent-Issue in the Dimension tables
Hi Experts,
Question1:
I have a flat file load to a cube; the flat file has 1.5 million records. One of the dimensions created has two dates (original date and current date) assigned to it.
When I look at the dimension table, the number of entries is 30 million. And when I look at the table I see the SIDs as 0, 0 (for the two dates) while DIM IDs are being created.
When I searched the dimension table for current date and original date not equal to 0, I see only 76,000 records.
Question 2:
We have an ODS which loads to the cube. In the process chain we have a program that deletes the data in the ODS which does not match certain conditions, and then loads it to the cube.
My question is: since we are not deleting the contents of the cube before reloading it from the ODS (full update), will I not see the same records coming in with the full update and getting aggregated in the cube?
For example: I have a record in the ODS.
A X Z 100 1000
After a full update to the cube, the cube would have
A X Z 100 1000
When I run the process chain, data is deleted from the ODS on some condition, and I still have the same record in the ODS; when this loads into the cube, won't it be aggregated with the previous record?
A X Z 200 2000
I would appreciate it if anyone could explain whether I am missing anything.
Hello,
If you can't see the SIDs, it means you have not loaded the master data; that's why there is no reference to the SID table and the values are 0.
An InfoCube by default holds aggregated values: when there are duplicate records, the key figures are aggregated.
For example I have a Material Dimension and Customer Dimension
In the fact table, it will be like this
DIM1 DIM2 KF1 KF2
Mat001 Cust1 100 10
Mat001 Cust2 200 5
For this there will be 1 entry in the Material DIM table for Mat001 and 2 entries in the Customer DIM table, for Cust1 and Cust2.
Material Dimension
DIM ID SID
1 Mat001 (here it will be the SID from the material master)
Customer Dimension
DIM ID SID
1 Cust1 (here it will be the SID from the customer master)
2 Cust2 (here it will be the SID from the customer master)
Note: a DIM ID is the combination of one or more SIDs in the dimension table.
So the exact fact table will look like
MATDIM CUSDIM AMT QTY
1 1 100 10
1 2 200 5
If you load data again with the same characteristic values, the key figures will be aggregated.
For example, if you load
Mat001 Cust2 20 5
then the fact table will not get a new entry; instead the key figures are aggregated, and it looks like this (bolded row):
MATDIM CUSDIM AMT QTY
1 1 100 10
1 2 220 10
Hope its clear
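The additive behaviour described above can be sketched in a few lines (hypothetical Python standing in for what the InfoCube does on load):

```python
# Same DIM ID combination -> key figures add up instead of creating a new row.
def load_into_cube(fact, records):
    """Add (MATDIM, CUSDIM, AMT, QTY) records, aggregating on the DIM ID pair."""
    for mat_dim, cus_dim, amt, qty in records:
        key = (mat_dim, cus_dim)
        old_amt, old_qty = fact.get(key, (0, 0))
        fact[key] = (old_amt + amt, old_qty + qty)
    return fact

fact = {}
load_into_cube(fact, [(1, 1, 100, 10), (1, 2, 200, 5)])
load_into_cube(fact, [(1, 2, 20, 5)])   # same MATDIM/CUSDIM combination
# fact[(1, 2)] is now (220, 10): aggregated, not a new row
```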
thanks
Chandran -
Change entris on dimension table
Hi All,
We have two entries in the DIM table, one for the fixed-currency InfoObject in the cube, like below:
1. DIMID SID_0CURRENCY SID_0SALES_UNIT
0 1000000044 2
DIMID is zero because we created the InfoObject with a fixed currency.
Now we have to open this field up for all currencies, but the system does not allow changing the InfoObject because there is data in the cube, and we have almost 2 billion records in the cubes.
So we are thinking to do the changes like below.
We have another entry in the table, for a non-fixed-currency InfoObject:
2. DIMID SID_0CURRENCY SID_0SALES_UNIT
2 1000000045 3
So now we are planning to overwrite record 1 with record 2, so that my new record 1 looks like below.
1. DIMID SID_0CURRENCY SID_0SALES_UNIT
02 1000000045 3
Please suggest: can we change the DIM table?
Saleem.
Hi Saleem,
Don't manipulate dimension table values manually; it may lead to inconsistencies (after a manual change, RSRV will surely throw errors). If you do so, the connection of the dimension table with the fact and SID tables will be disturbed; this can be rectified by RSRV repair, but I am not sure how far it will recreate the connections.
If you still want to take the risk: do not delete the fact table data. Via SE38 (program name DB_DROP TABLES*; not sure about the exact program name, better to search) delete the dimension table values (if that is not permissible, the same can be achieved by ABAP debugging), then delete the master data table and freshly populate it.
Use SLG1 for further transaction data analysis.
Thanks
TG -
Is convenient to create a dimension table in this situation...?
I have Oracle 9.2.0.5
I have a table FACT_OPERATIONS with 10,000,000 records and these fields:
TABLE FACT_OPERATIONS
Category VARCHAR2(40) (PK)
Type VARCHAR2(35) (PK)
Source VARCHAR2(50) (PK)
Description VARCHAR2(100) (PK)
CUSTOMER_ID NUMBER(6) (PK)
MODEL_ID NUMBER(6) (PK)
STATE_ID NUMBER(6) (PK)
TIME_ID NUMBER(8) (PK)
Quantity (PK)
I load it from multiple fact tables with a MERGE statement.
I have the dimension tables Dim_Customers, Dim_Models, Dim_States and Dim_Time.
The field Source holds the name of the fact table from which I load the information.
The field Type is the parent of the field Source,
and the field Category is the parent of the field Type.
For example:
Category: EVENT
Type: SALES
Source: SALES_VI_ZU
Category: EVENT
Type: SALES
Source: SALES_VI_AU
Category: EVENT
Type: MOVS
Source: MOV_SALES
The question is:
To improve query performance, is it convenient to create the dimension Source (it would contain the fields Category, Type, Source and so on), then put a SOURCE_ID field in FACT_OPERATIONS and delete the fields Category, Type and Source?
Right now I run, for example, these queries:
select Category, Type, Source, count(*)
from FACT_OPERATIONS
group by Category, Type, Source
or
select *
from FACT_OPERATIONS
where CATEGORY = 'EVENT'
or
select *
from FACT_OPERATIONS
where TYPE = 'SALES' and SOURCE = 'SALES_VI_AU'
Will I improve performance with these changes, or will it stay the same?
Thanks!
Your queries would probably run more slowly if you moved the category and type to a dimension table, but having said that, it is still the correct thing to do. If you need faster performance for those three queries, it would be appropriate to use a bitmap join index, or to create a materialized view to aggregate the data; in fact, query 1 could be satisfied very quickly using a materialized view.
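The proposed normalization can be sketched like this (hypothetical Python illustrating the surrogate-key idea; the real change would be a CREATE TABLE plus an UPDATE/MERGE in Oracle):

```python
# Replace the three descriptive columns with a SOURCE_ID pointing at a small
# dimension table: 10M wide VARCHAR triples shrink to one numeric column.
def build_source_dim(fact_rows):
    """Assign a surrogate SOURCE_ID to each distinct (category, type, source)
    combination and rewrite the fact rows to carry only the ID."""
    dim, new_facts = {}, []
    for category, type_, source, rest in fact_rows:
        key = (category, type_, source)
        if key not in dim:
            dim[key] = len(dim) + 1
        new_facts.append((dim[key], rest))
    return dim, new_facts

rows = [
    ("EVENT", "SALES", "SALES_VI_ZU", 10),
    ("EVENT", "SALES", "SALES_VI_AU", 20),
    ("EVENT", "MOVS",  "MOV_SALES",   30),
    ("EVENT", "SALES", "SALES_VI_ZU", 40),
]
dim, facts = build_source_dim(rows)
```

Queries by Category or Type would then join through the small dimension, which is where a bitmap join index or a materialized view pays off.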
-
When we delete a request from an InfoCube, the entries that are no longer needed in the dimension tables are not removed. For example, if we delete all the data for year 2003 from an InfoCube, the information about 2003 continues to live in the time dimension table.
Do you know if there is a program or some other method to delete information from the dimension tables that is not used in the fact table?
Best regards,
Nicola Commari
Nicola,
Navigate to transaction RSRV, there exists a test which you can run
"Entries not Used in the Dimension of a cube" which will help you repair the issue you are currently experiencing.
Hope this helps.
Cheers,
Scott -
Dimension table is larger than the fact table
Hi Community,
How can we explain the phenomenon of a dimension table having MORE records in it than the fact table? What conditions would cause this to occur?
Thank you !
Keith
Thanks, Bhanu,
I am wondering specifically how to explain the output from program SAP_INFOCUBE_DESIGNS when the dimension table is shown to have a fact table ratio that is greater than 100%
I believe that SAP_INFOCUBE_DESIGNS already takes into consideration both the E and also the F-fact table when calculating the ratio. So in this case, we could not explain it by your first suggestion (after compression - but looking at only the F table).
In the case where selective deletions have been performed, how can we correct the situation ? For example, how could we clean out the records in the dimension tables which no longer have any facts in the fact table ? (I think the BW system should do this automatically as a part of the selective deletion, don't you agree ?).
Also, is there any other explanation for how the dimension table could arrive at greater than 100% the size of the fact table(s) ?
For example, let's say that (theoretically) we placed many very dynamic characteristics together in the same dimension, which we know you should not do. Would it be possible for the combination of these many dynamic characteristics to create so many DIM IDs that the dimension table overtakes the record count of the fact table? Is this situation then made worse by compression, if the number of fact table records is reduced thanks to the removal of the request ID? -
Dimension table greater than Fact table
Hello,
By analysing InfoCubes with the SAP report, I encountered a situation where I have more records in a dimension table than in the cube:
ABI_C0101 /BIC/DABI_C01011 rows: 22 ratio: 0 %
ABI_C0101 /BIC/DABI_C01012 rows: 27 ratio: 0 %
ABI_C0101 /BIC/DABI_C01013 rows: 3.558.433 ratio: 229 %
ABI_C0101 /BIC/DABI_C01014 rows: 1 ratio: 0 %
ABI_C0101 /BIC/DABI_C01015 rows: 66.440 ratio: 4 %
ABI_C0101 /BIC/DABI_C01016 rows: 15.383 ratio: 1 %
ABI_C0101 /BIC/DABI_C01017 rows: 2.533 ratio: 0 %
ABI_C0101 /BIC/DABI_C01018 rows: 1 ratio: 0 %
ABI_C0101 /BIC/DABI_C0101P rows: 2 ratio: 0 %
ABI_C0101 /BIC/DABI_C0101T rows: 122 ratio: 0 %
ABI_C0101 /BIC/EABI_C0101 rows: 0 ratio: 0 %
ABI_C0101 /BIC/FABI_C0101 rows: 1.551.333 ratio: 100 %
As the contents of the cube AND the dimensions are deleted before each reload, I'm wondering how this can happen.
Can somebody help me here ?
Thanks.
Fred.
Hi,
As I said yesterday, I had the same feeling and tested it by manually deleting the cube and dimensions before the load; the dimension is now only 17% of the cube, which is what I expected.
I will also give some feedback on the report SAP_INFOCUBE_DESIGNS. When I ran it after manual deletion of the cube, the number of rows in the report didn't change! So I suppose this program reads statistics somewhere and certainly does not read the tables directly.
So the conclusion is indeed that the variant doesn't work correctly. I will check OSS and open a note if required.
Thanks all for your ideas.
Fred. -
Fact and dimension table partition
My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables: before the data comes in, or after?
Hi,
It is recommended to partition the fact table (where we will have huge data volumes). Automate the partitioning so that each day it creates a new partition to hold the latest data (splitting the previous partition in two). Best practice is to partition on transaction timestamps, load the incremental data into an empty staging table (Table_IN), and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
Refer to the content below for detailed information.
Designing and Administrating Partitions in SQL Server 2012
A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against the available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker because only subsets of the data are used, while ensuring that the integrity of the database as a whole remains intact.
Tip
Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling
lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of Database Options page in SSMS or by using the LOCK_ESCALATION option
of the ALTER TABLE statement.
After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
and moving historical data from the active portion of a table to a partition with less activity.
Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these
steps:
1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
Leveraging the Create Partition Wizard to Create Table and Index Partitions
The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column
dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
Figure 3.13. Selecting a partitioning column.
The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned; the options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the rows of the tables being partitioned to the desired filegroups; either an existing partition scheme can be used or a new one must be created. The final screen is used for the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range for the values of the partitions. The ranges and settings on the grid include the following:
Note
By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
date). The data types are based on dates.
Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
Enhancements to Partitioning in SQL Server 2012
SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
level and, if so, can be used to perform their function on a subset of data in the partitioned table.
Queries may also benefit from a query engine enhancement called partition elimination, which SQL Server uses automatically when it is available. Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause filters on customer name looking for 'System%', the query engine knows that it only needs partition three to answer the request, which might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions.
Administrating Data Using Partition Switching
Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
index needs to be rebuilt from time to time to reestablish its fill factor setting.)
Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
Transact-SQL statement. Both options enable you to ensure partitions are
well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
does not actually move the data, a few prerequisites must be in place:
• Partitions must use the same column when switching between two partitions.
• The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
index partitions, and indexed view partitions.
• The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
• The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
(length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
Clustered and nonclustered indexes must be identical. ROWGUID properties
and XML schemas must match. Finally, settings for in-row data storage must also be the same.
• The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT
NULL are supported, NOT
NULL is strongly recommended.
Likewise, the ALTER TABLE...SWITCH statement
will not work under certain circumstances:
• Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
are allowed).
• Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
Triggers are allowed on tables but must not fire during the switch.
• Indexes on the source and target table must reside on the same partition as the tables themselves.
• Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
• Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
and extra work will be required to even make partition switching possible, let alone efficient.
Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
We then create a new, nonpartitioned table identical to the partitioned table residing on the same filegroup. We finish up switching the data from the partitioned table into the nonpartitioned table:
CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
ON Date_Range_PartScheme1 (Xn_Hst_ID);
GO
CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
ON main_filegroup;
GO
ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
GO
The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
window partition.
Example and Best Practices for Managing Sliding Window Partitions
Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
transactions into four partitions per year, each containing roughly one quarter of the year’s data, in a rolling partition scheme. Any transaction older than one year will be purged or archived.
The answer to a scenario like the preceding one is called a sliding window partition because
we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
1. How data is handled varies according to the choice of a LEFT or RIGHT partition function window:
• With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6- to 9-months old (Q3), partition3
holds data that is 3- to 6-months old (Q2), and partition4 holds recent data less than 3-months old.
• With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1
holds the most recent data.
• Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5)
of the partition range.
• RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest
value and work upward from there.
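As a sketch of such a quarterly RANGE RIGHT setup (the function name, scheme name, and boundary dates here are illustrative assumptions, not the objects used elsewhere in this example):

```sql
-- Five boundary values yield six partitions; with RANGE RIGHT, each
-- boundary value belongs to the partition on its right.
CREATE PARTITION FUNCTION Sales_Quarterly_PF (datetime)
AS RANGE RIGHT FOR VALUES
   ('2012-01-01', '2012-04-01', '2012-07-01', '2012-10-01', '2013-01-01');
GO
-- Map every partition to one filegroup for simplicity; production systems
-- often spread partitions across multiple filegroups.
CREATE PARTITION SCHEME Sales_Quarterly_PS
AS PARTITION Sales_Quarterly_PF ALL TO ([PRIMARY]);
GO
```

The six resulting partitions cover the four quarterly ranges plus the empty leading and trailing partitions recommended above.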
2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
4. We can then use MERGE to combine the empty partitions 4 and 5, so that we’re back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
5. We can use SWITCH to push the new quarter’s data into the spot of partition1.
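The steps above can be sketched in T-SQL. All object names, partition numbers, and boundary values here are assumptions for illustration; in a real system they come from your own partition function and schedule:

```sql
-- Step 2: split the empty trailing partition to make room for a new quarter.
ALTER PARTITION FUNCTION Sales_Quarterly_PF ()
    SPLIT RANGE ('2013-04-01');
GO
-- Step 3: switch the oldest populated partition out to a staging table
-- for archiving or purging.
ALTER TABLE TransactionHistory
    SWITCH PARTITION 2 TO TransactionHistory_Unload;
GO
-- Step 4: merge away the now-empty boundary so the total number of
-- partitions is unchanged.
ALTER PARTITION FUNCTION Sales_Quarterly_PF ()
    MERGE RANGE ('2012-04-01');
GO
-- Step 5: switch the newly loaded staging table in as the newest partition.
ALTER TABLE TransactionHistory_Load
    SWITCH TO TransactionHistory PARTITION 5;
GO
```

Because every SPLIT and MERGE here touches only empty partitions, and each SWITCH is a metadata-only operation, the whole cycle moves no data.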
Tip: Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
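For example, a query along these lines (the function name, table, and column are assumptions) shows how many rows each partition currently holds:

```sql
-- Map each row's partitioning-column value to its partition number
-- and count the rows per partition.
SELECT $PARTITION.Sales_Quarterly_PF(TransactionDate) AS PartitionNo,
       COUNT(*) AS RowsInPartition
FROM   TransactionHistory
GROUP  BY $PARTITION.Sales_Quarterly_PF(TransactionDate)
ORDER  BY PartitionNo;
```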
Some best practices to consider for using a slide window partition include the following:
• Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
data sets, drop the partition with the oldest data.
• Keep an empty partition at both the leftmost and rightmost ends of the partition range, so that the split performed when
loading new data and the merge performed after unloading old data never cause data movement.
• Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
• Create the load staging table in the same filegroup as the partition you are loading.
• Create the unload staging table in the same filegroup as the partition you are deleting.
• Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is
one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
• Unload one partition at a time.
• The ALTER TABLE...SWITCH statement
issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
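Tying the staging-table practices together: before a staging table can be switched in, SQL Server requires a trusted CHECK constraint proving that every row falls inside the destination partition’s boundary range, and the table must live on that partition’s filegroup. A minimal sketch, where the table name, filegroup, and boundary values are assumptions:

```sql
-- Hypothetical load staging table: same columns as the target, created on
-- the same filegroup as the destination partition, with a CHECK constraint
-- matching that partition's boundary range so SWITCH IN can succeed.
CREATE TABLE TransactionHistory_Load
    (Xn_Hst_ID int NOT NULL,
     Xn_Type   char(10),
     CONSTRAINT ck_Load_Range
         CHECK (Xn_Hst_ID >= 400000 AND Xn_Hst_ID < 500000))
ON main_filegroup;
```

Loading the heap first and adding indexes afterward, as recommended above, keeps the bulk load fast while still satisfying the matching-index requirement at switch time.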
Thanks Shiven:) If Answer is Helpful, Please Vote