Dimension table in database [H1]

Hi Expert,
Can you share some info with me about dimension tables in the database?
For example, for my Entity dimension I have 3 parents in this dimension,
so in the database I will have these tables:
mbrEntity
dim_Entity
dim_EntityH1
dim_EntityH2
dim_EntityH3
The more parents I create in my dimension, the more tables are generated.
I just want to know what the difference is between these 5 tables.
Because sometimes when I query with MDX I will need something like [Entity].[H1],
and I have no idea why I need to put this [H1],
while some dimensions just don't need [H1].
Thanks.

Actually, I want to know why the Entity dimension has so many tables, like:
mbrEntity
dim_Entity
dim_EntityH1
dim_EntityH2
dim_EntityH3
while my Category dimension has only 2 tables:
mbrCategory
dim_Category
Thanks.

Similar Messages

  • A dimension table outer join across two databases

    I have two databases with the same schema but possibly different data that I would like to compare. For this discussion, each has two tables, Dimension and Fact. I created a common dimension which shows dimension data that exists in both databases. However, I want a common dimension which is a full outer join of the two Dimension tables and can be used to view data in the Fact tables of the two databases; this seems to be a little different from a fact which may have missing values, isn't it? Should this outer join occur at the physical or the logical layer? Can I even do it in the physical layer if the data comes from two different sources? Any recommendation on what is the best way to do so? Thanks

    It depends on how you are defining your BM. Basically, the BM is for creating joins on folders within the BM, in the sense that each folder can contain data from multiple databases. So, create the join in the physical layer and also create new joins for your defined fact and dimension tables in the BM. But remember, cross-database joins are not recommended. If both your schemas reside on 2 different Oracle databases, you would be better off creating a single view (using a dblink between these 2 databases). So, create a single view and include that in the physical layer, as sketched below.
    Thanks,
    Venkat
    http://oraclebizint.wordpress.com
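    For illustration, a minimal sketch of that single-view approach, assuming two Oracle databases and hypothetical names (db2_link, a DIMENSION table with natural key DIM_KEY in both schemas):
    -- Database link to the second Oracle database (credentials/TNS alias are placeholders).
    CREATE DATABASE LINK db2_link
      CONNECT TO app_user IDENTIFIED BY app_password
      USING 'DB2_TNS_ALIAS';
    -- One view exposing the full outer join of the two dimension tables,
    -- to be imported into the physical layer as a single source.
    CREATE OR REPLACE VIEW common_dimension AS
    SELECT COALESCE(a.dim_key, b.dim_key) AS dim_key,
           a.dim_name AS dim_name_db1,
           b.dim_name AS dim_name_db2
    FROM   dimension a
    FULL OUTER JOIN dimension@db2_link b
           ON a.dim_key = b.dim_key;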

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different units) and it works fine. But I can see that there will be trouble when I have more fact tables and e.g. a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution to this is to have an alias of the fact/transaction table, so there are 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups in the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions?

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit. You just need to make sure that the MVs are updated when the source is updated.
    -Domnic
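    A minimal sketch of that materialized view suggestion, assuming Oracle and a hypothetical TRANSACTIONS source table:
    -- Dimension-style materialized view over the transaction table.
    CREATE MATERIALIZED VIEW dim_transactions
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT txn_id, txn_type, txn_date
    FROM   transactions;
    -- Refresh after each source load (Domnic's caveat about keeping MVs current):
    EXEC DBMS_MVIEW.REFRESH('DIM_TRANSACTIONS');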

  • Dimension on a database view

    Hi,
    Can we create a dimension on a database view instead of a table?
    Thanks.

    For the purposes of ETL within OWB you can use a view as the basis of a dimension, but you cannot generate dimension DDL for a dimension based on a view; you will get the following error when trying to deploy:
    ORA-01702: a view is not appropriate here
    CWM2 does support such a dimension.
    Cheers
    David

  • How to Maintain Surrogate Key Mapping (cross-reference) for Dimension Tables

    Hi,
    What would be the best approach on ODI to implement the Surrogate Key Mapping Table on the STG layer according to Kimball's technique:
    "Surrogate key mapping tables are designed to map natural keys from the disparate source systems to their master data warehouse surrogate key. Mapping tables are an efficient way to maintain surrogate keys in your data warehouse. These compact tables are designed for high-speed processing. Mapping tables contain only the most current value of a surrogate key— used to populate a dimension—and the natural key from the source system. Since the same dimension can have many sources, a mapping table contains a natural key column for each of its sources.
    Mapping tables can be equally effective if they are stored in a database or on the file system. The advantage of using a database for mapping tables is that you can utilize the database sequence generator to create new surrogate keys. And also, when indexed properly, mapping tables in a database are very efficient during key value lookups."
    We have a requirement to implement cross-reference mapping tables with natural and surrogate keys for each dimension table. These mapping tables will be populated automatically (inserts only) during the E-LT execution, right after inserting into the dimension table.
    Someone have any idea on how to implement this on ODI?
    Thanks,
    Danilo

    Hi,
    First of all, please avoid bolding everything. That said, according to Kimball (if I remember well) this is a 1:1 mapping, so no surrogate key.
    Apart from that, personally, you could use a lookup table:
    http://www.odigurus.com/2012/02/lookup-transformation-using-odi.html
    or make a simple outer join filtering by your "Active_Flag" column (remember that this filter needs to be inside your outer join).
    Let us know
    Francesco
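    As a concrete illustration of the mapping table Kimball describes, a minimal Oracle sketch with hypothetical names (MAP_CUSTOMER, one natural-key column per source system, STG_CUSTOMER as the staging source):
    CREATE SEQUENCE seq_customer_sk;
    -- One row per surrogate key; one natural-key column per source system.
    CREATE TABLE map_customer (
      customer_sk   NUMBER       PRIMARY KEY,
      nk_source_erp VARCHAR2(30),
      nk_source_crm VARCHAR2(30)
    );
    -- Index the natural keys so key-value lookups during loads stay fast.
    CREATE INDEX ix_map_customer_erp ON map_customer (nk_source_erp);
    CREATE INDEX ix_map_customer_crm ON map_customer (nk_source_crm);
    -- Insert-only population right after the dimension load, as in the E-LT flow above.
    INSERT INTO map_customer (customer_sk, nk_source_erp)
    SELECT seq_customer_sk.NEXTVAL, s.customer_code
    FROM   stg_customer s
    WHERE  NOT EXISTS (SELECT 1 FROM map_customer m
                       WHERE  m.nk_source_erp = s.customer_code);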

  • Consistency Warning: [39008] Logical dimension table ... has a source ...

    Hi, and many thanks for reading.
    I have a problem with the consistency check (which is quite common, as I saw, though I couldn't find a solution that resolves my issue yet).
    My physical layer looks like this:
    Table_A
    Table_B
    Table_C
    Table_D
    Table_E
    In the Business Model I have just the same tables and the following joins in the Logical Table Diagram:
    Table_A -> Table_B
    Table_C -> Table_A
    Table_D -> Table_A
    Table_E -> Table_A
    The consistency check is delivering the following two warnings:
    Warning
    Logical Table Source
    Table_A
    [39008] Logical dimension table Table_A has a source Table_A that does not join to any fact source.
    Warning
    Logical Table Source
    Table_B
    [39008] Logical dimension table Table_B has a source Table_B that does not join to any fact source.
    While working in BI Answers I managed to retrieve all data from tables C, D, E. But for tables A and B I can retrieve just the columns which are defined in the join Table_A -> Table_B. All other columns deliver the following error:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: Table_A.Column_X. Please fix the metadata consistency warnings. (HY000)
    If I retrieve one column from Table_A and one from Table_E, I get the following error:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 15018] Incorrectly defined logical table source (for fact table Table_E) does not contain mapping for [Table_B.Column_X]. (HY000)
    (yes, it's Table_B, not A)
    I would like to fix these errors but am not sure where the problem is hidden.
    Any help is kindly appreciated.
    Evgeny

    Hi Goran,
    Thanks for your help.
    I tried to create the BMM in the two ways, but it doesn't work out either way. The Administration tool recognizes the repository as consistent, though it has some warnings, but in BI Answers I can't retrieve even a single column from one table.
    I am also not sure how it works that you just add TableB to the logical TableA: how would the program know which rows belong together? I assume one must establish a logical connection between TableA and TableB to make it compatible?
    About my tables:
    Actually TableA, TableB and TableC are recognized as dimensions (white) and TableD and TableE as facts (yellow) in the Administration tool, though I don't agree with this assignment: TableA is definitely a fact table (it is also the center of the star schema) while the other tables are the dimension tables. I'm not sure how the Administration tool distinguishes between facts and dimensions...
    Here are my warnings:
    BUSINESS MODEL TAB: (with two logical tables)
    [39008] Logical dimension table TableA has a source TableA that does not join to any fact source.
    [39008] Logical dimension table TableA has a source TableB that does not join to any fact source.
    BUSINESS MODEL TACDE: (with three logical table sources)
    [39008] Logical dimension table LT2 has a source TableE that does not join to any fact source.
    [39008] Logical dimension table LT2 has a source TableD that does not join to any fact source.
    [39008] Logical dimension table LT2 has a source TableC that does not join to any fact source.
    And the errors in Answers:
    Retrieving ColumnX from TableA in the first model
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: TableA.ColumnX. Please fix the metadata consistency warnings. (HY000)
    Retrieving ColumnX from TableA in the second model
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 17001] Oracle Error code: 12170, message: ORA-12170: TNS:Connect timeout occurred at OCI call OCIServerAttach. [nQSError: 17014] Could not connect to Oracle database. (HY000)
    Thanks for further help!
    Evgeny

  • Dimension key 16 missing in dimension table /BIC/DZPP_CP1P

    Hi all,
    I have a problem with an InfoCube, ZPP_CP1. I am not able to delete or load any data. It was working fine till some time back.
    Below is the outcome of running the RSRV check on this cube. I tried to run the error correction in RSRV, but to no avail.
    Please help.
    Dimension key 16 missing in dimension table /BIC/DZPP_CP1P
    Message no. RSRV018
    Diagnosis
    The dimension key 16 that appears as field KEY_ZPP_CP1P in the fact table does not appear as a value of the DIMID field in the dimension table /BIC/DZPP_CP1P.
    There are 17580 fact records that use the dimension key 16.
    The facts belonging to dimension key 16 are therefore no longer connected to the master data of the characteristic in dimension.
    Note that errors that are reported for the package dimension are not serious (they are thus shown as warnings (yellow) and not errors (red)). When deleting transaction data requests, it can arise that the associated entries in the package dimension have already been deleted. As a result, the system terminates when deleting what can be a very large number of fact records. At the moment, we are working on a correction which will delete such data which remains after deletion of the request. Under no circumstances must you do this manually. Also note that data for request 0 cannot generally be deleted.
    The test investigates whether all the facts are zero. If this is the case, the system is able to remove the inconsistency by deleting these fact records. If the error cannot be removed, the only way to re-establish a consistent status is to reconstruct the InfoCube. It may be possible for SAP to correct the inconsistency, for which you should create an error message.
    Procedure
    This inconsistency can occur if you use methods other than those found in BW to delete data from the SAP BW tables (for example, maintaining tables manually, using your own coding or database tools).

    Hi Ansel,
    There have been no changes in the cube. I am getting this problem on my QA server, so I retransported the cube again from Dev to QA, but it did not help.
    Any other ideas??
    Regards,
    Adarsh

  • Fact Tables without Dimension Tables.

    Hi,
    I am currently working on a project where a data warehouse is being developed using an application database as the source. Since the application database is already normalised, the tables are being loaded in as they are, and the consultant involved is creating data marts from them. I have asked why he is not creating separate dimension and fact tables as data marts, and he has told me that he will create a fact table with the dimensions as column names, without any dimension tables created as you might expect per a star/snowflake schema.
    Is this right, as I always thought you had to have dimension and fact tables, especially if you want to start using SSAS?
    Regards,
    W.

    create a fact table with the dimensions as column names without any dimension tables created as you might expect as per star/snowflake schema.
    This is the trouble with denormalization: where is the stopping point?
    If the data warehouse is the source for SSAS OLAP cubes, it is better to go with the usual star schema.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014
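    For reference, a minimal sketch of the usual star schema recommended above, with hypothetical sales tables:
    CREATE TABLE dim_product (
      product_key  INT PRIMARY KEY,   -- surrogate key
      product_code VARCHAR(30),       -- natural key from the source system
      product_name VARCHAR(100)
    );
    CREATE TABLE dim_date (
      date_key    INT PRIMARY KEY,    -- e.g. 20140131
      calendar_dt DATE
    );
    -- The fact table holds only dimension keys and measures,
    -- not the descriptive columns themselves.
    CREATE TABLE fact_sales (
      product_key  INT NOT NULL REFERENCES dim_product (product_key),
      date_key     INT NOT NULL REFERENCES dim_date (date_key),
      sales_amount DECIMAL(18,2),
      quantity     INT
    );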

  • How to identify fact and dimension tables

    Hi ,
    We have the list of parent-child relation info for each database table. Based upon the parent-child tables, we need to identify which table is a fact and which is a dimension. Could you please help me with how to identify the fact tables and dimension tables?
    Thanks in advance

    Hi,
    Refer to this link:
    http://www.oraclebidwh.com/2007/12/fact-dimension-tables-in-obiee/
    Please mark if it helps you.
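    One common heuristic (an assumption on my part, not taken from the link): fact tables are usually the child tables whose foreign keys point out at many other tables, while dimensions have few or none. On Oracle, a sketch against the data dictionary:
    -- Tables ranked by the number of outgoing foreign keys;
    -- high counts suggest fact tables, zero suggests dimensions.
    SELECT table_name,
           COUNT(*) AS fk_count
    FROM   user_constraints
    WHERE  constraint_type = 'R'
    GROUP  BY table_name
    ORDER  BY fk_count DESC;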

  • Reg: Fact table and Dimension table in Data Warehousing -

    Hi Experts,
    I'm not exactly getting the criteria which decide how to create a Fact table and a Dimension table.
    This link http://stackoverflow.com/questions/9362854/database-fact-table-and-dimension-table states :
    A fact table contains data that can be aggregated.
    Measures are aggregated data expressions (e.g. sum of costs, count of calls, ...).
    A dimension contains data that is used to generate groups and filters.
    That's fine, but how does one decide which columns to consider for the fact table and which columns for the dimension table?
    Any help is much appreciated.
    Pardon me if this isn't the correct place for this question; it's my first question in the new forum.
    Thanks and Regards,
    Ranit Biswas

    ranitB wrote:
    But my main doubt was - what is the criteria to differentiate between columns for Fact tables and Dimension tables? How can one decide upon the design?
    Columns of a fact table will often be 'scalar' attributes of the 'fact' data item. A dimension table will often hold 'compound' attributes of a 'fact'.
    Consider employee information. The EMPLOYEE table can be a fact table. It might have scalar attribute columns such as: DATE_HIRED, STATUS, EMPLOYEE_ID, and so on.
    Other related information that can't be specified as a single attribute value would often be stored in a 'dimension' table: ADDRESS, PHONE_NUMBER.
    Each address requires several columns to define it: ADDRESS1, ADDRESS2, CITY, STATE, ZIP, COUNTRY. And an employee might have several addresses: WORK_ADDRESS, HOME_ADDRESS. That address info would be stored in a 'dimension' table and only the primary key value of the address record would be stored in the EMPLOYEE 'fact' table.
    Same with PHONE_NUMBER. Several columns are required to define a phone number and each employee might have several of them. The dimension tables are used to help 'normalize' the data in the employee 'fact' table.
    And that EMPLOYEE table might also be a DIMENSION table for other FACT tables. A DEVELOPER table might have an EMPLOYEE_ID column with a value that points to a 'dimension' row in the EMPLOYEE dimension table.
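    A minimal DDL sketch of that EMPLOYEE example (column sizes are hypothetical):
    CREATE TABLE address_dim (
      address_id NUMBER PRIMARY KEY,
      address1   VARCHAR2(60),
      address2   VARCHAR2(60),
      city       VARCHAR2(40),
      state      VARCHAR2(2),
      zip        VARCHAR2(10),
      country    VARCHAR2(40)
    );
    -- Scalar attributes stay on the 'fact'; compound attributes become dimension keys.
    CREATE TABLE employee (
      employee_id     NUMBER PRIMARY KEY,
      date_hired      DATE,
      status          VARCHAR2(10),
      work_address_id NUMBER REFERENCES address_dim (address_id),
      home_address_id NUMBER REFERENCES address_dim (address_id)
    );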

  • Best Practice loading Dimension Table with Surrogate Keys for Levels

    Hi Experts,
    How would you load an Oracle dimension table with a hierarchy of at least 5 levels, with surrogate keys in each level and a unique dimension key for the dimension table?
    With OWB it is an integrated feature to use surrogate keys in every level of a hierarchy. You don't have to care about the parent-child relation: the load process of the mapping generates the right keys and takes care of the relation between parent and child inside the dimension key.
    I tried to use one interface per Level and created a surrogate key with a native Oracle sequence.
    After that I put all the interfaces into one big interface with a union data set per level and added lookups for the right parent-child relation.
    I think it is a bit too complicated to build the interface like that.
    I would be more than happy for any suggestions. Thank you in advance!
    negib
    Edited by: nmarhoul on Jun 14, 2012 2:26 AM

    Hi,
    I do like the level keys feature of OWB - it makes aggregate tables very easy to implement if you're sticking with a star schema.
    Sadly there is nothing off the shelf in the built-in knowledge modules with ODI: it doesn't support creating dimension objects in the database by default, but there is nothing stopping you coding up your own knowledge module (use flex fields, maybe, on the datastore to tag column attributes as needed).
    Your approach is what I would have done; possibly use a view (if you don't mind having it external to ODI) to make the interface simpler, as in the sketch below.
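    For illustration, the per-level pattern described above, reduced to two levels with a native Oracle sequence and a parent lookup; all names (dim_geo, stg_region, stg_country) are hypothetical:
    CREATE SEQUENCE seq_dim_geo;
    CREATE TABLE dim_geo (
      geo_sk      NUMBER PRIMARY KEY,  -- unique dimension key
      level_name  VARCHAR2(10),
      natural_key VARCHAR2(30),
      parent_sk   NUMBER REFERENCES dim_geo (geo_sk)
    );
    -- Load the parent level first, each row getting its surrogate key from the sequence.
    INSERT INTO dim_geo (geo_sk, level_name, natural_key, parent_sk)
    SELECT seq_dim_geo.NEXTVAL, 'REGION', s.region_code, NULL
    FROM   stg_region s;
    -- Load the child level afterwards, looking up the parent's surrogate key.
    INSERT INTO dim_geo (geo_sk, level_name, natural_key, parent_sk)
    SELECT seq_dim_geo.NEXTVAL, 'COUNTRY', s.country_code, p.geo_sk
    FROM   stg_country s
    JOIN   dim_geo p
           ON  p.natural_key = s.region_code
           AND p.level_name  = 'REGION';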

  • Discoverer Condition on Time Dimension table

    Hi,
    We have a time dimension table TIME_DIM like this:
    DATE_ID, YEAR, QUARTER, MONTH, WEEK, DAY
    1, 2005, 1, 1, 1, '01-JAN-2005'
    2, 2005, 1, 1, 2, '02-JAN-2005'
    3, 2005, 1, 1, 2, '03-JAN-2005'
    4, 2005, 1, 1, 2, '04-JAN-2005'
    5, 2005, 1, 1, 2, '05-JAN-2005'
    6, 2005, 1, 1, 2, '06-JAN-2005'
    7, 2005, 1, 1, 2, '07-JAN-2005'
    8, 2005, 1, 1, 2, '08-JAN-2005'
    9, 2005, 1, 1, 3, '09-JAN-2005'
    10, 2005, 1, 1, 3, '10-JAN-2005'
    The week starts on a Sunday and there are 52 weeks in a year. So, in 2005, week 52 will have all the days starting Dec-18 till Dec-31:
    DATE_ID, YEAR, QUARTER, MONTH, WEEK, DAY
    352, 2005, 4, 12, 52, '18-DEC-2005'
    353, 2005, 4, 12, 52, '19-DEC-2005'
    354, 2005, 4, 12, 52, '20-DEC-2005'
    355, 2005, 4, 12, 52, '21-DEC-2005'
    356, 2005, 4, 12, 52, '22-DEC-2005'
    357, 2005, 4, 12, 52, '23-DEC-2005'
    358, 2005, 4, 12, 52, '24-DEC-2005'
    359, 2005, 4, 12, 52, '25-DEC-2005'
    360, 2005, 4, 12, 52, '26-DEC-2005'
    361, 2005, 4, 12, 52, '27-DEC-2005'
    362, 2005, 4, 12, 52, '28-DEC-2005'
    363, 2005, 4, 12, 52, '29-DEC-2005'
    364, 2005, 4, 12, 52, '30-DEC-2005'
    365, 2005, 4, 12, 52, '31-DEC-2005'
    The condition we would like to have defined is as follows:
    Last week(Or Previous Week)
    If today is Jan-10-2005, the current week number is 3 and hence the previous week is 2. So, I should get a condition as follows:
    DATE_ID in (Select Date_ID from TIME_DIM where week=(select week - 1 from TIME_DIM where day = sysdate))
    Discoverer does not allow subqueries in conditions. How to implement this?
    Any help will be appreciated.
    The problem is similar to accounting periods that do not correspond to the Gregorian calendar. For the Gregorian calendar, we can use the date functions to achieve the functionality.
    Regards,
    Ramesh

    The best solution would probably be to create a database function and register it through the Discoverer Administration edition. You can then call the function in your condition, replacing the need for a subquery.
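    A sketch of such a registered function, assuming the TIME_DIM layout above (note it would need extra handling at year boundaries, where week 1 - 1 = 0):
    CREATE OR REPLACE FUNCTION previous_week RETURN NUMBER IS
      v_week NUMBER;
    BEGIN
      -- Week number of today's row, minus one.
      SELECT week - 1
      INTO   v_week
      FROM   time_dim
      WHERE  day = TRUNC(SYSDATE);
      RETURN v_week;
    END previous_week;
    /
    After registering it through Discoverer Administration, the condition becomes simply WEEK = PREVIOUS_WEEK(), with no subquery.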

  • Dimension table and fact table exists data physically

    Hi experts,
    Can anyone please tell me whether dimension tables and fact tables physically contain data or not?

    Hi..Sudheer
    SAP's BW is based on an "Enhanced Star Schema" or "InfoCube" database design. This design has a central database table, known as the 'Fact Table', which is surrounded by associated dimension tables.
    The fact table is usually very large, meaning it contains millions to billions of records.
    The dimension tables don't contain data themselves; they contain references to the pointer tables that point to the master data tables, which in turn contain master data objects such as customer, material and destination country, stored in BW as InfoObjects. An InfoObject can contain single field definitions such as transaction data, or complex customer master data that holds attributes, hierarchies and customer texts stored in their own tables.
    A SID is a surrogate ID generated by the system. The SID tables are created when we create a master data InfoObject. In the SAP BW star schema, the distinction is made between two self-contained areas: InfoCubes, and master data tables/SID tables.
    The master data doesn't reside in the star schema but in separate tables which are shared across all the star schemas in SAP BW. A numeric ID is generated which connects the dimension tables of the InfoCube to those of the master data tables.
    The dimension tables contain the DIM ID and the SID of a particular InfoObject. Using this SID, the attributes and texts of a master data InfoObject are accessed.
    The SID table is connected to the associated master data tables via the characteristic key.
    Fact table (transaction data, DIM ID) <-> Dimension table (DIM ID and SID) <-> Master data table (SID, InfoObject)
    Thanks,
    Abha

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables: before the data comes in, or after?

    Hi,
    It is recommended to partition the fact table (where we will have huge data). Automate the partitioning so that each day it creates a new partition to hold the latest data (splitting the previous partition into 2). Best practice is to create partitions on transaction timestamps, so load the incremental data into an empty table (Table_IN) and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
    Refer to the content below for detailed info.
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker because only subsets of the data are used, while ensuring that the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of the Database Options page in SSMS or by using the LOCK_ESCALATION option of the ALTER TABLE statement.
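    For example, on a hypothetical fact table:
    ALTER TABLE dbo.FactSales SET (LOCK_ESCALATION = AUTO);  -- allow partition-level escalation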
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance, and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
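    As a sketch of steps 1 through 3 in Transact-SQL, with hypothetical names and boundary dates (step 4 appears as a full example later in this section):
    -- Step 1: filegroup and file to hold a partition.
    ALTER DATABASE AdventureWorks2012 ADD FILEGROUP fg_2012_q1;
    ALTER DATABASE AdventureWorks2012
      ADD FILE (NAME = f_2012_q1, FILENAME = 'C:\Data\f_2012_q1.ndf')
      TO FILEGROUP fg_2012_q1;
    -- Step 2: partition function mapping rows by creation date.
    CREATE PARTITION FUNCTION pf_by_date (datetime)
      AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-04-01');
    -- Step 3: partition scheme mapping every partition to one filegroup
    -- (kept on PRIMARY here for brevity; per-quarter filegroups work the same way).
    CREATE PARTITION SCHEME ps_by_date
      AS PARTITION pf_by_date ALL TO ([PRIMARY]);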
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column dialog box, as displayed in Figure 3.13 (Selecting a partitioning column). This screen also includes additional options.
    The next screen is called Select a Partition Function. This page is used for specifying the partition function where the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA will map the rows of the tables being partitioned to a desired filegroup. Either an existing partition scheme should be used or a new one needs to be created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range of values for each partition on the grid.
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing on a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available. Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered. You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH Transact-SQL statement. Both options enable you to ensure partitions are well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes, index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for the ANSI_NULLS and QUOTED_IDENTIFIER properties. Clustered and nonclustered indexes must be identical. ROWGUID properties and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT NULL are supported, NOT NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats. Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning and extra work will be required to even make partition switching possible, let alone efficient.
    Here's an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1. We then create a new, nonpartitioned table identical to the partitioned table, residing on the same filegroup. We finish up by switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
      ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
      ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we'd like to automatically group transactions into four partitions per year, basically containing one quarter of the year's data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition, because we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter's data into the spot of partition1.
    Tip
    Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
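    A condensed sketch of steps 2 through 5 for a RIGHT window, reusing the hypothetical pf_by_date/ps_by_date names from earlier; the staging tables are assumed to already exist with matching schemas, and the partition numbers are illustrative:
    -- Step 2: SPLIT the empty trailing partition to make room for the new quarter.
    ALTER PARTITION SCHEME ps_by_date NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pf_by_date() SPLIT RANGE ('2013-01-01');
    -- Step 3: SWITCH the oldest populated partition out to a staging table.
    ALTER TABLE TransactionHistory SWITCH PARTITION 2 TO TransactionHistory_Unload;
    -- Step 4: MERGE the two now-empty partitions back together.
    ALTER PARTITION FUNCTION pf_by_date() MERGE RANGE ('2012-01-01');
    -- Step 5: SWITCH the staged new quarter's data into its empty partition.
    ALTER TABLE TransactionHistory_Load SWITCH TO TransactionHistory PARTITION 5;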
    Some best practices to consider for using a sliding window partition include the following:
    • Load the newest data into a heap, and then add indexes after the load is finished. Delete the oldest data or, when working with very large data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that partition splits when loading in new data, and merges after unloading old data, do not cause data movement.
    • Do not split or merge a partition already populated with data, because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don't load a partition until its range boundary is met. For example, don't create and load a partition meant to hold data that is one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • Oracle Data Compression on SID tables and Dimension Tables

    Hello Community,
    We have had great success with Oracle compression on ODS tables that are no longer loaded.
    We'd now like to move on to other types of BW tables that are very large.
    OSS Note 701235 provides answers to questions concerning the possible use of Oracle compression together with SAP BW.
    But the Note does not give suggestions for (or against) Oracle compression on SID tables or Dimension tables.
    I believe both table types would exhibit the same behaviour: mostly inserts of new SID IDs and new DIM IDs, but few updates to existing SID or dimension records. If this is true, then both are good candidates for Oracle compression.
    Do you also agree that this is the typical behaviour for SID tables and dimension tables? And that these types of tables are good candidates for Oracle compression in a large BW system?
    Thanks kindly!
    Keith Helfrich

    Hi all,
    Although this is an old thread I stumbled on during my own investigations, I can provide some answers to your questions.
    Table candidates for compression are found by these criteria:
    - Is the table size big enough?
    - Is a long lifetime of the object planned?
    - No or only rare structural changes for the table?
    - Is the "update" rate low, i.e. is your data mostly kind of "read only"? For the widely used rolling window partition techniques of tables in BW this is not a problem: mostly INSERTs in the current partition, not affecting other partitions.
    BW tables that can benefit from compression (see SAP Notes 105047, 701235):
    - PSA tables with data that must be saved for a longer time
    - ODS change log (no updates of old data, only inserts of new data)
    - "historical" cubes which get no changes in table structure anymore
    Limitations:
    - Normal INSERT or UPDATE statements are ALWAYS stored in uncompressed format and must be compressed separately (<= Oracle 10g).
    - Slight CPU overhead of compression, but CPU consumption is more than compensated by doing less I/O, as for bulk loads or parallel processing. Also, SAP BW transformations take a significant amount of CPU of the overall load time into cubes, caused by the application server, not the database.
    - The table must not have more than 255 fields.
    - Adding columns with an initial value or dropping columns requires uncompression of the complete table (the strongest limitation).
    Considering all of the above, you can decide that tables that go through UPDATEs are not good candidates for compression, nor are tables that can change their structure (like fact or DIM tables).
    Now, my questions to you:
    Which Oracle version do you use?
    Which tool do you use for Oracle compression?
    BRSPACE (can you give an example?) or ALTER ... MOVE COMPRESS?
    bye
    yk
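    For reference, the plain SQL form of the second option (the table and index names here are placeholders; on BW systems the primary index usually follows the <table>~0 naming convention):
    -- One-time compression of an existing table; on Oracle <= 10g, later
    -- conventional INSERTs and UPDATEs land uncompressed again.
    ALTER TABLE "/BIC/B0000123000" MOVE COMPRESS;
    -- MOVE marks the table's indexes UNUSABLE, so rebuild them afterwards:
    ALTER INDEX "/BIC/B0000123000~0" REBUILD;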
