Setting up hierarchies and dimensions

Do you have any idea what I need to do if the commissions depend on discounts, item numbers, and customer numbers?
We are on 3i 3.1.2, pay commission after invoicing, and have used indirect mapping from AR. How do we bring in discount-related info?
How many hierarchies will I need to set up? Will I need one hierarchy each for items, customers, and rules, apart from the sales and revenue class hierarchies? Also, what should the revenue classes be, and what should the revenue class hierarchy look like if we have several thousand items and customers?
Will I need to create each combination of item and customer as a revenue class and assign all of them to each compensation plan?

You will need a hierarchy of values for each dimension of classification that you actually need in order to determine the basis of calculation. Your objective is to create the FEWEST rules that accomplish this. For example:
A company sells 5000 products through two distribution channels. On the face of it, this would imply 10,000 rules and revenue classes. However, when we looked closer, it was the method of distribution that really determined how much they paid. The product ID was important to capture for auditing and reporting but was not actually needed for classification. Therefore, the only classification categories required were the two that distinguished the different distribution channels (which presumably paid different rates).
Be sure to distinguish between classification to determine the basis of calculation and the linkages between sales reps and customer accounts. For example:
Does every customer have separately negotiated rates for every product?
The revenue class hierarchy will ultimately reflect the groupings of rules that you need for classification. The revenue classes are simply bundles of rule combinations, but you should only create the combinations that you definitely need. Think twice before blindly creating revenue classes for every combination.
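As a sanity check before creating anything, it can help to profile the source transactions and see which candidate classification values actually occur. A minimal SQL sketch (the staging table ar_invoice_lines and its columns are hypothetical, standing in for whatever your indirect AR mapping stages):

-- How many transactions fall under each candidate classification value?
-- If this collapses to a handful of rows (e.g. two channels), that dimension
-- is a far better basis for revenue classes than item/customer pairs.
SELECT dist_channel, COUNT(*) AS txn_count
FROM ar_invoice_lines
GROUP BY dist_channel;

-- Compare with the number of combinations you were about to create:
SELECT COUNT(*) AS item_customer_pairs
FROM (SELECT DISTINCT item_no, customer_no FROM ar_invoice_lines) t;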
The sales hierarchy is completely different, since it simply indicates the chain of people who can potentially receive credit. The presence of a specific revenue class in their plan is the second determinant.
I believe 3i has a plan element function that covers discounts. It is not needed for classification.
The customer dimension is not needed for classification unless individual accounts have separately negotiated commission rates in addition to the discounts mentioned above. If that is the case, you will still want to look for ways to group similar customers together.
Thinking this through can be challenging, but the payoff is a system that is easier to implement and administer.

Similar Messages

  • How to set both dpi and dimensions?

    I haven't used Illustrator in a while, but for classes I have to set the image to 300 dpi and 800x800 px, and I can only keep one or the other, so how do I keep both?
    My art board is 800x800px, I've set my doc raster effects settings to 300dpi. I've tried searching for an answer but I can only find solutions to one or the other, so I'm suspecting it's something simple that I just can't figure out! Any help would be appreciated! Thanks!

    This question is one of the most frequent points of confusion asked about in this forum. So I will go ahead and post the answer I typed, despite your having resolved the issue. It may be of benefit to others.
      ...I have to set the image to 300dpi and 800x800 px...
    You have to set what image to those values, and where?
    This is a vector-based drawing and design assembly program. Unlike a raster imaging program (Photoshop, etc.) it isn't about just manipulating a single array of pixels. Any program like Illustrator (or like InDesign, etc.) is about assembling any combination of independent OBJECTS into a layout. Each of those OBJECTS can be a raster image, a mathematically-defined shape (vector-based path), or a text object.
    So you have to be careful about your terminology when asking for explanations. In the context of a program like Illustrator, the term "image" is assumed to refer to a raster image object. That object may be an individual raster image that exists somewhere on the page, or a rasterization of the entire page that you intend to export, or a raster image that is automatically generated on-the-fly when you apply a raster-based Effect like a Drop Shadow or a Blur. Your question makes it clear that you are confusing all three, and the answer to your question depends on which kind of object you are talking about setting to the required values.
    PPI (Pixels Per Inch) is nothing but a scaling factor. 800 x 800 pixels is a specific COUNT of rows and columns of pixels. Any COUNT of pixels can be scaled so as to achieve any PPI. "Pixels" is only half of the equation. "Inch" is the other half.
    In other words, you can't know the PPI of a raster image until you know TWO things: The COUNT of rows/columns of pixels contained in the image and the MEASURE of the overall image. Only then can you know the third thing: the COUNT per MEASURE; the Pixels Per Inch:
    COUNT / MEASURE = Count Per Measure
    So your stated requirements provide two values: 800 is a COUNT of pixels (actually rows or columns) contained in an image. 300 is a COUNT per MEASURE after the image has been scaled to some unknown MEASURE. So:
    800 / 300 = Number of Inches
    The PPI of an 800 x 800 pixel raster image is 300 PPI only when those 800 pixels (rows/columns) contained in the image have been scaled to occupy 2.66666.... inches. (Notice that this is actually quite intuitive enough to do in your head: How many "threes" does it take to equal one "eight"? Two-and-two-thirds, or 2.6666....)
    So whatever 800 x 800 pixel image you are talking about will have to be scaled to MEASURE 2.66 inches in order for its PPI to be 300. That's 2.66... inches or its equivalent in some other unit of actual measure.
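    To collect the arithmetic in one place (using 72 points per inch, the conversion that comes up further below):
    MEASURE = COUNT / PPI = 800 px / (300 px/in) = 2.666... inches
    2.666... inches x 72 pt/in = 192 pt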
    Again the question remains: What image are we talking about? And how does that relate to Illustrator's rulers and Document Raster Effects setting? Read on.
    Read the first paragraph again and be sure you understand this point: An Illustrator page can contain any number of INDEPENDENT raster images. Each is an independent OBJECT that can be independently scaled. Therefore, each raster image on an Illustrator page can have its own, individual PPI value. But Illustrator has only one horizontal page ruler. So there is no way that setting Illustrator's rulers to "Pixels" can reliably indicate the actual number of pixels contained in any and all raster image(s) that may exist on the page.
    Unlike Photoshop, Illustrator's rulers are always an indication of linear measure. A pixel has no linear measure whatsoever until you give it one; i.e., until you scale it to an actual size. Therefore, a pixel is not a unit of linear measure. So the actual PPI of any raster object that resides on the page, in truth, has absolutely NOTHING to do with the "Pixels" indicated on the page rulers when you set the ruler units to "Pixels," unless you deliberately set up that correspondence.
    (I can't decipher from Jacob's tongue-in-cheek response to whom he is referring as "some." If "some" includes me, I assure you I am not saying this to confuse anyone. I'm saying this to help clear up the confusion that chronically occurs among newcomers due to this regretful element of the interface.)
    Suppose for example, you have an image that you know contains one pixel. You import this image into an Illustrator document. The rulers are set to Inches. In the Transform palette, you set the height and width of this raster image object to 1 inch. You carefully position the upper left corner of the image to the page origin.
    Based on what you observe by looking at the rulers, the resolution of this image is one Pixel Per Inch. Makes sense, right?
    Now change the rulers' Unit of Measure to "Pixels."
    Based on what you observe by looking at the rulers, the resolution of this image is one Pixel Per 72 Pixels. Wha...? Umm... its resolution is 1/72 Pixels Per Pixel? Its resolution is .013888... Pixels Per Pixel? What kind of sense does this make?
    See? Remember: In order to know the resolution of a raster image, you need two values: A COUNT and a MEASURE. What the heck does "Pixel Per Pixel" mean?
    In Illustrator, it simply means Pixel Per Point (1/72 inch).
    ...and I can only keep one or the other, so how do I keep both?
    Sure you can have both. Again, any number of pixels can be scaled to any size, to result in any PPI. But in Illustrator, the only way an 800-pixel image is going to have a PPI of 300 is if it is scaled to measure 2.666... inches. Simply select the 800 x 800 image, set the Unit of Measure to Inches, go to the Transform palette, turn on the proportional chain link, and set the width or height to 2.66.
    But if you then change the Unit of Measure back to "Pixels", don't expect the rulers to indicate a "width" or "height" of 800 "Pixels." They will indicate 192 "Pixels", because 2.66... inches equals 192 Points.
    My art board is 800x800px,...
    No. As you should now understand, your Artboard is, in fact, 800 x 800 points, which is 11.111... inches. If you don't believe this, just change your Unit of Measure temporarily to Inches. The only way you will get an 800 x 800 pixel image corresponding to this Artboard is if you export it to a raster image format (i.e., create a raster image of it) at a resolution of 72 PPI.
    I've set my doc raster effects settings to 300dpi.
    That simply means that the raster images that are created when you apply raster-based live Effects (Drop Shadows, Glows, Blurs, etc.) will be created with pixels which measure 1/300th of an inch, according to Illustrator's rulers which, again, always refer to an actual unit of measure. So if your rulers are set to Inches, a 1 inch by 1 inch Effect will be rasterized to 300 x 300 pixels. If you change your rulers to "Pixels", the Effect will still be rasterized to 300 x 300 pixels, but your ruler will indicate that that raster image only measures 72 x 72 "Pixels." What that really means is merely that the image measures 72 points, which is equal to 1 inch.
    Don't blame me. I didn't design the program.
    JET

  • Video quality and dimensions always smaller than I make them

    Every time I set the quality and dimensions in Media Encoder it looks great. I make sure the preview is set to output, but when it finishes, no matter what, the video is always very small compared to what I put in. When I put the file in After Effects it is also lower quality than it was in Adobe Premiere. Any help would be awesome!!!

    You have Zoom on... double tap with three fingers to turn it off.   Go into Settings > General > Accessibility to disable it.
    See p. 243... http://manuals.info.apple.com/en_US/iPhone_iOS4_User_Guide.pdf

  • How to export and import templates and dimension members between 2 appsets

    hi experts:
          1. How do we export and import templates and dimension members between two application sets?
          2. Our project requirement: we must develop our planning in the development environment application set, and distribute templates and dimension members to another, QAS, application set.
          3. How can I package our templates in the dev application set and import that package into the QAS application set with the BPC system tools?
          4. This requirement comes from our security control department: we must change template versions and dimension members with system tools.
          5. Thanks for your help!

    Ian,
    So, in other words, you're saying that it's not possible to directly export objects (like smart albums) that reside outside of projects?
    I know, for example, that you can "export" anything created inside a folder by dragging and dropping the folder to another library; but you then have to rebuild the library to get the new folder and its contents recognized. For my library, this takes too much time to do.

  • Setting evaluation order and reordering dimensions

    Hi, I was going through the Hyperion Planning admin PDF and I have a few doubts.
    What does setting the evaluation order in Planning mean, and why is it recommended to select only one dimension when setting the evaluation order?
    In the "About Reordering Dimensions" topic, ordering aggregating sparse dimensions before non-aggregating ones is mentioned. What are these two types? It was also said to arrange the sparse dimensions in order from more sparse members to less sparse members, but isn't it the opposite way around? I mean, as per the hourglass model, the order should be from least sparse to most sparse dimension, with attribute dimensions at the end.
    What exactly is the difference between setting the evaluation order and reordering dimensions?

    Reordering the dimensions sets the order of the dimensions in Essbase; reordering the dimensions can be part of optimizing the database.
    Setting the evaluation order is more to do with how the dimensions are evaluated in forms. So if Account is set to be first, then the data type properties of the account members will be used first; for instance, if an account member is set to Percentage, the member will be displayed as a percentage.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How to resolve many fact tables and dimension tables

    Hi,
    The scenario is that we have many fact and dimension tables. Based on some conditions, one measure from one fact will be divided by another fact's measure. I have encountered many errors like "Unable to navigate....". How do I resolve these errors and reduce many tables to few? (I assume by creating logical tables, but are there any other alternatives?)
    thanks
    Suresh

    Suresh,
    I assume that you know how to create a single logical fact from n physical facts, i.e., only if the fact tables are related. Then join all the conformed dimensions to this single logical table using a join in the Business Model layer. Remember to set the mappings in the LTS. Also, if you have any hierarchies, please set the aggregation level for those.
    - Red

  • Load hierarchical attribute dimensions with Integration Services?

    Hi everybody,
    I need to load product dimension which is organized in a relational table like this:
    Product (parent_code, member_code, member_alias, brand, consolidation, formula)
    Every product has a brand, and I need to load brand as an attribute dimension. The thing is, Brand is not flat; it has its own hierarchy. For Brand I have another relational table where data is organized parent-child like this:
    Brand (brand_parent_code, brand_child_code, brand_child_alias).
    I have used Integration Services in the past, but with flat attribute dimensions.
    Can I load hierarchical attribute dimensions with Integration Services? If yes, how do I do it, how do I specify the hierarchy?
    Thank you,
    Daniela

    Graham,
    This is definitely a supported feature in EIS/9.3.1/ASO. I have many models with this type of structure. How you set it up can vary. Usually my Attribute Hierarchies are not that deep, only two to three levels, maybe four in a rare case, so I don't usually use a parent child table to set up the hierarchy (I'm not saying that it won't work, it might, I haven't tried, but same steps should apply). In a typical model I will have my stock table which has a buyer field. Then in another table I will have my attribute structure which will have columns for buyer, teams, and categories.
    In the EIS OLAP model, you add your attribute hierarchy table and use a join to link it to the main stock table, joining on the buyer field (you are now going from a "star" schema to a "snowflake"). Go into the properties and make sure you define all the columns as "Attributes".
    Then in the Metadata model, drag your categories attribute onto the outline, then drag teams and set it as a child of categories, and finally drag buyer and set it as a child of teams. You only set the attribute association for the buyer back to the base dimension.
    When you run your dim build, it will set up your attribute dimension correctly.
    Some things to keep in mind: make sure you have a process that ensures that for every stock code you have in the main table, you have a matching one in your attribute dimension table.
    Sometimes, depending on how much manipulation I need to do, instead of joining the tables in EIS I will go back to the relational source and create a view that joins the two tables together. Then in my OLAP model I have one table that has three attribute columns: one for the buyer and the other two for team and category. From that point, setting up the metadata model is the same.
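    For the Product and Brand tables described above, a minimal sketch of that view (assuming, for illustration, that the brand hierarchy is only two levels deep; each additional level would need another self-join on Brand):
    CREATE VIEW product_brand_attrs AS
    SELECT p.parent_code,
           p.member_code,
           p.member_alias,
           b.brand_parent_code AS brand_group,  -- upper attribute level
           p.brand             AS brand         -- leaf attribute level, associated back to the member
    FROM Product p
    JOIN Brand b ON b.brand_child_code = p.brand;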
    Good luck, let me know if you run into trouble.

  • Join multiple fact tables and dimensions and use all tables in report issue

    Hi,
    I have a report requirement and need to use multiple fact tables and unconformed dimensions, as described below:
    Fact table: F1,F2,F3
    Dimensions tables: D1.....D9
    F1 (joined to): D1, D2, D3, D4
    F2 (joined to): D1, D2, D5, D6
    F3 (joined to): D1, D2, D7, D8
    D7 (joined to): D9, D8 (dimension D7 is joined to two other dimensions, D9 and D8)
    I'm trying to use columns from almost all the fact and dimension tables but getting "Unable to navigate requested expression. Please fix the metadata consistency warnings."
    The repository is consistent, with no errors or warnings.
    How can I configure the repository to develop reports using all fact tables and dimensions?
    I appreciate your help.
    Thanks
    Jay.
    Edited by: Jay on Feb 9, 2012 4:14 PM

    So you want me to convert the snowflake schema to a star; does that solve my problem? Individual star queries are working fine, but when I query multiple stars together I get inconsistency errors. I removed the Content-tab dimension-level totals for unconformed dimensions in the logical fact LTS and set the level for measures at the Total level for the unconformed dimensions. It is still in progress and needs testing.
    Thanks
    Jay.

  • Images change position and dimension after client edits on admin console!

    Hi, 
    1. After my client uploaded his images onto the site using the admin console, they changed position slightly and did not fit exactly into the image frames I had set when I first made the site.
    2. When I opened the site in Muse and merged changes, the images' positions and dimensions changed even more than what was being displayed on the live site. (E.g. I originally had two columns of square thumbnails for some slideshows and now I have a single column of rectangular images.)
    3. I republished to see what would happen; it now displays the way it was displaying in Muse. The site is: www.jacobbuckland.co.uk
    My understanding was that when I create an image frame or gallery in Muse, any image uploaded via the admin console should fill that frame no matter what the size or dimensions of that image?
    Any help gratefully received. Thanks

    Our technical team managed to find the immediate cause for the problem, and a solution.
    According to them, there was corruption in a specific OLAP file. By deleting it, they allowed it to be recreated on the next application processing. The file in question was found in \Microsoft SQL Server\MSSQL.2\OLAP\Data\UGF.0.db\<APPLICATION>.38.cub.xml
    Where <APPLICATION> is the name of the problematic cube.
    After that everything is working.

  • How to design a many-to-many relationship between fact and dimension

    My problem is what the subject says. I want to know how to implement this in OWB; I use Warehouse Builder 10. Thanks.

    You may design and load whatever DB model you want to.
    But if you set a unique key, you may find some integrity issues. I wouldn't do a many-to-many relationship between facts and dimensions. This could cause you lots of headaches when users start to submit queries using these tables. You'll probably face performance issues.
    Regards,
    Marcos

  • Fact table and dimension table

    What is the difference between a fact table and a dimension table?

    A fact table contains numeric values and a composite key (i.e., a collection of foreign keys), e.g. sales and profit. It typically has two types of columns: those that contain facts and those that are foreign keys to dimension tables.
    Dimension tables, also known as lookup or reference tables, contain the relatively static data in the warehouse. They contain character values, e.g. Customer_name, Customer_city.
    Dimension tables store the information you normally use to constrain queries. Dimension tables are usually textual and descriptive, and you can use them as the row headers of the result set.
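    To make the distinction concrete, here is a minimal sketch (table and column names are hypothetical):
    CREATE TABLE Dim_Customer (
        Customer_Key  INT PRIMARY KEY,   -- surrogate key
        Customer_Name VARCHAR(100),      -- descriptive, textual attributes
        Customer_City VARCHAR(100)
    );
    CREATE TABLE Fact_Sales (
        Customer_Key  INT NOT NULL REFERENCES Dim_Customer (Customer_Key),
        Date_Key      INT NOT NULL,      -- foreign key to a date dimension
        Sales_Amount  DECIMAL(12,2),     -- numeric measures (facts)
        Profit_Amount DECIMAL(12,2),
        PRIMARY KEY (Customer_Key, Date_Key)  -- composite key of foreign keys
    );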
    Rachna

  • Fact table and Dimension support

    When a datastore is set up, its 'OLAP type' can be set to Dimension, Slowly Changing Dimension, or Fact table.
    I know that for an SCD you then define which columns are the surrogate key, natural key, start/end date, etc., and then use an appropriate 'Slowly Changing Dimension' KM. But what about the other types: does ODI provide any additional functionality when something is defined as, e.g., a fact table?
    Thanks,
    Chris

    Apologies for the delay in replying - I have been on holiday!
    I was just asking a general question. ODI has special features in its user interface to support slowly changing dimensions (where you define which columns are the surrogate key, etc., and then choose the SCD KM).
    However, you can also define a datastore as a dimension or fact table, but there do not appear to be any user interface or KM features that make use of this.
    Thanks,
    Chris

  • Table as fact and dimension

    Hi,
    Can one table act as a fact in one subject area and act as a dimension in another subject area? Thanks.

    Hi
    I confirm Stijn Gabriels' post.
    You don't have to create an alias in your physical layer; otherwise the request will generate an alias in the SQL for nothing! In your logical layer, however, you will create two logical tables: one for the fact, one for the dimension. Both of them will have the same source: your unique physical table.
    Let's take an example. Suppose you have only two tables in your data warehouse: one fact table with degenerate dimension attributes (so a table with both fact and dimension data), which we'll call "revenue", and one dimension table for... "Time", for example. We'll call it "Time".
    Your conceptual model (on paper) is a star schema with one fact table (revenue_fact) and two dimension tables (time, and revenue_carac).
    In your OBIEE physical layer :
    - you import the 2 tables "revenue" and "time" from your database.
    - you link "revenue" with "time"
    In your OBIEE logical layer :
    - you create a logical table called "Dim Time", based on the "Time" physical table and you do what you want with it (hierarchy...)
    - you create a logical table called "Dim Revenue Carac", based on the "revenue" physical table, and you do what you want with attributes
    - you create a logical table called "Fact revenue", based on the "revenue" physical table, and you do you what you want with measures and aggregation
    - you link the 2 logical dimension table with the logical fact table
    And that's all. Now, let's see which kind of SQL OBIEE will generate if you want to display the measure "revenue" with the attribute "revenue_carac" and the attribute "year".
    Select Sum(R.revenue_measure) , R.revenue_carac , T.year
    From revenue R , time T
    Where R.time_id = T.id
    Group by R.revenue_carac , T.year
    If you set an alias in your physical layer, the request will be this (and you don't want it):
    Select Sum(R1.revenue_measure) , R2.revenue_carac , T.year
    From revenue R1, revenue R2 , time T
    Where R1.time_id = T.id
    And R1.id = R2.id
    Group by R2.revenue_carac , T.year
    Same results, but with a useless join of the same physical table to itself.

  • Need a document about how to move the fact and dimension tables to different servers

    Hello Experts,
    I need a detailed doc on how to move the fact and dimension tables to different servers. Please help me out with this.
           Thanks in advance....

    You still haven't told anyone what products besides Essbase you are using, without which this is an impossible question to answer.
    https://forums.oracle.com/thread/2585515
    https://forums.oracle.com/thread/2585171
    Are you connecting to these tables from Essbase with a load rule / ODBC?  Using Studio?  Using Integration Services?  Any Drill-Through reporting set up?
    This may sound harsh, but if you truly don't know how to answer any of these questions you should probably not be anywhere near this task...

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should partition the fact and dimension tables: before the data comes in, or after?

    Hi,
    It is recommended to partition the fact table (where we will have huge data). Automate the partitioning so that each day it creates a new partition to hold the latest data (splitting the previous partition into two). Best practice is to create partitions on transaction timestamps, so load the incremental data into an empty table called Table_IN and then switch that data into the main table (Table). Make sure your tables (Table and Table_IN) are on one filegroup.
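    A minimal T-SQL sketch of that daily pattern (the partition function pf_Txn, scheme ps_Txn, filegroup fg_Current, and table names are hypothetical; Table_IN must match the main table's structure, sit on the same filegroup, and carry a CHECK constraint matching the target partition's range):
    -- 1. Split the last (empty) partition to make room for the new day.
    ALTER PARTITION SCHEME ps_Txn NEXT USED fg_Current;
    ALTER PARTITION FUNCTION pf_Txn() SPLIT RANGE ('2012-02-10');
    -- 2. Bulk load the day's rows into the empty staging table, then switch
    --    them into the matching partition of the main table (a metadata-only move).
    ALTER TABLE Table_IN SWITCH TO [Table] PARTITION 42;  -- 42 = target partition number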
    Refer to the content below for detailed info.
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker, because only subsets of the data are used, while the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of the Database Options page in SSMS or the LOCK_ESCALATION option of the ALTER TABLE statement.
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance, and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance, and can be invoked by right-clicking any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column dialog box, as displayed in Figure 3.13 ("Selecting a partitioning column").
    The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the rows of the tables being partitioned to the desired filegroups; either an existing partition scheme can be used or a new one created. The final screen, the Map Partitions page, is used for doing the actual mapping: specify the filegroup to be used for each partition and then enter a range for the values of the partitions.
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available. Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered. You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH Transact-SQL statement. Both options enable you to ensure partitions are well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes, index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties. Clustered and nonclustered indexes must be identical. ROWGUID properties and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT NULL are supported, NOT NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats. Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning and extra work will be required to even make partition switching possible, let alone efficient.
    Here's an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1. We then create a new, nonpartitioned table identical to the partitioned table, residing on the same filegroup. We finish up by switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
    ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
    ON main_filegroup;
    GO
    -- PARTITION 1 is the partition whose rows move into the nonpartitioned table.
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that the TransactionHistory table is very active as sales transactions are first entered, and transactions are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we'd like to automatically group transactions into four partitions per year, basically containing one quarter of the year's data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition, because we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter's data into the spot of partition1 (see the sketch below).
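    A minimal T-SQL sketch of steps 2 through 4 (the function pf_TxnDate, scheme ps_TxnDate, filegroup fg_Quarters, and table names are hypothetical; boundary dates assume calendar quarters):
    -- Step 2: split the empty rightmost partition (5) into two empty partitions (5 and 6).
    ALTER PARTITION SCHEME ps_TxnDate NEXT USED fg_Quarters;
    ALTER PARTITION FUNCTION pf_TxnDate() SPLIT RANGE ('2013-01-01');
    -- Step 3: switch the oldest populated partition (4) out to an empty staging table.
    ALTER TABLE TransactionHistory SWITCH PARTITION 4 TO TransactionHistory_Staging;
    -- Step 4: merge away the now-empty boundary so the partition count stays constant.
    ALTER PARTITION FUNCTION pf_TxnDate() MERGE RANGE ('2012-01-01');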
    Tip
    Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
    Some best practices to consider for using a sliding window partition include the following:
    • Load the newest data into a heap, and then add indexes after the load is finished. Delete the oldest data or, when working with very large data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition splits (when loading in new data) and merges (after unloading old data) do not cause data movement.
    • Do not split or merge a partition already populated with data, because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don't load a partition until its range boundary is met. For example, don't create and load a partition meant to hold data that is one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks, Shiven :) If this answer is helpful, please vote.
