Time and Dimension Consolidation

Hi All
Can someone help by explaining Time and Dimension Consolidation in the BICA link under Create Scorecard Metric?
How does this setting impact target calculation? How do the Enter and None options under Dimension Consolidation work, and what options are available under Time Consolidation?
Help is highly appreciated.
Regards
Nagesh

Nagesh,
There are some significant differences between how Time and Dimensions are handled in SSM.
Time: InfoCubes have time characteristics for each time period. SSM has no characteristic for Time, which gives it greater flexibility to use different time settings. In BICA you take the most granular time characteristic to bring into SSM. Then, in SSM, you can aggregate or divide the measure value into any other time periodicity; for example, monthly values brought in through BICA can be summed up into quarterly or annual values in SSM.
Dimensions: Key Figures in InfoCubes are dimensioned by every characteristic in the Dimension Tables. In SSM, measures can have different dimensionality or, in some cases, no dimensionality. An easy example to illustrate this: in SSM, SALES can be dimensioned by Customer, Customer Group and Product, while COSTS can be dimensioned by just Customer Group and Product.
The BICA connector that is used does not bring in all the internal features of BEx. There are internal calculations that BEx does that are not exposed to SSM through the BICA. That is why, in some cases, you can see differences between the BEx output and the data in SSM.
There is a Best Practices Guide for BICA for BW, which is the best resource of information on the BICA connector and for understanding some of the key differences between data in BW and how to bring it into SSM.
http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/701b8da0-3135-2b10-929d-936e43164b06
Although the document says it's for SSM 7.0, it applies to SSM 7.5 as well.
Regards,
Bob

Similar Messages

  • "Semi/Non additive measure with" : "LastChild Across All Dimension" OR "LastChild Across Time and MAX across All Other Dimension"

    Hey All,
    I am using SQL Server 2008 R2 Standard Edition, so please be mindful when posting suggestions as to whether they will work on this edition or not.
    Here is the situation: I have a cube with one fact table, "Positions". The measure I am querying is "AUM", and the dimensions a user can query across are combinations of {"Portfolio", "Security" and "Time"}.
    I am receiving the "AUM" values at the "Portfolio+Security+Time" level. AUM is actually the portfolio-level value, so the sample data I receive would look like this:
    PORTFOLIO  SECURITY  TIME   AUM
    A          SEC1      JAN-1  100
    A          SEC2      JAN-1  100
    B          SEC1      JAN-1  200
    B          SEC3      JAN-1  200
    A          SEC1      JAN-2  300
    A          SEC2      JAN-2  300
    B          SEC1      JAN-2  400
    B          SEC3      JAN-2  400
    So, as shown in the above example, we receive the same "AUM" value for a given portfolio and time (regardless of the security value).
    And as AUM is a snapshot value, we cannot sum across time either, so if the user is looking across time we need to pick the last values.
    Now, when a user wants to view this data from the cube, there are two different ways s/he wants to look at it.
    Option one: AUM at the Portfolio+Security+Time level, the same as shown in the example above.
    Option two: AUM at the Portfolio level only (notice that this is across time, so the results pick values for TIME=JAN-2):
    PORTFOLIO AUM
    A 300
    B 400
    GRAND TOTAL 700
    I have applied the "LastChild" aggregation property to the measure to achieve the "do not sum across time" behavior, but with this I also get "sum across everything except time", and hence my AUM values get summed across Portfolio and Security as well. I have also enabled Time Intelligence to enable MTD, YTD calculations.
    I tried to add a SCOPE statement for Security before and after the time calculations, yet I am not able to achieve the expected behavior.
    Here is the calculated member definition, and the SCOPE statement I have tried.
    CREATE MEMBER
    CURRENTCUBE.[MEASURES].[AUMNetBaseCurrency] AS
    [Measures].[AUMNetBaseCurrency_Raw_Max],
    FORMAT_STRING = "#,##0.00 ;( #,##0.00 )",
    VISIBLE = 1 , DISPLAY_FOLDER = 'Position';
    After this I have MTD and YTD time calculations for a few measures, including AUM.
    I tried placing the following SCOPE for AUM before and after the time calculations, but it made no difference in the result; in fact it seems the scope has no effect at all due to the "LastChild" aggregation property applied to the measure.
    SCOPE([MEASURES].[AUMNetBaseCurrency]);
    SCOPE([Security].[Symbol].[All]);
    THIS = MAX([Measures].[AUMNetBaseCurrency_Raw_Max]);
    END SCOPE;
    END SCOPE;
    So the options I am thinking of, but do not have a clear idea how to implement, are:
    0) Why are my Security SCOPE statements not showing any effect, and can we do something about it?
    1) Can the "SUM" behavior be changed to "MAX"/"MIN" across all dimensions except Time?
    2) Is there a way to implement "LastChild" across all dimensions instead of only across Time?
    3) Do I need to make any changes at the fact table level to achieve this? I referred to one Q&A in this forum that suggested creating such a measure in a different measure group, but it doesn't have any clarification.
    Any help would be appreciated.
    Thanks
    Anup Shah

    hi,
    you will have to enable the cache in your nqsconfig.ini file under the section
    # Query Result Cache Section
    [ CACHE ]
    ENABLE     = YES;
    cache seeding is also a mechanism through which the data would be updated in the cache.

  • Update new material master weight and dimension in open sales orders and deliveries

    Hi,
    I'm maintaining gross weight, net weight and volume in the material master. At the time of sales order entry in VA01, it calculates the weight and dimensions based on the material master and the order quantity. Then I create deliveries in VL01N.
    If I correct the net and gross weight in the material master after I have created the sales order, will the corrected net and gross weight be picked up at delivery creation?
    In other terms, is the VL01N net and gross weight taken from what is available in the sales order or from the material master?
    Is there any standard transaction to update the net and gross weight in already existing open sales orders and deliveries once it is corrected in the material master?
    Please advise.
    Sam

    Is there any standard transaction to update
    No, it is not possible to update the weight in an existing sales order or delivery. You have to change it manually or create a new sales order. The weight in a delivery is fetched from the sales order only; hence, for whatever delivery you create referencing a sales order, the system will copy whatever is there.
    thanks
    G. Lakshmipathi

  • Multiple Fact Tables and Dimension Tables

    I have been having some problems trying to model the data from Oracle E-Business Suite maintenance. I will try to give the best description of how the data is held in the tables. The structure is such that a work order can have multiple operations and an operation can have multiple resources as well. I believe the problem comes in the fact that an operation doesn't necessarily need to have a resource. I could not attach an image so I have written out an example below. I am not saying this is right or that it works, but just to give you an idea of what I am thinking. The full dimension would be Organization -> WorkOrder -> Operation -> Resource. Now, the fact tables all hold factual data for the three different levels, with the facts being at each corresponding level. This causes an obvious problem in combining the tables into one large fact table through the ETL process.
    Can anyone tell me if they think this can be done? Am I way off? I am sure that there is a solution as there always is but I have been killing myself trying to figure this one out. I currently have the entire solution in different Business Models. I would like however to be able to compare facts from multiple areas such as the Work Order level and the Resource level.
    Any help is greatly appreciated. I realize that the solution may also require additional work on the ETL side so I am open to any and all suggestions.
    Thank you in advance for anyone's time. :)
    Dimension Tables
    WorkOrder_D
    Operation_D
    Resource_D
    Organization_D
    Fact Tables
    WorkOrder_F
    Operation_F
    Resource_F
    Joins
    WorkOrder_D -> Operation_D
    Operation_D -> Resource_D
    WorkOrder_D -> WorkOrder_F
    Operation_D -> Operation_F
    Resource_D -> Resource_F
    Organization_D -> WorkOrder_D
    Organization_D -> Operation_D
    Organization_D -> Resource_D

    Hi,
    Currently the dimension table is taken as a simple logical table in the RPD as it does not have any levels or hierarchy.
    It's a flat dimension. Can you guide me on how I can implement a flat dimension in OBIEE? Because this dimension is taken as a simple logical table,
    I am not able to set the appropriate level for the fact tables. This dimension does not appear in the list of dimensions.

  • Fact Table and Dimension Tables

    Hi Experts, I'm creating custom InfoCubes for data coming from non-SAP source systems. I have two InfoCubes. The data is coming from about 10 tables. I have 10 DataSources created for this, and the data will be consolidated in a standard DSO before it flows into the 2 InfoCubes.
    Now the client wants to know beforehand how much data will be in the InfoCubes, in the fact table and dimension tables of both InfoCubes. I have the total size of all 10 source tables, given to me by the DBA. I am not sure how I can convert that information into fact table and dimension table sizes, as I have not yet created these InfoCubes.
    Please help me with how I should address this.

    hi,
    The exact figure will be hard to give; however, you can arrive at a round figure in your case.
    You are consolidating the data from the tables, which means there is a relation between the tables. Arrive at a rough figure based on that relation and the activity you are performing while consolidating the data of the tables.
    For example, let us say we want to combine data for sales orders and deliveries in a DSO.
    Say the sales order table has 1,000 records and the delivery table has 2,000 records. Both tables have a common link (the sales order). In the DSO you are combining the data, which means the data will be at the most granular level, consisting of delivery data, so the maximum number of records the consolidated DSO can have is 2,000.
    regards,
    Arvind.
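    A rough SQL sketch of this kind of estimate, assuming the source tables can be queried directly; the table and column names (delivery, sales_order, customer, customer_group) are placeholders, not the poster's actual objects:
    -- Upper bound for fact table rows: the row count of the most granular source table
    SELECT COUNT(*) AS max_fact_rows FROM delivery;
    -- Rough estimate for one dimension table: distinct combinations of the characteristics
    -- that will be grouped into that InfoCube dimension
    SELECT COUNT(*) AS est_dimension_rows
    FROM (SELECT DISTINCT customer, customer_group FROM sales_order) AS d;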

  • How to have fact and dimension which are not joined in same report

    Hi All,
    suppose we have 2 dimensions and 1 of them is joined to the fact, but we want to pull both dimensions along with the fact; the fact column then shows null, but we don't want that. How can we have the 2nd dimension, which is not joined to the fact, in the same report? Actually, I have a column called equipment scheduled time which can be shown only with the equipment dimension, but I want to pull equipment, geography and equipment scheduled time together in the report. If I do it now, it shows equipment scheduled time as null because it is not joined with the geography dimension. I want it to show the value of equipment scheduled time even if the geography dimension is present.
    Can someone please help me.
    Thanks in Advance

    Hi,
    Can you please tell me what the fact and dimension tables of your requirement are? If you want to display the data from 2 dimensions and a fact table, you have to establish the joins between the facts and dimensions.
    thanks,
    prassu

  • Unable to see planning application and dimension menus under navigate tab

    Dears
    Unable to see planning application and dimension menus under navigate tab in EPM 11.1.1.1 workspace.
    Is there any configuration missing?
    All EPM services have already started.
    Thx
    Hector

    I also don't see the menu under the "Consolidation" or "Planning" applications. I can navigate to these applications from Navigate -> Application -> Consolidation or Planning, and I see a box with an up arrow and a down arrow. If I keep my cursor between these two boxes, on the boundary, sometimes I can start my applications in these two modules. I checked context.properties in the \Hyperion\home\deployment\Tomcat5\workspace\webapps\workspace\conf folder.
    #Appended properties
    #Sat Jan 17 14:14:43 EST 2009
    Reports=Y
    HFM=Y
    HPSU=Y
    HP=Y
    HPS=Y
    Analyzer=Y
    BPMA=Y
    ChangeManagement=Y
    Subcontext.properties file is as follows:
    Analyzer=WebAnalysis
    Reports=hr
    HMB=metrics
    HPS=HPSWebReports
    HFM=hfm
    HP=HyperionPlanning
    BPMA=awb
    HFM-version=9.3.1
    HP-version=9.3.1
    HPS-version=9.3.1
    HMB-version=9.3.1
    BPMA-version=9.3.1
    # Parameters for all contexts
    #KeepAliveInterval=10
    #SessionTimeout=30
    #logon=bpm.logon
    #startup=cds.mode
    I also removed the # character in the last four lines, but I cannot see my applications under "Consolidation" or "Planning".
    This is in a Hyperion 9.3.1 environment and all my websites are working (Workspace, Shared Services, etc.). I checked all my services and they are working OK.
    http://lewisxphub:19000/workspace/index.jsp
    Any Ideas?
    Thanks
    Jay

  • HELP: Run-time array dimension in LabVIEW formula node

    I need to dimension an array at run time within a formula node as follows:
    int32 i,N;
    N = sizeOfDim(inputArray,0);
    float64 outputArray[N];
    for (i = 0; i < N; i++) outputArray[i] = myfunction(inputArray[i]);
    However, LabVIEW complains "Formula Node: index list expected". On the
    other hand, if I say
    float64 outputArray[1000];
    LabVIEW is perfectly happy. But that's not what I need to do! Is there an alternative way of accomplishing my goal?
    BTW, I've tried calculating N outside the formula node and then presenting it as an input, with the same results. I've got a bad feeling that run-time array dimensioning just isn't allowed.
    TIA,
    Hugh

    Can't you just use the Initialize Array function outside the formula node and pass that instead?

  • Video quality and dimension always smaller than i make it

    Every time I set the quality and dimensions in the Media Encoder it looks great. I make sure the preview is set to output, but when it finishes, no matter what, the video is always very small compared to what I put in. When I put the file in After Effects it is also lower quality than it was in Adobe Premiere. Any help would be awesome!!!

    You have Zoom on... double tap with three fingers to turn it off.   Go into Settings > General > Accessibility to disable it.
    See p. 243... http://manuals.info.apple.com/en_US/iPhone_iOS4_User_Guide.pdf

  • SSAS 2008 - How to get processing times per dimension / measure group?

    Hi experts!
    SSAS 2008. I am doing analysis and I'm trying to get information (from a DMV or log) about processing times per dimension / measure group. Any ideas on how to do that?
    Thanks,

    Also, in the DMVs there is no column recording the processing time, so we suggest using SSAS AMO to programmatically get the state and the last processed date/time. Please see:
    Analysis Management Objects (AMO)
    Hi John,
    Thanks for your info. As Simon suggested, there are no DMV columns available.
    You can use the link below for more information:
    Programming Administrative Tasks with AMO
    Cube/partition attributes for last processed (timestamp and status)
    Thanks
    Suhas

  • Left outer join on Fact and dimension table.

    Hi all, I have a fact F with account number and a few measures as columns.
    I also have a dimension D with account number and account name columns.
    A few account numbers from the fact don't exist in dimension D, but they still need to show up in the report.
    How do I left join fact F and dimension D in the RPD?
    I have a report where I need to show account number, account name, and measures.
    If D doesn't have certain account numbers, I need to convert that account number from F to a string and show it in the report in the account name column.
    Can you please help?
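    For reference, a minimal SQL sketch of the physical query the poster is after (fact_f, dim_d, account_number, account_name and amount are hypothetical names); whatever is modeled in the RPD would need to produce something equivalent:
    SELECT F.account_number,
           -- fall back to the account number rendered as a string when the dimension has no match
           COALESCE(D.account_name, CAST(F.account_number AS VARCHAR(20))) AS account_name,
           SUM(F.amount) AS amount
    FROM fact_f F
    LEFT OUTER JOIN dim_d D ON D.account_number = F.account_number
    GROUP BY F.account_number,
             COALESCE(D.account_name, CAST(F.account_number AS VARCHAR(20)));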

    OK, I tried this:
    Driving table: Fact, left outer join -- didn't work.
    Driving table: Dimension D, left outer join -- didn't work either.
    In either case, I see the physical query as D left outer join on fact F, and it omits the rows.
    And then I tried this:
    Driving table: Fact, right outer join.
    Now, this is giving me an error:
    [Sybase][ODBC Driver]Internal Error. [nQSError: 16001] ODBC error state: 00000 code: 30128 message: [Sybase][ODBC Driver]Data overflow. Increase specified column size or buffer size. [nQSError: 16011] ODBC error occurred while executing SQLExtendedFetch to retrieve the results of a SQL statement. (HY000)
    I checked all columns; everything matched the database table type and size.
    I am pulling Fact.account number, Dimension.account name and Fact.measures. I see this error each time I pull Fact.account number.

  • Table as fact and dimension

    Hi,
    Can one table act as a fact in one subject area and act as a dimension in another subject area? Thanks.

    Hi
    I confirm Stijn Gabriels' post.
    You don't have to create an alias on your physical table, otherwise the request will generate an alias in SQL for nothing! However, in your logical layer, you will create 2 logical tables: one for the fact, one for the dimension. Both of them will have the same source: your unique physical table.
    Let's take an example: suppose you have only 2 tables in your data warehouse: 1 fact table with degenerate dimension attributes (so a table with fact and dimension data), which we'll call "revenue", and 1 dimension table for "Time", which we'll call "time".
    Your conceptual model (on paper) is a star schema with 1 fact table (revenue_fact) and 2 dimension tables (time and revenue_carac).
    In your OBIEE physical layer :
    - you import the 2 tables "revenue" and "time" from your database.
    - you link "revenue" with "time"
    In your OBIEE logical layer :
    - you create a logical table called "Dim Time", based on the "Time" physical table and you do what you want with it (hierarchy...)
    - you create a logical table called "Dim Revenue Carac", based on the "revenue" physical table, and you do what you want with attributes
    - you create a logical table called "Fact revenue", based on the "revenue" physical table, and you do you what you want with measures and aggregation
    - you link the 2 logical dimension table with the logical fact table
    And that's all. Now, let's see what kind of SQL OBIEE will generate if you want to display the measure "revenue" with the attributes "revenue_carac" and "year":
    Select Sum(R.revenue_measure) , R.revenue_carac , T.year
    From revenue R , time T
    Where R.time_id = T.id
    Group by R.revenue_carac , T.year
    If you set an alias in your physical layer, the request will be this (and you don't want it):
    Select Sum(R1.revenue_measure) , R2.revenue_carac , T.year
    From revenue R1, revenue R2 , time T
    Where R1.time_id = T.id
    And R1.id = R2.id
    Group by R2.revenue_carac , T.year
    Same results, but with a useless join between the same physical table.

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables: before the data comes in or after?

    Hi,
    It is recommended to partition the fact table (where we will have huge data). Automate the partitioning so that each day it creates a new partition to hold the latest data (split the previous partition into 2). Best practice is to create partitions on transaction timestamps, so load the incremental data into an empty table (Table_IN) and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
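    A minimal T-SQL sketch of that switch-in step, assuming the main table is already partitioned on a transaction timestamp and the staging table Table_IN has an identical structure, matching indexes and a check constraint restricting it to the target date range (the partition function name pf_TranDate is hypothetical):
    -- Bulk load the day's increment into the empty staging table first (not shown),
    -- then switch it into the matching partition of the main table; the switch is metadata-only.
    ALTER TABLE dbo.Table_IN
    SWITCH TO dbo.[Table] PARTITION $PARTITION.pf_TranDate('2011-01-14');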
    Refer below content for detailed info
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a
    SQL Server database so that I/O can be better balanced against the available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval
    and management are much quicker because only subsets of the data are used, while ensuring that the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling
    lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of Database Options page in SSMS or by using the LOCK_ESCALATION option
    of the ALTER TABLE statement.
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
    scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
    and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these
    steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
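    A hedged T-SQL sketch of steps 2 through 4 (the filegroups from step 1 are assumed to exist; the names pf_OrderDateRange, ps_OrderDateRange, fg2012q1..fg2012q4 and OrderHistory are illustrative only):
    -- Step 2: partition function mapping rows by a date column
    CREATE PARTITION FUNCTION pf_OrderDateRange (datetime)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-04-01', '2012-07-01');
    -- Step 3: partition scheme mapping each partition to a filegroup
    CREATE PARTITION SCHEME ps_OrderDateRange
    AS PARTITION pf_OrderDateRange TO (fg2012q1, fg2012q2, fg2012q3, fg2012q4);
    -- Step 4: create the table on the partition scheme
    CREATE TABLE dbo.OrderHistory (OrderID int NOT NULL, OrderDate datetime NOT NULL)
    ON ps_OrderDateRange (OrderDate);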
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
    any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column
    dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
    Figure 3.13. Selecting a partitioning column.
    The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the rows of the tables being partitioned to a desired filegroup. Either an existing partition scheme can be used or a new one needs to be created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range for the values of the partitions. The ranges and settings on the grid include the following:
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
    date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
    of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at
    least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data
    Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
    level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available.
    Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause
    filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer
    the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
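    A small illustrative query, assuming the table is range-partitioned on the customer-name column (the names CustomerOrders, CustomerName and OrderTotal are hypothetical):
    -- With the partitioning described above, only the partition covering names
    -- starting with R, S and T needs to be read to answer this query.
    SELECT CustomerName, OrderTotal
    FROM dbo.CustomerOrders
    WHERE CustomerName LIKE 'S%';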
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012
    samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
    a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
    index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
    You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
    Transact-SQL statement. Both options enable you to ensure partitions are
    well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
    does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
    index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
    or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
    (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
    Clustered and nonclustered indexes must be identical. ROWGUID properties
    and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT
    NULL are supported, NOT
    NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement
    will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
    are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
    Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
    the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
    table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
    and extra work will be required to even make partition switching possible, let alone efficient.
    Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
    We then create a new, nonpartitioned table identical to the partitioned table residing on the same filegroup. We finish up switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10)) ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10)) ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
    window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
    the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
    transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because
    we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural for most people to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter's data into the spot of partition1.
    Tip
    Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
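    A hedged T-SQL sketch of steps 2 through 5 above, assuming a RIGHT range function; the names pf_QuarterRange, ps_QuarterRange, fg_quarters, TransactionHistory and the staging tables are illustrative only:
    -- Step 2: split the empty trailing partition so an empty one exists for the next quarter
    ALTER PARTITION SCHEME ps_QuarterRange NEXT USED fg_quarters;
    ALTER PARTITION FUNCTION pf_QuarterRange() SPLIT RANGE ('2012-04-01');
    -- Step 3: switch the oldest populated partition out to a staging table for archive or purge
    ALTER TABLE dbo.TransactionHistory SWITCH PARTITION 2 TO dbo.TransactionHistory_Archive;
    -- Step 4: merge the now-empty boundary away so the partition count stays constant
    ALTER PARTITION FUNCTION pf_QuarterRange() MERGE RANGE ('2011-04-01');
    -- Step 5: switch the newly loaded quarter in from its staging table
    ALTER TABLE dbo.TransactionHistory_Staging
    SWITCH TO dbo.TransactionHistory PARTITION $PARTITION.pf_QuarterRange('2012-03-31');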
    Some best practices to consider for using a slide window partition include the following:
    • Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
    data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the split performed when loading in new data, and the merge performed after unloading old data, do not cause data movement.
    • Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is
    one to two months older before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement
    issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks, Shiven

  • Oracle Data Compression on SID tables and Dimension Tables

    Hello Community,
    We have had great success with Oracle compression on ODS tables that are no longer loaded.
    We'd now like to move on to other types of BW tables that are very large.
    OSS Note 701235 provides answers to questions concerning the possible use of Oracle compression together with SAP BW.
    But the Note does not give suggestions for (or against) Oracle compression on SID tables or Dimension tables.
    I believe both table types would exhibit the same behaviour: mostly inserts of new SID IDs and new DIM IDs, but few updates to existing SID or dimension records. If this is true, then both are good candidates for Oracle compression.
    Do you also agree that this is the typical behaviour for SID tables and dimension tables? And that these types of tables are good candidates for Oracle compression in a large BW system?
    Thanks kindly!
    Keith Helfrich

    Hi all,
    Although this is an old thread I stumbled on during my own investigations, I can provide some answers to your questions.
    Table candidates for compression are found by these criteria:
           - Table size big enough?
           - Long lifetime of the object planned?
           - No or only rare structural changes for the table?
           - "Update" rate low: is your data mostly "read only"?
           -- For the widely used rolling window partition techniques of tables in BW this is not a problem: mostly INSERTs in the current partition, not affecting other partitions
    BW tables that can benefit from compression (see SAP Notes 105047, 701235):
           - PSA tables with data that must be saved for a longer time
           - ODS change log (no updates of old data, only inserts of new data)
           - "historical" cubes which get no changes in table structure anymore
    Limitations
           - Normal INSERT or UPDATE statements are stored ALWAYS in uncompressed format and must be compressed separately (<= Oracle 10g)
           - Slight CPU overhead of compression, but...
           -- CPU consumption is more than compensated for by doing less I/O, as for bulk loads or parallel processing
           -- SAP BW transformations took a significant amount of CPU of the overall load time into cubes, caused by the application server, not the database
           - Table must have not more than 255 fields
           - Adding columns with an initial value or dropping columns requires uncompression of the complete table (strongest limitation)
    Considering all of the above, you can decide that tables that go through UPDATEs are not good candidates for compression, nor are tables that can change their structure (like fact or DIM tables).
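    For what it's worth, a minimal Oracle SQL sketch of compressing one existing candidate table (my_psa_table and its index are placeholders; on Oracle 10g and below this is basic compression, so only direct-path/bulk-loaded rows end up compressed, as noted above):
    -- Rebuild the table in compressed form; MOVE is an offline operation and needs free space
    ALTER TABLE my_psa_table MOVE COMPRESS;
    -- MOVE leaves the table's indexes UNUSABLE, so rebuild them afterwards
    ALTER INDEX my_psa_table_idx REBUILD;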
    Now, my questions to you:
    Which Oracle version do you use?
    Which tool do you use for Oracle compression?
    BRSPACE (can you give an example?) or ALTER ... MOVE COMPRESS?
    Bye
    yk

  • Difference Between Legal Consolidation and Management Consolidation

    Dear Experts,
    Kindly help me to know:
    1) How to differentiate between Legal Consolidation and Management Consolidation theoretically?
    2) How to differentiate between Legal Consolidation and Management Consolidation technically in BPC?
    3) What is the impact on / difference in the Business Process Flow in BPC in the case of Legal Consolidation vs. Management Consolidation? (What should be done for Management Consolidation and for Legal Consolidation in a BPF in BPC?)
    Thanks in Advance
    Ritesh

    A true accounting/legal consolidation encompasses the three following elements:
    1. the general ledgers of the entities
    2. elimination entries between entities (inter-entity transactions such as sales/cost of sales, A/R and A/P, etc.)
    3. adjustments for partially owned entities
    Using the Legal, Rate and Ownership apps and the proper configuration, OutlookSoft achieves a legal/accounting consolidation. Accounting consolidations also link to a legal hierarchy of entities.
    Any consolidation that is referred to as management or financial consolidation, etc., can have differing connotations. In OutlookSoft software, management consolidation does not include item 3.
    However, when the terms are used by the client they can have different meanings. They could mean either:
    a true legal consolidation BUT tied to the management hierarchy of entities (as opposed to a legal hierarchy), OR an aggregation without item 3 above, tied to the management hierarchy of entities.
    Nine times out of ten, clients want the first rather than the second alternative. Financial statements rolled up using legal consolidation should equal those rolled up using management consolidation.
    However, when clients do planning, they may have their entities plan as if they were fully owned. They could also have their entities provide actuals for less than 100% in the case of partially owned entities. In these cases, you have to adjust appropriately.
    From an OutlookSoft configuration point of view, you need the Legal, Rate and Ownership apps and the related configuration for a legal consolidation. Management consolidation depends on the client's definition and the client's practice in the data that they collect from the entities.
    In the area of BPFs, the key difference between legal and management COULD be the steps (1-3) and the maintenance of the different hierarchies and ownership values.
    Hope the above helps.
